Researchers have used different methods, such as image processing and machine learning techniques, in addition to medical instruments such as the Placido disc, keratoscopy, and the Pentacam, to help diagnose a variety of diseases that affect the eye. Our paper aims to detect one of the diseases that affect the cornea, namely keratoconus. This is done by using image processing techniques and pattern classification methods. The Pentacam is the device used to assess the cornea's health; it provides four maps that can distinguish changes on the surface of the cornea, which can be used for keratoconus detection. In this study, sixteen features were extracted from the four refractive maps along with five readings from the Pentacam software. The classifiers utilized in our study are Support Vector Machine (SVM) and Decision Trees, which achieved classification accuracies of 90% and 87.5%, respectively, in detecting keratoconic corneas. The features were extracted using MATLAB (R2011 and R2017) and Orange Canvas (Pythonw).
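As a rough illustration of the classification step (not the authors' code), the sketch below trains an SVM and a decision tree on a placeholder feature matrix standing in for the 16 map features plus 5 Pentacam readings; the data, split ratio, and hyperparameters are assumptions.

```python
# Minimal sketch: SVM and Decision Tree classification of corneas from
# extracted features. X and y are random placeholders for the real data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 21))        # 16 map features + 5 Pentacam readings (placeholder)
y = rng.integers(0, 2, size=80)      # 0 = normal, 1 = keratoconus (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Decision Tree", DecisionTreeClassifier(max_depth=5))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```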
After the outbreak of COVID-19, it rapidly escalated from an epidemic to a pandemic. Radiologic CT and X-ray images have been widely used to detect COVID-19 by observing infrahilar opacities in the lungs. Deep learning has gained popularity in diagnosing many diseases, including COVID-19, and its rapid spread necessitates the adoption of deep learning for identifying COVID-19 cases. In this study, a deep learning model has been proposed for the automatic detection of COVID-19 from X-ray images. The SimpNet architecture was adopted in our study and trained with X-ray images. The model was evaluated on both binary (COVID-19 and No-findings) classification and multi-class (COVID-19, No-findings, …) classification.
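For orientation only, a heavily simplified convolutional classifier is sketched below; it is not the SimpNet architecture, and the input size, channel counts, and class count are assumptions.

```python
# Minimal sketch: a small CNN that maps grayscale chest X-rays to
# num_classes scores (2 for the binary task, 3 for the multi-class task).
import torch
import torch.nn as nn

class TinyXrayNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyXrayNet(num_classes=2)
logits = model(torch.randn(4, 1, 224, 224))  # a batch of 4 placeholder X-rays
print(logits.shape)                          # torch.Size([4, 2])
```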
In this paper, the tree regression model is compared with the negative binomial regression model. These models represent two types of statistical methods: the first is non-parametric, tree regression, which aims to divide the data set into subgroups; the second is parametric, negative binomial regression, which is usually used with medical data, especially when dealing with large sample sizes. The methods are compared according to the mean squared error (MSE), using a simulated experiment and taking different sample sizes…
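A minimal sketch of this kind of comparison is shown below, assuming simulated over-dispersed count data; the simulation design, dispersion, and tree depth are illustrative choices, not the paper's.

```python
# Minimal sketch: compare a regression tree (non-parametric) with a
# negative binomial GLM (parametric) by MSE on simulated count data.
import numpy as np
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n_obs = 500
x = rng.normal(size=(n_obs, 1))
mu = np.exp(0.5 + 0.8 * x[:, 0])                 # true mean of the counts
y = rng.negative_binomial(5, 5.0 / (5.0 + mu))   # over-dispersed response

tree = DecisionTreeRegressor(max_depth=4).fit(x, y)
nb = sm.GLM(y, sm.add_constant(x), family=sm.families.NegativeBinomial()).fit()

print("tree MSE:", mean_squared_error(y, tree.predict(x)))
print("NB   MSE:", mean_squared_error(y, nb.predict(sm.add_constant(x))))
```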
Background: Pyogenic granuloma is a benign hyperplastic tumor. The most common intra-oral site is the marginal gingiva. It most often occurs in the second decade of life and has a strong tendency to recur after simple excision.
Aim of the study: To evaluate the therapeutic advantages of the diode laser (810–980 nm) in the management of intraoral pyogenic granuloma.
Materials and methods: A total of 28 patients (14 males and 14 females) were enrolled in this study and had their pyogenic granulomas surgically removed using a diode laser. All of the patients were given local anesthesia and underwent the identical surgical procedure (cartridge containing 1% lidocaine with epinephrine 1:…
Image steganography is undoubtedly significant in the field of secure multimedia communication. Undetectability and high payload capacity are two important characteristics of any form of steganography. In this paper, the level of image security is improved by combining steganography and cryptography techniques to produce a secured image. The proposed method depends on using LSBs as an indicator for hiding encrypted bits in the dual-tree complex wavelet transform (DT-CWT) coefficients. The cover image is divided into non-overlapping blocks of size 3×3. After that, a key is produced by extracting the center pixel (pc) from each block to encrypt each character in the secret text. The cover image is transformed using the DT-CWT, then the p…
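A minimal sketch of the key-generation step described above follows; the XOR cipher is used purely for illustration, since the abstract does not state how each character is encrypted with its key.

```python
# Minimal sketch: take the centre pixel of every non-overlapping 3x3 block
# of the cover image as a key byte and encrypt the secret text with it.
import numpy as np

def block_centre_keys(cover: np.ndarray) -> np.ndarray:
    h, w = cover.shape
    h, w = h - h % 3, w - w % 3                       # keep complete 3x3 blocks only
    blocks = cover[:h, :w].reshape(h // 3, 3, w // 3, 3)
    return blocks[:, 1, :, 1].ravel()                 # centre pixel (pc) of each block

cover = np.random.default_rng(2).integers(0, 256, size=(9, 9), dtype=np.uint8)
keys = block_centre_keys(cover)
secret = "hi"
cipher = [ord(c) ^ int(k) for c, k in zip(secret, keys)]  # illustrative XOR cipher
print(cipher)
```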
In this study, ultraviolet (UV) and ozone techniques with a hydrogen peroxide oxidant were used to treat the wastewater produced by the South Baghdad Power Station, using a lab-scale system. The UV/H2O2 experiments showed that the optimum exposure time was 80 min; at this time, the highest removal percentages of oil, COD, and TOC were 84.69%, 56.33%, and 50%, respectively. The effect of pH on contaminant removal was studied in the range 2–12. The best oil, COD, and TOC removal percentages using H2O2/UV (69.38%, 70%, and 52%) were obtained at pH = 12. The H2O2/ozone experiments exhibited better performance compared to…
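The removal percentages quoted above are presumably computed in the usual way from influent and effluent concentrations (an assumption; the paper's exact definition is not reproduced here):

$$\text{Removal}\ (\%) = \frac{C_{\text{in}} - C_{\text{out}}}{C_{\text{in}}} \times 100$$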
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the image to be embedded is scrambled using the Arnold transform for higher security, and then the embedding process is applied in the transform domain of the host image. The experimental results show that this algorithm is imperceptible and has good robustness against some common image processing operations.
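A minimal sketch of the Arnold (cat map) scrambling step is given below for a square watermark image; the Berkeley Wavelet Transform embedding itself is not reproduced, and the iteration count is arbitrary.

```python
# Minimal sketch: Arnold transform scrambling of a square image,
# (x, y) -> ((x + y) mod N, (x + 2y) mod N), repeated `iterations` times.
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    n = img.shape[0]                       # assumes a square image
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

watermark = np.arange(16).reshape(4, 4)
print(arnold_scramble(watermark, iterations=2))
```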
Steganography is the art of secret communication. Its purpose is to hide the presence of information, using, for example, images as covers. The frequency domain is well suited for embedding in an image, since hiding in frequency-domain coefficients is robust to many attacks. This paper proposes hiding a secret image whose size equals a quarter of the cover image. The Set Partitioning in Hierarchical Trees (SPIHT) codec is used to code the secret image to achieve security. The proposed method applies the Discrete Multiwavelet Transform (DMWT) to the cover image. The coded bit stream of the secret image is embedded in the high-frequency subbands of the transformed cover image. Scaling factors α and β in the frequency domain control the quality of the stego…
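A heavily simplified sketch of the embedding idea follows: an ordinary single-level DWT (via pywt) stands in for the DMWT, random bits stand in for the SPIHT bit stream, and a single scaling factor alpha weights the embedded bits, so none of these choices reflect the paper's actual codec.

```python
# Minimal sketch: add a coded bit stream, weighted by a scaling factor,
# to a high-frequency subband of a transformed cover image.
import numpy as np
import pywt

cover = np.random.default_rng(3).random((64, 64))        # placeholder cover image
bits = np.random.default_rng(4).integers(0, 2, size=32)  # placeholder coded bit stream

cA, (cH, cV, cD) = pywt.dwt2(cover, "haar")   # one-level 2-D DWT (stand-in for DMWT)
alpha = 0.05                                  # scaling factor controlling stego quality
flat = cD.flatten()
flat[:bits.size] += alpha * (2 * bits - 1)    # +/- alpha per embedded bit
stego = pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), "haar")
print(stego.shape)
```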
Many images require large storage space. Despite the continued evolution of computer storage technology, there is still a pressing need to reduce the storage space required for images and to compress images effectively; the wavelet transform method…
Researchers have shown increased interest in recent years in determining the optimum sample size that provides sufficient accuracy of estimation and high-precision parameter estimates when evaluating a large number of tests in the field of diagnosis at the same time. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is estimated, at the sample size given by each method, for high-dimensional data using an artificial intelligence technique, namely the artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data…
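For reference, the classical form of Bennett's inequality for independent, zero-mean random variables $X_i$ with $|X_i| \le a$ and average variance $\sigma^2$ is recalled below; how the paper converts this bound into a sample-size rule is not shown here.

$$P\!\left(\sum_{i=1}^{n} X_i \ge t\right) \le \exp\!\left(-\frac{n\sigma^2}{a^2}\, h\!\left(\frac{a t}{n\sigma^2}\right)\right), \qquad h(u) = (1+u)\ln(1+u) - u$$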