Significant advances in automated glaucoma detection have been made through Machine Learning (ML) and Deep Learning (DL) methods, an overview of which is provided in this paper. What sets the current literature review apart is its exclusive focus on these techniques for glaucoma detection, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to filter the selected papers. To this end, an advanced search was conducted in the Scopus database for research papers published in 2023 with the keywords "glaucoma detection", "machine learning", and "deep learning". Among the papers found, those focusing on ML and DL techniques were selected. The best ML performance metrics recorded in the reviewed papers were for the SVM, which achieved accuracies of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, DRISHTI-GS, and sjchoi86-HRF databases, respectively, with the REFUGE-trained model; with the ACRIMA-trained model, it attained accuracies of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% on the same databases, respectively. The best DL performance metrics recorded in the reviewed papers were for a lightweight CNN, with an accuracy of 99.67% on the Diabetic Retinopathy (DR) database and 96.5% on the Glaucoma (GL) database. In the context of non-healthy screening, a CNN achieved an accuracy of 99.03% when distinguishing between GL and DR cases. Finally, the best overall performance metrics were obtained using ensemble learning methods, which achieved 100% accuracy, 100% specificity, and 100% sensitivity. The current review offers valuable insights for clinicians and summarizes recent ML and DL techniques for glaucoma detection, including algorithms, databases, and evaluation criteria.
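The SVM classifiers surveyed above can be illustrated with a minimal sketch. The data below is entirely synthetic (the reviewed papers train on features extracted from fundus-image databases such as ACRIMA and REFUGE); the training loop is a simple Pegasos-style stochastic sub-gradient method for a linear SVM, not any specific paper's pipeline.

```python
import numpy as np

# Synthetic stand-in for extracted fundus features: 200 samples, 2 features,
# labels in {-1, +1} (e.g. healthy vs. glaucoma). A toy linearly separable rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

w = np.zeros(2)
b = 0.0
lam, lr = 0.01, 0.1            # regularization strength and learning rate
for epoch in range(50):
    for i in rng.permutation(len(X)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:          # point violates the margin: hinge-loss gradient step
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:                   # only the regularizer contributes
            w -= lr * lam * w

pred = np.sign(X @ w + b)
acc = float((pred == y).mean())
print(f"training accuracy: {acc:.2f}")
```

On real databases the papers report held-out accuracy per dataset; here only training accuracy on the toy data is shown.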
Exposure at low light levels produces images containing only a small number of
photons: only those pixels at which a photopulse arrives have a nonzero intensity
value. This paper presents a simple and fast procedure for simulating
low light-level images, taking a standard well-illuminated image as a reference.
The images so obtained consist of a few illuminated pixels on a dark
background. When the number of illuminated pixels is less than 0.01% of the total
pixel count, it becomes difficult to identify the original object.
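The procedure described above can be sketched by sampling photon arrivals from a reference image, with each pixel's normalized intensity treated as its detection probability. This is a minimal interpretation of the abstract, not the paper's exact implementation; the gradient "image" is a placeholder.

```python
import numpy as np

def simulate_low_light(img, n_photons, seed=None):
    """Sample n_photons photon arrivals from a well-illuminated reference image.
    Each photon lands on a pixel with probability proportional to that pixel's
    intensity; all other pixels stay at zero, giving a sparse dark image."""
    rng = np.random.default_rng(seed)
    p = img.ravel().astype(float)
    p /= p.sum()
    hits = rng.choice(p.size, size=n_photons, p=p)
    out = np.zeros(p.size)
    np.add.at(out, hits, 1.0)          # accumulate photopulses per pixel
    return out.reshape(img.shape)

# Toy 64x64 intensity gradient standing in for a reference image.
ref = np.add.outer(np.arange(64), np.arange(64)).astype(float)
sparse = simulate_low_light(ref, n_photons=40, seed=1)
print("illuminated pixels:", int((sparse > 0).sum()), "of", ref.size)
```

With 40 photons over 4096 pixels, under 1% of pixels are illuminated, reproducing the sparse regime the paper studies.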
Liver diseases can be defined as tumors or disorders that affect the liver and deform its shape. Early detection and diagnosis of a tumor using CT medical images helps the examiner characterize the tumor precisely. This research aims to detect and classify liver tumors using computer-based image processing and textural analysis to support an accurate diagnosis. The methods used in this work rely on creating a binary mask that separates the liver from the other organs in the CT images. Thresholding is used as an early segmentation step, and the watershed process is then applied as a classification technique to isolate the tumor, whether cancer or cyst.
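The threshold-then-segment pipeline above can be sketched on a synthetic slice. The toy image and threshold values are invented for illustration, and connected-component labeling stands in for the paper's watershed step (the overall mask-then-isolate structure is the same).

```python
import numpy as np
from scipy import ndimage

# Toy "CT slice": dark background, a bright organ region, and a brighter lesion.
img = np.zeros((64, 64))
img[10:50, 10:50] = 100.0          # organ (stand-in for the liver)
img[25:32, 25:32] = 200.0          # lesion inside the organ

organ_mask = img > 50              # early threshold segmentation: binary liver mask
lesion_mask = img > 150            # second threshold isolates candidate lesions

# Label connected lesion regions (a watershed transform would refine these).
labels, n_lesions = ndimage.label(lesion_mask)
print("lesions found:", n_lesions)
```

On real CT data the thresholds would be chosen from the intensity histogram rather than hard-coded.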
 
This research aims to combine field excavations and laboratory work with spatial analysis techniques for highly accurate modeling of soil geotechnical properties (i.e., achieving a lower root mean square error for the spatial interpolation). For a specified area of interest, this was conducted firstly by adopting spatially sufficient and well-distributed samples (cores). In the second step, a simulation was performed of the variation in properties when the soil is contaminated with a commonly used industrial material, white oil in our case. Cohesive (disturbed and undisturbed) soil samples were obtained from three locations inside the Baghdad University campus in AL-J
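The RMSE-based comparison of spatial interpolators mentioned above can be sketched with inverse-distance weighting (IDW), one common choice; the abstract does not name the interpolator used, and the soil-property surface below is a hypothetical smooth field.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted spatial interpolation of a scalar property."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
cores = rng.uniform(0, 10, size=(30, 2))     # "core" sample locations
z = cores[:, 0] + cores[:, 1]                # hypothetical smooth property surface

# Held-out locations to score the interpolator by RMSE.
query = rng.uniform(0, 10, size=(10, 2))
z_true = query[:, 0] + query[:, 1]
z_hat = idw(cores, z, query)
rmse = float(np.sqrt(np.mean((z_hat - z_true) ** 2)))
print(f"RMSE: {rmse:.3f}")
```

In the study, this scoring step would compare candidate interpolators on measured geotechnical properties rather than a synthetic field.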
The reservoir characteristics of the Pre-Santonian Eze-Aku sandstone were assessed using integrated thin section petrography and SEM Back-Scattered Electron (BSE) imaging methods. Fresh outcrop data were collected in the Afikpo area (SE Nigeria). Twenty-eight representative samples from the different localities were analysed to obtain mineralogical and petrographical datasets germane for reservoir characterisation. Thin section petrography indicates that the sandstones are medium-grained, have an average Q90F10L0 modal composition, and are classified as mainly sub-arkose. The sandstones on SEM reveal the presence of cement in the form of quartz overgrowths, authigenic clays and feldspar. From epoxy-sta
A medical-service platform is a mobile application through which patients are provided with doctors' diagnoses based on information gleaned from medical images. The content of these diagnostic results must not be illegitimately altered during transmission and must be returned to the correct patient. In this paper, we present a solution to these problems using blind, reversible, and fragile watermarking based on authentication of the host image. In our proposed algorithm, the binary version of the Bose-Chaudhuri-Hocquenghem (BCH) code for the patient medical report (PMR) and the binary patient medical image (PMI) after fuzzy exclusive-or (F-XoR) are used to produce the patient's unique mark using a secret sharing scheme (SSS). The patient's un
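The fragile-watermarking idea above can be sketched in its simplest form: an XOR-masked mark embedded in pixel least-significant bits. This keeps only the XOR-mask and LSB-embedding ingredients; the paper's full scheme additionally uses BCH coding, F-XoR, and secret sharing, none of which are reproduced here.

```python
import numpy as np

def embed_fragile_mark(img, mark_bits, key_bits):
    """Embed an XOR-masked mark in the least-significant bits of an 8-bit image.
    The scheme is blind (extraction needs only the key) and fragile: tampering
    with any marked pixel corrupts the recovered mark."""
    marked = img.copy()
    flat = marked.ravel()
    payload = mark_bits ^ key_bits                          # XOR masking with the key
    flat[:payload.size] = (flat[:payload.size] & ~np.uint8(1)) | payload
    return marked

def extract_fragile_mark(marked, key_bits):
    lsb = marked.ravel()[:key_bits.size] & 1
    return lsb ^ key_bits                                   # undo the XOR mask

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)   # stand-in for a PMI
mark = rng.integers(0, 2, size=64, dtype=np.uint8)          # hypothetical patient mark
key = rng.integers(0, 2, size=64, dtype=np.uint8)

marked = embed_fragile_mark(img, mark, key)
recovered = extract_fragile_mark(marked, key)
print("mark recovered intact:", bool(np.array_equal(recovered, mark)))
```

Each marked pixel differs from the original by at most one intensity level, which is why LSB embedding is near-imperceptible.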
In this paper, an algorithm for the reconstruction of completely lost blocks using a
Modified Hybrid Transform is presented. The algorithms examined in this paper require
neither a DC estimation method nor interpolation; the reconstruction is achieved through
matrix manipulation based on the Modified Hybrid Transform. This paper also adopts a
smart matrix (Detection Matrix) that detects missing blocks so that they can be rebuilt.
We further assess the performance of the Modified Hybrid Transform in the lost-block
reconstruction application, and discuss the effect of using the multiwavelet and 3D Radon
transforms in lost-block reconstruction.
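The detection step above can be sketched as a binary matrix flagging all-zero blocks. This is a minimal interpretation of the "Detection Matrix" (the abstract does not specify its construction), assuming lost blocks arrive as all-zero regions and an 8x8 block size.

```python
import numpy as np

def detection_matrix(img, block=8):
    """Return a binary matrix with a 1 for every completely lost (all-zero)
    block, in the spirit of the paper's smart Detection Matrix."""
    h, w = img.shape
    D = np.zeros((h // block, w // block), dtype=int)
    for i in range(0, h, block):
        for j in range(0, w, block):
            if not img[i:i + block, j:j + block].any():
                D[i // block, j // block] = 1       # 1 marks a block to rebuild
    return D

img = np.full((32, 32), 100.0)
img[8:16, 16:24] = 0.0                              # simulate one lost 8x8 block
D = detection_matrix(img)
print("lost blocks at:", np.argwhere(D == 1).tolist())
```

The reconstruction itself would then apply the Modified Hybrid Transform's matrix manipulation only at the flagged positions.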
This research aims to determine how three different types of mouthwash affect the depth of artificial white spot lesions (WSLs). Teeth with WSLs of various depths were immersed in Splat mouthwash, Biorepair mouthwash, Sensodyne mouthwash, or artificial saliva (control) twice daily for one minute, for 4 weeks and 8 weeks at 37°C. After this immersion procedure, lesion depth was measured using a DIAGNOdent pen score. A one-way analysis of variance with Dunnett T3 and Tukey's post hoc tests (α = .05) was used to analyze the data. Splat mouthwash enhanced WSL remineralization and produced the lowest ΔF compared with the other mouthwashes in shallow and deep enamel after 4 and 8 weeks of treatment. In the repair groups, after 4 weeks
In all applications, and especially in real-time applications, image processing and compression play a very important part in modern life, in both storage and transmission (over the internet, for example). However, finding orthogonal matrices of different sizes to serve as filters or transforms is complex, yet important for applications such as image processing and communication systems. This paper presents a new method of finding orthogonal matrices to act as transform filters, which are then used to build mixed transforms generated by a technique based on the tensor product for data processing. Our aim in this paper is to evaluate and analyze this new mixed technique in image compression using the Discrete Wavelet Transfo
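The tensor-product construction above rests on a standard fact: the Kronecker product of two orthogonal matrices is itself orthogonal, so small orthogonal factors generate larger mixed transforms. A minimal sketch (the factor matrices here are generic examples, not the paper's specific filters):

```python
import numpy as np

# Two small orthogonal factors: a 2x2 normalized Hadamard and a 2x2 rotation.
H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Tensor (Kronecker) product yields a 4x4 mixed transform that is also orthogonal.
T = np.kron(H2, R)
err = float(np.abs(T @ T.T - np.eye(4)).max())
print("orthogonality error:", err)

# Orthogonality gives perfect reconstruction: analysis then synthesis is lossless.
x = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = T.T @ (T @ x)
```

Because T is orthogonal, the inverse transform is simply its transpose, which is what makes such mixed transforms cheap to invert in compression pipelines.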
Image contrast enhancement methods have been a topic of interest in digital image processing for various applications such as satellite imaging, recognition, medical imaging, and stereo vision. This paper studies a technique for image enhancement utilizing Adaptive Histogram Equalization and Weighted Gamma Correction to handle the radiometric conditions and illumination variations of stereo image pairs. In the proposed method, the stereo pair images are segmented, together with a weighted distribution, into sub-histograms supported by Histogram Equalization (HE) mapping or gamma correction and guided filtering. The experimental results show that the examined techniques outperform the original image in ev
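The HE-mapping and gamma-correction building blocks named above can be sketched as follows. This shows plain global histogram equalization on a synthetic low-contrast image; the paper's method additionally uses weighted sub-histograms, adaptive (local) equalization, and guided filtering, which are not reproduced here.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities through the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[img] * 255).astype(np.uint8)

def gamma_correct(img, gamma):
    """Power-law intensity mapping; gamma < 1 brightens, gamma > 1 darkens."""
    return (255 * (img / 255.0) ** gamma).astype(np.uint8)

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(32, 32), dtype=np.uint8)
eq = hist_equalize(low_contrast)
bright = gamma_correct(eq, 0.8)
print("input range:", low_contrast.min(), low_contrast.max())
print("equalized range:", eq.min(), eq.max())
```

Equalization stretches the narrow 100-139 input range toward the full 0-255 range, which is the contrast gain the paper's HE mapping step provides.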
Real-life scheduling problems require the decision maker to consider a number of criteria before arriving at any decision. In this paper, we consider the multi-criteria scheduling problem of n jobs on a single machine to minimize a function of five criteria: total completion time (∑Cⱼ), total tardiness (∑Tⱼ), total earliness (∑Eⱼ), maximum tardiness (Tmax), and maximum earliness (Emax). The single-machine total tardiness problem and the total earliness problem are already NP-hard, so the considered problem is strongly NP-hard.
We apply two local search algorithms (LSAs), the descent method (DM) and the simulated annealing method (SM), for the 1//(∑Cⱼ, ∑Tⱼ, ∑Eⱼ
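The simulated annealing method mentioned above can be sketched on a toy single-machine instance. The instance data are invented, and total tardiness alone stands in as the objective (the paper optimizes a function of all five criteria); the swap neighbourhood and geometric cooling are common generic choices, not necessarily the paper's.

```python
import math
import random

p = [4, 2, 6, 3, 5]          # processing times (toy instance)
d = [6, 4, 18, 8, 12]        # due dates

def total_tardiness(seq):
    """Sum of max(0, C_j - d_j) over jobs in the given sequence."""
    t, tard = 0, 0
    for j in seq:
        t += p[j]
        tard += max(0, t - d[j])
    return tard

random.seed(0)
seq = list(range(len(p)))
best = seq[:]
temp = 10.0
for step in range(2000):
    i, j = random.sample(range(len(seq)), 2)
    cand = seq[:]
    cand[i], cand[j] = cand[j], cand[i]          # neighbourhood move: swap two jobs
    delta = total_tardiness(cand) - total_tardiness(seq)
    # Accept improvements always; accept worsenings with Boltzmann probability.
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        seq = cand
    if total_tardiness(seq) < total_tardiness(best):
        best = seq[:]
    temp *= 0.995                                # geometric cooling schedule
print("best sequence:", best, "tardiness:", total_tardiness(best))
```

The descent method (DM) is the same loop with the Boltzmann acceptance removed, i.e. only improving swaps are accepted.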