Significant advances in automated glaucoma detection have been made through Machine Learning (ML) and Deep Learning (DL) methods, an overview of which is provided in this paper. What sets the current literature review apart is its exclusive focus on these techniques for glaucoma detection, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to filter the selected papers. To this end, an advanced search was conducted in the Scopus database for research papers published in 2023 with the keywords "glaucoma detection", "machine learning", and "deep learning". Among the papers found, those focusing on ML and DL techniques were selected. The best ML performance metrics recorded in the reviewed papers were for the Support Vector Machine (SVM): the REFUGE-trained model achieved accuracies of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, DRISHTI-GS, and sjchoi86-HRF databases, respectively, while the ACRIMA-trained model attained accuracies of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% on the same databases, respectively. The best DL performance metrics recorded in the reviewed papers were for a lightweight CNN, with an accuracy of 99.67% on the Diabetic Retinopathy (DR) database and 96.5% on the Glaucoma (GL) database. In the context of non-healthy screening, a CNN achieved an accuracy of 99.03% when distinguishing between GL and DR cases. Finally, the best overall performance metrics were obtained using ensemble learning methods, which achieved 100% accuracy, 100% specificity, and 100% sensitivity. The current review offers valuable insights for clinicians and summarizes recent ML and DL techniques for glaucoma detection, including algorithms, databases, and evaluation criteria.
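As an illustration of the cross-database SVM evaluation summarized above, the following is a minimal sketch, assuming fundus images have already been reduced to fixed-length feature vectors; the `load_features` loader is a hypothetical placeholder, not the reviewed authors' code:

```python
# Minimal sketch of cross-database evaluation of an SVM glaucoma classifier.
# load_features() is a hypothetical helper returning (features, labels) for a
# named database; real pipelines would extract features from fundus images.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def evaluate_cross_database(train_db, test_dbs, load_features):
    X_train, y_train = load_features(train_db)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_train, y_train)
    # Score the single trained model on every held-out database.
    return {db: model.score(*load_features(db)) for db in test_dbs}

# Example: train on REFUGE, test on the other public databases.
# accuracies = evaluate_cross_database(
#     "REFUGE",
#     ["ACRIMA", "RIM-ONE", "ORIGA-light", "DRISHTI-GS", "sjchoi86-HRF"],
#     load_features,
# )
```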
Background: A prospective clinical study was performed to compare the efficacy of low-molecular-weight heparin (the enoxaparin group) with a control group in the prevention of deep-vein thrombosis (DVT) after total knee arthroplasty.
Aim of the study: To assess the prevalence of DVT after total knee arthroplasty and evaluate the importance of low-molecular-weight heparin in the prevention of this DVT.
Methods: Thirty-three patients undergoing total knee arthroplasty were randomly divided into two groups. One group consisted of 12 patients who received no prophylaxis with an anticoagulant (the control group); the other group consisted of 21 patients who received the low-molecular-weight heparin.
In this study, the response and behavior of machine foundations resting on dry and saturated sand were investigated experimentally. A physical model was manufactured to simulate a steady-state harmonic load applied on a footing resting on sandy soil at different operating frequencies. A total of 84 physical models were tested. The parameters taken into consideration include loading frequency, footing size, and different soil conditions. The footing parameters relate to the size of the rectangular footing and the depth of embedment. Two sizes of rectangular steel model footing were used. The footings were tested by changing all parameters at the surface and at 50 mm depth below the model surface. Meanwhile, the investigated parameters …
A CNC machine is used to machine complex or simple shapes at high speed with maximum accuracy and minimum error. In this paper, a previously designed CNC control system is used to machine ellipses and polylines. The part to be machined is drawn using drawing software such as AUTOCAD® or 3D MAX and saved in the well-known DXF file format; that file is then fed to the CNC machine controller by the operator, and the part is machined. The CNC controller uses developed algorithms that read the DXF file fed to the machine, extract the shapes from the file, and generate commands to move the CNC machine axes so that these shapes can be machined.
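To illustrate the kind of DXF-extraction step described above, here is a minimal sketch using the open-source ezdxf library (not the authors' controller code); the G-code-style output format and the file name are illustrative assumptions:

```python
# Minimal sketch: extract polylines and ellipses from a DXF file and emit
# simple straight-line move commands per vertex.
import math
import ezdxf

def shape_to_points(entity, segments=64):
    """Return a list of (x, y) vertices approximating the entity."""
    if entity.dxftype() == "LWPOLYLINE":
        return [(x, y) for x, y in entity.get_points("xy")]
    if entity.dxftype() == "ELLIPSE":
        c, major = entity.dxf.center, entity.dxf.major_axis
        ratio = entity.dxf.ratio
        minor = (-major.y * ratio, major.x * ratio)  # minor axis, planar case
        ts = [entity.dxf.start_param +
              (entity.dxf.end_param - entity.dxf.start_param) * i / segments
              for i in range(segments + 1)]
        return [(c.x + major.x * math.cos(t) + minor[0] * math.sin(t),
                 c.y + major.y * math.cos(t) + minor[1] * math.sin(t))
                for t in ts]
    return []

doc = ezdxf.readfile("part.dxf")          # drawing saved from AutoCAD/3D MAX
for entity in doc.modelspace().query("LWPOLYLINE ELLIPSE"):
    for x, y in shape_to_points(entity):
        print(f"G01 X{x:.3f} Y{y:.3f}")   # linear move command per vertex
```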
Financial fraud is considered part of the broader concept of administrative and financial corruption. It became especially prominent after the rise of corporations and the emergence of agency theory, which recognizes the relationship between company management and stakeholders and imposes a set of constraints intended to prevent management from engaging in fraudulent practices. On this basis, an independent party is appointed to detect such practices and give an opinion on the financial statements, since stakeholders base their decisions on the auditor's report regarding the credibility of statements that should reflect the company's real activity. To carry out this work with full professionalism, the auditor must use a set of control techniques …
The effectiveness of detecting and matching image features across multiple views of a specified scene using dynamic scene analysis is considered a critical first step for many applications in computer vision and image processing. The Scale-Invariant Feature Transform (SIFT) can be applied very successfully to typical images captured by a digital camera.
In this paper, SIFT and its variants are first systematically analyzed. Their performance is then evaluated in several situations: changes in rotation, blur, scale, and illumination. The results show that each algorithm has its own advantages when compared with the others.
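As a concrete example of the kind of SIFT detection and matching evaluated in such studies, here is a minimal OpenCV sketch (assuming an opencv-python build that includes SIFT, i.e. version 4.4+); the file names are placeholders:

```python
# Minimal SIFT detection-and-matching sketch with OpenCV (>= 4.4).
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches from {len(kp1)}/{len(kp2)} keypoints")
```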
This paper addresses the problem of segmenting an image into regions representing objects, where the boundary between two regions is defined using connected component labeling (CCL). An efficient segmentation algorithm based on this method is then developed and applied to different kinds of images. The algorithm consists of four steps: first, the image is converted to gray level; second, the gray-level image is converted to a binary image; third, edge detection is applied using the Canny edge detector; and the final step produces the segmented images. Best segmentation rates of 90% are obtained when using the developed algorithm, compared with 77% obtained using CCL before enhancement.
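A minimal sketch of the four-step pipeline described above, using OpenCV; the threshold and Canny parameters here are illustrative assumptions, not the paper's values:

```python
# Four-step segmentation sketch: gray level -> binary -> Canny edges ->
# connected component labeling (CCL). Parameters are illustrative.
import cv2

image = cv2.imread("input.png")                         # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)          # step 1: gray level
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # step 2: binary
edges = cv2.Canny(binary, 100, 200)                     # step 3: Canny edges
num_labels, labels = cv2.connectedComponents(edges)     # step 4: CCL
print(f"Segmented into {num_labels - 1} regions")       # label 0 is background
```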
In this paper, a new modification is proposed to enhance the security level of the Blowfish algorithm by increasing the difficulty of cracking the original message, making it safe against unauthorized attacks. The algorithm is a symmetric, variable-length-key, 64-bit block cipher and is implemented using grayscale images of different sizes. Instead of using a single key in the cipher operation, an additional one-byte key (KEY2) is used in the proposed algorithm; it takes effect in the Feistel function in the first round of both the encryption and decryption processes. In addition, the proposed modified Blowfish algorithm uses five S-boxes instead of four; the additional key (KEY2) is selected randomly from the additional S-box.
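The described modification is easiest to see in the Feistel function itself. Below is a structural sketch (not a complete Blowfish implementation, and not the authors' code) of how a one-byte KEY2 drawn from a fifth S-box might enter the first-round F function, assuming the S-boxes and subkeys have already been initialized:

```python
# Structural sketch of the modified Feistel F function: in round 1 an extra
# one-byte key (KEY2), drawn from a fifth S-box, perturbs the input word.
# S1..S4 are assumed to be pre-initialized 256-entry tables of 32-bit values.
MASK32 = 0xFFFFFFFF

def feistel_f(x, round_no, S1, S2, S3, S4, key2):
    if round_no == 0:                 # modification applies in the first round
        x ^= key2                     # KEY2 is a single byte (0..255)
    a, b = (x >> 24) & 0xFF, (x >> 16) & 0xFF
    c, d = (x >> 8) & 0xFF, x & 0xFF
    # Standard Blowfish combination of the four S-box lookups.
    return ((((S1[a] + S2[b]) & MASK32) ^ S3[c]) + S4[d]) & MASK32

# KEY2 selected randomly from the additional fifth S-box (sketch):
# import random; key2 = S5[random.randrange(256)] & 0xFF
```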
The use of remote sensing technologies has gained more attention due to an increasing need to collect data on environmental changes. Satellite image classification is a relatively recent remote sensing application that uses satellite imagery to indicate many key environmental characteristics. This study aims at classifying and extracting vacant lands from high-resolution satellite images of Baghdad city using the supervised classification tool in the ENVI 5.3 program. The classification accuracy was 15%, which can be regarded as fairly acceptable given the difficulty of differentiating vacant land surfaces from other surfaces such as the rooftops of buildings.
One of the serious environmental challenges that Iraq faces is climate change and the impacts of changing weather patterns and extreme global weather events. This paper addresses changes in the temporal and spatial characteristics of the water levels of Razzaza Lake and their response to climatic changes, using an archived series of multispectral Landsat satellite images (TM, ETM+, and OLI) acquired in 1990, 2000, and 2016. To extract and map the water surface area of Razzaza Lake, the multispectral band-ratio Normalized Difference Water Index (NDWI) technique was adopted, and climatic element data for the period 1990-2016 were analyzed, providing significant information on surface-water changes in the lake.
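For reference, NDWI is the band ratio (Green − NIR) / (Green + NIR), with positive values conventionally taken as water. A minimal numpy sketch follows, assuming the band arrays have already been read from the Landsat scenes:

```python
# NDWI water extraction sketch: NDWI = (Green - NIR) / (Green + NIR).
# `green` and `nir` are float arrays of the corresponding Landsat bands
# (e.g. TM/ETM+ bands 2 and 4, OLI bands 3 and 5).
import numpy as np

def ndwi_water_mask(green, nir, eps=1e-10):
    ndwi = (green - nir) / (green + nir + eps)  # eps avoids division by zero
    return ndwi > 0.0                           # positive NDWI -> water pixels

# Water surface area = water pixel count * pixel area (30 m Landsat pixels):
# area_m2 = ndwi_water_mask(green, nir).sum() * 30 * 30
```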
... Show More"Watermarking" is one method in which digital information is buried in a carrier signal;
the hidden information should be related to the carrier signal. There are many different types of
digital watermarking, including traditional watermarking that uses visible media (such as snaps,
images, or video), and a signal may be carrying many watermarks. Any signal that can tolerate
noise, such as audio, video, or picture data, can have a digital watermark implanted in it. A digital
watermark must be able to withstand changes that can be made to the carrier signal in order to
protect copyright information in media files. The goal of digital watermarking is to ensure the
integrity of data, whereas stegano
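As a toy illustration of embedding a watermark in image data, here is a minimal least-significant-bit (LSB) sketch; note that this simple scheme is fragile, whereas the robust watermarks discussed above typically rely on transform-domain embedding:

```python
# Toy LSB watermark sketch: hide a binary watermark in the lowest bit of a
# grayscale carrier image. Real robust watermarking uses transform domains.
import numpy as np

def embed_lsb(carrier, watermark_bits):
    """carrier: 2-D uint8 array; watermark_bits: 0/1 array of the same shape."""
    return (carrier & 0xFE) | watermark_bits   # replace the lowest bit

def extract_lsb(watermarked):
    return watermarked & 0x01                  # recover the hidden bits

# Example with random data:
# img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
# wm = np.random.randint(0, 2, (64, 64), dtype=np.uint8)
# assert (extract_lsb(embed_lsb(img, wm)) == wm).all()
```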