Significant advances in automated glaucoma detection have been made through the use of Machine Learning (ML) and Deep Learning (DL) methods, an overview of which is provided in this paper. What sets the current literature review apart is its exclusive focus on these techniques for glaucoma detection, with the selected papers filtered according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. To this end, an advanced search was conducted in the Scopus database for research papers published in 2023 with the keywords "glaucoma detection", "machine learning", and "deep learning". Among the papers found, those focusing on ML and DL techniques were selected. The best ML performance metrics recorded in the reviewed papers were for the SVM, which achieved accuracies of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, DRISHTI-GS, and sjchoi86-HRF databases, respectively, using the REFUGE-trained model; when deploying the ACRIMA-trained model, it attained accuracies of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% on the same databases, respectively. The best DL performance metrics recorded in the reviewed papers were for a lightweight CNN, with an accuracy of 99.67% on the Diabetic Retinopathy (DR) database and 96.5% on the Glaucoma (GL) database. In the context of non-healthy screening, a CNN achieved an accuracy of 99.03% when distinguishing between GL and DR cases. Finally, the best overall performance metrics were obtained using ensemble learning methods, which achieved an accuracy, specificity, and sensitivity of 100%. The current review offers valuable insights for clinicians and summarizes the recent ML and DL techniques used for glaucoma detection, including algorithms, databases, and evaluation criteria.
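The cross-database protocol summarized above (train an SVM on one fundus database, then report accuracy on the others) can be sketched as follows. This is a minimal illustration, not the reviewed papers' actual pipeline: the feature extraction and the database loader are hypothetical placeholders standing in for real fundus-image features.

```python
# Sketch of cross-database SVM evaluation: fit on one database, test on the rest.
# load_features() is a hypothetical stand-in for a real feature-extraction step.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def load_features(name):
    """Placeholder loader: return (features, labels) for a named database."""
    X = rng.normal(size=(100, 16))      # e.g. cup-to-disc ratio and texture features
    y = (X[:, 0] > 0).astype(int)       # 1 = glaucoma, 0 = healthy (synthetic labels)
    return X, y

train_db = "REFUGE"
test_dbs = ["ACRIMA", "RIM-ONE", "ORIGA-light", "DRISHTI-GS", "sjchoi86-HRF"]

X_train, y_train = load_features(train_db)
model = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

for db in test_dbs:
    X_test, y_test = load_features(db)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{train_db}-trained model on {db}: accuracy = {acc:.2%}")
```

With real extracted features in place of the synthetic arrays, the same loop reproduces the train-on-one, test-on-many evaluation the review reports.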
With the widespread use of computers and networks today, the number of security threats has increased. The study of intrusion detection systems (IDS) has received much attention throughout the computer science field. The main objective of this study is to examine the existing literature on various approaches to intrusion detection. This paper presents an overview of different intrusion detection systems and a detailed analysis of multiple techniques for these systems, including their advantages and disadvantages. These techniques include artificial neural networks, bio-inspired computing, evolutionary techniques, machine learning, and pattern recognition.
Pavement crack and pothole identification are important tasks in transportation maintenance and road safety. This study offers a novel technique for automatic asphalt pavement crack and pothole detection based on image processing. Different types of damage (transverse cracks, longitudinal cracks, alligator-type cracks, and potholes) can be identified with such techniques. The goal of this research is to evaluate road surface damage by extracting cracks and potholes, categorizing them from images and videos, and comparing the manual and automated methods. The proposed method was tested on 50 images. The results obtained from image processing showed that the proposed method can detect cracks and potholes and identify their severity levels wit
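One simple image-processing idea behind crack categorization is that crack pixels are darker than the surrounding asphalt, and the shape of the detected region hints at the crack type. The sketch below is illustrative only (the abstract does not specify the paper's actual algorithm): it thresholds dark pixels and uses the bounding box's aspect ratio to separate transverse from longitudinal cracks.

```python
# Illustrative crack classification: threshold dark pixels, then use the
# bounding-box aspect ratio as a coarse orientation cue. Not the paper's method.
import numpy as np

def classify_crack(gray, dark_thresh=60):
    """gray: 2-D uint8 array; returns a coarse crack label."""
    mask = gray < dark_thresh                 # crack pixels are darker than asphalt
    if not mask.any():
        return "no crack"
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    ratio = width / height
    if ratio > 3:
        return "transverse"                   # wide and short: runs across the lane
    if ratio < 1 / 3:
        return "longitudinal"                 # tall and narrow: runs along the lane
    return "alligator/pothole candidate"      # compact or blocky dark region

# Synthetic example: a horizontal dark line across a bright surface.
img = np.full((100, 100), 200, dtype=np.uint8)
img[50, 10:90] = 20                           # a transverse crack
print(classify_crack(img))                    # → transverse
```

Real pipelines would add denoising, morphological cleanup, and per-component analysis, but the thresholding-plus-shape idea is the same.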
Coronavirus disease became a major public health issue in 2019. Because it is transmitted through contact, it spreads rapidly. The use of a face mask is among the most efficient methods of preventing the transmission of the Covid-19 virus. Wearing a face mask alone can cut the chance of catching the virus by over 70%. Consequently, the World Health Organization (WHO) advised wearing masks in crowded places as a precautionary measure. Because of the incorrect use of facial masks, illnesses have spread rapidly in some locations. To solve this challenge, a reliable mask monitoring system is needed. Numerous government entities are attempting to make wearing a face mask mandatory; this process can be facilitated by using face m
Today’s medical imaging research faces the challenge of detecting brain tumors through Magnetic Resonance Imaging (MRI). MRI images are normally used by experts to produce images of the soft tissue of the human body, enabling the analysis of human organs without resorting to surgery. For brain tumor detection, image segmentation is required, for which the brain is partitioned into two distinct regions. This is considered one of the most important but difficult parts of the process of detecting a brain tumor. Hence, the MRI images must be segmented accurately before the computer can be asked to make an exact diagnosis. Earlier, a variety of algorithms were developed for the segmentation of MRI images by usin
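The two-region partitioning step described above can be sketched with a classical intensity threshold. Otsu's method is one common choice for this kind of split; the abstract does not name a specific algorithm, so the sketch below is an assumption-laden illustration, not the paper's pipeline.

```python
# Illustrative two-region MRI partition using Otsu's threshold:
# pick the intensity that maximizes between-class variance, then split.
import numpy as np

def otsu_threshold(image):
    """Return the intensity that maximizes between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic "slice": dark background with one bright region of interest.
slice_ = np.full((64, 64), 30, dtype=np.uint8)
slice_[20:40, 20:40] = 200
t = otsu_threshold(slice_)
mask = slice_ > t                             # region of interest vs background
print(t, mask.sum())
```

On real MRI slices this binary split is only a first step before tumor localization, but it shows what "partitioning the brain into two distinct regions" means computationally.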
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (MRI) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because MRI data are so diverse. An MRI data sequence comprises numerous images, and the attributes we are searching for may differ from one image in the series to another. Therefore, feature extraction becomes more complicated, and traditional image processing in particular becomes very complex. In traditional image processing, a human tells a computer what should be there, whereas a deep learning (DL) algorithm automatically extracts the features of what is already there. The surface changes become valuable when
Digital tampering identification, which detects picture modification, is a significant area of image analysis research. Over the last five years, this area has achieved exceptional precision by employing machine learning and deep learning-based strategies. Synthesis and reinforcement-based learning techniques must now evolve to keep pace with the research. However, before doing any experimentation, a scientist must first comprehend the current state of the art in the domain. Diverse paths, associated outcomes, and analysis lay the groundwork for successful experimentation and superior results. Before starting experiments, universal image forensics approaches must be thoroughly researched. As a result, this review of variou
Glaucoma is a visual disorder and one of the leading causes of visual impairment. It disrupts the transmission of visual information to the brain. Unlike other eye illnesses such as myopia and cataracts, the impact of glaucoma cannot be cured. The Disc Damage Likelihood Scale (DDLS) can be used to assess glaucoma. The proposed methodology suggests a simple method: extract the Neuroretinal Rim (NRM) region, divide the region into four sectors, calculate the width of each sector, and select the minimum value for use in the DDLS factor. The feature was fed to an SVM classification algorithm, and the DDLS successfully classified Glaucoma d
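The sector step above can be sketched as follows. This is a minimal illustration under stated assumptions: the angular rim-width profile is a hypothetical stand-in for widths measured from a segmented fundus image, and each sector's width is taken here as the mean of its samples (the abstract does not specify how the per-sector width is computed).

```python
# Sketch of the minimum-sector-width feature: split an angular profile of
# neuroretinal rim widths into four sectors and keep the thinnest sector's
# width for the DDLS factor. Inputs are synthetic, not real measurements.
import numpy as np

def min_sector_width(rim_widths):
    """rim_widths: rim width sampled at equal angles around the disc."""
    sectors = np.array_split(np.asarray(rim_widths, dtype=float), 4)
    sector_widths = [s.mean() for s in sectors]   # representative width per sector
    return min(sector_widths)

# 360 angular samples; the rim thins on one side, which sets the minimum.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
widths = 0.3 + 0.1 * np.cos(angles)               # hypothetical rim-width profile
print(min_sector_width(widths))
```

The returned minimum would then be normalized by disc diameter and used as the DDLS-related feature fed to the SVM.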
The current study aimed to review previous scholarly efforts to understand the concept of sustainable development, its practices, and its significance for public institutions. The study focuses on the dimensions of sustainable development—environmental, social, and economic—within public institutions. Sustainable development allows these institutions to balance environmental protection, economic growth, and social justice, ensuring the prosperity of both current and future generations. Furthermore, sustainable development is crucial for maintaining organizational performance. The review bridges knowledge gaps related to sustainable development and utilizes an analytical approach, surveying previous studies on the topic. The sele