Generally, radiologists analyse Magnetic Resonance Imaging (MRI) scans by visual inspection to detect and identify the presence of tumours or abnormal tissue in brain MR images. The huge number of such MR images makes this visual interpretation process not only laborious and expensive but often erroneous. Furthermore, the sensitivity of the human eye and brain in elucidating such images is reduced as the number of cases increases, especially when only a few slices contain information about the affected area. Therefore, an automated system for the analysis and classification of MR images is essential. In this paper, we propose a new method for abnormality detection from T1-weighted MRI of human head scans using three planes: the axial, coronal, and sagittal planes. Three different thresholds, based on the texture features mean, energy, and entropy, are obtained automatically. These allow each MRI slice to be accurately separated into normal and abnormal. However, the abnormality detection still assigned some normal blocks wrongly as abnormal and vice versa; this problem is overcome by applying a fine-tuning mechanism. Finally, slice-level abnormality detection is achieved by selecting the abnormal slices along with their tumour regions (Regions of Interest, ROIs).
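The block-wise texture features named in this abstract (mean, energy, and entropy) are standard quantities computed from the grey-level histogram. The sketch below shows one common way to compute them and apply per-feature thresholds; the threshold values, comparison directions, and voting rule are illustrative assumptions, not the paper's automatically derived procedure.

```python
import numpy as np

def texture_features(block):
    """Compute mean, energy, and entropy of a greyscale image block.

    Energy and entropy are taken from the normalised grey-level
    histogram, a common definition in texture analysis.
    """
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p_nz = p[p > 0]                      # avoid log(0)
    mean = float(block.mean())
    energy = float(np.sum(p_nz ** 2))
    entropy = float(-np.sum(p_nz * np.log2(p_nz)))
    return mean, energy, entropy

def classify_block(block, t_mean, t_energy, t_entropy):
    """Flag a block as abnormal when it crosses all three thresholds.

    The comparison directions and the all-three voting rule here are
    hypothetical; the paper derives its thresholds from the data.
    """
    m, e, h = texture_features(block)
    return m > t_mean and e < t_energy and h > t_entropy
```

In practice the three thresholds would be obtained automatically from the data, and a fine-tuning pass would then revisit misassigned blocks, as the abstract describes.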
Pavement crack and pothole identification are important tasks in transportation maintenance and road safety. This study offers a novel technique for automatic asphalt pavement crack and pothole detection based on image processing. Different types of defects (transverse cracks, longitudinal cracks, alligator cracking, and potholes) can be identified with such techniques. The goal of this research is to evaluate road-surface damage by extracting cracks and potholes from images and videos, categorizing them, and comparing the manual and automated methods. The proposed method was tested on 50 images. The results obtained from image processing showed that the proposed method can detect cracks and potholes and identify their severity levels with …
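A very reduced illustration of the image-processing idea (cracks appearing as connected runs of pixels markedly darker than their local background) is sketched below; the filter size, darkness offset, and minimum-size parameters are hypothetical, and the study's actual pipeline and severity grading are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def detect_cracks(gray, dark_offset=30, min_pixels=20):
    """Toy crack detector on a greyscale pavement image.

    Steps: estimate the local background with a mean filter, keep
    pixels markedly darker than it, then discard tiny components
    (noise). All parameter values are illustrative assumptions.
    """
    background = ndimage.uniform_filter(gray.astype(float), size=15)
    mask = gray < background - dark_offset          # markedly dark pixels
    labels, n = ndimage.label(mask)                 # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return np.isin(labels, keep)                    # boolean crack mask
```

Classifying a kept component as transverse, longitudinal, alligator-type, or a pothole would then follow from its shape (orientation, elongation, area), which is beyond this sketch.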
Generally, statistical methods are used in various fields of science, especially in research, where statistical analysis is carried out using several techniques according to the nature of the study and its objectives. One of these techniques is the building of statistical models, which is done through regression models. This technique is considered one of the most important statistical methods for studying the relationship between a dependent variable, also called the response variable, and the other variables, called covariates. This research describes the estimation of the partial linear regression model, as well as the estimation of values that are missing at random (MAR). Regarding the …
In recent years, there has been a significant increase in research demonstrating new and diverse uses of non-thermal food processing technologies, including more efficient mixing and blending, faster energy and mass transfer, lower-temperature and selective extraction, reduced thermal and concentration gradients, smaller equipment, faster response in extraction control, faster start-up, increased production, and fewer preparation and processing steps. Applications of ultrasound technology indicate that this technology has a promising and significant future in the food industry and in food preservation, and there is wide scope for its use owing to the higher purity of final products and the …
This research describes a new model, inspired by MobileNetV2, that was trained on a very diverse dataset. The goal is to enable fire detection in open areas, replacing physical sensor-based fire detectors and reducing false fire alarms, so as to achieve the lowest losses in open areas via deep learning. A diverse fire dataset was created that combines images and videos from several sources; in addition, a self-collected dataset was taken from the farms of the holy shrine of Al-Hussainiya in the city of Karbala. The model was then trained on the collected dataset, and its test accuracy on the fire dataset reached 98.87%.
The rapid increase in the number of older people with Alzheimer's disease (AD) and other forms of dementia represents one of the major challenges to health and social care systems. Early detection of AD makes it possible for patients to access appropriate services and to benefit from new treatments and therapies as and when they become available. The onset of AD starts many years before the clinical symptoms become clear, so a biomarker that can measure the brain changes in this period would be useful for early diagnosis. Potentially, the electroencephalogram (EEG) can play a valuable role in the early detection of AD. Damage in the brain due to AD leads to changes in the information-processing activity of the brain and in the EEG, which can …
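One EEG change widely reported in AD is spectral "slowing": a shift of power from faster rhythms (alpha, beta) towards slower ones (delta, theta). The abstract does not specify which EEG measure is used, so the sketch below shows only a generic relative-band-power computation of the kind such studies employ; the band limits are conventional choices, not values from this work.

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG frequency bands in Hz (illustrative boundaries).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_power(x, fs):
    """Relative EEG band power from a Welch power spectral density.

    Returns each band's share of the total 1-30 Hz power; increased
    relative delta/theta power is one marker of EEG slowing.
    """
    f, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    total = psd[(f >= 1) & (f < 30)].sum()
    rel = {}
    for name, (lo, hi) in BANDS.items():
        m = (f >= lo) & (f < hi)
        rel[name] = float(psd[m].sum() / total)
    return rel
```

Such band-power ratios are only one candidate feature; complexity and connectivity measures are also studied as EEG biomarkers of AD.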
KE Sharquie, SA Al-Mashhadani, AA Noaimi, AA Hasan, Journal of Cutaneous and Aesthetic Surgery, 2012.
Numerical simulations are carried out to evaluate the effect of the coherence concept on the performance of an optical system when observing and imaging a planet's surface. The coherence qualities of light sources play an important role in numerous optical approaches. This paper provides an overview of the mathematical formulation of the temporal and spatial coherence and incoherence properties of light sources. A circular aperture was used to describe an optical system such as a telescope. The simulation results show that diffraction-limited incoherent imaging certainly improves the image; yet the quality of the image is degraded by highly spatially and temporally coherent light sources, resulting in a …
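For a circular aperture, the diffraction-limited incoherent system is characterised by the Airy intensity pattern, whose first zero (at the first root of the Bessel function J1, x ≈ 3.8317) sets the Rayleigh resolution limit; a coherent system is instead linear in field amplitude, which is one reason the two cases image differently. A minimal sketch of the Airy intensity profile, assuming the standard normalised argument x = πD·sin(θ)/λ with D the aperture diameter:

```python
import numpy as np
from scipy.special import j1

def airy_intensity(x):
    """Diffraction-limited intensity PSF of a circular aperture (the
    Airy pattern), normalised to 1 at the centre.

    x = pi * D * sin(theta) / wavelength.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)            # limit of 2*J1(x)/x at x -> 0 is 1
    nz = x != 0.0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out if out.size > 1 else float(out[0])
```

An incoherent imaging simulation convolves the object's intensity with this PSF, whereas a coherent one convolves the field amplitude with 2·J1(x)/x before taking the squared modulus.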