Segmentation of real-world images is considered one of the most challenging tasks in computer vision due to several issues associated with such images: high interference between object foreground and background, complicated objects, and cases where the pixel intensities of the object and the background are almost identical. This research introduces a modified adaptive segmentation process combined with image contrast stretching, namely Gamma stretching, to improve segmentation. The iterative segmentation process, driven by the proposed criteria, gives the process the flexibility to find a suitable region of interest. In addition, Gamma stretching helps separate object pixels from background pixels by making dark intensities darker and light intensities lighter. The first 20 classes of the Caltech 101 dataset were used to demonstrate the performance of the proposed segmentation approach, and the Saliency Cut method was adopted as a benchmark. In summary, the proposed method alleviates several segmentation problems and outperforms the benchmark Saliency Cut method with a segmentation accuracy of 77.368%; it can also serve as a very useful step toward improving visual object categorization systems, since the region of interest is mostly available.
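The gamma-based contrast stretching the abstract describes can be sketched as below. This is a minimal illustration assuming an 8-bit grayscale input and a hand-picked gamma value; it is not the paper's exact parameterization:

```python
import numpy as np

def gamma_stretch(image, gamma=2.0):
    """Gamma-based contrast stretching of an 8-bit grayscale image.

    Intensities are normalised to [0, 1], raised to `gamma`, and
    rescaled to [0, 255]. With gamma > 1 dark pixels are pushed darker,
    widening the gap between dark objects and light backgrounds.
    """
    norm = image.astype(np.float64) / 255.0
    stretched = np.power(norm, gamma)
    return (stretched * 255.0).astype(np.uint8)

img = np.array([[0, 64, 128, 192, 255]], dtype=np.uint8)
print(gamma_stretch(img, 2.0))  # mid-tones pulled toward 0, extremes fixed
```

The extremes 0 and 255 are fixed points of the mapping, so only the mid-range separation changes, which is the effect the segmentation step relies on.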
The study presents a modification of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update (H-version) based on the determinant property of the inverse of the Hessian matrix (the second derivative of the objective function). It updates the vector s (the difference between the next solution and the current solution) so that the determinant of the next inverse Hessian matrix equals the determinant of the current inverse Hessian matrix at every iteration. Consequently, the sequence of inverse Hessian matrices generated by the method never approaches a near-singular matrix, so the program never breaks down before the minimum value of the objective function is obtained. Moreover, the new modification of the BFGS update (H-vers…
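For context, the textbook BFGS inverse-Hessian (H-version) update that the paper modifies can be sketched as follows. The determinant-preserving rescaling of s described in the abstract is deliberately not reproduced here; this is only the standard update:

```python
import numpy as np

def bfgs_h_update(H, s, y):
    """Textbook BFGS update of the inverse-Hessian approximation H.

    s = x_{k+1} - x_k (step), y = grad_{k+1} - grad_k (gradient change).
    The paper's modification additionally rescales s so that
    det(H_{k+1}) = det(H_k); that step is NOT implemented here.
    """
    rho = 1.0 / (y @ s)          # requires the curvature condition y.s > 0
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Every BFGS update satisfies the secant equation H_{k+1} y = s:
H = np.eye(2)
s = np.array([1.0, 0.5])
y = np.array([2.0, 1.5])
print(np.allclose(bfgs_h_update(H, s, y) @ y, s))  # True
```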
The use of wavelet tools for digital images is increasing nowadays with the MATLAB library. The method used here is based on invariant moments: a set of seven moments that can be derived from the second- and third-order moments. These are calculated after converting the image from a colour map to grayscale, rescaling the image to 512 × 512 pixels, and dividing it into four equal 256 × 256 pieces. For the 512 × 512 grayscale image and each of the four 256 × 256 pieces, the wavelet transform with the moments and invariant moments is calculated, and the results are stored with the author/owner of the image to build a database of original images, used to decide the authority of these images by u…
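The seven invariant moments mentioned above are Hu's moments, computed from the normalized central moments of the image. A self-contained sketch for a 2-D grayscale array (the wavelet step is omitted here):

```python
import numpy as np

def hu_moments(img):
    """Compute Hu's seven invariant moments of a 2-D grayscale array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(np.float64)
    m = lambda p, q: np.sum((x ** p) * (y ** q) * img)       # raw moments
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00                    # centroid
    mu = lambda p, q: np.sum(((x - xc) ** p) * ((y - yc) ** q) * img)
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)   # normalized
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Because the central moments are taken relative to the centroid, translating the same shape within the frame leaves all seven values unchanged, which is what makes them usable as an image signature.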
Alzheimer's disease (AD) increasingly affects the elderly and is a major killer of those aged 65 and over. Different deep-learning methods are used for automatic diagnosis, yet they have some limitations. Deep learning is one of the modern methods used to detect and classify medical images because of its ability to extract image features automatically. However, there are still limitations to using deep learning to classify medical images accurately, because extracting the fine edges of medical images is sometimes difficult and the images may contain some distortion. Therefore, this research aims to develop a Computer-Aided Brain Diagnosis (CABD) system that can tell whether a brain scan exhibits indications of…
We use Bayes estimators for the unknown scale parameter of the Erlang distribution when the shape parameter is known, assuming different informative priors for the unknown scale parameter. We derive the posterior density, together with the posterior mean and posterior variance, under different informative priors for the scale parameter: the inverse exponential distribution, the inverse chi-square distribution, the inverse Gamma distribution, and the standard Levy distribution. We also derive Bayes estimators based on the general entropy loss function (GELF), using simulation to obtain the results; we generate different cases of the Erlang model parameters for different sample sizes. The estimates have been comp…
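As one concrete case, with an inverse-Gamma prior the posterior for the Erlang scale parameter is available in closed form, and the GELF estimator follows directly. A sketch assuming illustrative hyperparameter names a, b and loss parameter c (not necessarily the paper's notation):

```python
import math

def erlang_ig_posterior(data, k, a, b):
    """Posterior of the Erlang scale theta with known shape k and an
    inverse-Gamma IG(a, b) prior.

    The likelihood is proportional to theta^(-n*k) * exp(-sum(x)/theta),
    so conjugacy gives the posterior IG(a + n*k, b + sum(x)).
    """
    n = len(data)
    return a + n * k, b + sum(data)

def gelf_estimator(alpha, beta, c):
    """Bayes estimator under the general entropy loss function:
    theta_hat = [E(theta^-c)]^(-1/c); for theta ~ IG(alpha, beta),
    E(theta^-c) = Gamma(alpha + c) / (Gamma(alpha) * beta^c).
    For c = -1 this reduces to the posterior mean beta / (alpha - 1)."""
    log_ratio = math.lgamma(alpha + c) - math.lgamma(alpha)
    return beta * math.exp(-log_ratio / c)

alpha, beta = erlang_ig_posterior([2.0, 3.0], k=2, a=3.0, b=4.0)
print(alpha, beta)                      # 7.0 9.0
print(gelf_estimator(alpha, beta, -1))  # 1.5, the posterior mean 9/6
```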
A common problem facing many application models is extracting and combining information from multiple heterogeneous sources and deriving information of a new quality or abstraction level. New approaches for managing the consistency, uncertainty, or quality of Arabic data and enabling efficient analysis of distributed, heterogeneous sources are still required. This paper presents a new method that combines two algorithms (partitioning and grouping) to transform information in a real-time heterogeneous Arabic database environment.
Research on the automated extraction of essential data from electrocardiography (ECG) recordings has been a significant topic for a long time. The main focus of digital signal processing here is to locate the fiducial points that determine the beginning and end of the P, QRS, and T waves based on their waveform properties. The presence of unavoidable noise during ECG data collection and inherent physiological differences among individuals make it challenging to identify these reference points accurately, resulting in suboptimal performance. This is done through several primary stages that rely on the idea of preliminary processing of the ECG electrical signal through a set of steps (preparing raw data and converting them into files tha…
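As a simplified illustration of the fiducial-point stage, the sketch below detects R peaks with an amplitude threshold and a refractory period. This is a naive stand-in, not the paper's pipeline; practical detectors such as Pan-Tompkins add band-pass filtering, differentiation, squaring, and adaptive thresholds, and the parameter names here are illustrative:

```python
import numpy as np

def detect_r_peaks(signal, fs, threshold_frac=0.6, refractory=0.2):
    """Naive R-peak detector: local maxima above a fraction of the
    global maximum, separated by at least `refractory` seconds."""
    thr = threshold_frac * np.max(signal)
    min_gap = int(refractory * fs)       # refractory period in samples
    peaks, last = [], -min_gap
    for i in range(1, len(signal) - 1):
        is_local_max = signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]
        if signal[i] > thr and is_local_max and i - last >= min_gap:
            peaks.append(i)
            last = i
    return peaks

# Synthetic example: three unit spikes in an otherwise flat signal.
sig = np.zeros(1000)
sig[[100, 350, 600]] = 1.0
print(detect_r_peaks(sig, fs=250))  # [100, 350, 600]
```

Once the R peaks are anchored, the onsets and offsets of the P, QRS, and T waves are typically searched for in windows positioned relative to each R peak.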