Segmentation of real-world images is considered one of the most challenging tasks in computer vision because of several issues associated with such images: high interference between object foreground and background, complicated objects, and cases where the pixel intensities of object and background are nearly identical. This research introduces a modified adaptive segmentation process combined with an image contrast stretching step, namely Gamma stretching, to alleviate these segmentation problems. The iterative segmentation process, driven by the proposed criteria, gives the procedure the flexibility to find a suitable region of interest. In addition, Gamma stretching helps separate object pixels from background pixels by making dark-intensity pixels darker and light-intensity pixels lighter. The first 20 classes of the Caltech 101 dataset were used to demonstrate the performance of the proposed segmentation approach, and the Saliency Cut method was adopted as a benchmark. In summary, the proposed method alleviates several segmentation problems and outperforms the benchmark Saliency Cut method with a segmentation accuracy of 77.368%; it can also serve as a very useful step in improving the performance of a visual object categorization system, since the region of interest is mostly available.
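To make the stretching step concrete, below is a minimal sketch of a piecewise power-law contrast stretch of the kind described, assuming 8-bit grayscale input; the `gamma_stretch` name and the split at mid-gray are illustrative choices, not the paper's exact transform.

```python
import numpy as np

def gamma_stretch(img: np.ndarray, gamma: float = 2.0) -> np.ndarray:
    """Push dark pixels darker and light pixels lighter around mid-gray.

    img   : 8-bit grayscale image (H, W), values in [0, 255].
    gamma : exponent > 1; larger values stretch the two halves more strongly.
    """
    x = img.astype(np.float64) / 255.0          # normalise to [0, 1]
    dark = x < 0.5
    out = np.empty_like(x)
    # Lower half: power-law with gamma > 1 pushes values below 0.5 darker.
    out[dark] = 0.5 * (2.0 * x[dark]) ** gamma
    # Upper half: mirrored power-law pushes values above 0.5 lighter.
    out[~dark] = 1.0 - 0.5 * (2.0 * (1.0 - x[~dark])) ** gamma
    return (out * 255.0).round().astype(np.uint8)
```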
Production sites suffer from idle marketing of their products because of the lack of efficient systems that analyze and track customers' evaluations of products; as a result, some products remain untargeted despite their good quality. This research aims to build a modest model that takes two aspects into consideration. The first is identifying dependable users on the site based on the number of products they have evaluated and each user's positive impact on ratings. The second is identifying products with low weights (unknown products) to be generated and recommended to users, based on a logarithm equation and the number of co-rated users. Collaborative filtering is one of the most widely used knowledge discovery techniques …
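The abstract names a logarithm equation over the number of co-rated users; one plausible reading is significance weighting of a user-user similarity, sketched below. The Pearson similarity and the log1p damping with a `cap` threshold are assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def pearson_sim(ru: dict, rv: dict) -> tuple[float, int]:
    """Pearson correlation over the items two users co-rated."""
    common = set(ru) & set(rv)
    n = len(common)
    if n < 2:
        return 0.0, n
    a = np.array([ru[i] for i in common], dtype=float)
    b = np.array([rv[i] for i in common], dtype=float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (float(a @ b) / denom if denom else 0.0), n

def weighted_sim(ru: dict, rv: dict, cap: int = 50) -> float:
    """Damp similarity when few items are co-rated (log significance weight).

    The log1p(n)/log1p(cap) damping factor is an illustrative choice,
    not necessarily the exact equation used in the paper.
    """
    sim, n = pearson_sim(ru, rv)
    return sim * min(1.0, np.log1p(n) / np.log1p(cap))
```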
Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes; it concerns celebrities and everyone else because such forgeries are easy to manufacture. Deepfakes are hard to recognize by people and by current approaches, especially high-quality ones. As a defense against Deepfake techniques, various methods to detect Deepfakes in images have been suggested. Most of them have limitations, such as working only with one face per image, or requiring the face to be forward-facing with both eyes and the mouth open, depending on which part of the face they work on. Beyond that, few focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect …
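As a rough illustration of the kind of pre-processing stage whose impact such a framework would measure, the sketch below localizes a single face, crops, resizes, and normalizes it. The Haar-cascade detector and the 224-pixel target size are illustrative stand-ins, not the paper's pipeline.

```python
import cv2
import numpy as np

# Illustrative pre-processing: locate one face, crop, resize, normalise.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(path: str, size: int = 224) -> np.ndarray | None:
    img = cv2.imread(path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                  # no face found
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest face
    crop = cv2.resize(img[y:y + h, x:x + w], (size, size))
    return crop.astype(np.float32) / 255.0           # scale pixels to [0, 1]
```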
Data hiding (steganography) is a method used for data security and for protecting data during transmission. Steganography hides the communication between two parties by embedding a secret message inside another cover (audio, text, image, or video). In this paper, a new text steganography method is proposed that is based on a parser and the ASCII codes of non-printed characters, hiding the secret information in an English cover text after coding the secret message and compressing it with a modified Run-Length Encoding (RLE) method. The proposed method achieves a high steganographic capacity ratio (five times the cover text length) compared with other methods, and provides 1.0 transparency by depending on some …
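A minimal sketch of the two building blocks named above follows: plain run-length encoding of the secret bit stream, and embedding each bit as one of two non-printing ASCII characters appended after successive cover words. The specific characters (0x1C/0x1D) and the word-based parsing are illustrative assumptions, not the paper's exact scheme.

```python
def rle_encode(bits: str) -> list[tuple[str, int]]:
    """Basic run-length encoding: '110001' -> [('1', 2), ('0', 3), ('1', 1)]."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

# Illustrative non-printing ASCII characters (file/group separators).
ZERO, ONE = "\x1c", "\x1d"

def embed(cover: str, bits: str) -> str:
    """Append one invisible character per secret bit after successive cover words.

    In the full scheme the payload would be the RLE-compressed message,
    serialised to bits before embedding (serialisation omitted here).
    """
    words = cover.split(" ")
    if len(bits) > len(words):
        raise ValueError("cover text too short for the payload")
    stego = [w + (ZERO if bits[i] == "0" else ONE) if i < len(bits) else w
             for i, w in enumerate(words)]
    return " ".join(stego)
```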
The primary objective of this paper is to improve a biometric authentication and classification model using the ear as a distinctive part of the face, since it is unchanged over time and unaffected by facial expressions. The proposed model is a new scenario for enhancing ear recognition accuracy by modifying the AdaBoost algorithm to optimize adaptive learning. To overcome the limitations of image illumination, occlusion, and image registration problems, the Scale-Invariant Feature Transform (SIFT) technique was used to extract features. Several consecutive phases were used to improve classification accuracy: image acquisition, preprocessing, filtering, smoothing, and feature extraction. To assess the proposed …
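The sketch below wires the named stages together in a plausible order: grayscale acquisition, smoothing, SIFT feature extraction, and boosted classification. The mean-descriptor pooling and scikit-learn's stock AdaBoostClassifier stand in for the paper's modified AdaBoost; `paths` and `labels` are placeholders for a labelled ear dataset.

```python
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

sift = cv2.SIFT_create()

def ear_features(path: str) -> np.ndarray:
    """128-D mean SIFT descriptor of one ear image (zeros if no keypoints)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)        # smoothing phase
    _, desc = sift.detectAndCompute(gray, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(128)

# `paths` and `labels` are placeholders for a labelled ear dataset.
X = np.stack([ear_features(p) for p in paths])
y = np.array(labels)
clf = AdaBoostClassifier(n_estimators=200).fit(X, y)
```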
The unstable and uncertain nature of natural rubber prices makes them highly volatile and prone to outliers, which can significantly affect both modeling and forecasting. To tackle this issue, the author recommends a hybrid model that combines the autoregressive (AR) and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The model uses the Huber weighting function so that the forecast rubber prices remain sustainable even in the presence of outliers. The study aims to develop a sustainable model and forecast daily prices over a 12-day horizon by analyzing 2683 daily price observations of Standard Malaysian Rubber Grade 20 (SMR 20) in Malaysia. The analysis incorporates two dispersion measurements …
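For concreteness, here is the standard Huber weighting function together with an iteratively reweighted least-squares fit of a robust AR(1) coefficient; the GARCH variance stage is omitted, and the tuning constant k = 1.345 is the conventional default rather than necessarily the paper's choice.

```python
import numpy as np

def huber_w(r: np.ndarray, k: float = 1.345) -> np.ndarray:
    """Huber weights: 1 inside [-k, k], k/|r| outside, down-weighting outliers."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def robust_ar1(y: np.ndarray, n_iter: int = 20) -> float:
    """Fit y[t] = phi * y[t-1] + e[t] by iteratively reweighted least squares."""
    x, z = y[:-1], y[1:]
    phi = (x @ z) / (x @ x)                             # OLS starting value
    for _ in range(n_iter):
        r = z - phi * x
        scale = np.median(np.abs(r)) / 0.6745           # robust MAD scale
        w = huber_w(r / scale)
        phi = (w * x @ z) / (w * x @ x)                 # weighted update
    return phi
```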
In this paper, the theoretical cross section of a pre-equilibrium nuclear reaction has been studied at an energy of 22.4 MeV. Ericson's formula for the partial level density (PLD) and its corrections (Williams' correction and the spin correction) have been substituted into the theoretical cross section and compared with the experimental data for the nucleus. It was found that the theoretical cross section based on the one-component PLD from Ericson's formula does not agree with the experimental values; there is little agreement only at the high end of the energy range. The theoretical cross section that depends on the one-component Williams formula and the one-component formula corrected for spin …
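For reference, Ericson's one-component partial level density, on which the comparison above rests, can be written as:

```latex
\rho(p,h,E) = \frac{g\,(gE)^{\,n-1}}{p!\,h!\,(n-1)!}, \qquad n = p + h
```

Here g is the single-particle level density, p and h are the particle and hole numbers, and E is the excitation energy; Williams' correction replaces gE by gE minus a Pauli-blocking term A(p,h).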
Because of the significance of image compression in reducing data volume, the need for compression is permanent: compressed data can be transferred more quickly over communication channels and stored in less memory. In this study, an efficient compression system is suggested; it relies on transform coding (the Discrete Cosine Transform or the bi-orthogonal tap-9/7 wavelet transform) together with the LZW compression technique. The suggested scheme was applied to color and gray models, and the transform coding was applied to decompose each color and gray sub-band individually. A quantization process is then performed, followed by LZW coding to compress the images. The suggested system was applied to a set of seven standard …
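A minimal sketch of the DCT-plus-LZW path described above, processing one gray sub-band at a time; the uniform quantization step size `q = 16` and the int16 packing are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.fft import dctn

def lzw_encode(data: bytes) -> list[int]:
    """Plain LZW over a byte stream, emitting integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for c in data:
        wc = w + bytes([c])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)      # grow the dictionary
            w = bytes([c])
    if w:
        out.append(table[w])
    return out

def compress_band(band: np.ndarray, q: float = 16.0) -> list[int]:
    """DCT -> uniform quantization -> LZW, for one gray/colour sub-band."""
    coeffs = dctn(band.astype(np.float64), norm="ortho")
    quant = np.round(coeffs / q).astype(np.int16)   # lossy quantization step
    return lzw_encode(quant.tobytes())
```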