Fractal image compression represents an image using affine transformations. The main concern for researchers in the field of fractal image compression (FIC) is to reduce the encoding time needed to compress image data. The basic premise is that each portion of an image is similar to other portions of the same image, and many models have been developed to exploit this property. Fractal self-similarity was first recognized and exploited for image encoding through the Iterated Function System (IFS). This paper reviews fractal image compression and its variants alongside other techniques. Contributions are summarized to assess the performance of fractal image compression, specifically for block indexing methods based on moment descriptors. The block indexing method classifies domain and range blocks using moments to generate an invariant descriptor, which reduces the long encoding time. A comparison is performed between block indexing and other fractal image compression techniques to show the importance of block indexing in reducing encoding time and achieving a better compression ratio while maintaining image quality on the Lena image.
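To make the block indexing idea concrete, the sketch below computes a moment-based descriptor for image blocks and matches a range block to the nearest domain block by descriptor distance. The two Hu-style invariants and all function names are illustrative assumptions, not the exact descriptor defined in the reviewed papers.

```python
import numpy as np

def block_moments(block):
    """Simple invariant descriptor from normalized central moments
    (illustrative choice, not the papers' exact descriptor)."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = block.sum() + 1e-12
    xc, yc = (x * block).sum() / m00, (y * block).sum() / m00
    mu = lambda p, q: (((x - xc) ** p) * ((y - yc) ** q) * block).sum()
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)  # scale-normalized
    return np.array([eta(2, 0) + eta(0, 2),                              # first Hu-style invariant
                     (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2])  # second

def index_blocks(blocks):
    """Precompute descriptors so matching compares short vectors
    instead of performing a full pairwise block search."""
    return np.array([block_moments(b) for b in blocks])

def best_domain(range_desc, domain_descs):
    """Index of the domain block whose descriptor is nearest."""
    return int(np.argmin(np.linalg.norm(domain_descs - range_desc, axis=1)))

# Example: index random 8x8 domain blocks and match one range block.
rng = np.random.default_rng(0)
descs = index_blocks([rng.random((8, 8)) for _ in range(16)])
print(best_domain(block_moments(rng.random((8, 8))), descs))
```

Because the descriptor is invariant to the block transformations used during matching, only domain blocks with similar descriptors need to be compared in full, which is where the encoding-time saving comes from.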
In computer-based applications, there is a need for simple, low-cost devices for user authentication. Biometric authentication methods such as keystroke dynamics are increasingly used to strengthen the common knowledge-based method (e.g., a password) effectively and cheaply in many types of applications. Because typing behavior is semi-autonomous in nature, it is difficult to masquerade, which makes it useful as a biometric. In this paper, the C4.5 approach is used to classify a user as an authenticated user or an impostor by combining unigraph features (namely Dwell Time (DT) and Flight Time (FT)) and digraph features (namely Up-Up Time (UUT) and Down-Down Time (DDT)). The results show that DT enhances the performance of the digraph features by i…
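As a concrete illustration of the four timing features, the sketch below derives them from raw key events. The definitions follow the usual keystroke-dynamics conventions (DT is release minus press of a key; FT is press of the next key minus release of the previous one; UUT and DDT are release-to-release and press-to-press intervals); the event tuple format is an assumption of this sketch.

```python
def keystroke_features(events):
    """Derive unigraph (DT, FT) and digraph (UUT, DDT) timings from a
    typing sample. `events` is a list of (key, press_t, release_t)
    tuples in seconds; this input format is an assumption."""
    dt  = [r - p for _, p, r in events]                                      # Dwell Time
    ft  = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]  # Flight Time
    uut = [events[i + 1][2] - events[i][2] for i in range(len(events) - 1)]  # Up-Up Time
    ddt = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]  # Down-Down Time
    return dt + ft + uut + ddt  # one flat feature vector per sample

# Example: the word "hi" typed with press/release timestamps in seconds.
sample = [("h", 0.00, 0.09), ("i", 0.15, 0.22)]
print(keystroke_features(sample))
```

Feature vectors of this form, gathered over many typing samples, are what a C4.5 decision tree would then be trained on to separate genuine users from impostors.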
XML is being incorporated into the foundation of e-business data applications. This paper addresses the problem of the freeform information stored in any organization and shows how XML, using this new approach, makes searching very efficient and time-saving. The paper introduces a new solution and methodology developed to capture and manage such unstructured freeform information (multi-information), relying on XML schema technologies, neural network concepts, and an object-oriented relational database, in order to provide a practical solution for efficiently managing a multi-freeform information system.
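To illustrate the XML schema component of such a system, the sketch below validates a freeform record against a minimal XML Schema using lxml. The schema, element names, and record content are hypothetical placeholders; the paper's actual schema design is not reproduced here.

```python
from lxml import etree

# Hypothetical minimal schema for one freeform record; the element
# names are illustrative, not taken from the paper.
XSD = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="record">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="body"  type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

schema = etree.XMLSchema(etree.fromstring(XSD))
doc = etree.fromstring(b"<record><title>Memo</title><body>free text</body></record>")
print(schema.validate(doc))  # True: the freeform record conforms to the schema
```

Schema validation of this kind is what lets otherwise unstructured text be stored and queried with a predictable structure.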
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that impairs speech, social interaction, and behavior. Machine learning is a field of artificial intelligence that focuses on creating algorithms that can learn patterns and classify ASD from input data. The results of using machine learning algorithms to categorize ASD have been inconsistent, and more research is needed to improve classification accuracy. To address this, deep learning, such as a 1D CNN, has been proposed as an alternative for ASD detection. The proposed techniques are evaluated on three publicly available ASD datasets (children, adults, and adolescents). Results strongly suggest that 1D…
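A minimal sketch of a 1D CNN classifier of the kind the abstract names is shown below in PyTorch. The layer sizes, feature count, and overall architecture are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class ASD1DCNN(nn.Module):
    """Minimal 1D CNN sketch for binary ASD classification; all layer
    sizes are illustrative assumptions."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # learn local patterns across features
            nn.ReLU(),
            nn.MaxPool1d(2),                             # halve the sequence length
            nn.Flatten(),
            nn.Linear(16 * (n_features // 2), 2),        # ASD / non-ASD logits
        )

    def forward(self, x):  # x: (batch, 1, n_features)
        return self.net(x)

model = ASD1DCNN(n_features=20)
logits = model(torch.randn(4, 1, 20))  # batch of 4 screening records
print(logits.shape)                    # torch.Size([4, 2])
```

Treating each screening record as a one-dimensional signal is what lets convolutions pick up local feature interactions that flat classifiers may miss.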
The present work aims to study the efficiency of using aluminum refuse, which is available locally (after dissolving it in sodium hydroxide), with different coagulants such as alum [Al2(SO4)3.18H2O], ferric chloride (FeCl3), and polyaluminum chloride (PACl) to improve water quality. The results showed that using this coagulant in the flocculation process achieved high turbidity removal and improved water quality by precipitating a great deal of the ions causing hardness. From the experimental results of the jar test, the optimum alum dosages were (25, 50, and 70 ppm), the ferric chloride dosages were (15, 40, and 60 ppm), and the polyaluminum chloride dosages were (10, 35, and 55 ppm) for initial water turbidities of (100, 500, an…
The aerodynamic characteristics of general three-dimensional rectangular wings are considered using a non-linear interaction between a two-dimensional viscous-inviscid panel method and the vortex ring method. The potential flow about a two-dimensional airfoil, computed with the pioneering Hess & Smith method, was combined with laminar, transitional, and turbulent boundary-layer models to solve the flow about complex airfoil configurations, including stall effects. The Viterna method was used to extend the aerodynamic characteristics of the specified airfoil to high angles of attack. A modified vortex ring method was used to find the circulation values along the spanwise direction of the wing, which were then interacted with the sectional circulation obtained by the Kutta-Joukowsky theorem of…
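For reference, the Kutta-Joukowsky theorem the abstract invokes relates sectional lift to circulation; a minimal sketch of that relation is below. The numeric values are illustrative, and the paper's viscous-inviscid coupling loop is not reproduced.

```python
RHO = 1.225  # air density at sea level, kg/m^3 (standard value)

def sectional_lift(v_inf, circulation):
    """Kutta-Joukowsky theorem: lift per unit span L' = rho * V * Gamma.
    Here it relates the spanwise circulation from the vortex ring method
    to sectional lift; the coupling iteration itself is not shown."""
    return RHO * v_inf * circulation

print(sectional_lift(v_inf=30.0, circulation=2.5))  # N/m for Gamma = 2.5 m^2/s
```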
The ability to use aluminum filings, a locally available solid waste, was tested as a mono-media in a gravity rapid filter. The present study evaluated the effect of varying influent water turbidity (10, 20, and 30 NTU), flow rate (30, 40, and 60 l/hr), and bed height (30 and 60 cm) on the performance of the aluminum filings filter media over 5-hour run times, and compared it with a conventional sand filter. The results indicated that the aluminum filings filter performed better than the sand filter in turbidity removal and in reducing head loss. Results showed that the statistical model developed by multiple linear regression was proved to be valid, and it could be used to predict head loss in aluminum filings…
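A minimal sketch of such a multiple-linear-regression head-loss model is below, using scikit-learn. Only the three predictor variables (turbidity, flow rate, bed height) come from the abstract; the training rows and head-loss values are made-up placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder filter-run data: columns are turbidity (NTU),
# flow rate (l/hr), bed height (cm). Values are illustrative only.
X = np.array([[10, 30, 30],
              [20, 40, 30],
              [30, 60, 60],
              [20, 60, 60]])
y = np.array([2.1, 3.0, 4.8, 4.2])  # head loss, placeholder units

model = LinearRegression().fit(X, y)
print(model.predict([[15, 40, 30]]))  # predicted head loss for new conditions
```

Fitted on real run data, the model's coefficients quantify how strongly each operating variable drives head loss, which is what the validity check in the study assesses.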
Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to English, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of work by researchers over the last five years based on the dataset, year, algorithms, and the accuracy th…
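To make the three phases concrete, the sketch below chains them in a scikit-learn pipeline: TF-IDF covers tokenization and feature extraction, and a naive Bayes classifier handles the classification phase. The two-document Arabic corpus and its labels are illustrative assumptions, and a real system would add Arabic-specific preprocessing such as normalization and stemming.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: one sports and one economy document.
docs   = ["خبر رياضي عن كرة القدم", "مقال اقتصادي عن الأسواق"]
labels = ["sports", "economy"]

clf = make_pipeline(
    TfidfVectorizer(),   # preprocessing + feature extraction (tokens -> TF-IDF)
    MultinomialNB(),     # classification phase
).fit(docs, labels)

# Classify a new document that shares terms with the economy text.
print(clf.predict(["مقال عن الأسواق"]))  # expected: ['economy']
```

Surveys of Arabic text categorization typically compare exactly these interchangeable pieces: the feature representation, the classifier, and the dataset each study was evaluated on.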