Image compression plays an important role in reducing data size and storage while significantly increasing the speed of transmission over the Internet. It has been an important research topic for several decades, and with the great success deep learning has achieved in many areas of image processing, its use in image compression is growing steadily. Deep neural networks have also achieved great success in processing and compressing images of various sizes. In this paper, we present an image compression structure based on a deep Convolutional AutoEncoder (CAE), inspired by the way the human eye perceives the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is implemented on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder-based compression and performs better in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high compression bit rates with a low Mean Squared Error (MSE). The recorded compression ratios ranged from 0.7117 to 0.8707 for the Kodak dataset and from 0.7191 to 0.9930 for the CLIC dataset, while the error coefficient dropped from 0.0126 to 0.0003, making the system more accurate than comparable autoencoder-based deep learning methods.
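As a concrete illustration of the two components named above, the following is a minimal sketch of a convolutional autoencoder combined with K-means color clustering, assuming Keras and scikit-learn. The layer sizes, latent depth, and cluster count k are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of a convolutional autoencoder (CAE) plus K-means color
# clustering. Layer sizes and k=16 clusters are illustrative assumptions,
# not the paper's exact configuration.
import numpy as np
from sklearn.cluster import KMeans
from tensorflow import keras
from tensorflow.keras import layers

def build_cae(input_shape=(128, 128, 3)):
    inp = keras.Input(shape=input_shape)
    # Encoder: downsample to a compact latent representation.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    latent = layers.Conv2D(8, 3, padding="same", activation="relu")(x)
    # Decoder: reconstruct the image from the latent code.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(latent)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")  # MSE matches the quality metric used
    return model

def quantize_colors(image, k=16):
    """Cluster pixel colors with K-means and map each pixel to its centroid."""
    pixels = image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    return km.cluster_centers_[km.labels_].reshape(image.shape)
```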
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Twitter content consists of short, unstructured, and messy text, which makes it difficult to find topics in tweets. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, so they are often less effective when applied to short text content like Twitter. Fortunately, Twitter has many features that capture the interaction between users, and tweets carry rich user-generated hashtags that serve as keywords. In this paper, we exploit the hashtag feature to improve the topics learned.
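One common way to exploit hashtags with LDA is hashtag pooling: tweets that share a hashtag are merged into one pseudo-document so the topic model sees longer texts. The sketch below, using gensim, illustrates this idea; the pooling scheme and the toy tweets are assumptions for illustration, not necessarily the authors' exact method.

```python
# Minimal sketch of hashtag pooling for LDA on tweets: tweets sharing a
# hashtag are merged into one pseudo-document so LDA sees longer texts.
# The pooling scheme is an illustrative assumption, not the paper's exact method.
import re
from collections import defaultdict
from gensim import corpora, models

def pool_by_hashtag(tweets):
    pools = defaultdict(list)
    for tweet in tweets:
        tags = re.findall(r"#(\w+)", tweet.lower())
        words = [w for w in re.findall(r"[a-z']+", tweet.lower()) if len(w) > 2]
        for tag in tags or ["_untagged"]:  # untagged tweets share one pool
            pools[tag].extend(words)
    return list(pools.values())

tweets = [
    "Great game tonight! #volleyball #sports",
    "Training drills for passing #volleyball",
    "New deep learning compression results #ml",
]
docs = pool_by_hashtag(tweets)
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
print(lda.print_topics())
```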
Cryptography is a method used to mask text using an encryption method so that only the authorized user can decrypt and read the message. An intruder may attack the communication channel in many ways, such as impersonation, repudiation, denial of service, modification of data, breaching confidentiality, and breaking the availability of services. The high volume of electronic communication between people requires that transactions remain confidential, and cryptographic methods give the best solution to this problem. This paper proposes a new cryptography method based on Arabic words; the method is carried out in two steps, where the first step is binary encoding generation, used t…
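The excerpt is cut off before the method is detailed, but the first step it names, generating a binary encoding of the message, can be sketched minimally as follows; the UTF-8-based encoding here is an assumption, since the paper's actual encoding table is not shown.

```python
# Minimal sketch of the first step as described: generating a binary
# encoding of an Arabic message. Using UTF-8 bytes is an illustrative
# assumption; the paper's actual encoding table is not shown in the excerpt.
def to_binary(message: str) -> str:
    return " ".join(f"{byte:08b}" for byte in message.encode("utf-8"))

def from_binary(bits: str) -> str:
    data = bytes(int(b, 2) for b in bits.split())
    return data.decode("utf-8")

cipher_input = to_binary("سلام")   # "peace" in Arabic
print(cipher_input)
print(from_binary(cipher_input))
```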
The quality of Global Navigation Satellite System (GNSS) networks is considerably influenced by the configuration of the observed baselines. This study aims to find an optimal configuration for GNSS baselines in terms of the number and distribution of baselines to improve the quality criteria of GNSS networks. The first-order design (FOD) problem was applied in this research to optimize the GNSS network baseline configuration, relying on the sequential adjustment method to solve its objective functions.
FOD for optimum precision (FOD-p) was the proposed model, based on the design criteria of A-optimality and E-optimality. These design criteria were selected as objective functions of precision, whic…
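Both criteria are evaluated on the parameter covariance of the least-squares adjustment: A-optimality minimizes its trace (the average parameter variance) and E-optimality minimizes its largest eigenvalue (the worst-direction variance). A minimal sketch, assuming an illustrative design matrix A and weight matrix P:

```python
# Minimal sketch of the A- and E-optimality criteria for a GNSS network
# design: both are evaluated on the parameter cofactor matrix Qx = (A^T P A)^-1.
# The design matrix A and weight matrix P below are illustrative assumptions.
import numpy as np

def precision_criteria(A, P):
    Qx = np.linalg.inv(A.T @ P @ A)          # parameter cofactor matrix
    a_opt = np.trace(Qx)                      # A-optimality: average variance
    e_opt = np.max(np.linalg.eigvalsh(Qx))    # E-optimality: worst-direction variance
    return a_opt, e_opt

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 4))              # 12 baseline observations, 4 parameters
P = np.eye(12)                                # equal weights for illustration
print(precision_criteria(A, P))
```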
Volleyball is one of the sports that requires physical and skill-related abilities; thus, many teaching models have appeared to teach these abilities, such as the group investigation model. The research aimed at identifying the effect of the group investigation model on learning underarm and overhead passing in volleyball. The researchers hypothesized statistically significant differences between pre- and post-tests in learning underarm and overhead passing in volleyball, as well as differences in the post-tests of the control and experimental groups. The researchers used the experimental method on (30) second-year female students of the College of Physical Education and Sport Sciences / University of Baghdad. The group investigation model was app…
For over a decade, educational technology has been used sparingly in our schools and universities. Online training courses have been used since 2003 to fill the gaps in our learning system and to add extra programs alongside classroom learning. This paper aims to investigate Iraqi EFL instructors' participation in online training courses and its influence on the teaching and learning process.
The sample of the present study consists of 30 instructors from the University of Baghdad. A questionnaire of sixteen items was constructed. After ensuring the validity and reliability of the questionnaire, it was administered in March 2013, and the results show that most instructors improved their teaching methods b…
Correct grading of apple slices can help ensure quality and improve the marketability of the final product, which can impact the overall development of the apple slice industry post-harvest. The study employs the convolutional neural network (CNN) architectures ResNet-18 and DenseNet-201 and classical machine learning (ML) classifiers such as Wide Neural Networks (WNN), Naïve Bayes (NB), and two kernels of support vector machines (SVM) to classify apple slices into different hardness classes based on their RGB values. Our results showed that DenseNet-201 features classified by the SVM-Cubic kernel achieved the highest accuracy and lowest standard deviation (SD) among all the methods tested, at 89.51 % ± 1.66 %. This…
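A minimal sketch of this hybrid pipeline, assuming PyTorch/torchvision and scikit-learn: DenseNet-201 serves as a fixed feature extractor, and a polynomial-kernel SVM of degree 3 plays the role of the "SVM-Cubic" classifier. The preprocessing values are standard ImageNet defaults, and the data loading is stubbed out.

```python
# Minimal sketch of the hybrid pipeline named above: DenseNet-201 features
# fed to a cubic-kernel SVM. Dataset loading is stubbed; image size and
# normalization values are standard ImageNet assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

densenet = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
densenet.classifier = torch.nn.Identity()  # keep the 1920-dim feature vector
densenet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return densenet(batch).numpy()

# Cubic SVM = polynomial kernel of degree 3 (matching "SVM-Cubic").
svm = SVC(kernel="poly", degree=3)
# svm.fit(extract_features(train_images), train_labels)
# preds = svm.predict(extract_features(test_images))
```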
Recent years have seen an explosion in graph data from a variety of scientific, social, and technological fields. Within these fields, emotion recognition is an interesting research area because it finds many real-life applications, such as social robotics that increase the robot's interactivity with humans, driver safety monitoring, and pain monitoring during surgery. A novel facial emotion recognition approach based on graph mining is proposed in this paper to shift the paradigm of representing the face region: the face region is represented as a graph of nodes and edges, and the gSpan frequent sub-graph mining algorithm is used to find the frequent sub-structures in the graph database of each emotion. T…
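The graph-construction step can be sketched as follows, assuming facial landmarks as nodes joined by edges between nearby points; the landmark coordinates and distance threshold are made-up illustrations, and since gSpan is not available in standard libraries, only the representation that would feed it is shown.

```python
# Minimal sketch of the graph representation step: facial landmarks become
# nodes, and nearby landmarks are joined by edges. Landmark coordinates here
# are made up; gSpan itself is not in standard libraries, so only the graph
# construction that would feed it is shown.
import itertools
import networkx as nx

def face_graph(landmarks, max_dist=0.3):
    g = nx.Graph()
    for i, (x, y) in enumerate(landmarks):
        g.add_node(i, pos=(x, y))
    for i, j in itertools.combinations(range(len(landmarks)), 2):
        (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist:
            g.add_edge(i, j)
    return g

# Illustrative landmarks (normalized eye corners, nose tip, mouth corners).
landmarks = [(0.3, 0.4), (0.7, 0.4), (0.5, 0.6), (0.35, 0.8), (0.65, 0.8)]
g = face_graph(landmarks)
print(g.number_of_nodes(), g.number_of_edges())
```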
Abstract
This study investigates the mechanical compression properties of tin-lead and lead-free alloy spherical balls, using more than 500 samples to identify the statistical variability of the properties in each alloy. Isothermal aging was performed to study and compare the effect of aging on the microstructure and properties.
The results showed significant elastic and plastic anisotropy of the tin phase in lead-free tin-based solder, which was compared with simulations using a Crystal Plasticity Finite Element (CPFE) method that incorporates the anisotropy of Sn. The simulations and experiments were in good agreement, indicating the range of values expected with anisotropic properties.
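To illustrate numerically what elastic anisotropy of the tin phase means, the sketch below inverts a tetragonal stiffness matrix built from approximate literature constants for beta-Sn (treated here as assumptions, not the study's data) and compares the Young's modulus along the a- and c-axes.

```python
# Illustrative check of the elastic anisotropy of beta-Sn. The stiffness
# constants (GPa) are approximate literature values for tetragonal tin and
# are assumptions for this sketch, not data from the study.
import numpy as np

C11, C12, C13, C33, C44, C66 = 72.3, 59.4, 35.8, 88.4, 22.0, 24.0
C = np.array([
    [C11, C12, C13, 0,   0,   0  ],
    [C12, C11, C13, 0,   0,   0  ],
    [C13, C13, C33, 0,   0,   0  ],
    [0,   0,   0,   C44, 0,   0  ],
    [0,   0,   0,   0,   C44, 0  ],
    [0,   0,   0,   0,   0,   C66],
])
S = np.linalg.inv(C)  # compliance matrix (1/GPa)
# Directional Young's moduli along the crystal axes differ markedly,
# which is the elastic anisotropy the CPFE simulations must capture.
print(f"E[100] = {1 / S[0, 0]:.1f} GPa, E[001] = {1 / S[2, 2]:.1f} GPa")
```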