Image compression plays an important role in reducing the storage size of data while significantly increasing the speed of its transmission over the Internet. Image compression has been an important research topic for several decades, and with the recent successes of deep learning in many areas of image processing, its use in image compression is growing steadily. Deep neural networks have also achieved great success in processing and compressing images of different sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is trained and evaluated on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that our proposed method is superior to traditional autoencoder-based compression and performs better in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high compression bit rates with a low Mean Squared Error (MSE). The highest compression ratios ranged between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset.
The system also achieved high accuracy and quality in terms of the error coefficient, which dropped from 0.0126 to 0.0003, making it the most accurate among the autoencoder-based deep learning methods compared.
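The K-means color-clustering stage of such a pipeline can be sketched as follows. This is a minimal illustration of color quantization, not the authors' implementation; the function name `kmeans_color_quantize` and all parameter values are hypothetical:

```python
import numpy as np

def kmeans_color_quantize(image, k=16, iters=10, seed=0):
    """Cluster pixel colors with K-means and map each pixel to its
    nearest cluster centroid (a simple palette-based compression step)."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
    # Initialise centroids from k distinct random pixels.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    quantized = centroids[labels].reshape(image.shape)
    return quantized.astype(image.dtype), labels, centroids

# Example: quantize a random 32x32 RGB image to an 8-color palette.
img = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
out, labels, palette = kmeans_color_quantize(img, k=8)
```

Storing the small palette plus one label per pixel, rather than three full color channels, is what yields the compression; in the paper this stage is combined with a CAE rather than used alone.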
The current issues in spam email detection systems stem from the low accuracy of spam email classification and the high dimensionality of feature selection. In machine learning (ML), feature selection (FS) as a global optimization strategy reduces data redundancy and produces a collection of precise and acceptable outcomes. In this paper, a black hole algorithm-based FS method is proposed for reducing the dimensionality of features and improving the accuracy of spam email classification. Each star's features are represented in binary form, with the features transformed to binary using a sigmoid function. The proposed Binary Black Hole Algorithm (BBH) searches the feature space for the best feature subsets,
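The sigmoid-based binarization step described above can be sketched as follows. This is a generic transfer-function sketch, not the paper's code; the names `binarize_star` and `sigmoid` are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binarize_star(position, rng):
    """Map a star's continuous position vector to a binary feature mask:
    each feature is selected with probability given by the sigmoid of
    its coordinate, so larger coordinates favor keeping the feature."""
    probs = sigmoid(position)
    return (rng.random(position.shape) < probs).astype(int)

rng = np.random.default_rng(0)
star = rng.normal(size=10)        # continuous position in feature space
mask = binarize_star(star, rng)   # 1 = feature kept, 0 = feature dropped
```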
In the last few years, the Internet of Things (IoT) has gained remarkable attention in both the academic and industrial worlds. The main goal of the IoT lies in describing everyday objects with different capabilities, interconnected through the Internet, so that they can share resources and carry out assigned tasks. Most IoT objects are heterogeneous in terms of energy, processing ability, memory storage, etc. However, one of the most important challenges facing IoT networks is energy-efficient task allocation. An efficient task allocation protocol in an IoT network should ensure the fair and efficient distribution of resources so that all objects can collaborate dynamically with limited energy. The canonical de
In this paper we propose the philosophy of Darwinian selection as a synthesis method, the Genetic Algorithm (GA), and introduce a new merit function with a simpler form than those used in other works, for designing one kind of multilayer optical filter: the high-reflection mirror. Here we intend to investigate solutions to many practical problems. This work presents a designed high-reflection mirror with good performance and a reduced number of layers, which enables one to control the effect of layer-thickness errors on the final product; by controlling the length of the chromosome and choosing optimal genetic operators, such a solution can be obtained in a very short time. Res
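A generic GA skeleton of the kind described can be sketched as follows. The merit function here is a hypothetical stand-in (rewarding layer thicknesses near an arbitrary target), not the paper's reflectance-based merit; all names and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def merit(layers):
    """Placeholder merit function: the paper scores the mirror's
    reflectance; here we simply reward thicknesses close to an
    arbitrary target value (hypothetical stand-in)."""
    target = 0.25
    return -np.sum((layers - target) ** 2)

def genetic_algorithm(n_layers=8, pop_size=30, generations=100,
                      mutation_rate=0.1):
    # Each chromosome encodes one candidate stack of layer thicknesses.
    pop = rng.uniform(0.0, 1.0, size=(pop_size, n_layers))
    for _ in range(generations):
        scores = np.array([merit(ind) for ind in pop])
        # Selection: keep the better half (truncation selection).
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: single-point crossover between random parent pairs.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.choice(len(parents), 2, replace=False)]
            cut = rng.integers(1, n_layers)
            children.append(np.concatenate([a[:cut], b[cut:]]))
        pop = np.vstack([parents, children])
        # Mutation: small Gaussian perturbations on a fraction of genes.
        mut = rng.random(pop.shape) < mutation_rate
        pop = np.clip(pop + mut * rng.normal(0, 0.05, pop.shape), 0, 1)
    scores = np.array([merit(ind) for ind in pop])
    return pop[scores.argmax()]

best = genetic_algorithm()
```

Shortening the chromosome (fewer layers) shrinks the search space, which is one way the design time mentioned in the abstract can be reduced.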
Nonlinear regression models are important tools for solving optimization problems, since traditional techniques often fail to reach satisfactory solutions for the parameter estimation problem. Hence, in this paper, the Bat algorithm is used to estimate the parameters of nonlinear regression models. A simulation study is conducted to compare the performance of the proposed algorithm with the maximum likelihood (MLE) and least squares (LS) methods. The results show that, based on mean squared error, the Bat algorithm provides accurate estimates and is more satisfactory for the parameter estimation of nonlinear regression models than the MLE and LS methods.
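A minimal sketch of bat-algorithm parameter estimation is shown below, fitting a hypothetical exponential model y = a·exp(b·x) by minimizing the MSE. The model, data, and all parameter values are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data from a hypothetical nonlinear model y = a * exp(b * x).
true_a, true_b = 2.0, 0.5
x = np.linspace(0, 2, 40)
y = true_a * np.exp(true_b * x) + rng.normal(0, 0.05, x.size)

def mse(params):
    a, b = params
    return np.mean((y - a * np.exp(b * x)) ** 2)

def bat_algorithm(obj, bounds, n_bats=30, iters=300,
                  f_min=0.0, f_max=2.0, loudness=0.5, pulse_rate=0.5):
    """Minimal bat algorithm: each bat carries a position and velocity,
    draws an emission frequency each step, and occasionally performs a
    local random walk around the current global best."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_bats, len(lo)))
    vel = np.zeros_like(pos)
    fitness = np.array([obj(p) for p in pos])
    best = pos[fitness.argmin()].copy()
    for _ in range(iters):
        freq = rng.uniform(f_min, f_max, size=(n_bats, 1))
        vel += (best - pos) * freq          # pull bats toward the best
        cand = np.clip(pos + vel, lo, hi)
        for i in range(n_bats):
            if rng.random() > pulse_rate:
                # Local search: small walk around the global best.
                cand[i] = np.clip(best + 0.05 * rng.normal(size=best.shape), lo, hi)
            f_new = obj(cand[i])
            # Accept improvements probabilistically (loudness).
            if f_new < fitness[i] and rng.random() < loudness:
                pos[i], fitness[i] = cand[i], f_new
                if f_new < obj(best):
                    best = cand[i].copy()
    return best

est = bat_algorithm(mse, (np.array([0.0, 0.0]), np.array([5.0, 2.0])))
```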
This research deals with the design and simulation of a solar power system consisting of a KC200GT solar panel, a closed-loop boost converter, and a three-phase inverter using Matlab/Simulink. The mathematical equations of the solar panel design are presented. The electrical characteristics of the panel are tested at an irradiance of 1000 W/m² and an ambient temperature of 25 °C. A Proportional-Integral (PI) controller is connected as feedback with the boost converter to obtain a stable output voltage by reducing voltage oscillations, in order to charge a battery connected to the converter output. Two methods (Particle Swarm Optimization (PSO) and Ziegler–Nichols) are used for tuning
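The PSO side of such gain tuning can be sketched as follows. The plant here is a simple first-order system standing in for the boost-converter model (the paper's model is a Simulink circuit), and the cost is the integral-squared error of the step response; `step_cost`, `pso_tune`, and all constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_cost(gains, t_end=1.0, dt=0.002):
    """Integral-squared-error of the closed-loop unit-step response of a
    first-order plant dy/dt = (-y + u)/tau under a PI controller.
    (A simple stand-in for the boost-converter model in the paper.)"""
    kp, ki = gains
    tau = 0.05
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                  # unit-step reference
        integ += err * dt
        u = kp * err + ki * integ      # PI control law
        y += dt * (-y + u) / tau       # forward-Euler plant update
        cost += err * err * dt
    return cost

def pso_tune(n_particles=20, iters=40, bounds=(0.0, 10.0)):
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, 2))   # (kp, ki) pairs
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([step_cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([step_cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

kp, ki = pso_tune()
```

Ziegler–Nichols, by contrast, derives the gains from the plant's ultimate gain and oscillation period rather than from a search.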
In this study, He's parallel numerical algorithm based on a neural network is applied to a class of fractional integral equations, namely Abel's integral equations of the 1st and 2nd kinds, using a Levenberg–Marquardt training algorithm to train the network. To show the efficiency of the method, several Abel's integral equations are solved as numerical examples. Numerical results show that the new method is very efficient, solving these problems with high accuracy.
An automatic text summarization system mimics how humans summarize by picking the most significant sentences in a source text. However, the complexities of the Arabic language make it challenging to obtain information quickly and effectively. The main disadvantage of the traditional approaches is that they are strictly constrained (especially for the Arabic language) by the accuracy of sentence feature functions, weighting schemes, and similarity calculations. On the other hand, the meta-heuristic search approaches have a feature tha
Recognition is one of the basic characteristics of the human brain, and of living creatures in general. It is possible to recognize images, persons, or patterns according to their characteristics. This recognition can be done using the eyes or dedicated methods. There are numerous applications of pattern recognition, such as recognizing printed or handwritten letters, for example automatically reading postal addresses, documents, or bank checks.
One of the challenges that faces researchers in the field of character recognition is recognizing handwritten digits. This paper describes a classification method for on-line handwrit
In light of the increasing demand for energy consumption due to the complexity of modern life and its requirements, which is reflected in the type and size of architecture, environmental challenges have emerged in the need to reduce emissions and power consumption within the construction sector. This has urged designers to improve the environmental performance of buildings by adopting new design approaches and investing in digital technology to facilitate design decision-making in less time, effort, and cost. These approaches do not stop at the limits of acceptable efficiency but extend to the level of the highest performance, which is not provided by the traditional approaches adopted by researchers and local institutions in their studies and architectural practices, limit
Dense deployment of sensors is generally employed in wireless sensor networks (WSNs) to ensure energy-efficient coverage of a target area. Many sensor-scheduling techniques have recently been proposed for designing such energy-efficient WSNs. In the literature, sensor scheduling has been modeled as a generalization of the minimum set covering problem (MSCP). The MSCP is a well-known NP-hard optimization problem used to model a large range of problems arising in scheduling, manufacturing, service planning, information retrieval, etc. In this paper, the MSCP is modeled to design an energy-efficient WSN that can reliably cover a target area. Unlike other attempts in the literature, which consider only a si
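Since the MSCP is NP-hard, a standard baseline is the greedy approximation, sketched below in the sensor-scheduling framing. The sensor/target data are a hypothetical toy example, not from the paper:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for minimum set cover: repeatedly activate
    the sensor whose coverage region covers the most still-uncovered
    targets. This achieves the classic ln(n)-approximation guarantee."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the subset covering the most uncovered elements.
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("remaining targets cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical example: 3 sensors covering 5 target points.
targets = range(5)
sensors = {"s1": {0, 1, 2}, "s2": {2, 3}, "s3": {3, 4}}
active = greedy_set_cover(targets, sensors)
```

Here only two of the three sensors need to stay active to cover every target, and the remaining sensor can sleep to save energy, which is the scheduling idea the abstract describes.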