Image compression plays an important role in reducing data size and storage requirements while significantly increasing the speed of transmission over the Internet. Image compression has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, its use in image compression has grown steadily. Deep neural networks have also achieved great success in processing and compressing images of different sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is implemented using the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder compression methods, with better performance in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high efficiency at high compression bit rates and a low Mean Squared Error (MSE). The results recorded compression ratios ranging from 0.7117 to 0.8707 for the Kodak dataset and from 0.7191 to 0.9930 for the CLIC dataset.
The system achieved high accuracy and quality, with the error coefficient decreasing from 0.0126 to 0.0003, making it more accurate and of higher quality than comparable deep learning autoencoder methods.
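The abstract does not include the authors' code; the following is only a minimal illustrative sketch of the two ingredients it names, K-means color clustering and the PSNR quality measure. The toy image, function names, and parameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def kmeans_palette(pixels, k, iters=10, seed=0):
    """Cluster pixel colors into k centroids (plain K-means)."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest centroid, then recompute means
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return centroids, labels

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Toy 4x4 RGB image: two dominant colors plus small noise.
rng = np.random.default_rng(1)
img = np.where(rng.random((4, 4, 1)) < 0.5, [200, 30, 30], [30, 30, 200])
img = np.clip(img + rng.integers(-10, 10, img.shape), 0, 255).astype(np.uint8)

pixels = img.reshape(-1, 3)
centroids, labels = kmeans_palette(pixels, k=2)
quantized = centroids[labels].reshape(img.shape).astype(np.uint8)
print(round(psnr(img, quantized), 1))  # high PSNR: little visible loss
```

Storing one palette index per pixel plus the small centroid table is what makes color clustering act as a compression stage.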
Many researchers consider the Homogeneous Charge Compression Ignition (HCCI) engine mode a promising alternative to combustion in spark ignition and compression ignition engines. The HCCI engine runs on lean mixtures of fuel and air, and combustion is produced by fuel autoignition rather than ignition by a spark. This combustion mode was investigated in this paper. A variable-compression-ratio spark ignition engine, type TD110, was used in the experiments. The tested fuel was Iraqi conventional gasoline (ON = 82).
The results showed that the HCCI engine can run at very lean equivalence ratios. Brake specific fuel consumption was reduced by about 28% compared with a spark ignition engine. The experimental tests showed that the em
Because of threats and attacks against databases during transmission from sender to receiver, which is one of the most pressing security concerns of network users, a lightweight cryptosystem using the Rivest Cipher 4 (RC4) algorithm is proposed. This cryptosystem maintains data privacy by encrypting the data into cipher form, transferring it over the network, and then decrypting it back to the original data. Hence, the ciphers represent an encapsulating system for the database tables.
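The abstract does not show the cryptosystem's code; as a minimal textbook sketch of the RC4 cipher it is built on (the key and sample record below are illustrative assumptions), note that encryption and decryption are the same XOR operation:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4: key-scheduling algorithm (KSA) then pseudo-random generation (PRGA)."""
    # KSA: initialize and scramble the 256-byte state with the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    # PRGA: XOR the keystream with the data (same op encrypts and decrypts)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

record = b"id=7;name=alice"           # a sample database field (illustrative)
cipher = rc4(b"secret-key", record)   # encrypt before transmission
plain  = rc4(b"secret-key", cipher)   # decrypt at the receiver
assert plain == record
```

RC4's tiny state and byte-at-a-time keystream are what make it attractive for lightweight settings, although it is no longer considered secure for new designs.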
Semantic segmentation is an exciting research topic in medical image analysis because it aims to detect objects in medical images. In recent years, approaches based on deep learning have shown more reliable performance than traditional approaches in medical image segmentation. The U-Net network is one of the most successful end-to-end convolutional neural networks (CNNs) presented for medical image segmentation. This paper proposes a multiscale residual dilated convolutional neural network (MSRD-UNet) based on U-Net. MSRD-UNet replaces the traditional convolution block with a novel deeper block that fuses multi-layer features using dilated and residual convolutions. In addition, the squeeze-and-excitation (SE) attention mechanism and the s
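The MSRD-UNet block itself is not given in full above; purely as an illustrative numpy sketch of the two ingredients named, dilated convolution (taps spaced by the dilation factor to enlarge the receptive field) and a residual skip connection, with all names and the toy kernel assumed:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded 1-D convolution with gaps of (dilation - 1) between taps."""
    k = len(w)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    return np.array([sum(w[t] * xp[i + t * dilation] for t in range(k))
                     for i in range(len(x))])

def residual_dilated_block(x, w, dilations=(1, 2, 4)):
    """Stack dilated convolutions, then add the input back (residual skip)."""
    y = x.astype(float)
    for d in dilations:
        y = np.maximum(dilated_conv1d(y, w, d), 0)  # ReLU nonlinearity
    return y + x  # the skip connection preserves the identity path

x = np.arange(8, dtype=float)
w = np.array([0.25, 0.5, 0.25])   # a small smoothing kernel (assumed)
out = residual_dilated_block(x, w)
print(out.shape)                  # same length as the input
```

Stacking dilations 1, 2, 4 with a 3-tap kernel grows the effective receptive field without extra parameters, which is the usual motivation for dilated convolutions in segmentation networks.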
This paper presents an experimental and numerical study carried out to examine the influence of the size and layout of web openings on the load carrying capacity and serviceability of reinforced concrete deep beams. Five full-scale, simply supported reinforced concrete deep beams with two large web openings created in the shear regions were tested up to failure. The shear span to overall depth ratio was (1.1). Square openings were located symmetrically relative to the midspan section, either at the midpoint or at the interior boundaries of the shear span. Two different side dimensions for the square openings were considered, namely (200) mm and (230) mm. The strength results proved that the shear capacity of the dee
Steganography is defined as hiding confidential information in some other chosen media without leaving any clear evidence of changing the media's features. Most traditional hiding methods hide the message directly in the cover media (text, image, audio, or video). Some hiding techniques leave a negative effect on the cover image, so the change in the carrier medium can sometimes be detected by humans and machines. The purpose of the suggested hiding method is to make this change undetectable. The current research focuses on using a complex method, based on a spiral search, to prevent the detection of hidden information by humans and machines; the Structural Similarity Index (SSIM) measures are used to assess accuracy and quality
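The spiral-search method itself is not detailed in the abstract; as a minimal sketch of the traditional direct-embedding approach it contrasts with, here is least-significant-bit (LSB) hiding in a grayscale image (the cover image and message are illustrative assumptions):

```python
import numpy as np

def embed_lsb(cover, message: bytes):
    """Hide message bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()
    assert len(bits) <= flat.size, "message too long for this cover"
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes: int):
    """Read the LSBs back and repack them into bytes."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
stego = embed_lsb(cover.copy(), b"hi")
assert extract_lsb(stego, 2) == b"hi"
# the change is small: each pixel differs from the cover by at most 1
assert int(np.max(np.abs(stego.astype(int) - cover.astype(int)))) <= 1
```

Because each pixel changes by at most one intensity level, similarity measures such as SSIM between cover and stego image stay close to 1, which is exactly what such measures are used to verify.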
Statistical learning theory serves as the foundational bedrock of machine learning (ML), which in turn represents the backbone of artificial intelligence, ushering in innovative solutions for real-world challenges. Its origins can be traced to the point where statistics and computing meet, and it has evolved into a distinct scientific discipline. Machine learning can be distinguished by its fundamental branches, encompassing supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Within this tapestry, supervised learning takes center stage, divided into two fundamental forms: classification and regression. Regression is tailored for continuous outcomes, while classification specializes in c
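As a minimal illustration of the two supervised forms the passage names (the data, centroid rule, and least-squares fit below are generic textbook examples, not this work's method):

```python
import numpy as np

# Regression: continuous target, fitted here by least squares
x = np.array([0., 1., 2., 3.])
y = 2.0 * x + 1.0                       # underlying continuous relation
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# Classification: discrete target, here a nearest-centroid rule
X = np.array([[0., 0.], [0., 1.], [5., 5.], [6., 5.]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])

def classify(p):
    """Assign p to the class with the closest centroid."""
    return int(np.linalg.norm(centroids - p, axis=1).argmin())

print(round(slope, 3), round(intercept, 3))  # recovers the line y = 2x + 1
print(classify(np.array([5.5, 4.8])))        # a point near class 1
```

The regressor outputs a real number, while the classifier outputs a discrete label, which is exactly the continuous-versus-categorical split the text describes.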
The aim of this study was to identify the effect of using Daniel's model and Driver's model in learning a kinetic chain on the uneven bars in artistic gymnastics for female students. The researchers used the experimental method with an equivalent-groups design and pre- and post-tests, and the research community was identified as the third-stage students in the college for the academic year 2020-2021. Three classes were randomly selected, giving (30) students distributed into (3) groups. Post-testing was conducted after implementing the curriculum for (4) weeks, and the Statistical Package for the Social Sciences (SPSS) was used to process the results of the research. A set of conclusions was reached, the most important of which is t