Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. Image compression has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, its use in image compression has been growing steadily. Deep neural networks have also achieved great success in processing and compressing images of various sizes. In this paper, we present a structure for image compression based on a deep-learning Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep-learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is implemented on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder-based compression methods, with better performance in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The results achieved high efficiency at high compression bit rates with a low Mean Squared Error (MSE) rate: the highest compression ratios ranged between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset.
The system achieved high accuracy and quality, with the error coefficient falling from 0.0126 to 0.0003, and is considered more accurate than comparable autoencoder-based deep-learning methods.
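The K-means color clustering and PSNR quality measure mentioned above can be sketched minimally as follows; this is an illustrative implementation under common definitions, not the paper's actual pipeline, and the cluster count `k` is an assumption.

```python
# Illustrative sketch: K-means color clustering of RGB pixels, plus the
# PSNR quality measure on an 8-bit scale. Not the paper's implementation.
import numpy as np

def kmeans_color_quantize(pixels, k=8, iters=20, seed=0):
    """Cluster RGB pixels of shape (N, 3) into k colors; return the
    quantized pixels and the per-pixel cluster labels."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers[labels], labels

def psnr(orig, recon):
    """PSNR in dB between two 8-bit images: 10*log10(255^2 / MSE)."""
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```

Quantizing to `k` colors reduces the color information each pixel carries, which is one way a clustering step can assist a learned compressor.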
Non-orthogonal Multiple Access (NOMA) is a multiple-access technique that allows multiple users to share the same communication resources, increasing spectral efficiency and throughput. NOMA has been shown to provide significant performance gains over Orthogonal Multiple Access (OMA) in terms of spectral efficiency and throughput. In this paper, two NOMA scenarios, involving two users and multiple users (four users), are analyzed and simulated to evaluate NOMA's performance. The simulation results indicate that the achievable sum rate for the two-user scenario is 16.7 bps/Hz, while for the multi-user scenario it is 20.69 bps/Hz at a transmit power of 25 dBm. The BER for the two-user scenario is 0.004202 and 0.001564 for
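The standard two-user downlink NOMA rate expressions behind such sum-rate figures can be sketched as below; the channel gains, power split, and normalized noise are illustrative assumptions, not the paper's simulation parameters.

```python
# Illustrative sketch of two-user downlink NOMA with superposition coding
# and successive interference cancellation (SIC). All parameters are
# assumptions for demonstration, not the simulated system's values.
import math

def noma_rates(P, g_near, g_far, a_near=0.2, a_far=0.8, N0=1.0):
    """Return (R_near, R_far) in bps/Hz.
    P: total transmit power (linear); g_*: channel power gains |h|^2;
    a_near + a_far = 1 is the power split (weak user gets more power)."""
    # The far (weak) user decodes its own signal, treating the near
    # user's superimposed signal as interference.
    r_far = math.log2(1 + a_far * P * g_far / (a_near * P * g_far + N0))
    # The near (strong) user cancels the far user's signal via SIC,
    # then decodes its own signal interference-free.
    r_near = math.log2(1 + a_near * P * g_near / N0)
    return r_near, r_far

# Example: 25 dBm expressed in mW, with assumed channel gains.
r_near, r_far = noma_rates(P=10 ** (25 / 10), g_near=1.0, g_far=0.1)
```

Note that the far user's rate is interference-limited: it cannot exceed log2(1 + a_far/a_near) regardless of transmit power, which is why power allocation is central to NOMA performance.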
A binary polymer blend was prepared by mechanically mixing unsaturated polyester resin with Nitrile Butadiene Rubber (NBR) at different weight ratios (0, 5, 10 and 15)% of NBR. Tensile characteristics and wear rates of these blends were studied for all mixing ratios. The microstructure of the fracture surfaces of the prepared samples was investigated by optical microscope. The results showed that the strain rate of the resin increases after blending with rubber, while its ultimate tensile strength and Young's modulus decrease. It is also noticed that the wear rate of the resin decreases with increasing NBR content.
In this paper, some estimators for the unknown shape parameter and reliability function of the basic Gompertz distribution are obtained, namely the maximum likelihood estimator and Bayesian estimators under the precautionary loss function with gamma and Jeffreys priors. A Monte Carlo simulation is conducted to compare the mean squared errors (MSE) of the shape-parameter estimators and the integrated mean squared errors (IMSE) of the reliability estimators. Finally, a discussion is provided to illustrate the results summarized in the tables.
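The maximum-likelihood estimator referenced above has a closed form under the common one-parameter Gompertz form f(t; θ) = θ e^t exp(−θ(e^t − 1)), t > 0; the sketch below assumes that parameterization, which may differ from the paper's.

```python
# Hedged sketch of MLE and reliability for the one-parameter (basic)
# Gompertz distribution, assuming f(t; θ) = θ e^t exp(−θ(e^t − 1)).
import math
import random

def gompertz_sample(theta, n, seed=0):
    """Inverse-transform sampling: solving F(t) = u for t gives
    t = ln(1 − ln(1 − u) / θ)."""
    rng = random.Random(seed)
    return [math.log(1 - math.log(1 - rng.random()) / theta)
            for _ in range(n)]

def mle_theta(data):
    """Setting d(log L)/dθ = 0 yields θ̂ = n / Σ (e^{t_i} − 1)."""
    return len(data) / sum(math.exp(t) - 1 for t in data)

def reliability(t, theta):
    """Reliability (survival) function R(t) = exp(−θ (e^t − 1))."""
    return math.exp(-theta * (math.exp(t) - 1))
```

A quick check of correctness: the transform Y = θ(e^T − 1) is standard exponential, so Σ(e^{t_i} − 1) concentrates around n/θ and the estimator is consistent.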
In this paper, estimates are made for the parameters and the reliability function of the Transmuted Power Function (TPF) distribution using several estimation methods: a proposed new technique for the white, percentile, least squares, weighted least squares and modified moment methods. Simulation was used to generate random data following the TPF distribution in three experiments (E1, E2, E3) of real parameter values, with sample sizes (n = 10, 25, 50 and 100), N = 1000 iteration samples, and reliability times (0 < t < 0). Comparisons were made between the obtained estimators using mean squared error (MSE). The results showed the
In this paper, we introduce the concept of cubic bipolar fuzzy ideals with thresholds (α,β),(ω,ϑ) of a semigroup in a KU-algebra, abbreviated (CBF), as a generalization of sets. First, a (CBF) sub-KU-semigroup with thresholds (α,β),(ω,ϑ) is defined and some results for this notion are achieved. Also, cubic bipolar fuzzy ideals and cubic bipolar fuzzy k-ideals with thresholds (α,β),(ω,ϑ) are defined and some properties of these ideals are given. Relations between a (CBF) subalgebra and a (CBF) ideal are proved. A few characterizations of a (CBF) k-ideal with thresholds
Diverting river flow during construction of a main dam involves the construction of cofferdams and tunnels, channels or other temporary passages. Diversion channels are commonly used in wide valleys where the high flow makes tunnels or culverts uneconomic. The diversion works must form part of the overall project design, since they have a major impact on its cost, as well as on the design, construction program and overall cost of the permanent works. Construction costs consist of excavation, lining of the channel, and construction of the upstream and downstream cofferdams. An optimization model was applied to obtain the optimal channel cross section, height of the upstream cofferdam, and height of the downstream cofferdam with minimum construction cost.
The process of resizing an image using geometric transformations without changing the image resolution is known in the field of image processing as image scaling or image resizing. Image resizing has wide applications in computers, mobile phones and other electronic devices. This paper proposes a method for resizing images using the equations of the Bezier curve and shows how to obtain the best results. The Bezier curve has been used in previous works in different fields, but in this paper it is used
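The Bezier-curve evaluation that such interpolation-based resizing builds on can be sketched with de Casteljau's algorithm; the control points below are illustrative, and this is not the paper's resizing method itself.

```python
# Illustrative sketch: evaluating a Bezier curve of any degree at
# parameter t via de Casteljau's algorithm (repeated linear
# interpolation of the control points).
def de_casteljau(points, t):
    """points: list of (x, y) control points; t in [0, 1].
    Returns the point on the Bezier curve at parameter t."""
    pts = [tuple(map(float, p)) for p in points]
    while len(pts) > 1:
        # Interpolate each adjacent pair; the list shrinks by one.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

In image scaling, curves like this interpolate pixel intensities between known samples, which is why they are smoother than nearest-neighbor approaches.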
The Normalized Difference Vegetation Index (NDVI) is commonly used as a measure of land-surface greenness, based on the assumption that the NDVI value is positively proportional to the amount of green vegetation in an image pixel area. An NDVI data set derived from Landsat remote-sensing data is used to estimate the area of plant cover in the region west of Baghdad during 1990-2001. The results show that between 1990 and 2001 the vegetated area in the region increased from 44760.25 hectares to 75410.67 hectares, while the exposed area decreased.
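The NDVI computation and the conversion from pixel counts to hectares can be sketched as follows; the bands here are synthetic arrays, the 0.3 vegetation threshold is an assumption, and the 30 m pixel size is the nominal Landsat resolution rather than a value taken from the study.

```python
# Minimal NDVI sketch: NDVI = (NIR - Red) / (NIR + Red), then a
# thresholded pixel count converted to hectares. Synthetic data only.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI in [-1, 1] from NIR and red reflectance bands.
    eps guards against division by zero over dark pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def vegetated_area_hectares(ndvi_map, pixel_m=30.0, threshold=0.3):
    """Count pixels above an NDVI threshold and convert to hectares
    (Landsat pixels are nominally 30 m on a side; 1 ha = 10,000 m^2)."""
    n = int(np.count_nonzero(ndvi_map > threshold))
    return n * (pixel_m * pixel_m) / 10_000
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red light, so vegetated pixels push NDVI toward 1 while bare soil and water sit near or below zero.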
Computational Thinking (CT) is very useful for undergraduates in solving everyday problems. In terms of content, computational thinking involves solving problems, studying data patterns, deconstructing problems using algorithms and procedures, performing simulations, computer modeling, and reasoning about abstract things. However, there is a lack of studies dealing with CT and the skills that can be developed and utilized in the field of information technology used in learning and teaching. The descriptive research method was used, and a test instrument was prepared to measure the level of CT, consisting of 24 multiple-choice items. The research study group consists of