In this paper, we designed a new efficient stream cipher cryptosystem that depends on a chaotic map to encrypt and decrypt different types of digital images. The designed encryption system passed all basic efficiency criteria (randomness, MSE, PSNR, histogram analysis, and key space), applied both to the key extracted from the random generator and to the digital images after the encryption process.
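As an illustration of the general idea (not the paper's exact cryptosystem), the sketch below generates a keystream from the logistic map and XORs it with the image bytes; the seed x0, parameter r, and byte-quantization rule are assumptions for the example.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256  # quantize the chaotic state to a byte
    return out

def xor_cipher(image_bytes, x0=0.3141592, r=3.99):
    """Encrypt or decrypt (XOR is its own inverse) a flat uint8 array."""
    ks = logistic_keystream(x0, r, image_bytes.size)
    return image_bytes ^ ks

# pixels = np.asarray(Image.open("img.png"), dtype=np.uint8).ravel()
# cipher = xor_cipher(pixels); plain = xor_cipher(cipher)
```

Because XOR is its own inverse, the same routine with the same key parameters serves for both encryption and decryption.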
Gypseous soils are common in several regions of the world, including Iraq, where more than 28.6% of the surface is covered with this type of soil. This soil, with its high gypsum content, causes various problems for construction and strategic projects. As water flows through the soil mass, the permeability and chemical composition of these soils vary with time due to the solubility and leaching of gypsum. In this study, soil with 36% gypsum content was taken from a location about 100 km southwest of Baghdad, where samples were taken from depths of 0.5-1 m below the natural ground surface and mixed with 3%, 6%, and 9% of copolymer and novolac polymer to improve engineering properties including collapsibility and permeability.
In this paper, an application of non-additive measures for re-evaluating the degree of importance of some reasons for student failure is discussed. We apply non-additive fuzzy integral models (the Sugeno, Shilkret, and Choquet integrals) to some expected factors that affect students' examination performance across different student cases.
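As a hedged sketch of one of these models, the discrete Choquet integral below aggregates factor scores with respect to a non-additive fuzzy measure; the factor names and measure values are hypothetical, not the paper's survey data.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of scores w.r.t. a fuzzy measure mu.
    mu maps frozensets of criteria to [0, 1], mu(empty)=0, mu(all)=1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending by score
    total, prev = 0.0, 0.0
    for i, (name, value) in enumerate(items):
        coalition = frozenset(k for k, _ in items[i:])  # criteria scoring >= value
        total += (value - prev) * mu[coalition]
        prev = value
    return total

# Hypothetical failure factors and a non-additive measure:
scores = {"attendance": 0.6, "study_time": 0.8, "anxiety": 0.3}
mu = {frozenset(scores): 1.0,
      frozenset({"attendance", "study_time"}): 0.9,
      frozenset({"study_time", "anxiety"}): 0.7,
      frozenset({"attendance", "anxiety"}): 0.5,
      frozenset({"study_time"}): 0.6,
      frozenset({"attendance"}): 0.4,
      frozenset({"anxiety"}): 0.2}
print(choquet_integral(scores, mu))  # 0.69 for these values
```

Because the measure is non-additive, interacting factors (e.g. attendance combined with study time) can weigh more or less than the sum of their individual importances, which is exactly what an additive weighting cannot express.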
This paper deals with non-polynomial spline functions ("generalized splines") for finding the approximate solution of linear Volterra integro-differential equations of the second kind, and extends this work to solving systems of linear Volterra integro-differential equations. The performance of the generalized spline functions is illustrated in test examples.
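The generalized-spline scheme itself is not reproduced here; as a minimal sketch of the equation class being solved, the code below applies a simple forward-Euler step with trapezoidal quadrature to a linear Volterra integro-differential equation of the second kind and checks it against a known exact solution.

```python
import numpy as np

def solve_vide(f, K, u0, T, n):
    """Forward Euler + trapezoidal quadrature for
    u'(t) = f(t) + int_0^t K(t, s) u(s) ds,  u(0) = u0."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.empty(n + 1); u[0] = u0
    for i in range(n):
        if i > 0:
            s = t[:i + 1]
            w = np.full(i + 1, h)
            w[0] = w[-1] = h / 2          # trapezoidal weights
            integral = np.sum(w * K(t[i], s) * u[:i + 1])
        else:
            integral = 0.0
        u[i + 1] = u[i] + h * (f(t[i]) + integral)
    return t, u

# Test: u' = 1 + int_0^t u(s) ds, u(0) = 0 has exact solution u = sinh(t).
t, u = solve_vide(lambda t: 1.0, lambda t, s: np.ones_like(s), 0.0, 1.0, 200)
print(u[-1], np.sinh(1.0))  # the two values should be close
```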
The main focus of this research is to examine the Travelling Salesman Problem (TSP) and the methods used to solve it. The TSP is one of the combinatorial optimization problems that has received wide attention from researchers due to its simple formulation, important applications, and connections to the rest of the combinatorial problems. It is based on finding the optimal path through a known number of cities, where the salesman visits each city only once before returning to the city of departure. In this research, the FMOLP algorithm is employed as one of the best methods to solve the TSP.
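The FMOLP formulation is not reproduced here; as a baseline sketch of the tour structure the TSP asks for, the code below runs the classical nearest-neighbour heuristic on a few hypothetical city coordinates.

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy nearest-neighbour heuristic for the TSP: repeatedly visit the
    closest unvisited city, then return to the start. A baseline, not the
    FMOLP method of the paper."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(0)  # return to the city of departure
    return tour

cities = [(0, 0), (2, 1), (1, 3), (4, 2)]  # hypothetical coordinates
print(nearest_neighbour_tour(cities))
```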
In this study, the quality assurance of the linear accelerator available at the Baghdad Center for Radiation Therapy and Nuclear Medicine was verified using Star Track and Perspex. The study was conducted from August to December 2018 and showed an acceptable variation in the dose output of the linear accelerator: ±2%, within the permissible range according to the recommendations of the accelerator's manufacturer (Elekta).
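A constancy check of this kind reduces to a percentage-deviation calculation; a minimal sketch, with hypothetical measured and reference readings, is shown below.

```python
def within_tolerance(measured, reference, tol_percent=2.0):
    """Dose-output constancy check: deviation as a percentage of reference."""
    deviation = 100.0 * (measured - reference) / reference
    return deviation, abs(deviation) <= tol_percent

print(within_tolerance(measured=1.013, reference=1.000))  # about (1.3, True)
```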
In this paper, we present a proposed enhancement of image compression using the RLE algorithm. The proposal decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], mostly increases the size of the original image when applied to color images. The enhanced algorithm is tested on a sample of ten 24-bit true-color BMP images; an application built in Visual Basic 6.0 shows the size before and after the compression process and computes the compression ratio for both the standard RLE and the enhanced RLE algorithm.
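For reference, a minimal sketch of the baseline (unenhanced) RLE scheme as (count, value) byte pairs is shown below; it also illustrates why plain RLE can inflate data with few long runs, as in true-color images.

```python
def rle_encode(data: bytes) -> bytes:
    """Classic run-length encoding: (count, value) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

sample = b"\x00\x00\x00\xff\xff\x01"
enc = rle_encode(sample)
assert rle_decode(enc) == sample
print(len(sample), len(enc))  # with short runs, RLE saves little or nothing
```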
Image segmentation can be defined as the process of partitioning a digital image into meaningful regions (segments), each consisting of image elements that share attributes distinguishing them from the pixels that constitute other parts. Two phases were followed in this paper. In the first phase, the images were pre-processed before segmentation using the statistical confidence intervals for estimating unknown observations suggested by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique.
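As a sketch of the second phase, the code below implements a simplified form of Bernsen's local thresholding; the window size, contrast minimum, and global fallback value are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_threshold(gray, window=15, contrast_min=15):
    """Simplified Bernsen local thresholding: each pixel is compared with the
    mid-range of its window; low-contrast windows fall back to a global cut."""
    hi = maximum_filter(gray, size=window).astype(np.int16)
    lo = minimum_filter(gray, size=window).astype(np.int16)
    mid = (hi + lo) // 2
    low_contrast = (hi - lo) < contrast_min
    out = gray > mid
    out[low_contrast] = mid[low_contrast] > 128  # global fallback at mid-gray
    return out.astype(np.uint8) * 255
```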
In this research, an analysis of the standard Hueckel edge detection algorithm's behaviour is presented using three-dimensional representations of the edge goodness criterion, after applying the algorithm to a real high-texture satellite image; the edge goodness criterion is analysed statistically. The Hueckel algorithm showed an exponential relationship between the execution time and the disk radius used. The restrictions Hueckel stated in his papers are adopted in this research. The resultant edge shape and its malformation are discussed, since this is the first practical study to apply the Hueckel edge detection algorithm to a real high-texture image containing ramp edges (a satellite image).
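A timing experiment of the kind described can be set up as below; `detect` stands for a Hueckel implementation (assumed to be provided, as none is shown here), and the exponential time-versus-radius law is fitted in log space.

```python
import time
import numpy as np

def benchmark_radius(detect, image, radii):
    """Time an edge detector detect(image, radius) over a range of disk radii
    and fit t = a * exp(b * r), which is linear in log space."""
    times = []
    for r in radii:
        t0 = time.perf_counter()
        detect(image, r)
        times.append(time.perf_counter() - t0)
    b, log_a = np.polyfit(radii, np.log(times), 1)
    return times, np.exp(log_a), b
```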
In recent years, with the rapid development of current classification systems for digital content identification, automatic classification of images has become one of the most challenging tasks in the field of computer vision. Automatically understanding and analyzing images is far more challenging for a system than for human vision. Some research has addressed this issue in low-level classification systems, but the output was restricted to basic image features, and those approaches still fail to classify images accurately. To obtain the results expected in computer vision, this study proposes an approach that utilizes a deep learning algorithm.
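As a minimal sketch of such a deep learning classifier (not the paper's specific architecture), the PyTorch model below stacks two convolutional blocks and a linear head for small RGB images.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A minimal convolutional classifier for 32x32 RGB images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)          # (N, 32, 8, 8)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))  # batch of 4 dummy images
print(logits.shape)  # torch.Size([4, 10])
```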
Fractal image compression offers desirable properties such as fast image decoding and very good rate-distortion curves, but suffers from a long encoding time. Fractal image compression requires partitioning the image into range blocks. In this work, we introduce an improved partitioning process by means of a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of this technique by reducing the number of range blocks based on statistical measures computed between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the results visually acceptable.
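A sketch of the statistical comparison step, assuming block mean and standard deviation as the measures and a hypothetical tolerance, is shown below; block pairs whose statistics are close are candidates for merging, which shrinks the range pool and hence the encoding time.

```python
import numpy as np

def block_stats(image, size=8):
    """Split a grayscale image into non-overlapping range blocks and compute
    (mean, std) per block -- the statistical measures used for comparison."""
    h, w = image.shape
    blocks, stats = [], []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            b = image[y:y + size, x:x + size].astype(np.float64)
            blocks.append((y, x))
            stats.append((b.mean(), b.std()))
    return blocks, np.array(stats)

def similar_pairs(stats, tol=2.0):
    """Index pairs of range blocks whose mean and std both differ by less
    than tol; such blocks are candidates for merging."""
    close = np.all(np.abs(stats[:, None, :] - stats[None, :, :]) < tol, axis=2)
    i, j = np.nonzero(np.triu(close, k=1))  # upper triangle: each pair once
    return list(zip(i.tolist(), j.tolist()))
```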