The advancements in Information and Communication Technology (ICT) over the previous decades have significantly changed the way people transmit and store their information over the Internet and networks. One of the main challenges, therefore, is to keep this information safe against attacks. Many researchers and institutions have recognized the importance and benefits of cryptography in achieving efficient and effective secure communication. This work adopts a novel technique for a secure data cryptosystem based on chaos theory. The proposed algorithm generates a 2-dimensional key matrix with the same dimensions as the original image, filled with random numbers obtained from the 1-dimensional logistic chaotic map for given control parameters; the fractional parts of these numbers are then converted through a function into a set of non-repeating numbers, which yields a vast number of unpredictable possibilities (the factorial of rows times columns). Double layers of row and column permutations are applied to the values for a specified number of stages. Then, XOR is performed between the key matrix and the original image, which constitutes a practical solution for encrypting any type of file (text, image, audio, video, etc.). The results show that the proposed encryption technique is very promising when tested on more than 500 image samples according to security measurements: the histograms of the cipher images are very flat compared with those of the original images, the average Mean Square Error is very high (10115.4) and the average Peak Signal to Noise Ratio is very low (8.17), while the Correlation is near zero and the Entropy is close to 8 (7.9975).
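A minimal sketch of this pipeline in Python, assuming illustrative control parameters (r = 3.99, x0 = 0.6137) and two permutation stages rather than the paper's actual values; ranking the fractional chaotic values with argsort is one plausible reading of the non-repeating-numbers step:

```python
# Sketch of chaos-based XOR encryption with a 2-D key matrix derived from
# the 1-D logistic map. Parameter values and the ranking step are
# illustrative assumptions, not the paper's exact construction.
import numpy as np

def logistic_sequence(r, x0, n):
    """Iterate the 1-D logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(image, r=3.99, x0=0.6137, stages=2):
    rows, cols = image.shape
    seq = logistic_sequence(r, x0, rows * cols)
    # Rank the chaotic values to obtain non-repeating numbers
    # (one of (rows*cols)! possible orderings), then fold into bytes.
    ranks = np.argsort(seq)
    key = (ranks % 256).astype(np.uint8).reshape(rows, cols)
    # Double layer of row and column permutations, repeated per stage.
    for _ in range(stages):
        key = key[np.argsort(logistic_sequence(r, x0 * 0.5, rows)), :]
        key = key[:, np.argsort(logistic_sequence(r, x0 * 0.25, cols))]
    return image ^ key          # XOR layer

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # stand-in image
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)
```

Because the key matrix is regenerated deterministically from (r, x0) and the image dimensions, applying the same routine to the cipher image undoes the XOR and recovers the original, as the final assertion checks.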
This research compared two methods for estimating the four parameters of the compound exponential Weibull-Poisson distribution: the maximum likelihood method and the Downhill Simplex algorithm. Two data cases were considered: the first assumed the original (uncontaminated) data, while the second assumed data contamination. Simulation experiments were conducted for different sample sizes, different initial parameter values, and different levels of contamination. The Downhill Simplex algorithm was found to be the better method for estimating the parameters, the probability function, and the reliability function of the compound distribution in both the natural and the contaminated data cases.
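A minimal sketch of the estimation setup, with a two-parameter Weibull log-likelihood standing in for the paper's four-parameter compound density (which would replace neg_log_lik below); the sample size, true parameters, and starting values are illustrative assumptions:

```python
# Maximum-likelihood fitting via the Downhill Simplex (Nelder-Mead)
# algorithm. A plain Weibull model stands in for the exponential
# Weibull-Poisson density used in the paper.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
data = weibull_min.rvs(c=1.5, scale=2.0, size=200, random_state=rng)

def neg_log_lik(theta):
    k, lam = theta
    if k <= 0 or lam <= 0:             # keep the simplex in the valid region
        return np.inf
    return -np.sum(weibull_min.logpdf(data, c=k, scale=lam))

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
k_hat, lam_hat = fit.x
reliability = lambda t: np.exp(-(t / lam_hat) ** k_hat)   # R(t) = 1 - F(t)
print(k_hat, lam_hat, reliability(1.0))
```

The simplex method needs only function values, no gradients, which is one reason it tolerates contaminated data better than derivative-based likelihood maximization.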
Measuring the efficiency of postgraduate and undergraduate programs is one of the essential elements of the educational process. In this study, the colleges of Baghdad University and data for the academic year 2011-2012 were chosen to measure the relative efficiencies of postgraduate and undergraduate programs in terms of their inputs and outputs. A relevant method for analyzing such data is Data Envelopment Analysis (DEA). The effect of academic staff on the number of enrolled and alumni students in the postgraduate and undergraduate programs is the main focus of the study.
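A minimal sketch of the standard input-oriented CCR DEA model solved as a linear program; the toy data (academic staff as the input, enrolled and alumni students as the outputs, for three hypothetical colleges) are assumptions, not the study's figures:

```python
# Input-oriented CCR DEA: for each unit k, minimize theta subject to
# X'lambda <= theta * x_k (inputs) and Y'lambda >= y_k (outputs).
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0], [35.0], [15.0]])      # input: academic staff per college
Y = np.array([[300.0, 120.0],               # outputs: enrolled, alumni
              [450.0, 150.0],
              [200.0, 110.0]])

def ccr_efficiency(k):
    n, m = X.shape                          # units, inputs
    s = Y.shape[1]                          # outputs
    c = np.r_[1.0, np.zeros(n)]             # decision vector: [theta, lambdas]
    A_in = np.c_[-X[k].reshape(m, 1), X.T]  # sum lambda*x - theta*x_k <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]   # -sum lambda*y <= -y_k
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                          # theta in (0, 1]; 1 = efficient

for k in range(3):
    print(f"college {k}: efficiency = {ccr_efficiency(k):.3f}")
```

Units with an efficiency score of 1 lie on the empirical frontier; scores below 1 indicate how far a college's inputs could be scaled down while still producing its observed outputs.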
The research concentrates on a modern variable in organizations, namely Six Sigma. The field study covers two Iraqi industrial organizations: the first is the state company of …………… , the other is the state company of ……
The research problem is defined through a number of questions and hypotheses. The data were collected by a questionnaire covering 5 dimensions and (10) critical success factors.
The sample contains (42) employees who work in those organizations. The research points out many conclusions, the main one being that there are significant differences between the two organizations. The research concludes with a number of important recommendations that serve its objectives.
Within the framework of big data, energy issues are highly significant. Despite the significance of energy, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligent algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligent algorithms, since this is critical in exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligent algorithms in big data analytics. This work highlights that big data analytics using computational intelligent algorithms consumes a very high amount of energy.
Business organizations have faced many challenges in recent times, the most important of which is information technology, because it is widely spread and easy to use. Its use has led to an increase in the amount of data that business organizations deal with in an unprecedented manner. The amount of data available through the internet is a problem that many parties seek to find solutions for: why is it available there in such a huge, random quantity? Many forecasts indicated that by 2017 the number of devices connected to the internet would be an estimated three times the population of the Earth, and that in 2015 more than one and a half billion gigabytes of data were transferred every minute globally. Thus, so-called data mining emerged as a solution to this problem.
The virtual decomposition control (VDC) is an efficient tool suitable for dealing with the full-dynamics-based control problem of complex robots. However, the regressor-based adaptive control used by VDC to control every subsystem and to estimate the unknown parameters demands specific knowledge of the system physics. Therefore, in this paper, we focus on reorganizing the equations of the VDC for a serial-chain manipulator using the adaptive function approximation technique (FAT), without needing the specific system physics. The dynamic matrices of the dynamic equation of every subsystem (e.g. link and joint) are approximated by orthogonal functions, owing to the minimal approximation errors produced.
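A minimal sketch of the FAT idea in isolation: an unknown dynamic term is expanded on an orthogonal (Chebyshev) basis and the weights are adapted from the approximation error. The target function, basis size, and adaptation gain are illustrative assumptions, not the paper's control law:

```python
# Function approximation technique (FAT): approximate an unknown term
# f(q) by w^T phi(q), where phi is an orthogonal Chebyshev basis, and
# adapt the weights w online from the approximation error.
import numpy as np
from numpy.polynomial import chebyshev as C

def basis(q, n_terms=8):
    """Chebyshev basis phi(q) on [-1, 1]; orthogonality keeps errors small."""
    return np.array([C.Chebyshev.basis(i)(q) for i in range(n_terms)])

true_term = lambda q: np.sin(3 * q) + 0.5 * q   # stands in for a dynamic entry
w = np.zeros(8)                                 # adaptive weights
gamma = 0.1                                     # adaptation gain

for step in range(10000):
    q = np.random.uniform(-1, 1)
    phi = basis(q)
    err = true_term(q) - w @ phi                # approximation error
    w += gamma * err * phi                      # gradient adaptation law

q_test = np.linspace(-1, 1, 5)
print(max(abs(true_term(q) - w @ basis(q)) for q in q_test))
```

In the VDC setting, the same expansion replaces the regressor: the basis carries no system-specific physics, and only the weight vector is estimated online.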
An optical video communication system is designed and constructed using the pulse frequency modulation (PFM) technique. In this work, PFM pulses of 50 ns width are generated at the transmitter using a voltage-controlled oscillator (VCO). Double-frequency, equal-width, narrow pulses are produced in the receiver before demodulation. The use of the frequency doubling technique in such a system results in a narrow transmission bandwidth (25 ns) and high receiver sensitivity.
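A minimal numerical sketch of the PFM principle: a software stand-in for the VCO emits one fixed-width 50 ns pulse per oscillator cycle, with the repetition rate tracking the video signal. The centre frequency, deviation, and test waveform are assumptions, not the system's measured values:

```python
# Pulse frequency modulation: instantaneous VCO frequency follows the
# video signal; each VCO cycle triggers a 50 ns pulse.
import numpy as np

fs = 2e9                                   # simulation sample rate (0.5 ns steps)
t = np.arange(0, 20e-6, 1 / fs)
video = 0.5 + 0.5 * np.sin(2 * np.pi * 1e5 * t)   # stand-in video signal
f0, dev = 5e6, 2e6                         # assumed VCO centre frequency, deviation
phase = 2 * np.pi * np.cumsum(f0 + dev * video) / fs
ticks = np.diff(np.floor(phase / (2 * np.pi))) > 0  # one tick per VCO cycle
pulse_len = int(50e-9 * fs)                # 50 ns pulse width
pfm = np.convolve(ticks.astype(float), np.ones(pulse_len))[: t.size]
print(f"pulses: {int(ticks.sum())}, fraction of time high: {pfm.mean():.2f}")
```

Demodulation then amounts to measuring the pulse repetition rate; halving the pulse width at the receiver (the frequency doubling step) is what sharpens the timing resolution.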