Accurate predictive tools for vapour-liquid equilibrium (VLE) calculation are always in demand. A new method is introduced for VLE calculation that is very simple to apply and gives very good results compared with previously used methods. It requires no physical properties; each binary system needs only two constants. The method can be applied to calculate VLE data for any binary system of any polarity or from any group family, provided the binary system does not form an azeotrope. The method is also extended to cover a range of temperatures, and this extension requires nothing beyond applying the proposed form with the same two constants per system. The method and its extension were applied to 56 binary mixtures with 1120 equilibrium data points with very good accuracy. The temperature extension was applied to 13 binary systems at different temperatures, again with very good accuracy.
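The abstract does not give the functional form of the correlation, so as a hedged illustration only, the sketch below fits the two constants of a classical two-parameter Margules activity-coefficient model to binary VLE data. The model choice, the data, and the function names are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch: fitting two constants (A12, A21) of a two-parameter
# Margules activity-coefficient model to binary VLE data. This is NOT the
# paper's correlation (its form is not given in the abstract); it only
# illustrates a two-constant-per-binary-system fit.
import numpy as np
from scipy.optimize import least_squares

def margules_gammas(x1, A12, A21):
    """Two-parameter Margules activity coefficients for a binary mixture."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

def predicted_y1(x1, A12, A21, p1_sat, p2_sat):
    """Vapour mole fraction from modified Raoult's law (ideal vapour phase)."""
    g1, g2 = margules_gammas(x1, A12, A21)
    p = g1 * x1 * p1_sat + g2 * (1.0 - x1) * p2_sat
    return g1 * x1 * p1_sat / p

# Illustrative (made-up) isothermal data: x1, y1 and pure-component pressures (kPa).
x1_data = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y1_data = np.array([0.24, 0.48, 0.62, 0.74, 0.90])
p1_sat, p2_sat = 95.0, 50.0

res = least_squares(
    lambda c: predicted_y1(x1_data, c[0], c[1], p1_sat, p2_sat) - y1_data,
    x0=[0.5, 0.5],
)
print("fitted constants A12, A21 =", res.x)
```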
This research addresses fuzzy sets, one of the most modern concepts applied in various practical and theoretical areas and in various fields of life, and in particular fuzzy random variables, whose values are not real numbers but fuzzy numbers, since they express vague or uncertain phenomena whose measurements are not exact. Fuzzy data were presented for two-sample testing and for the analysis-of-variance method applied to fuzzy random variables; this method depends on a number of assumptions, which prevents its use when those assumptions are not satisfied.
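As a hedged illustration of the general idea only, the sketch below represents fuzzy observations as triangular fuzzy numbers and runs a one-way ANOVA on their defuzzified centroids; the triangular representation, the centroid defuzzification, and the data are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical sketch: one-way ANOVA on defuzzified triangular fuzzy data.
# Representing each observation as (left, peak, right) and defuzzifying by
# the triangle centroid is an illustrative choice, not the paper's method.
import numpy as np
from scipy import stats

def centroid(tri):
    """Centroid defuzzification of a triangular fuzzy number (l, m, r)."""
    l, m, r = tri
    return (l + m + r) / 3.0

# Illustrative fuzzy samples for three groups (made-up numbers).
group_a = [(1.0, 2.0, 3.0), (1.5, 2.5, 3.5), (2.0, 3.0, 4.0)]
group_b = [(3.0, 4.0, 5.0), (3.5, 4.5, 5.5), (4.0, 5.0, 6.0)]
group_c = [(2.0, 2.5, 3.0), (2.5, 3.0, 3.5), (3.0, 3.5, 4.0)]

crisp = [np.array([centroid(t) for t in g]) for g in (group_a, group_b, group_c)]
f_stat, p_value = stats.f_oneway(*crisp)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```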
Great scientific progress has led to a widespread flood of information. As information accumulates in large databases, it becomes important to revise and organise this vast amount of data in order to extract hidden information, or to classify the data according to the relations among them, so that it can be exploited for technical purposes.
Work with data mining (DM) is appropriate in this area, hence the importance of studying the (K-Means) algorithm for clustering data. In a practical application, the effect on the variables can be observed by changing the sample size (n) and the number of clusters (K).
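A minimal sketch of this kind of experiment, clustering with K-Means while varying the sample size n and the number of clusters K; the generated data and the use of scikit-learn are assumptions for illustration, not the paper's setup.

```python
# Hypothetical sketch: observe how K-Means results change with sample
# size (n) and number of clusters (K). Data and library choice are
# illustrative, not taken from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

for n in (100, 500, 1000):          # varying sample size
    data = rng.normal(size=(n, 2))  # made-up 2-D observations
    for k in (2, 3, 5):             # varying number of clusters
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
        print(f"n={n:5d} K={k}  inertia={km.inertia_:.1f}")
```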
The advancements in Information and Communication Technology (ICT) within the previous decades have significantly changed the way people transmit or store their information over the Internet or networks, so one of the main challenges is to keep this information safe against attacks. Many researchers and institutions have realised the importance and benefits of cryptography in achieving efficient and effective secure communication. This work adopts a novel technique for a secure data cryptosystem based on chaos theory. The proposed algorithm generates a 2-dimensional key matrix, having the same dimensions as the original image, that contains random numbers obtained from the 1-dimensional logistic chaotic map for given control parameters.
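A hedged sketch of the key-generation idea: filling a matrix the size of the image with values from the logistic map x_{n+1} = r * x_n * (1 - x_n) and XOR-ing it with the pixels. The seed, the control parameter, and the XOR combination step are assumptions for illustration, not the paper's exact algorithm.

```python
# Hypothetical sketch: build a 2-D key matrix from the 1-D logistic map
# x_{n+1} = r * x_n * (1 - x_n) and XOR it with an image. Seed, control
# parameter r, and the XOR step are illustrative assumptions.
import numpy as np

def logistic_key(shape, x0=0.5432, r=3.99):
    """Key matrix of 8-bit values from logistic-map iterates."""
    n = shape[0] * shape[1]
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return (seq * 255).astype(np.uint8).reshape(shape)

image = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in image
key = logistic_key(image.shape)
cipher = image ^ key          # encryption
recovered = cipher ^ key      # decryption (XOR is its own inverse)
assert np.array_equal(recovered, image)
```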
Abstract: This research is concerned with determining the best type and method of irrigation, as well as the best cultivated area, to reduce the cost of producing a dunum of wheat in Iraq. It is based on data on wheat crop production costs taken from the Ministry of Planning / Central Statistical Organization for (12) Iraqi governorates (excluding Kurdistan, Nineveh, Salah al-Din and Anbar), with a sample size of (554), according to the cost survey carried out by the Ministry of Planning / Central Statistical Organization for 2017. The results of the research showed that there are statistically significant differences between production costs when using t
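As a hedged illustration of testing for such cost differences, the sketch below runs a two-sample t-test on made-up per-dunum cost figures for two irrigation methods; the data, the grouping, and the cost scale are hypothetical, not the survey's.

```python
# Hypothetical sketch: two-sample t-test for a difference in mean
# per-dunum wheat production cost between two irrigation methods.
# All figures are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cost_sprinkler = rng.normal(loc=420.0, scale=35.0, size=60)  # thousand IQD/dunum
cost_surface = rng.normal(loc=465.0, scale=40.0, size=60)

t_stat, p_value = stats.ttest_ind(cost_sprinkler, cost_surface, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```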
The widespread use of the Internet of Things (IoT) in different aspects of an individual's life, such as banking, wireless intelligent devices and smartphones, has led to new security and performance challenges under restricted resources. The Elliptic Curve Digital Signature Algorithm (ECDSA) is the most suitable choice for such environments due to its smaller encryption key size and changeable security-related parameters. However, major performance metrics such as area, power, latency and throughput are still customisable and depend on the design requirements of the device.
The present paper puts forward an enhancement for the throughput performance metric by p
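For background on the primitive itself, here is a hedged sketch of ECDSA signing and verification using Python's `cryptography` package; the curve choice and message are illustrative, and this shows the standard algorithm rather than the paper's hardware-level throughput enhancement.

```python
# Hypothetical sketch: plain ECDSA sign/verify with the `cryptography`
# package, as background for the primitive discussed; this is the standard
# algorithm, not the paper's throughput enhancement.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # NIST P-256
message = b"sensor reading: 23.5C"

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature on failure, returns None on success
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```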
A hierarchically porous zeolite composite was synthesized from NaX zeolite supported on a carbonaceous porous material produced by thermal treatment of plum stones, an agro-waste. This kind of inorganic-organic composite has improved performance because bulky molecules can easily access the micropores thanks to the short diffusion path to the active sites, which means a higher diffusion rate. The composite was prepared using a green synthesis method, including an eco-friendly polymer to attach the NaX zeolite to the carbon surface by phase inversion. The synthesized composite was characterized using X-ray diffraction spectrometry, Fourier transform infrared spectroscopy, field emission scanning electron microscopy, and energy-dispersive X-ray spectroscopy.
In this research, Bi2S3 thin films were prepared on glass substrates using the chemical spray pyrolysis method at a substrate temperature of 300 °C and a molarity of 0.015 M. The structural and optical properties of these thin films were studied; XRD analysis demonstrated that the Bi2S3 films are polycrystalline with a (031) preferred orientation and an orthorhombic structure. The optical properties were studied using the absorbance and transmission spectra of the films over the wavelength range (300-1100) nm. The study showed that the films have high transmission within the visible spectrum. The absorption coefficient, extinction coefficient and optical energy gap (Eg) were also calculated, and the films were found to have a direct energy gap.
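As a hedged sketch of how such optical constants are typically extracted, the code below computes the absorption coefficient from transmittance and estimates a direct gap from a Tauc plot, (alpha*h*nu)^2 versus h*nu; the film thickness, the spectra, and the fitting window are illustrative assumptions, not the paper's data.

```python
# Hypothetical sketch: absorption coefficient and direct-gap Tauc plot
# from a transmittance spectrum. Thickness, spectra and the linear-fit
# window are made-up illustrative values, not the paper's data.
import numpy as np

t_film = 250e-7                                 # film thickness in cm (assumed)
wavelength_nm = np.linspace(300, 1100, 200)     # spectral range from the abstract
transmittance = 0.2 + 0.7 / (1 + np.exp((640 - wavelength_nm) / 30))  # fake T

alpha = -np.log(transmittance) / t_film         # absorption coefficient, cm^-1
h_nu = 1239.84 / wavelength_nm                  # photon energy in eV

# Direct allowed transitions: (alpha*h*nu)^2 is linear near the band edge.
tauc = (alpha * h_nu) ** 2
edge = (h_nu > 1.7) & (h_nu < 2.2)              # assumed linear region
slope, intercept = np.polyfit(h_nu[edge], tauc[edge], 1)
print(f"estimated direct Eg = {-intercept / slope:.2f} eV")  # x-intercept
```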
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have too little data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming and error-prone. Every DL framework is usually fed a significant amount of labeled data from which to automatically learn representations. Ultimately, more data generally yields a better DL model, although performance is also application-dependent. This issue is the main barrier for