Accurate predictive tools for vapor-liquid equilibrium (VLE) calculation are always needed. A new method is introduced for VLE calculation that is simple to apply and gives very good results compared with previously used methods. It requires no physical properties; each binary system needs only two constants. The method can be applied to calculate VLE data for any binary system of any polarity or group family, provided the binary system does not form an azeotrope. The method has also been extended to cover a range of temperatures; this extension requires nothing beyond applying the proposed form with the same two constants per system. The method and its extension were applied to 56 binary mixtures comprising 1120 equilibrium data points with very good accuracy, and the temperature extension was applied to 13 binary systems at different temperatures, also with very good accuracy.
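The abstract does not reproduce the paper's two-constant form, so the sketch below illustrates the general shape of such a method with the classical two-parameter Margules activity-coefficient model plus modified Raoult's law; the constants A12 and A21, the vapor pressures, and the example values are illustrative assumptions, not the authors' correlation.

```python
import numpy as np

def margules_gamma(x1, A12, A21):
    """Two-parameter Margules activity coefficients for a binary mixture.
    A12 and A21 are the two system-specific constants (dimensionless)."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

def bubble_pressure(x1, A12, A21, p1_sat, p2_sat):
    """Modified Raoult's law: total pressure and vapor composition
    at a given liquid mole fraction x1."""
    g1, g2 = margules_gamma(x1, A12, A21)
    p = x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat
    y1 = x1 * g1 * p1_sat / p
    return p, y1

# Illustrative values only (not from the paper): two constants for the
# system, pure-component vapor pressures in kPa at a fixed temperature.
for x1 in np.linspace(0.1, 0.9, 5):
    p, y1 = bubble_pressure(x1, A12=0.5, A21=0.8, p1_sat=60.0, p2_sat=45.0)
    print(f"x1={x1:.2f}  P={p:6.2f} kPa  y1={y1:.3f}")
```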
In this research, a simple agricultural experiment was studied with respect to the effect of uncontrolled noise, arising from several causes including environmental conditions, on the observations of agricultural experiments. Discrete wavelet transforms were used, specifically the Coiflets transform of order 1 to 2 and the Daubechies transform of order 2 to 3, at two decomposition levels, (J-4) and (J-5), applying the hard, soft, and non-negative threshold rules. The wavelet transform methods were compared using real data from an experiment of 26 observations, with the application implemented in a MATLAB program. The researcher concluded that …
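A minimal sketch of this kind of wavelet-threshold denoising, written with the PyWavelets library in place of the study's MATLAB program; the universal-threshold heuristic and the noise estimate are common conventions assumed here, not the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="coif1", level=2, mode="soft"):
    """Denoise a 1-D series by thresholding its detail coefficients.
    mode is one of the rules compared in the study: 'hard', 'soft',
    or the non-negative garrote ('garrote' in PyWavelets)."""
    max_lvl = pywt.dwt_max_level(len(signal), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(signal, wavelet, level=min(level, max_lvl))
    # Universal threshold with noise estimated from the finest details
    # (a standard heuristic, assumed here for illustration).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [
        pywt.threshold(c, thresh, mode=mode) for c in coeffs[1:]
    ]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Illustrative run on 26 noisy observations, matching the experiment size.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 26)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 26)
for wav in ("coif1", "coif2", "db2", "db3"):
    smoothed = wavelet_denoise(y, wavelet=wav, mode="soft")
    print(wav, np.round(np.mean((smoothed - np.sin(2 * np.pi * x)) ** 2), 4))
```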
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size; mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the choice of resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining and learning tasks.
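The abstract does not specify the structure itself; the sketch below only illustrates the general idea of incrementally maintained summaries at several resolutions, where coarser bins are cheaper to query but less accurate. The class name, the 1-D binning scheme, and the bin widths are assumptions for illustration.

```python
from collections import defaultdict

class MultiResolutionAggregator:
    """Illustrative sketch: keep a running count and sum per bin at
    several resolutions; coarser resolutions trade accuracy for speed."""
    def __init__(self, bin_widths=(1.0, 4.0, 16.0)):
        self.bin_widths = bin_widths
        # one dict of bin -> [count, total] per resolution
        self.levels = [defaultdict(lambda: [0, 0.0]) for _ in bin_widths]

    def insert(self, x, value):
        # incremental update: O(number of resolutions) per instance
        for width, level in zip(self.bin_widths, self.levels):
            cell = level[int(x // width)]
            cell[0] += 1
            cell[1] += value

    def mean(self, x, resolution=0):
        width = self.bin_widths[resolution]
        count, total = self.levels[resolution][int(x // width)]
        return total / count if count else None

agg = MultiResolutionAggregator()
for i in range(1000):
    agg.insert(x=i * 0.1, value=i % 7)      # stream of instances
print(agg.mean(5.0, resolution=0), agg.mean(5.0, resolution=2))
```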
Fusarium wilt causes economic losses in tomatoes every year, so a variety of chemicals have been used to combat the disease. Pesticides have been effective in managing it but continue to damage the environment. Recently, eco-friendly approaches have been used to control plant diseases. This study aimed to achieve an environmentally safe solution using biological agents to induce systemic resistance in tomato plants against Fusarium wilt disease caused by Fusarium oxysporum f.sp. lycopersici (FOL) in the greenhouse. The pathogen (FOL) was molecularly confirmed, the biological agents were isolated from the Iraqi environment, and their effectiveness was tested and confirmed. Results showed that …
Acute lymphoblastic leukemia (ALL) is a cancer of the blood and bone marrow (the spongy tissue in the center of bone). In ALL, too many bone marrow stem cells develop into a type of white blood cell called lymphocytes, and these abnormal lymphocytes are not able to fight infection well. The aim of this study was to investigate possible links between E3 SUMO-protein ligase NSE2 [NSMCE2] and increased DNA damage in childhood patients with ALL. Laboratory investigations included hemoglobin (Hb), white blood cell (WBC) count, serum total protein, albumin, and globulin, in addition to serum total antioxidant activity (TAA), advanced oxidation protein products (AOPP), and E3 SUMO-protein ligase NSE2 [NSMCE2]. Blood samples …
Big data of different types, such as texts and images, are rapidly generated from the internet and other applications. Dealing with these data using traditional methods is not practical, since they vary in size, type, and processing-speed requirements. Data analytics has therefore become an important tool, because only meaningful information is analyzed and extracted, which makes it essential for big data applications to analyze and extract useful information. This paper presents several innovative methods that use data analytics techniques to improve the analysis process and data management. Furthermore, it discusses how the revolution of data analytics based on artificial intelligence algorithms might provide …
In this research, kernel estimator methods (nonparametric density estimators) were relied upon in estimating the two-response logistic regression. A comparison was made between the Nadaraya-Watson method and the local scoring algorithm, and the optimal smoothing parameter (bandwidth) λ was estimated by cross-validation and generalized cross-validation. The optimal bandwidth λ has a clear effect on the estimation process and plays a key role in smoothing the curve so that it approaches the real curve. The goal of using the kernel estimator is to modify the observations so that estimators can be obtained with characteristics close to those of the real parameters. The methods were applied to medical data for patients with chronic …
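A minimal sketch of the Nadaraya-Watson estimator with a leave-one-out cross-validation search for the bandwidth λ; the Gaussian kernel, the candidate grid, and the toy binary-response data are assumptions, and the study's local scoring comparison is not reproduced here.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Kernel-weighted local average with a Gaussian kernel."""
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)
    return (w @ y_train) / w.sum(axis=1)

def loo_cv_bandwidth(x, y, candidates):
    """Pick the λ minimizing leave-one-out squared prediction error."""
    best, best_err = None, np.inf
    for lam in candidates:
        err = 0.0
        for i in range(len(x)):
            mask = np.arange(len(x)) != i
            pred = nadaraya_watson(x[mask], y[mask], x[i:i + 1], lam)[0]
            err += (y[i] - pred) ** 2
        if err < best_err:
            best, best_err = lam, err
    return best

# Toy binary response whose success probability follows a logistic curve.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 3, 80))
y = (rng.uniform(size=80) < 1 / (1 + np.exp(-(x - 1.5)))).astype(float)
lam = loo_cv_bandwidth(x, y, candidates=np.linspace(0.1, 1.0, 10))
print("chosen bandwidth:", lam)
print("estimated P(y=1 | x=1.5):", nadaraya_watson(x, y, np.array([1.5]), lam)[0])
```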
Today, problems of spatial data integration have been further complicated by the rapid development of communication technologies and the increasing number of data sources available on the World Wide Web. Web-based geospatial data sources can be managed by different communities, and the data themselves can vary in quality, coverage, and purpose. Integrating such multiple geospatial datasets remains a challenge for geospatial data consumers. This paper concentrates on the integration of geometric and classification schemes for official data, such as Ordnance Survey (OS) national mapping data, with volunteered geographic information (VGI), such as data derived from the OpenStreetMap (OSM) project. Useful descriptions of …
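A minimal sketch of one common building block for this kind of integration, buffer-based geometric matching of candidate features between an official dataset and a VGI dataset using the Shapely library; the tolerance, the overlap rule, and the toy geometries are assumptions, and the paper's actual matching and classification-scheme alignment are not shown here.

```python
from shapely.geometry import LineString

# Toy stand-ins for an official (OS-style) road centerline and an
# OSM-style volunteered trace of the same road.
os_road = LineString([(0, 0), (10, 0.2), (20, 0.1)])
osm_road = LineString([(0, 0.5), (10, 0.6), (20, 0.4)])

# Match if the VGI geometry lies mostly within a tolerance buffer of
# the official geometry (tolerance chosen here for illustration).
tolerance = 1.0
buffer_zone = os_road.buffer(tolerance)
overlap = osm_road.intersection(buffer_zone).length / osm_road.length

print(f"overlap ratio: {overlap:.2f}")
if overlap > 0.8:
    print("candidate geometric match between the OS and OSM features")
```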
Abstract: The article aimed to formulate an MLX binary ethosome hydrogel for topical delivery to increase MLX solubility, facilitate dermal permeation, avoid systemic adverse events, and compare the permeation flux and efficacy with the classical type. MLX ethosomes were prepared by the hot method according to a Box-Behnken experimental design; the formulation was implemented according to 16 design formulas with four center points. The independent variables were soya lecithin, ethanol, and propylene glycol concentrations, and the dependent variables were vesicle size, dispersity index, encapsulation efficiency, and zeta potential. The design suggested the optimized formula (MLX−Ethos−OF) with the highest desirability to perform the …
Ground-based active optical sensors (GBAOS) have been used successfully in agriculture to predict crop yield potential (YP) early in the season and to improve N rates for optimal crop yield. However, the models were found to be weak or inconsistent due to environmental variation, especially rainfall. The objectives of the study were to evaluate whether GBAOS could predict YP across multiple locations, soil types, cultivation systems, and rainfall differences. This study was carried out from 2011 to 2013 on corn (Zea mays L.) in North Dakota, and in 2017 on potatoes in Maine. Six N rates were used on 50 sites in North Dakota and 12 N rates on two sites, one dryland and one irrigated, in Maine. The two active GBAOS used for this study were GreenSeeker and Holl…
The data preprocessing step is an important step in web usage mining because of the nature of log data, which are heterogeneous, unstructured, and noisy. Given the scalability and efficiency requirements of pattern-discovery algorithms, a preprocessing step must be applied. In this study, the sequential methodologies used in preprocessing web server log data are comprehensively evaluated and meticulously examined, with an emphasis on sub-phases such as data cleansing, user identification, and session identification.
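A minimal sketch of those sub-phases on a toy log: cleansing out embedded-resource requests, identifying users by the (IP, user agent) pair, and splitting sessions on a 30-minute inactivity timeout. The field layout, the timeout, and the filter rules are common conventions assumed for illustration, not this study's exact procedure.

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)   # widely used heuristic, assumed here
NOISE_SUFFIXES = (".css", ".js", ".png", ".gif", ".ico")

raw_log = [
    # (ip, timestamp, requested URL, user agent)
    ("10.0.0.1", "2024-05-01 09:00:00", "/index.html", "Mozilla"),
    ("10.0.0.1", "2024-05-01 09:00:01", "/style.css",  "Mozilla"),
    ("10.0.0.1", "2024-05-01 09:50:00", "/about.html", "Mozilla"),
    ("10.0.0.2", "2024-05-01 09:05:00", "/index.html", "Chrome"),
]

# 1) Data cleansing: drop requests for embedded resources.
clean = [r for r in raw_log if not r[2].endswith(NOISE_SUFFIXES)]

# 2) User identification: group requests by (ip, user agent).
users = {}
for ip, ts, url, agent in clean:
    users.setdefault((ip, agent), []).append((datetime.fromisoformat(ts), url))

# 3) Session identification: start a new session after 30 min of inactivity.
sessions = []
for user, hits in users.items():
    hits.sort()
    current = [hits[0]]
    for prev, nxt in zip(hits, hits[1:]):
        if nxt[0] - prev[0] > SESSION_TIMEOUT:
            sessions.append((user, current))
            current = []
        current.append(nxt)
    sessions.append((user, current))

for user, hits in sessions:
    print(user, [url for _, url in hits])
```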