The expansion of water-project implementation in Turkey and Syria is of great concern to workers in the field of water resources management in Iraq. Such expansion, in the absence of an agreement among the three riparian countries of the Tigris and Euphrates Rivers (Turkey, Syria, and Iraq), is expected to lead to a substantial reduction of water inflow into Iraqi territory. Accordingly, this study consists of two parts. The first part studies the changes in water inflow to Iraqi territory at the Turkish and Syrian borders from 1953 to 2009; the results indicate that the annual mean inflow of the Tigris River decreased from 677 m³/s to 526 m³/s after the operation of Turkish reservoirs, while the annual mean inflow of the Euphrates River decreased from 1006 m³/s to 627 m³/s after the operation of Syrian and Turkish reservoirs. The second part forecasts the monthly inflow and the water demand under the reduced-inflow data. The results show that the future inflow of the Tigris River is expected to decrease to 57%, reaching 301 m³/s, and that the Mosul reservoir will be able to supply only 64% of the downstream water requirements. Iraq's share of the Euphrates River inflow is expected to be 58%, so the future inflow will reach 290 m³/s, and the Haditha reservoir will be able to supply only 46% of the downstream water requirements, owing to the reduced inflow at the Iraqi border in the future.
Among metaheuristic algorithms, population-based algorithms are explorative search algorithms, superior to local search algorithms in exploring the search space for globally optimal solutions. However, their primary downside is low exploitative capability, which prevents the search-space neighborhood from being expanded toward more optimal solutions. The firefly algorithm (FA) is a population-based algorithm that has been widely used in clustering problems. However, FA suffers from premature convergence when no neighborhood search strategies are employed to improve the quality of clustering solutions in the neighborhood region while exploring global regions of the search space. On the …
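A minimal sketch of the standard firefly algorithm update rule the abstract discusses, here minimizing the sphere function rather than a clustering objective; the population size, attractiveness, absorption, and randomness parameters are conventional illustrative values, not the paper's settings.

```python
# Standard firefly algorithm (FA) sketch on a toy objective; all parameter
# values are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)      # objective to minimize (sphere function)

n, dim, iters = 25, 5, 200
beta0, gamma, alpha = 1.0, 1.0, 0.2        # attractiveness, light absorption, randomness
pop = rng.uniform(-5, 5, size=(n, dim))

for _ in range(iters):
    fit = f(pop)
    for i in range(n):
        for j in range(n):
            if fit[j] < fit[i]:            # firefly i is attracted to brighter firefly j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                fit[i] = f(pop[i])         # re-evaluate brightness after the move

print("best objective found:", f(pop).min())
```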
With the revolutionary expansion of the Internet and the worldwide spread of communication technology, the rapid growth of data volume drives the need for secure, robust, and reliable techniques built on effective algorithms. Many algorithms and techniques are available for data security. This paper presents a cryptosystem that combines several substitution-cipher algorithms with a circular-queue data structure. The two substitution techniques, a homophonic substitution cipher and a polyalphabetic substitution cipher, are merged in a single circular queue with four different keys for each of them, producing eight different outputs for …
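A minimal sketch (not the authors' implementation) of the core idea above: a circular queue rotates through several substitution keys so that successive blocks of plaintext are enciphered under different keys. For brevity a single Caesar-style substitution stands in for each cipher key; the key values and block size are invented for illustration.

```python
# Circular queue of substitution keys; each block dequeues the front key,
# uses it, and the queue rotates so the next block sees a different key.
from collections import deque
import string

ALPHABET = string.ascii_uppercase

def substitute(text: str, shift: int) -> str:
    """One substitution pass; stands in for one homophonic/polyalphabetic key."""
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in text if c in ALPHABET)

keys = deque([3, 7, 11, 19])  # four illustrative keys in a circular queue

def encrypt_blocks(plaintext: str, block_size: int = 8) -> list[str]:
    out = []
    for i in range(0, len(plaintext), block_size):
        k = keys[0]
        keys.rotate(-1)            # advance the circular queue
        out.append(substitute(plaintext[i:i + block_size], k))
    return out

print(encrypt_blocks("CIRCULARQUEUECIPHER"))
```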
Great scientific progress has led to widespread information. As information accumulates in large databases, it becomes important to revise and compile this vast amount of data in order to extract hidden information, or to classify data according to their relations with one another, so that they can be exploited for technical purposes.
Data mining (DM) is appropriate in this area, and this research applies the K-Means algorithm for clustering data, where the effect on the variables can be observed by changing the sample size (n) and the number of clusters (K) …
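A minimal sketch of K-Means clustering as referenced above, using scikit-learn on synthetic data; the sample size n and cluster count K are illustrative parameters to vary, not values from the study.

```python
# K-Means clustering sketch: vary n and K to observe their effect, as the
# abstract describes. Data here is synthetic toy data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, K = 300, 4                                                  # vary these
X = rng.normal(size=(n, 2)) + rng.integers(0, 5, size=(n, 1))  # toy clustered data

km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
print("inertia (within-cluster sum of squares):", km.inertia_)
print("cluster sizes:", np.bincount(km.labels_))
```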
This study sought to investigate the impacts of big data, artificial intelligence (AI), and business intelligence (BI) on firms' e-learning and business performance in the Jordanian telecommunications industry. After the samples were checked, a total of 269 were collected, and all of the information gathered throughout the investigation was analyzed using PLS software. The results show that a network of interconnections can improve both e-learning and corporate effectiveness. The research concluded that the integration of big data, AI, and BI has a positive impact on e-learning infrastructure development and organizational efficiency. The findings indicate that big data has a positive and direct impact on business performance, including Big …
Seismic inversion is applied to 3D seismic data to predict porosity for the carbonate Yamama Formation (Early Cretaceous) in an area located in southern Iraq. A workflow is designed to guide the manual inversion procedure. The first step is a model-based inversion that converts the 3D seismic data into 3D acoustic impedance, built on a low-frequency model and well data, with statistical control at each inversion stage. Then, the 3D acoustic-impedance volume, the seismic data, and the porosity well data are trained with multi-attribute transforms to find the best statistical attribute for inverting the direct point measurements of porosity at the wells into a 3D porosity distribution …
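A heavily simplified, single-trace sketch of the model-based inversion idea described above: a low-frequency impedance model is perturbed until a synthetic trace (reflectivity convolved with a wavelet) matches the observed trace. The wavelet, trace, and impedance values are synthetic placeholders, not the study's data, and the smoothed true model stands in for a low-frequency model built from well data.

```python
# Toy model-based inversion: impedance -> reflectivity -> synthetic trace,
# then least-squares misfit minimization starting from a low-frequency model.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import uniform_filter1d

def ricker(f=25.0, dt=0.002, n=41):
    """Simple Ricker wavelet; frequency and length are illustrative assumptions."""
    t = (np.arange(n) - n // 2) * dt
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-((np.pi * f * t) ** 2))

def forward(z, w):
    """Convolutional forward model from acoustic impedance z."""
    r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])   # reflection coefficients
    return np.convolve(r, w, mode="same")

wav = ricker()
z_true = 6000 + np.cumsum(np.random.default_rng(1).normal(0, 50, 120))  # toy impedance
seis = forward(z_true, wav)                   # stands in for the observed seismic trace
z0 = uniform_filter1d(z_true, 51)             # stands in for the low-frequency model

res = minimize(lambda z: np.sum((forward(z, wav) - seis) ** 2),
               z0, method="L-BFGS-B", options={"maxiter": 200})
print("data misfit after inversion:", res.fun)
```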
In real situations, all observations and measurements are not exact numbers but are more or less non-exact, also called fuzzy. In this paper, we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimators. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the "Newton-Raphson" and "Expectation-Maximization" techniques. In addition, a Monte Carlo simulation study compares the resulting estimates of the parameters and reliability function …
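A minimal sketch of Newton-Raphson maximum-likelihood estimation for the inverse Weibull distribution, with density f(x; a, b) = a·b·x^(−(b+1))·exp(−a·x^(−b)). For brevity this fits crisp (non-fuzzy) data; the paper's fuzzy-data likelihood is not reproduced here, and the starting value is an assumption.

```python
# Newton-Raphson on the profile score of the inverse Weibull shape b,
# with the scale a profiled out analytically from dL/da = 0.
import numpy as np

rng = np.random.default_rng(0)
x = 1.0 / rng.weibull(2.0, size=500)    # if 1/X ~ Weibull(b), X is inverse Weibull
lx = np.log(x)

def score(b):
    a = len(x) / np.sum(x ** -b)        # a-hat(b), the profiled scale
    return len(x) / b - lx.sum() + a * np.sum(x ** -b * lx)

b = 1.0                                 # starting value (assumed)
for _ in range(50):                     # Newton-Raphson, numerical derivative
    g = score(b)
    dg = (score(b + 1e-6) - g) / 1e-6
    step = g / dg
    b -= step
    if abs(step) < 1e-10:
        break

a = len(x) / np.sum(x ** -b)
print(f"shape b = {b:.3f}, scale a = {a:.3f}")  # b should land near 2.0
```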
This research compared two methods for estimating the four parameters of the compound exponential Weibull-Poisson distribution: the maximum likelihood method and the Downhill Simplex algorithm. Two data cases were considered: the first assumed the original (uncontaminated) data, while the second assumed data contamination. Simulation experiments were conducted for different sample sizes and initial parameter values and under different levels of contamination. The Downhill Simplex algorithm was found to be the best method for estimating the parameters, the probability function, and the reliability function of the compound distribution for both natural and contaminated data.
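A minimal sketch of Downhill Simplex (Nelder-Mead) maximum-likelihood fitting. The compound exponential Weibull-Poisson pdf is not reproduced here; scipy's exponentiated-Weibull density stands in purely to illustrate the optimization procedure, with the location parameter fixed at zero for stability, so only three of its parameters are fitted.

```python
# Nelder-Mead (Downhill Simplex) minimization of a negative log-likelihood;
# the stand-in density and all starting values are illustrative assumptions.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(42)
data = stats.exponweib.rvs(a=2.0, c=1.5, loc=0.0, scale=3.0, size=400, random_state=rng)

def negloglik(theta):
    a, c, scale = theta
    if min(a, c, scale) <= 0:
        return np.inf                   # keep the simplex inside the valid region
    return -stats.exponweib.logpdf(data, a, c, loc=0.0, scale=scale).sum()

res = minimize(negloglik, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print("estimated (a, c, scale):", np.round(res.x, 3))
```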
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, but unfortunately, many applications have too little data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with broad background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically. Ultimately, more data generally yields a better DL model, although performance is also application-dependent. This issue is the main barrier for …
Background: Obesity typically results from a variety of contributing causes and factors, including genetics and lifestyle choices. Described as an excessive accumulation of body fat, it is a chronic disorder that combines pathogenic environmental and genetic factors. The objective of the current study was therefore to investigate the association between the FTO gene rs9939609 polymorphism and obesity risk, explaining the relationship between the fat mass and obesity-associated gene (FTO) rs9939609 polymorphism and obesity in adults. Methods: We identified research exploring the association between obesity risk and the FTO gene rs9939609 polymorphism, and combined the modified odds ratios (OR) for total groups and subgro…
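A minimal sketch of the odds-ratio pooling step that a meta-analysis like this performs: fixed-effect inverse-variance weighting of per-study log odds ratios. The ORs and confidence intervals below are invented placeholders, not results from the studies the abstract combines.

```python
# Fixed-effect inverse-variance pooling of log odds ratios; input values are
# hypothetical per-study (OR, lower 95% CI, upper 95% CI) triples.
import numpy as np

or_ci = np.array([
    [1.35, 1.10, 1.66],
    [1.22, 0.98, 1.52],
    [1.48, 1.21, 1.81],
])
log_or = np.log(or_ci[:, 0])
se = (np.log(or_ci[:, 2]) - np.log(or_ci[:, 1])) / (2 * 1.96)  # SE from CI width
w = 1.0 / se ** 2                                              # inverse-variance weights

pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}"
      f"-{np.exp(pooled + 1.96 * pooled_se):.2f})")
```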