Generally, direct measurement of the soil compression index (Cc) is expensive and time-consuming. To save time and effort, indirect methods of obtaining Cc can be an inexpensive option. These indirect methods are usually based on a correlation between Cc and more easily measured variables such as the liquid limit, soil density, and natural water content. This study used ANFIS and regression methods to obtain Cc indirectly. To achieve this aim, 177 undisturbed samples were collected from cohesive soils in Sulaymaniyah Governorate, Iraq. The results indicated that the ANFIS models outperformed the regression method in estimating Cc, with R2 values of 0.66 and 0.48 for the ANFIS and regression models, respectively. This work is an effort to leverage the advantages of machine-learning techniques to build a robust and cost-effective model for Cc estimation by designers, decision makers, and stakeholders.
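The regression side of such a study can be sketched in a few lines. The snippet below fits a multiple linear regression of Cc on the three predictors named in the abstract; the data, coefficients, and units are synthetic placeholders, not the study's measurements.

```python
import numpy as np

# Illustrative sketch only: fitting Cc = b0 + b1*LL + b2*rho + b3*w on
# synthetic data. LL = liquid limit, rho = density, w = natural water
# content (names follow the abstract); the toy target below loosely
# mimics a Terzaghi-style LL correlation and is NOT the study's data.
rng = np.random.default_rng(0)
n = 177                             # sample count matching the study
LL = rng.uniform(25, 60, n)         # liquid limit, %
rho = rng.uniform(1.6, 2.1, n)      # density, g/cm^3
w = rng.uniform(15, 40, n)          # natural water content, %
Cc = 0.009 * (LL - 10) + 0.02 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), LL, rho, w])
beta, *_ = np.linalg.lstsq(X, Cc, rcond=None)  # ordinary least squares
pred = X @ beta

# Coefficient of determination, the R2 metric the abstract reports.
ss_res = np.sum((Cc - pred) ** 2)
ss_tot = np.sum((Cc - Cc.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```

An ANFIS model would replace the linear form with fuzzy rules fitted to the same predictors; the R2 comparison is computed the same way.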
In the petroleum industry, early knowledge of the pore pressure gradient is the basis of well design. Extracting this information is fairly direct when the pore pressure gradient equals the normal gradient; the matter becomes more complex when it deviates from that limit, a condition called "abnormal pore pressure". If this variable is not taken into consideration, many drilling problems may occur that might lead to loss of the entire hole. Several methods exist for estimating the pore pressure gradient; in this study, Eaton's method was selected to extract the underground pressure program using drilling data (normalized rate of penetration) and log data (sonic and density logs). The results show that an abnormally high press
The study aims to build a water quality index that fits Iraqi aquatic systems and reflects the environmental reality of Iraqi water. The developed Iraqi Water Quality Index (IQWQI) includes physical and chemical components. To build the IQWQI, the Delphi method was used to consult local and international experts on water quality indices about the most important parameters to include in the index and the weight established for each parameter. Of the data obtained in this study, 70% were used for building the model and 30% for evaluating it. Multiple scenarios were applied to the model inputs to study the effect of increasing the number of parameters. The model was built 4 by 4 until it reached 17 parameters
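The abstract does not print the IQWQI aggregation formula, but Delphi-weighted indices are commonly computed as a weighted mean of parameter sub-indices. The sketch below shows that generic pattern; the parameter names, weights, and sub-index values are hypothetical.

```python
def weighted_wqi(sub_indices, weights):
    """Weighted arithmetic-mean aggregation of parameter sub-indices.

    Shown as a common pattern for expert-weighted water quality
    indices; this is an assumed form, not the published IQWQI formula.
    """
    total_w = sum(weights.values())
    return sum(weights[p] * sub_indices[p] for p in weights) / total_w

# Hypothetical parameters, expert weights, and sub-index scores.
weights = {"pH": 4, "DO": 5, "BOD": 3, "TDS": 2}
sub_indices = {"pH": 85.0, "DO": 70.0, "BOD": 60.0, "TDS": 90.0}
print(round(weighted_wqi(sub_indices, weights), 1))
```

Adding parameters "4 by 4", as the abstract describes, amounts to extending both dictionaries and re-evaluating the index under each scenario.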
This research discusses the comparison between the partial least squares regression model and tree regression. These models cover two types of statistical methods: the first is the "parametric statistics" of partial least squares, which can be adopted when the number of variables is greater than the number of observations as well as when the number of observations is larger than the number of variables; the second is the "nonparametric statistics" represented by tree regression, which partitions the data hierarchically. The regression models of the two types were estimated and then compared, the comparison between these methods being made according to the Mean Square Error
Recently, Tobit Quantile Regression (TQR) has emerged as an important tool in statistical analysis. In order to improve parameter estimation in TQR, we propose a Bayesian hierarchical model with a double adaptive elastic net technique and a Bayesian hierarchical model with an adaptive ridge regression technique.
In the double adaptive elastic net technique, we assume different penalization parameters for penalizing different regression coefficients in both parameters λ1 and λ2; likewise, in the adaptive ridge regression technique, we assume different penalization parameters for penalizing different regression coefficients.
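The abstract does not print the penalized objective, but the coefficient-specific penalties it describes take the following general form (a sketch of the standard adaptive penalties, with the subscript j indexing coefficients):

```latex
P_{\text{EN}}(\beta) = \sum_{j=1}^{p} \lambda_{1j}\,|\beta_j|
                     + \sum_{j=1}^{p} \lambda_{2j}\,\beta_j^{2},
\qquad
P_{\text{ridge}}(\beta) = \sum_{j=1}^{p} \lambda_{j}\,\beta_j^{2}
```

Each coefficient β_j thus receives its own penalization parameters (λ1j, λ2j) in the double adaptive elastic net, or its own λj in the adaptive ridge, rather than a single shared penalty for all coefficients.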
The searching process using a binary codebook combining the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e., a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method of rotating each binary code word in this codebook from 90° to 270° in steps of 90°. Then, we classified each code word, depending on its angle, into four types of binary codebooks (i.e., Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
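The rotation step can be sketched with NumPy's `rot90`. The snippet below generates the three rotated variants (90°, 180°, 270°) of one binary code word; representing the word as a 4×4 bit block is an assumption for illustration, since the paper's block size is not stated here.

```python
import numpy as np

def rotations(block):
    """Return the binary block rotated by 90, 180 and 270 degrees."""
    return [np.rot90(block, k) for k in (1, 2, 3)]

# Hypothetical 4x4 binary code word with its 1-bits in the top-left
# corner; rotating it relocates that corner, which is the property the
# angle-based codebook classification exploits.
word = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=np.uint8)

for k, rot in enumerate(rotations(word), start=1):
    print(f"{90 * k} deg:\n{rot}")
```

Grouping code words by such orientation classes lets the encoder search only the sub-codebook matching an input block's angle instead of the full codebook, which is where the claimed speed-up comes from.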
Logistic regression is considered one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X), where the dependent variable is a binary response taking two values (one when a specific event occurred and zero when that event did not happen), such as (injured and uninjured, married and unmarried). A large number of explanatory variables leads to the emergence of the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used in estimating a binary-response logistic regression model by adopting the Jackknife
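The jackknife step mentioned above can be sketched as a delete-one resampling loop around a logistic fit: refit the model with each observation left out, then combine the leave-one-out coefficients into a bias-corrected estimate and a standard error. Everything below is illustrative synthetic data, and scikit-learn's default L2-penalized fit stands in for the maximum likelihood / ridge estimators the abstract names.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary-response data: y depends on a single predictor
# through a logistic link with true slope 1.5 (illustrative values).
rng = np.random.default_rng(2)
n = 60
x = rng.standard_normal((n, 1))
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x[:, 0])))
y = (rng.uniform(size=n) < p).astype(int)

def fit_coef(X, y):
    """Slope coefficient of a (default L2-penalized) logistic fit."""
    return LogisticRegression().fit(X, y).coef_[0, 0]

full = fit_coef(x, y)
# Delete-one jackknife: refit n times, each time dropping one row.
loo = np.array([fit_coef(np.delete(x, i, axis=0), np.delete(y, i))
                for i in range(n)])
jack_est = n * full - (n - 1) * loo.mean()   # bias-corrected estimate
jack_se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
print(f"slope = {jack_est:.2f} +/- {jack_se:.2f}")
```

The same loop applies unchanged when the inner fit is a ridge-penalized logistic regression, which is the combination the abstract describes for handling multicollinearity.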