In this study, we compared the LASSO and SCAD methods, two penalized approaches for estimating models in partial quantile regression. The Nadaraya-Watson kernel was used to estimate the nonparametric part, and the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but according to the mean squared error (MSE) criterion, the SCAD method was the best after the missing data were estimated using the mean imputation method.
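As an illustration of the estimation machinery mentioned in the abstract above, the following sketch implements a Nadaraya-Watson estimator with a Gaussian kernel and a Silverman-style rule-of-thumb bandwidth on simulated data. It is a minimal example under assumed settings (the regression function, sample size, and kernel choice are hypothetical), not the study's actual implementation.

```python
import numpy as np

def rule_of_thumb_bandwidth(x):
    """Silverman-style rule of thumb: h = 1.06 * sigma * n^(-1/5)."""
    return 1.06 * np.std(x, ddof=1) * len(x) ** (-0.2)

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    # Pairwise scaled distances between evaluation and training points.
    u = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * u ** 2)            # Gaussian kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 200)

h = rule_of_thumb_bandwidth(x)
y_hat = nadaraya_watson(x, y, np.array([0.25, 0.75]), h)
```

The estimator smooths the noisy observations, so `y_hat` should track sin(2*pi*x) at the two evaluation points, with some attenuation from the bandwidth.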
Due to the increase of information on the World Wide Web (WWW), the question of how to extract new and useful knowledge from log files has gained great interest among researchers in data mining and knowledge discovery.
Web mining, a subset of data mining, is divided into three particular areas: web content mining, web structure mining, and web usage mining. This paper is interested in the server log file, which belongs to the third category (web usage mining). This file is analyzed according to the suggested algorithm to extract the behavior of the user. Knowing the behavior comes from knowing the complete path taken by a specific user.
Extracting these types of knowledge requires many KDD
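The path-extraction idea above, recovering a user's complete navigation path from server-log records, can be sketched as follows. The log records, field layout, and user identifiers here are hypothetical stand-ins; a real server log would first be parsed from a format such as the Common Log Format.

```python
from collections import defaultdict

# Toy server-log records: (user_id, timestamp, requested_page).
log = [
    ("u1", 3, "/cart"), ("u2", 1, "/home"),
    ("u1", 1, "/home"), ("u1", 2, "/products"),
    ("u2", 2, "/about"),
]

def extract_paths(records):
    """Group records by user and order them by time to recover each
    user's complete navigation path."""
    by_user = defaultdict(list)
    for user, ts, page in records:
        by_user[user].append((ts, page))
    return {u: [page for _, page in sorted(visits)]
            for u, visits in by_user.items()}

paths = extract_paths(log)
# paths["u1"] == ["/home", "/products", "/cart"]
```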
One of the costliest problems facing hydrocarbon production in unconsolidated sandstone reservoirs is the production of sand once hydrocarbon production starts. A sanding-onset prediction model is very important for deciding on future sand control, including whether or when sand control should be used. This research developed an easy-to-use computer program to determine the onset of sanding sites in the drainage area. The model is based on estimating the critical pressure drawdown that occurs when sand production begins. The outcomes were drawn as a function of free sand production against the critical flow rates for reservoir pressure decline. The results show that the pressure drawdown required to
Governmental establishments maintain historical data on job applicants for future analysis: prediction, improvement of benefits and profits, and development of organizations and institutions. In e-government, a decision can be made about job seekers after mining their information, which will lead to beneficial insight. This paper proposes the development and implementation of a system that predicts the job appropriate to an applicant's skills using web content classification algorithms (LogitBoost, J48, PART, Hoeffding Tree, Naive Bayes). Furthermore, the results of the classification algorithms are compared based on data sets called "job classification data" sets. Experimental results indicate
Steganography is a technique for hiding a secret message within a different multimedia carrier so that the secret message cannot be identified. The goals of steganography techniques include improvements in imperceptibility, information-hiding capacity, security, and robustness. Despite the numerous secure methodologies that have been introduced, there are ongoing attempts to develop these techniques to make them more secure and robust. This paper introduces a color-image steganographic method based on a secret map, namely the 3-D cat map. The proposed method aims to embed data using a secure structure of chaotic steganography, ensuring better security. Rather than using the complete image for data hiding, the selection of
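The paper's secret map is a 3-D cat map. As a simplified illustration of how a cat map can scatter embedding positions across an image, the classic 2-D Arnold cat map can be sketched as follows; the 2-D form and the grid size are stand-ins for illustration, not the proposed 3-D construction.

```python
def cat_map(x, y, n):
    """One step of the 2-D Arnold cat map on an n x n grid:
    (x, y) -> (x + y, x + 2y) mod n."""
    return (x + y) % n, (x + 2 * y) % n

def embedding_positions(n, steps, start=(1, 1)):
    """Generate a chaotic-looking sequence of pixel positions by
    iterating the map from a secret starting point."""
    pos = start
    seq = []
    for _ in range(steps):
        pos = cat_map(*pos, n)
        seq.append(pos)
    return seq

positions = embedding_positions(8, 5)
```

Because the map's matrix has determinant 1, each step is a bijection of the grid, so iterating it permutes pixel positions without collisions.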
Wellbore instability is a significant problem faced during drilling operations and causes loss of circulation, caving, stuck pipe, and well kicks or blowouts. These problems take extra time to treat and increase nonproductive time (NPT). This paper aims to review the factors that influence wellbore stability and the methods that have been developed to mitigate them. Based on a current survey, the factors that affect wellbore stability are far-field stress, rock mechanical properties, natural fractures, pore pressure, wellbore trajectory, drilling-fluid chemistry, mobile formations, naturally over-pressured shale collapse, mud weight, temperature, and time. Also, the most suitable ways to reduce well
Abstract
Due to the lack of a previous statistical study of the behavior of payments, specifically health insurance payments, which represent the largest proportion of payments in general insurance companies in Iraq, this study was selected and applied in the Iraqi Insurance Company.
In order to find a convenient model representing the health insurance payments, we initially fitted two probability models using the EasyFit software: the first is a single lognormal distribution for the whole sample, and the other is a compound Weibull distribution for the two sub-samples (small payments and large payments); we focused on the compoun
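A minimal sketch of the split-sample fitting idea above, one model for the whole sample versus separate models for small and large payments, is shown below. It uses simulated payment data, and it substitutes lognormal sub-models (whose maximum-likelihood fit is closed-form) for the paper's EasyFit-based compound Weibull fit, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated claim data: many small payments plus a heavier tail of large ones.
payments = np.concatenate([
    rng.lognormal(mean=6.0, sigma=0.5, size=900),   # small payments
    rng.lognormal(mean=9.0, sigma=0.8, size=100),   # large payments
])

def lognormal_mle(x):
    """Closed-form lognormal MLE: mean and std of log(x)."""
    logs = np.log(x)
    return logs.mean(), logs.std()

# Single model for the whole sample.
mu_all, sigma_all = lognormal_mle(payments)

# Compound approach: split at a threshold, model each sub-sample separately.
threshold = np.quantile(payments, 0.9)
small = payments[payments <= threshold]
large = payments[payments > threshold]
params_small = lognormal_mle(small)
params_large = lognormal_mle(large)
```

The compound approach lets each sub-sample keep its own scale and shape, which is the motivation for splitting small from large payments.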
Necessary and sufficient conditions for the operator equation X + A*X^(-n)A = I to have a real positive definite solution X are given. Based on these conditions, some properties of the operator A, as well as the relation between the solutions X and A, are given.
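One standard way to approximate a positive definite solution of an equation of this type is a fixed-point iteration. The sketch below iterates X_{k+1} = I - A*X_k^(-1)A (the n = 1 case) for a small hypothetical matrix A whose norm is small enough for the iteration to contract; it is illustrative only and not part of the paper's analysis.

```python
import numpy as np

# Hypothetical real 2x2 operator A with small norm so the iteration contracts.
A = np.array([[0.3, 0.1],
              [0.0, 0.2]])
I = np.eye(2)

# Fixed-point iteration X_{k+1} = I - A* X_k^{-1} A, started from X_0 = I.
X = I.copy()
for _ in range(100):
    X = I - A.T @ np.linalg.inv(X) @ A

# Residual of the equation and eigenvalues of the (symmetrized) solution.
residual = np.linalg.norm(X + A.T @ np.linalg.inv(X) @ A - I)
eigenvalues = np.linalg.eigvalsh((X + X.T) / 2)
```

A small residual and strictly positive eigenvalues indicate the iterate is (numerically) a positive definite solution for this particular A.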
In this paper, two ranking functions are employed to treat the fuzzy multiple-objective (FMO) programming model, using two kinds of membership function: the first is the ordinary trapezoidal fuzzy (TF) membership function, and the second is a weighted trapezoidal fuzzy membership function. When the objective function is fuzzy, the fuzzy model should be transformed and shrunk to a traditional model; finally, these models are solved to determine which one is better.
Several authors have used ranking functions for solving linear programming problems. This paper proposes two ranking functions for solving fuzzy linear programming with trapezoidal fuzzy numbers and compares the two approaches. The proposed approach is very easy to understand and to apply; the data were chosen from the General Company for Dairy Distribution (Canon Company) to test and compare the approaches. This paper shows that the second proposed approach gives better results and satisfies the minimal cost, using the Q.M. software.
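A common form of ranking function maps a trapezoidal fuzzy number (a, b, c, d) to a crisp value by averaging its four defining points, so fuzzy costs can be compared directly. The sketch below shows an ordinary and a weighted variant; the weights and the fuzzy costs are illustrative assumptions, not necessarily the functions or data used in the paper.

```python
def rank_trapezoidal(a, b, c, d):
    """Ordinary ranking of a trapezoidal fuzzy number (a, b, c, d):
    the plain average of its four defining points."""
    return (a + b + c + d) / 4

def rank_trapezoidal_weighted(a, b, c, d, w=(1, 2, 2, 1)):
    """Weighted variant that emphasises the core [b, c] of the number.
    The weights here are illustrative."""
    total = a * w[0] + b * w[1] + c * w[2] + d * w[3]
    return total / sum(w)

# Two hypothetical fuzzy costs; the smaller rank picks the cheaper option.
cost1 = (2, 4, 6, 8)
cost2 = (3, 4, 5, 6)
```

Once every fuzzy coefficient is ranked, the fuzzy linear program reduces to an ordinary (crisp) one that standard solvers can handle.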
The huge evolution of information technologies, especially in the last few decades, has produced an increase in the volume of data on the World Wide Web, which is still growing significantly. Retrieving the relevant information on the Internet, or from any data source, with a query created from a few words has become a big challenge. To overcome this, query expansion (QE) plays an important role in improving information retrieval (IR): the user's original query is recreated as a new query by appending new related terms with the same importance. One of the problems of query expansion is choosing suitable terms. This problem leads to another challenge: how to retrieve the important documents with high precision and high recall
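The query-expansion step described above can be sketched as a simple pseudo-relevance feedback loop: retrieve documents matching the original query terms, then append the most frequent co-occurring terms. The corpus and the frequency-based term selection below are toy assumptions for illustration, not the paper's method.

```python
from collections import Counter

# Tiny toy corpus; a real system would query an indexed collection.
docs = [
    "solar energy panels convert sunlight into electricity",
    "wind energy turbines generate renewable electricity",
    "fossil fuels cause pollution and emissions",
]

def expand_query(query, docs, k=2):
    """Pseudo-relevance feedback: find documents containing any original
    query term, then append the k most frequent unseen terms from them."""
    q_terms = set(query.split())
    matched = [d for d in docs if q_terms & set(d.split())]
    counts = Counter(t for d in matched for t in d.split()
                     if t not in q_terms)
    return query.split() + [t for t, _ in counts.most_common(k)]

expanded = expand_query("energy", docs)
```

Here the query "energy" matches the first two documents, so their frequent terms (e.g. "electricity") are appended, broadening the query toward related documents.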