Statistical methods are used in many fields of science, and in research the analysis technique is chosen according to the nature and objectives of the study. One such technique is building statistical models through regression, which is among the most important statistical methods for studying the relationship between a dependent variable (also called the response variable) and other variables, called covariates. This research describes the estimation of the partial linear regression model, together with the estimation of values that are missing at random (MAR). For the parametric part, a method has been developed to estimate the parameters of the partial linear regression model using weighted estimators as well as the suggested method (EMBW). The two methods were compared in a simulation study using three sample sizes (n = 100, 150, 200) and three different values for the error variance with zero mean, and the proposed method (EMBW) was found to be superior to the weighted-estimator method.
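As a rough illustration of the weighted-estimator idea for MAR responses (this is a generic inverse-probability-weighted least squares sketch, not the paper's EMBW method; all data, names, and parameter values below are made up):

```python
import numpy as np

# Toy partially linear model y = x*beta + g(t) + e, with some responses
# missing at random: the chance of observing y depends only on t (observed).
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
t = rng.uniform(size=n)
g = np.sin(2 * np.pi * t)                        # nonparametric component
y = 2.0 * x + g + rng.normal(scale=0.3, size=n)  # true slope = 2.0

# MAR mechanism and its (here, known) observation probabilities
p_obs = 1.0 / (1.0 + np.exp(-(2.0 - 2.0 * t)))
observed = rng.uniform(size=n) < p_obs

# Weighted estimator on complete cases, weights = 1 / P(observed)
X = np.column_stack([np.ones(n), x])
w = 1.0 / p_obs
Xo, yo, W = X[observed], y[observed], w[observed]
beta_hat = np.linalg.solve(Xo.T @ (W[:, None] * Xo), Xo.T @ (W * yo))
print(beta_hat[1])   # slope estimate, close to the true value 2.0
```

The weights restore the representativeness of the complete cases; in practice the observation probabilities would themselves be estimated.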
Dimension reduction and variable selection are very important topics in multivariate statistical analysis. When two or more predictor variables are linked by complete or incomplete regression relationships, a multicollinearity problem occurs; it violates one of the basic assumptions of the ordinary least squares method and yields incorrect estimates.
Several methods have been proposed to address this problem, including partial least squares (PLS), which is used to reduce the dimension of the regression analysis. It applies linear transformations that convert a set of highly correlated variables into a new set of independent, uncorrelated components.
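A minimal sketch of the PLS idea, assuming the single-response NIPALS variant (the function and variable names are illustrative, not from the paper): correlated predictors are compressed into a few orthogonal score vectors before regressing.

```python
import numpy as np

def pls1(X, y, n_comp):
    """Single-response PLS (NIPALS): extract n_comp orthogonal components."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                      # direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xc @ w                         # score vector
        p = Xc.T @ t / (t @ t)             # loading
        q = (yc @ t) / (t @ t)
        Xc = Xc - np.outer(t, p)           # deflate: remove explained part
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q) # coefficients on centered data

rng = np.random.default_rng(0)
n = 100
z = rng.normal(size=n)
# three nearly collinear predictors driven by the same latent factor
X = np.column_stack([z + 0.01 * rng.normal(size=n) for _ in range(3)])
y = z + 0.1 * rng.normal(size=n)
beta = pls1(X, y, n_comp=1)
pred = (X - X.mean(0)) @ beta + y.mean()
print(np.corrcoef(pred, y)[0, 1])          # close to 1 despite collinearity
```

One component suffices here because all three predictors share a single latent factor; ordinary least squares on the same data would be numerically unstable.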
Interval methods for verified integration of initial value problems (IVPs) for ODEs have been used for more than 40 years. For many classes of IVPs, these methods have the ability to compute guaranteed error bounds for the flow of an ODE, where traditional methods provide only approximations to a solution. Overestimation, however, is a potential drawback of verified methods. For some problems, the computed error bounds become overly pessimistic, or integration even breaks down. The dependency problem and the wrapping effect are particular sources of overestimations in interval computations. Berz (see [1]) and his co-workers have developed Taylor model methods, which extend interval arithmetic with symbolic computations. The latter is an ef
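The dependency problem mentioned above can be seen with a toy interval class (this is plain interval arithmetic, not the Taylor model machinery of Berz et al.):

```python
# Naive interval subtraction: because it cannot know that both operands
# are the same variable, the result over-estimates the true range.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
print(x - x)   # [-1.0, 1.0], although the true range of x - x is {0}
```

Taylor model methods reduce exactly this kind of overestimation by carrying symbolic dependence on the variables through the computation.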
The assignment model is a mathematical model that expresses a real problem faced by factories and companies: allocating jobs or workers to machines so as to raise efficiency or profit to the highest possible level, or to reduce cost or time as far as possible, so that the appropriate decision can be made. In this research, the labeling method was used to solve a fuzzy assignment problem with real data obtained from the Diwaniya tire factory. The data included two factors, efficiency and cost, and the problem was solved manually through a number of iterations until the optimal solution was reached.
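The structure of a cost-minimizing assignment problem can be shown with a tiny brute-force enumeration (the labeling method used in the paper is a different, iterative technique; the cost matrix below is made up for illustration):

```python
from itertools import permutations

# cost[w][m] = cost of assigning worker w to machine m (made-up values)
cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]

# each permutation p assigns worker w to machine p[w]; pick the cheapest
best = min(permutations(range(3)),
           key=lambda p: sum(cost[w][p[w]] for w in range(3)))
total = sum(cost[w][best[w]] for w in range(3))
print(best, total)   # (1, 0, 2) 9
```

Enumeration is only feasible for tiny instances; practical methods (Hungarian algorithm, labeling) find the same optimum in polynomial time.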
Most medical datasets suffer from missing data, due to the expense of some tests or to human error while recording them. This issue degrades the performance of machine learning models because the values of some features are missing, so specific methods are needed to impute them. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes
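A simple baseline for the imputation task described above is to fill each missing entry with its column mean (the paper's ISSA instead searches for imputed values with the salp swarm algorithm; the rows below are made-up PIDD-like values):

```python
import numpy as np

# columns: pregnancies, glucose, blood pressure (made-up rows, NaN = missing)
data = np.array([[6., 148.,    72.],
                 [1., np.nan,  66.],
                 [8., 183.,    np.nan]])

col_means = np.nanmean(data, axis=0)               # ignore NaNs per column
filled = np.where(np.isnan(data), col_means, data) # substitute where missing
print(filled)
```

Mean imputation ignores relationships between features, which is why search-based approaches such as ISSA can recover values that give better downstream classification.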
In data transmission, a change in a single bit of the received data may lead to misunderstanding or to a disaster. Every bit of the sent information has high priority, especially bits carrying information such as the receiver's address. Detecting every single-bit change is therefore a key issue in the field of data transmission.
The ordinary single-parity detection method can detect an odd number of errors efficiently but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, showed better results but still failed to cope with an increasing number of errors.
Two novel methods were suggested to detect binary bit-change errors when transmitting data over a noisy medium. Those methods were: 2D-Checksum me
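The two-dimensional parity scheme referred to above can be sketched as follows: a parity bit is kept per row and per column, so a single flipped bit is located at the intersection of the row and column whose parities changed (the data block is made up for illustration):

```python
def parities(rows):
    """Per-row and per-column even-parity bits of a binary matrix."""
    row_par = [sum(r) % 2 for r in rows]
    col_par = [sum(c) % 2 for c in zip(*rows)]
    return row_par, col_par

sent = [[1, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 0]]
rp, cp = parities(sent)

received = [row[:] for row in sent]
received[1][2] ^= 1                  # one bit flipped in transit
rp2, cp2 = parities(received)

bad_row = [i for i, (a, b) in enumerate(zip(rp, rp2)) if a != b]
bad_col = [j for j, (a, b) in enumerate(zip(cp, cp2)) if a != b]
print(bad_row, bad_col)              # [1] [2]: the flipped bit's position
```

Single parity alone would only report that some row changed; the column parities pin down which bit, which is why 2D schemes detect (and can even correct) single-bit errors.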
A database is characterized as a set of data organized and distributed in a way that allows the user to access the stored data simply and conveniently. However, in the era of big data, traditional data analytics methods may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the Map-Reduce technique to handle big data distributed on the cloud. The approach was evaluated on a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r
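The Map-Reduce pattern itself can be illustrated in-process (the work above ran on a Hadoop cluster; the channel names and amplitude values below are made up):

```python
from itertools import groupby

# map phase: each input record is emitted as a (key, value) pair --
# here, (EEG channel, amplitude sample)
records = [("ch1", 4.0), ("ch2", 1.5), ("ch1", 2.0), ("ch2", 2.5)]

# shuffle phase: bring all pairs with equal keys together
mapped = sorted(records, key=lambda kv: kv[0])

# reduce phase: aggregate the values of each key (mean per channel)
reduced = {}
for key, group in groupby(mapped, key=lambda kv: kv[0]):
    vals = [v for _, v in group]
    reduced[key] = sum(vals) / len(vals)
print(reduced)   # {'ch1': 3.0, 'ch2': 2.0}
```

In Hadoop the map and reduce functions run in parallel across cluster nodes, which is what yields the response-time reduction reported above.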
A simple reverse-phase high-performance liquid chromatographic method for the simultaneous analysis (separation and quantification) of furosemide (FURO), carbamazepine (CARB), diazepam (DIAZ) and carvedilol (CARV) has been developed and validated. The method was carried out on a NUCLEODUR® 100-5 C18ec column (250 x 4.6 mm i.d., 5 μm), with a mobile phase comprising acetonitrile: deionized water (50:50 v/v, pH adjusted to 3.6 ± 0.05 with acetic acid) at a flow rate of 1.5 mL.min-1, and quantification was achieved at 226 nm. The retention times of FURO, CARB, DIAZ and CARV were found to be 1.90 min, 2.79 min, 5.39 min and 9.56 min, respectively. The method was validated in terms of linearity, accuracy, precision, limit of detection and limit of quantitation.