In this paper, we study a nonparametric model when the response variable has missing data (nonresponse) in some observations under the MCAR missing-data mechanism. We propose kernel-based nonparametric single imputation for the missing values and compare it with nearest-neighbor imputation through a simulation study covering several models and different settings of sample size, variance, and missing-data rate.
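As an illustrative sketch of the kernel-based single imputation described above (the function name, Gaussian kernel, and bandwidth are our own assumptions; the paper's exact estimator may differ), a missing response can be filled in with a Nadaraya-Watson-style weighted average of the observed responses:

```python
import numpy as np

def kernel_impute(x, y, h=0.5):
    """Impute missing responses (np.nan in y) with a Gaussian-kernel
    weighted average of the observed responses (a sketch, not the
    paper's exact estimator)."""
    y = y.astype(float).copy()
    obs = ~np.isnan(y)
    for i in np.where(~obs)[0]:
        # Gaussian kernel weights based on distance in the covariate x
        w = np.exp(-0.5 * ((x[obs] - x[i]) / h) ** 2)
        y[i] = np.sum(w * y[obs]) / np.sum(w)
    return y
```

Nearest-neighbor imputation, by contrast, would copy the response of the single closest observed point rather than a smooth weighted average.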
A nonparametric kernel method with the bootstrap technique was used to estimate confidence intervals for the system failure function of log-normally distributed data: the failure times of machines in the spinning department of a weaving company in Wasit Governorate. The failure function was also estimated parametrically by the maximum likelihood estimator (MLE). The parametric and nonparametric methods were compared using the Mean Squared Error (MSE) criterion. The bootstrap-based nonparametric methods proved more efficient than the parametric method, and their estimated curve was more realistic and appropriate for the re…
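The bootstrap confidence-interval idea can be sketched as follows (the simulated log-normal failure times, evaluation point, and percentile method are our own assumptions for illustration, not the paper's data or exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the machine failure times: simulated log-normal data
failure_times = rng.lognormal(mean=1.0, sigma=0.5, size=60)

def failure_cdf(sample, t):
    """Empirical failure function F(t) = P(T <= t)."""
    return np.mean(sample <= t)

t0 = 3.0
# Resample with replacement and re-estimate F(t0) each time
boot = [failure_cdf(rng.choice(failure_times, size=failure_times.size,
                               replace=True), t0)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile bootstrap CI
```

Repeating this over a grid of t values yields a confidence band around the whole failure curve.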
Transforming the common normal distribution through the Kummer Beta generator into the Kummer Beta Generalized Normal Distribution (KBGND) was achieved. The distribution parameters and hazard function were then estimated by the MLE method, and these estimates were improved by employing a genetic algorithm. Simulation was used, assuming a number of models and different sample sizes. The main finding was that the maximum likelihood (MLE) method improved by the genetic algorithm is the best in estimating the parameters of the Kummer Beta Generalized Normal Distribution (KBGND) compared to the common maximum likelihood, according to the Mean Squared Error (MSE) and Integrated Mean Squared Error (IMSE) criteria in estimating the hazard function. While the pr…
Abstract:
Time series models often suffer from outliers that accompany the data collection process for many reasons; their presence may have a significant impact on estimating the parameters of the studied model. Obtaining highly efficient estimators is one of the most important stages of statistical analysis, so it is important to choose appropriate methods to obtain good estimators. The aim of this research is to compare ordinary estimators with robust estimators for estimating the parameters of…
Finding the shortest route in wireless mesh networks is an important problem. Many techniques are used to solve it, such as dynamic programming, evolutionary algorithms, and weighted-sum techniques. In this paper, we use dynamic programming to find the shortest path in wireless mesh networks because of its generality, reduced complexity, ease of numerical computation, simplicity in incorporating constraints, and conformity to the stochastic nature of some problems. The routing problem is a multi-objective optimization problem with constraints such as path capacity and end-to-end delay. Single-constraint routing problems and solutions using the Dijkstra, Bellman-Ford, and Floyd-Warshall algorithms…
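Of the algorithms named above, Bellman-Ford is the most directly dynamic-programming flavored: it relaxes every edge repeatedly, so after k rounds the table holds the shortest distances using at most k hops. A minimal sketch (the graph representation is our own; the paper's network model may differ):

```python
import math

def bellman_ford(n, edges, src):
    """Dynamic-programming shortest paths from src over n nodes.
    edges: list of (u, v, weight) directed links."""
    dist = [math.inf] * n
    dist[src] = 0.0
    for _ in range(n - 1):           # n-1 relaxation rounds suffice
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Dijkstra is faster on non-negative weights, but Bellman-Ford's round-by-round structure makes it easier to bolt on constraints such as a hop or delay budget.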
Owing to the circumstances experienced by our country, many crises have arisen, the most important of which is obtaining fuel. Queueing (waiting-line) theory was therefore used to address this crisis, since the issue plays an essential role in daily life.
This research studies the distribution of gasoline stations on both sides of Baghdad (Al-Karkh and Al-Rusafa) in order to reduce waiting and service times using queueing-theory criteria, and to improve the efficiency of these stations. On the other hand, we work to reduce station costs and increase profits by reducing the active serv…
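The queueing criteria mentioned above can be illustrated with the standard M/M/1 formulas, which relate a station's arrival rate and service rate to utilization and waiting time (the single-server model is our own simplifying assumption; a real station may need a multi-server M/M/c model):

```python
def mm1_metrics(lam, mu):
    """Basic M/M/1 queue measures.
    lam: arrival rate, mu: service rate (requires lam < mu for stability)."""
    rho = lam / mu            # server utilization
    L = rho / (1 - rho)       # mean number of cars in the system
    W = 1 / (mu - lam)        # mean time in the system
    Wq = rho / (mu - lam)     # mean waiting time in the queue
    return rho, L, W, Wq
```

For example, a pump serving 4 cars per hour with 2 arrivals per hour runs at 50% utilization with a 15-minute average wait in queue.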
In general, the importance of cluster analysis is that elements can be evaluated by clustering multiple homogeneous data; the main objective of this analysis is to divide the elements into homogeneous groups according to many variables. This method of analysis is used to reduce data, generate and test hypotheses, and predict and match models. The research aims to evaluate fuzzy cluster analysis, which is a special case of cluster analysis, and to compare the two methods: classical and fuzzy cluster analysis. The research topic was applied to government and private hospitals. The sample comprised 288 patients being treated in 10 hospitals. As t…
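The key difference from classical clustering is that fuzzy cluster analysis assigns each element a degree of membership in every cluster rather than a hard label. A minimal fuzzy c-means sketch (the fuzzifier m, iteration count, and initialization are our own choices, not the paper's settings):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the
    membership matrix U (rows sum to 1 across clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        # Centers are membership-weighted means of the data
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        p = 2.0 / (m - 1.0)
        U = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)
    return centers, U
```

Hardening U with argmax recovers a classical partition, which is one way to compare the two methods on the same data.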
Analyzing economic, financial, and other phenomena requires building an appropriate model that represents the causal relations between factors. Building the model depends on capturing the surrounding conditions and factors in a mathematical formula, and researchers aim to construct that formula appropriately. Classical linear regression models are an important statistical tool, but they are used in a limited way, since the relationship between the explanatory variables and the response variable is assumed to be known. To broaden the representation of relationships between the variables describing the phenomenon under study, we used varying coefficient models.
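In a varying coefficient model the regression slope is a smooth function of an index variable, y = β(u)·x + ε, and β(u₀) can be estimated by kernel-weighted least squares. A one-coefficient sketch (the Gaussian kernel and bandwidth are our own illustrative choices):

```python
import numpy as np

def varying_coef(u0, u, x, y, h=0.3):
    """Local least-squares estimate of beta(u0) in y = beta(u) * x + e,
    weighting observations by a Gaussian kernel in the index variable u."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    return np.sum(w * x * y) / np.sum(w * x * x)
```

Evaluating this over a grid of u₀ values traces out the whole coefficient curve, which classical linear regression would flatten into a single constant slope.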
The aim of the study was to evaluate the efficacy of a diode laser (λ = 940 nm) in the management of gingival hyperpigmentation compared to the conventional bur method. Materials and methods: Eighteen patients with gingival hyperpigmentation, aged 12-37 years, were selected for the study. The treatment site was the upper gingiva, with the diode laser used for the right half and the conventional method for the left half. All patients were re-evaluated at the following intervals: 3 days, 7 days, 1 month, and 6 months post-operation. Pain and function were re-evaluated at each visit for a period of 1 day, 3 days, and 1 week post-operation. Laser parameters included 1.5 W in continuous mode with an initiated tip (400 μm) placed in…
Purpose: The research aims to estimate models representing phenomena that follow the logic of circular (angular) data, accounting for the 24-hour periodicity in measurement. Theoretical framework: The regression model is developed to account for the periodic nature of the circular scale, considering periodicity in the dependent variable y, the explanatory variables x, or both. Design/methodology/approach: Two estimation methods were applied: a parametric model, represented by the Simple Circular Regression (SCR) model, and a nonparametric model, represented by the Nadaraya-Watson Circular Regression (NW) model. The analysis used real data from 50 patients at Al-Kindi Teaching Hospital in Baghdad. Findings: The Mean Circular Error…
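The Nadaraya-Watson idea carries over to a circular response by smoothing its sine and cosine components separately and recombining them into a mean direction. A minimal sketch (the Gaussian kernel and bandwidth are our own assumptions; the paper's kernel may differ):

```python
import numpy as np

def nw_circular(x0, x, theta, h=0.5):
    """Nadaraya-Watson estimate of a circular response theta (radians)
    at x0: the kernel-weighted mean direction
    atan2(sum w*sin(theta), sum w*cos(theta))."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.arctan2(np.sum(w * np.sin(theta)),
                      np.sum(w * np.cos(theta)))
```

Averaging sin and cos rather than the raw angles is what keeps the estimate correct across the 0/2π wrap-around, e.g. for times of day mapped to angles.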
The advancement of digital technology has increased the deployment of wireless sensor networks (WSNs) in daily life. However, locating sensor nodes is a challenging task in WSNs, and sensed data without an accurate location are worthless, especially in critical applications. The pioneering technique among range-free localization schemes is the sequential Monte Carlo (SMC) method, which exploits network connectivity to estimate sensor locations without additional hardware. This study presents a comprehensive survey of state-of-the-art SMC localization schemes. We present the schemes as a thematic taxonomy of localization operation in SMC. Moreover, the critical characteristics of each existing scheme are analyzed to identify its advantages…
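The connectivity-only flavor of SMC localization can be sketched in a single filtering step: candidate positions (particles) that are inconsistent with the anchors a node can hear are discarded, and the survivors' mean is the location estimate. This is a heavily simplified static sketch (the deployment area, anchor set, and radio range are our own assumptions; real SMC schemes also predict particle motion over time):

```python
import numpy as np

def smc_localize(anchors, radio_range, n_particles=2000, seed=0):
    """Range-free SMC sketch: keep only particles within radio_range of
    every heard anchor, then average the survivors."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.0, 10.0, size=(n_particles, 2))  # candidates
    for ax, ay in anchors:
        d = np.hypot(particles[:, 0] - ax, particles[:, 1] - ay)
        particles = particles[d <= radio_range]   # drop inconsistent ones
    return particles.mean(axis=0)                 # location estimate
```

Mobile variants repeat this cycle each time step, predicting particle movement before filtering, which is where the surveyed schemes mainly differ.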