Regression analysis is a cornerstone of statistics, and it most commonly relies on the ordinary least squares (OLS) method. As is well known, however, OLS requires several conditions to hold for it to operate accurately, and its results can be unreliable when they fail; the absence of certain conditions can even make the analysis impossible to complete. Among these conditions is freedom from multicollinearity, and we detect that problem among the independent variables using the Farrar–Glauber test. Another requirement is the linearity of the data; because this last condition was not met, we resorted to nonparametric regression and treated the problem using a kernel ridge regression function, which depends on estimating a bandwidth (smoothing parameter). Two different methods were therefore used to estimate this parameter, the Rule of Thumb (RULE) and the Bootstrap (BOOT), and they were compared by simulation.
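The Rule of Thumb bandwidth mentioned above is commonly taken to be Silverman's formula h = 1.06·σ·n^(−1/5). The sketch below shows this estimator together with a plain Nadaraya–Watson kernel smoother; note this is only an illustrative assumption, since the paper uses kernel *ridge* regression, whose penalized form differs, and its exact rule-of-thumb variant is not given here.

```python
import math
import statistics


def rule_of_thumb_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 1.06 * sigma * n**(-1/5).  One common "Rule of Thumb" choice;
    the paper's exact variant may differ."""
    n = len(x)
    sigma = statistics.stdev(x)
    return 1.06 * sigma * n ** (-1 / 5)


def gaussian_kernel(u):
    """Standard normal density used as the smoothing kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)


def kernel_smooth(x, y, x0, h):
    """Nadaraya-Watson estimate of E[Y | X = x0]: a kernel-weighted
    average of the observed y values, with bandwidth h."""
    w = [gaussian_kernel((x0 - xi) / h) for xi in x]
    total = sum(w)
    return sum(wi * yi for wi, yi in zip(w, y)) / total
```

For symmetric data the smoother reproduces the central value exactly, which makes a convenient sanity check.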
Gray-Scale Image Brightness/Contrast Enhancement with Multi-Model
Histogram Linear Contrast Stretching (MMHLCS) Method
Multilevel models are among the most important models widely used in the analysis of data in which the observations take a hierarchical form. In this research we examined the multilevel logistic regression model (the random-intercept and random-slope model). The importance of the research lies in the fact that ordinary regression models compute only the total variance of the model and cannot read the variation between levels; for hierarchical data this total-variance calculation is inaccurate, whereas multilevel models compute the variation at each level of the model. The research aims to estimate the parameters of this model.
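The between-level variance decomposition described above is often summarized as an intraclass correlation (ICC). For a random-intercept logistic model, the level-1 latent residual variance is conventionally fixed at π²/3 (the variance of the standard logistic distribution), giving the textbook formula sketched below; this is a standard identity, not taken from the paper itself.

```python
import math


def logistic_icc(tau_sq):
    """Intraclass correlation for a random-intercept logistic model:
    level-2 variance tau^2 over total latent variance, where the
    level-1 residual variance is fixed at pi^2 / 3 (~3.29)."""
    return tau_sq / (tau_sq + math.pi ** 2 / 3)
```

When the between-group variance equals π²/3, exactly half of the latent variation lies between groups.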
Background: Hyperlipidemia is an elevation of fats (lipids), mostly cholesterol and triglycerides, in the blood. These lipids usually bind to proteins, forming so-called lipoproteins, in order to remain in circulation. Aims of the study: To determine the taste detection threshold and to estimate the trace element zinc in the serum and saliva of these patients, and to compare all of these measures with healthy control subjects. Methods: Eighty subjects were included in this study; they were divided into two groups: forty patients on simvastatin treatment aged between 35 and 60 years, and forty healthy controls of the same age range. Saliva was collected by a non-stimulated technique within 10 minutes, and serum was obtained from each subject. Zinc was estimated in serum and saliva.
Persistence of antibiotics in the aquatic environment has raised concerns regarding their potential influence on potable water quality and human health. This study analyzes the presence of antibiotics in potable water from two treatment plants in Baghdad City. The collected samples were separated using a solid-phase extraction method with a hydrophilic-lipophilic balance (HLB) cartridge before being analyzed. The antibiotics detected in the raw and finished drinking water were analyzed and assessed using high-performance liquid chromatography (HPLC) with fluorometric and UV detectors. The results confirmed that different antibiotics, including fluoroquinolones and
Logistic regression is one of the statistical methods used to describe and estimate the relationship between a response variable (Y) and explanatory variables (X); one of its assumptions is the homogeneity of the variance. Here the dependent variable is a binary response taking two values (one when a specific event occurs and zero when it does not), such as injured/uninjured or married/unmarried. A large number of explanatory variables can lead to the multicollinearity problem, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate the binary-response logistic regression model by adopting the Jackknife technique.
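The Jackknife referred to above is the standard leave-one-out resampling scheme. A minimal generic sketch is given below for an arbitrary statistic; this is an illustration of the resampling idea, not the paper's specific logistic-regression estimator.

```python
def jackknife(data, statistic):
    """Leave-one-out jackknife: recompute `statistic` n times, each
    time omitting one observation; return the mean of the leave-one-out
    estimates and the jackknife standard error."""
    n = len(data)
    loo = [statistic(data[:i] + data[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    variance = (n - 1) / n * sum((t - mean_loo) ** 2 for t in loo)
    return mean_loo, variance ** 0.5


def mean(xs):
    """Sample mean, used here as an example statistic."""
    return sum(xs) / len(xs)
```

For the sample mean, the jackknife standard error coincides with the classical s/√n, which is a useful check of the implementation.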
Electric energy is one of the most important forms of energy used in the world, as it is a main driver of sustainable development and economic growth through its production, transmission, and distribution. In Iraq, the electric power sector has suffered from many problems and obstacles: providing a reliable electricity supply is one of the most prominent difficulties and challenges that successive governments and residents have faced since the early nineties of the last century, and it is still ongoing. Iraq has all the climatic conditions needed to develop its electricity system from renewable sources such as solar and hydroelectric energy, as well as gas fields that have become a basic pillar of power
In this study we focus on estimating the random coefficients of the general regression and Swamy models for panel data. Panel data offer a better chance of obtaining more efficient estimates and more informative indicators. Entropy-based methods were used to estimate the random coefficients of the general regression and Swamy panel-data models in two ways: the first is the maximum dual entropy method and the second is the general maximum entropy method, and a comparison between them was carried out by simulation to choose the optimal method.
The results were compared using the mean squared error and the mean absolute percentage error for different cases in terms of the correlation value.
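The two comparison criteria named above have standard definitions; a minimal sketch of the generic formulas (not tied to the paper's simulation design) is:

```python
def mse(actual, predicted):
    """Mean squared error: average of squared deviations."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)


def mape(actual, predicted):
    """Mean absolute percentage error, in percent (undefined when an
    actual value is zero)."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))
```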
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). The compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, the reflection coefficients, and the prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared with the original signals. The compression ratio is calculated from the size of the
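The Levinson-Durbin recursion used above solves the Toeplitz normal equations of linear prediction directly from the autocorrelation sequence. A minimal pure-Python sketch of the textbook recursion (an illustrative version, not the paper's implementation) is:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations
    of linear prediction from the autocorrelation sequence r[0..order].
    Returns (a, k, err): LP coefficients with a[0] = 1 (convention
    A(z) = 1 + a[1] z^-1 + ...), reflection coefficients k, and the
    final prediction error err."""
    a = [1.0] + [0.0] * order
    err = r[0]
    k_list = []
    for i in range(1, order + 1):
        # Partial correlation of the next lag against the current model.
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err
        k_list.append(k)
        # Symmetric update of the coefficient vector.
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, k_list, err
```

For an AR(1) signal with coefficient 0.5 (normalized autocorrelation 1, 0.5, 0.25), the recursion recovers a[1] = -0.5, a zero second coefficient, and a prediction error of 0.75.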