Estimation of the unknown parameters of the 2-D sinusoidal signal model is an important and difficult problem. Because it is difficult to estimate all the parameters of this type of model simultaneously, we propose a sequential non-linear least squares method and a sequential robust M method, developed by applying the sequential approach to the estimator suggested by Prasad et al. to estimate the unknown frequencies and amplitudes of the 2-D sinusoidal components. The robust M method relies on the Downhill Simplex Algorithm to solve the non-linear equations, yielding estimates of the non-linear parameters (the frequencies), after which the least squares formula is used to estimate the linear parameters (the amplitudes). The sequential non-linear least squares method instead solves the non-linear equations by the Newton-Raphson method, obtaining the frequency and amplitude estimates at the same time. The two methods are compared when the signal is affected by different types of noise, including normally distributed errors and heavy-tailed error distributions. Numerical simulations are performed to observe the performance of the estimation methods for different sample sizes and various levels of variance, using the mean square error (MSE) as the statistical measure. We conclude that, in general, the sequential non-linear least squares method is the more efficient when the noise follows the normal or logistic distribution, whereas when the noise follows the Cauchy distribution the sequential robust M method based on the bisquare weight function performs best.
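The two-step scheme described above (a simplex search over the non-linear frequency parameters, then the least-squares formula for the amplitudes) can be sketched for a single 2-D sinusoidal component. This is a minimal illustration, not the authors' code: the grid size, noise level, starting point, and single-component setup are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# One 2-D sinusoidal component: y(s,t) = A*cos(lam*s + mu*t) + B*sin(lam*s + mu*t) + noise
S, T = 30, 30
s, t = np.meshgrid(np.arange(S), np.arange(T), indexing="ij")
lam_true, mu_true, A_true, B_true = 1.1, 0.7, 2.0, 1.5
y = A_true * np.cos(lam_true*s + mu_true*t) + B_true * np.sin(lam_true*s + mu_true*t)
y = y + 0.1 * rng.standard_normal((S, T))

def profile_rss(freqs):
    """Residual sum of squares after profiling out the amplitudes by least squares."""
    lam, mu = freqs
    X = np.column_stack([np.cos(lam*s + mu*t).ravel(), np.sin(lam*s + mu*t).ravel()])
    coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
    return float(np.sum((y.ravel() - X @ coef) ** 2))

# Downhill simplex (Nelder-Mead) over the non-linear frequency parameters,
# started close to the truth (in practice a periodogram peak supplies the start).
res = minimize(profile_rss, x0=[1.05, 0.72], method="Nelder-Mead")
lam_hat, mu_hat = res.x

# Linear amplitudes recovered by the least-squares formula at the fitted frequencies.
X = np.column_stack([np.cos(lam_hat*s + mu_hat*t).ravel(),
                     np.sin(lam_hat*s + mu_hat*t).ravel()])
A_hat, B_hat = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
```

Profiling the amplitudes out of the objective reduces the search to the two frequencies, which is what makes the simplex step practical.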
Abstract
The grey system model GM(1,1) is the basic prediction model for time series in grey theory. This research presents four methods for estimating the parameters of the grey model GM(1,1): the accumulative method (ACC), the exponential method (EXP), the modified exponential method (Mod EXP), and the Particle Swarm Optimization method (PSO). The methods were compared using the mean square error (MSE) and the mean absolute percentage error (MAPE) as the basis of comparison, and simulation was adopted to select the best of the four. The best method was then applied to real data. This data represents the consumption rate of two types of oils a he
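For reference, the classical least-squares fit of GM(1,1) (the baseline that the four estimation methods above modify) can be sketched as follows. The sample series is invented for illustration; the AGO/whitening steps are the standard construction.

```python
import numpy as np

def gm11_fit(x0):
    """Classical GM(1,1) fit: least squares on x0(k) = -a*z1(k) + b,
    where z1 is the mean-generated background of the 1-AGO series."""
    x1 = np.cumsum(x0)                        # accumulated generating operation (1-AGO)
    z1 = 0.5 * (x1[:-1] + x1[1:])             # background (mean-generated) sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def gm11_series(x0, a, b):
    """Fitted/predicted original series via the whitening-equation response
    x1_hat(k+1) = (x0(1) - b/a)*exp(-a*k) + b/a, restored by inverse AGO."""
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b/a) * np.exp(-a*k) + b/a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

# Illustrative data: a geometric series with 10% growth, which GM(1,1) tracks closely.
x0 = np.array([100.0, 110.0, 121.0, 133.1, 146.41])
a, b = gm11_fit(x0)
x0_hat = gm11_series(x0, a, b)
```

The development coefficient `a` and grey input `b` come out of a single linear least-squares solve, which is why the model is attractive for very short series.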
Abstract
This research treats a health phenomenon that has a significant impact on different age groups in the community: tonsillitis. A seasonal autoregressive moving average (SARMA) model is first fitted, through the Box-Jenkins methodology, to the monthly numbers of tonsillitis cases in the city of Mosul for the period 2004-2009, and the numbers for the coming twelve months are forecast. The model found to best represent the data is SARMA (1,1)*(2,1)12. On the other side, the maximum temperature and minimum temperature are used as explanatory variables, sol
Logistic regression is one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X), in which the dependent variable is a binary response taking two values (one when a specific event occurred and zero when that event did not happen), such as injured and uninjured, or married and unmarried. A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used in estimating the binary-response logistic regression model by adopting the Jackknife
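A ridge-penalized logistic fit of the kind the abstract describes can be sketched with Newton-Raphson (IRLS) updates. This is a generic sketch, not the paper's estimator: the penalty weight, the simulated collinear design, and the omission of the Jackknife step are all assumptions.

```python
import numpy as np

def ridge_logistic(X, y, lam=1.0, n_iter=50):
    """Ridge-penalized logistic regression fitted by Newton-Raphson (IRLS).
    The L2 penalty lam*||w||^2 keeps the estimates stable under multicollinearity."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-X @ w))          # fitted probabilities
        grad = X.T @ (p_hat - y) + 2 * lam * w         # penalized gradient
        W = p_hat * (1 - p_hat)                        # IRLS weights
        H = (X * W[:, None]).T @ X + 2 * lam * np.eye(p)
        w = w - np.linalg.solve(H, grad)               # Newton step
    return w

# Simulated design with two nearly collinear columns.
rng = np.random.default_rng(2)
n = 400
x1 = rng.standard_normal(n)
X = np.column_stack([x1, x1 + 0.01 * rng.standard_normal(n), rng.standard_normal(n)])
eta = 1.5 * X[:, 0] - 1.0 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)
w_hat = ridge_logistic(X, y, lam=1.0)
```

Without the penalty, the near-duplicate columns make the Hessian ill-conditioned and the coefficient estimates erratic; the ridge term keeps them finite.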
This paper discusses the sums of squares of m consecutive Woodall numbers. The discussion proceeds from the definition of Woodall numbers, and the comparability of Woodall numbers with other special numbers is also studied. A formula for the sum of squares of m consecutive Woodall numbers, together with its matrix form, is presented. Further, this study expresses some more correlations between Woodall numbers and other special numbers.
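The quantities involved can be computed directly from the definition (the closed-form identity the paper derives is not reproduced here; this only evaluates the sums numerically).

```python
def woodall(n):
    """n-th Woodall number: W_n = n * 2**n - 1, for n >= 1."""
    return n * 2**n - 1

def sum_of_squares(start, m):
    """Sum of squares of m consecutive Woodall numbers W_start, ..., W_{start+m-1}."""
    return sum(woodall(k) ** 2 for k in range(start, start + m))
```

For example, the first four Woodall numbers are 1, 7, 23, 63, so the sum of squares of the first three is 1 + 49 + 529 = 579.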
A study has been done to find the optimum separator pressures of separation stations. Stage separation of oil and gas is accomplished with a series of separators operating at sequentially reduced pressures, the liquid discharged from a higher-pressure separator passing into the next lower-pressure separator. The set of separator pressures that yields the maximum recovery of liquid hydrocarbon from the well fluid is the optimum set of pressures, which is the target of this work.
A computer model is used to find the optimum separator pressures. The model employs the Peng-Robinson equation of state for volatile oil. Application of this model shows good improvement of al
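The optimization itself can be illustrated with a deliberately toy sketch: the recovery surface below is an invented stand-in (the real model evaluates recovery through the Peng-Robinson equation of state, and the pressure values here are arbitrary), but the exhaustive search over a grid of candidate separator pressures is the same idea.

```python
import numpy as np

def recovery(p1, p2):
    """Invented surrogate for liquid recovery versus two intermediate separator
    pressures (psia); it peaks at p1 = 500, p2 = 100 purely for illustration."""
    return -(np.log(p1 / 500.0) ** 2 + np.log(p2 / 100.0) ** 2)

# Grid search over candidate pressure pairs for the two intermediate stages.
p1_grid = np.linspace(100.0, 900.0, 81)
p2_grid = np.linspace(20.0, 300.0, 57)
best = max((recovery(p1, p2), p1, p2) for p1 in p1_grid for p2 in p2_grid)
best_recovery, best_p1, best_p2 = best
```

In the real workflow each grid point would require a full flash calculation per stage, which is why the pressure grid is kept coarse.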
This study's objective is to assess how well UV spectrophotometry, in conjunction with multivariate calibration based on partial least squares (PLS) regression, performs for the concurrent quantitative analysis of an antibacterial mixture (levofloxacin (LIV), metronidazole (MET), rifampicin (RIF) and sulfamethoxazole (SUL)) in artificial mixtures and pharmaceutical formulations. The experimental calibration and validation matrices were created using 42 and 39 samples, respectively. The concentration range taken into account was 0-17 μg/mL for all components. The calibration standards' absorbance measurements were made between 210 and 350 nm, at intervals of 0.2 nm. The associated parameters were examined in order to develop the optimal c
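The core of such a calibration is a PLS fit of concentrations on the absorbance spectra. A minimal single-response PLS1 (NIPALS) sketch is shown below on synthetic data; the real study uses the measured spectra and one model per analyte, and nothing here reproduces its parameters.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 via NIPALS: returns coefficients b with y ≈ X @ b."""
    X = X.astype(float).copy()
    y = y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight vector
        t = X @ w                        # scores
        tt = t @ t
        p = X.T @ t / tt                 # X loadings
        c = (y @ t) / tt                 # y loading
        X -= np.outer(t, p)              # deflate X
        y = y - c * t                    # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q

# Synthetic check: with as many components as predictors and a noiseless
# response, PLS1 recovers the ordinary least-squares coefficients.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4))
b_true = np.array([1.0, 2.0, 0.0, 0.5])
b_hat = pls1_fit(X, X @ b_true, n_comp=4)
```

In practice far fewer components than wavelengths are kept, and the number of components is chosen by cross-validation.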
Use of least squares and restricted least squares
in the estimation of the first-order autoregressive parameter
AR(1) (simulation study)
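The baseline estimator in such a simulation study is the ordinary least squares estimate of the AR(1) parameter, obtained by regressing each observation on its predecessor. A minimal sketch (simulated series, arbitrary true parameter):

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true, n = 0.6, 2000

# Simulate an AR(1) series: x_t = phi * x_{t-1} + e_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# Ordinary least squares estimator of the autoregressive parameter.
phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
```

A restricted estimator would constrain phi_hat (e.g., shrink it toward a prior value or into the stationarity region); the comparison of such variants against plain least squares is what the simulation study examines.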
X-ray diffractometers deliver the best quality diffraction data while being easy to use and adaptable to various applications. When X-ray photons strike electrons in materials, the incident photons scatter in a direction different from the incident beam; if the scattered beams do not change in wavelength, this is known as elastic scattering, which produces diffraction of full amplitude and intensity through constructive interference. When the incident beam gives some of its energy to the electrons, the scattered beam's wavelength differs from the incident beam's wavelength, causing inelastic scattering, which leads to destructive interference and zero-intensity diffraction. In this study, the modified size-strain plot method was used to examine
The estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data sets are skewed, estimating the parameters and calculating the reliability function in the presence of skewness requires a distribution flexible enough to handle such data. In the data of Diyala Company for Electrical Industries, a positive skewness was observed in the data collected from the Power and Machinery Department, which required a distribution that deals with such data and a search for methods that accommodate this problem and lead to accurate estimates of the reliability function,
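The abstract does not name the distribution it finally adopts, so as a sketch of the general procedure the snippet below fits a lognormal (a common right-skewed law, used here purely as an example) by maximum likelihood and evaluates the reliability function R(t) = 1 - F(t).

```python
import numpy as np
from math import erf

def lognormal_reliability(data, t):
    """MLE fit of a lognormal to positively skewed data, then R(t) = 1 - F(t).
    For a lognormal, the MLEs are the mean and std of the log-transformed data."""
    logs = np.log(data)
    mu, sigma = logs.mean(), logs.std()
    z = (np.log(t) - mu) / (sigma * np.sqrt(2.0))
    return 0.5 - 0.5 * erf(z)     # 1 - Phi((ln t - mu) / sigma)

# Illustrative skewed sample (simulated, not the company's data).
rng = np.random.default_rng(4)
data = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
r_at_1 = lognormal_reliability(data, 1.0)   # true value is 0.5 for this sample's law
```

The same two-step pattern (fit the skewed distribution, then evaluate its survival function) applies whatever flexible distribution is ultimately chosen.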