Circular data are periodic observations measured on the unit circle, in radians or grades. Their cyclical nature makes them fundamentally different from linear data and incompatible with the mathematical representation of the usual linear regression model. Circular data arise in a wide variety of scientific, medical, economic, and social fields. Several methods exist for estimating circular regression, both parametric and nonparametric. This paper uses three circular regression models: two parametric models, the (DM) model estimated by maximum likelihood (MLE) and the Circular Shrinkage (SH) model proposed by the researcher, and one nonparametric model, the Local Linear circular regression (LL) model. The Mean Circular Error (MCE) criterion was used to compare the three models. On the experimental (simulation) side, samples were generated by the inverse-transform method using the R language, in nine simulation experiments covering all default values; the results showed no preference for the parametric models over the nonparametric one.
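As a minimal illustration of the simulation machinery described above, the sketch below implements inverse-transform sampling for a simple circular density and one common form of the Mean Circular Error criterion. The cardioid density and this particular MCE formula (the mean of 1 − cos of the angular residuals) are illustrative assumptions, not the paper's exact choices, and Python stands in here for the R implementation used in the study:

```python
import numpy as np

def sample_cardioid(n, rho=0.3, rng=None, grid=4096):
    """Inverse-transform sampling from the cardioid circular density
    f(t) = (1 + 2*rho*cos(t)) / (2*pi) on [0, 2*pi), for |rho| < 0.5.
    The CDF has no closed-form inverse, so it is inverted numerically."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, 2 * np.pi, grid)
    cdf = (t + 2 * rho * np.sin(t)) / (2 * np.pi)  # monotone for |rho| < 0.5
    u = rng.random(n)
    return np.interp(u, cdf, t)  # invert the CDF by interpolation

def mean_circular_error(theta, theta_hat):
    """One common MCE criterion: mean(1 - cos(theta - theta_hat)).
    Equals 0 for a perfect fit and at most 2 in the worst case."""
    return np.mean(1.0 - np.cos(np.asarray(theta) - np.asarray(theta_hat)))
```

Because the residual enters only through its cosine, the criterion respects the periodicity of the data: an error of 2π counts as no error at all, which an ordinary mean squared error would get wrong.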
In this paper, a 3D simulation of the global coronal magnetic field was carried out using the potential field model, with the observed line-of-sight component of the photospheric magnetic field from MDI/SOHO as input. The results improve the theoretical models of the coronal magnetic field: they provide suitable lower boundary conditions (Bx, By, Bz) at the base of the linear and nonlinear force-free models, and they are less computationally expensive than other models. In general, a very high-speed computer with a special configuration is needed to solve such a problem, as well as the problem of viewing the streamlines of the magnetic field. For high accuracy, a special mathematical treatment was adopted to solve the computation comp…
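The potential (current-free) extrapolation described above is commonly solved with a Fourier method: each harmonic of the photospheric Bz decays as exp(−kz) with height, and the horizontal components follow from B = −∇φ with ∇²φ = 0. The sketch below is a generic illustration of that standard technique on a periodic Cartesian patch; it is not the paper's global spherical treatment or its particular mathematical refinements:

```python
import numpy as np

def potential_field(bz0, dx=1.0, dy=1.0, z=1.0):
    """Extrapolate a potential (current-free) field upward from the
    photospheric normal component bz0 (2-D array) via FFT: each Fourier
    mode of the scalar potential decays as exp(-k*z) with height."""
    ny, nx = bz0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    bz_hat = np.fft.fft2(bz0)
    k_safe = np.where(k == 0, 1.0, k)   # guard the k=0 (mean-field) mode
    phi_hat = bz_hat / k_safe           # from Bz = -d(phi)/dz = k*phi at z=0
    decay = np.exp(-k * z)              # harmonic decay with height
    bx = np.real(np.fft.ifft2(-1j * KX * phi_hat * decay))  # Bx = -d(phi)/dx
    by = np.real(np.fft.ifft2(-1j * KY * phi_hat * decay))  # By = -d(phi)/dy
    bz = np.real(np.fft.ifft2(bz_hat * decay))
    return bx, by, bz
```

At z = 0 the routine reproduces the input boundary data exactly, and at greater heights all non-mean modes are damped, which is the qualitative behaviour a potential coronal field must show.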
Phenomena often suffer from disturbances in their data, as well as difficulty of formulation, especially when the response is unclear or when large essential differences plague the experimental units from which the data were taken. Hence the need arose to include an implicit rating of these experimental units in the estimation, either by discrimination or by creating blocks for each item of these experimental units, in the hope of controlling their responses and making them more homogeneous. With developments in computing, and following the principle of the integration of sciences, modern algorithms from computer science, such as the genetic algorithm or ant colony…
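A toy genetic algorithm for the blocking idea sketched above might assign experimental units to blocks so as to minimize the within-block variance of a covariate. Everything in this sketch (the integer encoding, the fitness function, the elitist selection, the crossover and mutation rates) is a hypothetical illustration, not the algorithm used in the study:

```python
import numpy as np

def ga_blocks(values, n_blocks=3, pop=40, gens=80, rng=None):
    """Toy GA: each individual is a vector of block labels for the units;
    fitness is the total within-block variance of the covariate (lower
    is better, so homogeneous blocks are favoured)."""
    rng = np.random.default_rng(rng)
    values = np.asarray(values, float)
    n = len(values)

    def cost(assign):
        return sum(values[assign == b].var() if np.any(assign == b) else 0.0
                   for b in range(n_blocks))

    popu = rng.integers(0, n_blocks, size=(pop, n))
    for _ in range(gens):
        fit = np.array([cost(ind) for ind in popu])
        popu = popu[np.argsort(fit)]            # elitist sort: best first
        children = popu[: pop // 2].copy()
        cut = rng.integers(1, n, size=len(children))
        for i, c in enumerate(cut):             # one-point crossover with a random elite
            mate = popu[rng.integers(0, pop // 2)]
            children[i, c:] = mate[c:]
        mut = rng.random(children.shape) < 0.05  # 5% per-gene mutation
        children[mut] = rng.integers(0, n_blocks, size=mut.sum())
        popu = np.vstack([popu[: pop - len(children)], children])
    fit = np.array([cost(ind) for ind in popu])
    return popu[np.argmin(fit)]
```

On well-separated data the algorithm quickly recovers the natural grouping, since any assignment mixing distant units is heavily penalized by the variance fitness.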
This research studies dimension-reduction methods that overcome the curse of dimensionality when traditional methods fail to provide a good estimate of the parameters, a problem that must be dealt with directly. Two approaches were used for high-dimensional data: the non-classical Sliced Inverse Regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and Principal Component Analysis (PCA), the general method used for reducing dimensions. Both SIR and PCA are based on linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear…
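The SIR procedure named above can be sketched in a few lines: whiten X, slice the data by the ordered response, and take the leading eigenvectors of the weighted covariance of the slice means. This is generic textbook SIR, not the proposed WSIR variant, and the slice count is an arbitrary choice:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced Inverse Regression: estimate effective dimension-reduction
    directions from the covariance of slice means of whitened X."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))   # whitening transform
    Z = (X - mu) @ L                             # cov(Z) is the identity
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):  # slice by sorted response
        m = Z[idx].mean(axis=0)
        M += len(idx) / n * np.outer(m, m)       # weighted cov of slice means
    vals, vecs = np.linalg.eigh(M)
    dirs = L @ vecs[:, ::-1][:, :n_dirs]         # back to the original scale
    return dirs / np.linalg.norm(dirs, axis=0)
```

Unlike PCA, which ignores the response entirely, SIR uses y to order the slices, so the leading direction targets the subspace that actually drives the regression.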
Nuclear emission rates for nucleon-induced reactions are calculated theoretically based on the one-component exciton model, using a state density with a non-Equidistant Spacing Model (non-ESM). A fair comparison is made between state-density values computed from formulae at various degrees of approximation, alongside the zeroth-order formula corresponding to the ESM. Calculations were made for the 96Mo nucleus subjected to an (N,N) reaction at Emax = 50 MeV. The results show that the non-ESM treatment of the state density significantly improves the emission rates calculated for various exciton configurations. Three terms might suffice for a proper calculation, but the results kept changing even at ten terms; however, five terms was found to give…
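For reference, the zeroth-order ESM baseline mentioned above is usually taken to be the Williams particle-hole state density with Pauli correction; a sketch, with the single-particle level density g (states/MeV) treated as a free parameter, is:

```python
import math

def williams_density(p, h, E, g=1.0):
    """Equidistant-spacing-model (ESM) particle-hole state density
    (Williams formula with Pauli correction), in states per MeV:
        w(p,h,E) = g**n * (E - A)**(n-1) / (p! * h! * (n-1)!)
    where n = p + h excitons and A = (p**2 + h**2 + p - h) / (4*g)."""
    n = p + h
    A = (p * p + h * h + p - h) / (4.0 * g)
    if n == 0 or E <= A:
        return 0.0  # no excitons, or energy below the Pauli-blocked threshold
    return (g**n * (E - A)**(n - 1)
            / (math.factorial(p) * math.factorial(h) * math.factorial(n - 1)))
```

The non-ESM series studied in the paper adds correction terms to this zeroth-order expression; the point of the comparison is how many such terms are needed before the emission rates stabilize.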
This work includes the synthesis of new six-membered heterocyclic rings bearing an active amino group, via the reaction of benzylideneacetophenone (chalcone) (1) with thiourea or urea in alcoholic basic medium to form 1,3-thiazin-2-amine (2) and 1,3-oxazin-2-amine (8), respectively. Diazotization with sodium nitrite in the presence of hydrochloric acid gave diazonium salts, which underwent coupling reactions with naphthols and phenols in the presence of sodium hydroxide to form colored azo dyes (4-7 and 10-13). O-methylation of compounds (7) and (10) yielded 1,3-thiazin-2-yl-diazenyl (14) and 1,3-oxazin-2-yl-diazenyl (15), respectively. The new compounds were characterized using vario…
The objective of this study is to determine which has the better predictive ability, the logistic regression model or the linear discriminant function, using the original data first and then principal components to reduce the dimensionality of the variables. The data come from the socio-economic family survey of the province of Baghdad in 2012: a sample of 615 observations with 13 variables, 12 of them explanatory, the dependent variable being the number of workers and unemployed.
A comparison of the two methods showed that the logistic regression model outperforms the linear discriminant function.
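A minimal, self-contained version of the comparison described above (logistic regression fitted by gradient descent versus a linear discriminant with pooled covariance) can be sketched as follows; the synthetic two-class data and the hyperparameters are illustrative assumptions, not the survey data used in the study:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Logistic regression by batch gradient descent (intercept included)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))     # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

def predict_logistic(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (Xb @ w > 0).astype(int)

def fit_lda(X, y):
    """Fisher's linear discriminant with a pooled covariance matrix."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    n0, n1 = np.sum(y == 0), np.sum(y == 1)
    S = (np.cov(X[y == 0], rowvar=False) * (n0 - 1)
         + np.cov(X[y == 1], rowvar=False) * (n1 - 1)) / (len(y) - 2)
    w = np.linalg.solve(S, m1 - m0)
    c = w @ (m0 + m1) / 2 + np.log(n0 / n1)   # threshold with prior correction
    return w, c

def predict_lda(wc, X):
    w, c = wc
    return (X @ w > c).astype(int)
```

On Gaussian classes with a shared covariance both classifiers approach the same (Bayes-optimal) linear boundary; their predictive abilities diverge mainly when those distributional assumptions fail, which is what the comparison on real survey data probes.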
The logistic regression model is an important statistical model that describes the relationship between a binary response variable and explanatory variables. The large number of explanatory variables usually used to explain the response leads to the problem of multicollinearity among them, which makes the parameter estimates of the model inaccurate.
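Multicollinearity of the kind described above is commonly diagnosed with variance inflation factors; a sketch under the usual definition VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing column j on the remaining columns, is:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the design matrix X.
    Values near 1 indicate independence; values above ~10 are the usual
    rule-of-thumb warning for severe multicollinearity."""
    X = np.asarray(X, float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        yj = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, yj, rcond=None)  # regress j on the rest
        resid = yj - Z @ beta
        r2 = 1.0 - resid.var() / yj.var()
        out[j] = 1.0 / (1.0 - r2)
    return out
```

A near-linear dependence among predictors drives some R_j² toward 1 and the corresponding VIF toward infinity, which is exactly the situation in which the logistic model's coefficient estimates become unstable.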
The purpose of this article is to improve the signal and minimize its noise by studying wavelet transforms and showing how to use the most effective of them for processing and analysis. Several transformation techniques are outlined, together with the methodology for applying them to remove noise from a signal based on a threshold value and threshold functions: the Lifting Transformation, the Wavelet Transformation, and the Packet Discrete Wavelet Transformation. A comparison was made between them using AMSE, and the best was selected. When these techniques were applied to real data represented by prices, it became evident that the lift…
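A single-level Haar transform with soft thresholding is the simplest concrete instance of the threshold-based wavelet denoising described above; the sketch below is generic and does not reproduce the paper's lifting or packet transforms:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform.
    Assumes len(x) is even; returns (approximation, detail) coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth / approximation part
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail part, where noise lives
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, lam):
    """Soft-thresholding rule: shrink each coefficient toward zero by lam."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def denoise(x, lam):
    """Threshold only the detail coefficients, then reconstruct."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, lam))
```

The design choice mirrored from the paper's setup is that denoising quality hinges on the threshold value and the threshold function: too small a lam leaves noise in the detail band, too large a lam erases genuine signal features.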