In this research, a comparison is made between robust M-estimators for the cubic smoothing splines technique, which guard against non-normality in the data or contamination of the errors, and the traditional estimation method for cubic smoothing splines, using two comparison criteria (MADE and WASE) for different sample sizes and levels of dispersion. The aim is to estimate time-varying coefficient functions for balanced longitudinal data, which consist of observations obtained from n independent subjects, each measured repeatedly at a set of m specific time points; the repeated measurements within a subject are typically correlated, while measurements from different subjects are independent.
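The comparison described above can be sketched in code. This is a minimal illustration, not the authors' implementation: SciPy's `UnivariateSpline` stands in for the cubic smoothing spline, a one-step Huber reweighting stands in for the M-estimation, and the MADE criterion is assumed here to mean the median absolute deviation of the fit from the true curve.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
truth = np.sin(2 * np.pi * t)
y = truth + rng.normal(0, 0.1, t.size)
y[::10] += 2.0                                   # contaminate every 10th error term

# traditional cubic (k=3) smoothing spline
s0 = UnivariateSpline(t, y, k=3, s=t.size * 0.05)

# one-step M-type refit: Huber weights computed from the first fit's residuals
r = y - s0(t)
scale = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale (MAD)
c = 1.345 * scale
w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))       # Huber weight function
s1 = UnivariateSpline(t, y, w=w, k=3, s=t.size * 0.05)

made_classic = np.median(np.abs(s0(t) - truth))  # MADE-style criterion (assumed form)
made_robust = np.median(np.abs(s1(t) - truth))
```

On contaminated data the reweighted refit downweights the outlying observations, which is exactly the behaviour the comparison criteria are meant to detect.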
Abstract
In this research we study the wavelet characteristics of the important time series known as the sunspot series, with the aim of verifying the periodogram that other researchers have obtained via the spectral transform, and of observing the variation in the period length on the one hand and its shifting on the other.
A continuous wavelet analysis is performed for this series, and its periodogram is identified initially. For more accuracy, the series is partitioned into its approximation and detail components down to five levels, these components are filtered using a fixed threshold in one case and a level-independent threshold in the other, and the noise series is found, which represents the difference between
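The decomposition-and-thresholding procedure above can be sketched as follows. This is a minimal illustration using a hand-written Haar transform rather than the wavelet actually used for the sunspot series, and the "fixed" threshold shown is the universal threshold estimated from the finest detail level (an assumption):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar transform: approximation and detail halves."""
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level."""
    x = np.empty(approx.size * 2)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, thr):
    """Soft thresholding: shrink coefficients toward zero by thr."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 8 * np.pi, 512)) + rng.normal(0, 0.3, 512)

approx, details = series, []
for _ in range(5):                       # five decomposition levels
    approx, d = haar_dwt(approx)
    details.append(d)                    # finest detail level first

# fixed (universal) threshold, with noise scale taken from the finest details
sigma = np.median(np.abs(details[0])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(series.size))
details = [soft_threshold(d, thr) for d in details]

rec = approx
for d in reversed(details):              # reconstruct the denoised series
    rec = haar_idwt(rec, d)
noise = series - rec                     # the noise series: original minus denoised
```

A level-independent threshold would instead compute `thr` separately from each `details[j]` inside the loop.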
An expression for the transition charge density is investigated
where the deformation in nuclear collective modes is taken into
consideration besides the shell model transition density. The
inelastic longitudinal C2 and C4 form factors are calculated using
this transition charge density for the 20Ne, 24Mg, 28Si and 32S
nuclei. In this work, the core polarization transition density is
evaluated by adopting the shape of the Tassie model together with the
derived form of the ground state two-body charge density
distributions (2BCDD's). It is noticed that the core polarization
effects which represent the collective modes are essential in
obtaining a remarkable agreement between the calculated inelastic
longitudinal C2 and C4 form factors and the experimental data.
Abstract
The non-homogeneous Poisson process is considered one of the statistical subjects that is important in other sciences and has wide application in different areas, such as waiting lines (queues), repairable systems, computer and communication systems, and reliability theory, among many others; it is also used to model phenomena that occur in a non-constant way over time (events whose rate changes with time).
This research deals with some of the basic concepts related to the non-homogeneous Poisson process. It carries out two models of the non-homogeneous Poisson process, the power-law model and the Musa–Okumoto model, to estimate the
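As an illustration of the power-law model mentioned above (a sketch, not the authors' estimation procedure): the process has mean function m(t) = (t/θ)^β, event times can be simulated by inverting unit-rate homogeneous Poisson arrivals, and β has the closed-form maximum-likelihood estimate n / Σ ln(T/t_i) for events observed on [0, T].

```python
import numpy as np

def simulate_power_law_nhpp(beta, theta, t_max, rng):
    """Simulate an NHPP with mean function m(t) = (t/theta)**beta:
    if S1 < S2 < ... are unit-rate Poisson arrival times, the event
    times are t_i = m^{-1}(S_i) = theta * S_i**(1/beta)."""
    times, s = [], 0.0
    while True:
        s += rng.exponential(1.0)         # next unit-rate arrival
        t = theta * s ** (1.0 / beta)     # map back through m^{-1}
        if t > t_max:
            return np.array(times)
        times.append(t)

rng = np.random.default_rng(2)
T = 100.0
events = simulate_power_law_nhpp(beta=1.5, theta=10.0, t_max=T, rng=rng)

# closed-form MLE of the power-law shape parameter on [0, T]
beta_hat = events.size / np.sum(np.log(T / events))
```

With β > 1 the intensity is increasing, so simulated events cluster toward the end of the observation window.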
This research aims to analyze and simulate real biochemical test data in order to uncover the relationships among the tests and how each of them impacts the others. The data were acquired from an Iraqi private biochemical laboratory. However, these data have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), Logistic Regression (LR), K-Nearest Neighbour (K-NN), and Naïve Bayes (NB)
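A minimal sketch of the preprocessing-then-classification idea described above, on synthetic data rather than the laboratory dataset: nulls are imputed column-wise with the median, and a plain k-nearest-neighbour classifier (one of the supervised techniques listed) is then applied.

```python
import numpy as np

def impute_median(X):
    """Replace nulls (NaN) column-wise with the column median."""
    X = X.copy()
    med = np.nanmedian(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = med[idx[1]]
    return X

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbour majority vote (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # synthetic "test outcome" labels
X[rng.random(X.shape) < 0.05] = np.nan      # simulate missing lab values
X = impute_median(X)

acc = (knn_predict(X[:40], y[:40], X[40:]) == y[40:]).mean()
```

Median imputation is only one of many reasonable preprocessing choices; it is used here because it is robust to the skewed values common in laboratory measurements.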
This paper is concerned with finding solutions to free-boundary inverse coefficient problems. Mathematically, we handle a one-dimensional non-homogeneous heat equation subject to initial and boundary conditions as well as non-localized integral observations of zeroth and first-order heat momentum. The direct problem is solved for the temperature distribution and the non-localized integral measurements using the Crank–Nicolson finite difference method. The inverse problem is solved by simultaneously finding the temperature distribution, the time-dependent free-boundary function indicating the location of the moving interface, and the time-wise thermal diffusivity or advection velocities. We reformulate the inverse problem as a non-
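The direct-problem solver mentioned above can be illustrated for the simplest fixed-domain case (constant diffusivity, zero Dirichlet ends). This is a sketch of the Crank–Nicolson discretization only, not the free-boundary formulation of the paper:

```python
import numpy as np

def crank_nicolson_heat(u0, a, dx, dt, steps):
    """Crank-Nicolson scheme for u_t = a * u_xx with zero Dirichlet ends:
    (I - r*D2) u^{n+1} = (I + r*D2) u^n on the interior nodes, r = a*dt/(2*dx^2)."""
    n = u0.size
    r = a * dt / (2 * dx ** 2)
    main = np.full(n - 2, 1 + 2 * r)
    off = np.full(n - 3, -r)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    B = (np.diag(np.full(n - 2, 1 - 2 * r))
         + np.diag(np.full(n - 3, r), 1)
         + np.diag(np.full(n - 3, r), -1))
    u = u0.copy()
    for _ in range(steps):
        # dense solve for clarity; a production code would use a banded solver
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return u

x = np.linspace(0, 1, 41)
u0 = np.sin(np.pi * x)          # this mode decays exactly as exp(-pi^2 * a * t)
u = crank_nicolson_heat(u0, a=1.0, dx=x[1] - x[0], dt=1e-3, steps=100)
```

Because the scheme is second-order in both time and space, the computed midpoint value at t = 0.1 agrees with the exact decay factor exp(−π²·0.1) to a few parts in ten thousand.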
Purpose: to demonstrate the possibility of moving to electronic data interchange, in its dimensions (regulatory requirements, technical requirements, human requirements, and senior-management support), in order to simplify work procedures, in their dimensions (modern procedures, clarity of procedures, short procedures, availability of the required information and means, and simplicity of the models used), given its importance for keeping abreast of recent developments in the service of municipal work. The application of electronic data interchange simplifies procedures and removes routine from the performance of the work of municipal departments. The study was applied to the Municipality of Hilla so that the transformation
The two-frequency shell model approach is used to calculate the
ground state matter density distribution and the corresponding root
mean square radii of the two-proton 17Ne halo nucleus with the
assumption that the model space of the 15O core nucleus differs from the
model space of extra two loosely bound valence protons. Two
different size parameters bcore and bhalo of the single particle wave
functions of the harmonic oscillator potential are used. The
calculations are carried out for different configurations of the outer
halo protons in 17Ne nucleus and the structure of this halo nucleus
shows that the dominant configuration is the one with the two halo protons in the 1d5/2 orbit.
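Under the harmonic-oscillator single-particle model used above, an orbit's mean-square radius is ⟨r²⟩ = b²(2n + l + 3/2), so the two-frequency matter radius follows from the core and halo occupancies. The sketch below uses illustrative size parameters, not the fitted bcore and bhalo of this work (and notes that the "1d5/2" orbit in spectroscopic labelling has radial quantum number n = 0, l = 2):

```python
import numpy as np

def ho_msr(n, l, b):
    """Harmonic-oscillator mean-square radius: <r^2> = b^2 * (2n + l + 3/2)."""
    return b ** 2 * (2 * n + l + 1.5)

# illustrative size parameters in fm (assumed, not the fitted values)
b_core, b_halo = 1.75, 2.4

# 15O core occupancy: 0s1/2 x 4 nucleons, 0p shell x 11 nucleons
core_sum = 4 * ho_msr(0, 0, b_core) + 11 * ho_msr(0, 1, b_core)
# two halo protons in the 1d5/2 orbit (n = 0, l = 2 in this convention)
halo_sum = 2 * ho_msr(0, 2, b_halo)

# matter rms radius of 17Ne as the occupancy-weighted combination
rms_matter = np.sqrt((core_sum + halo_sum) / 17.0)
```

With these illustrative parameters the matter rms radius comes out near 2.9 fm; the larger bhalo directly encodes the extended halo density.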
In this research, a beam expander for the Nd–YAG laser with 1.06 μm wavelength has been designed and studied at 5X zoom with narrow divergence at room temperature, using ZEMAX to model the system. Its performance is evaluated via the ZEMAX outputs: the spot diagram (RMS spot radius), the ray fan plot, the geometric encircled energy, and the value of the focal shift. The effect of the field of view on these outputs at room temperature is then studied.
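For reference, the first-order behaviour of a two-lens beam expander is simple: the magnification is the ratio of the focal lengths, the beam diameter grows by that factor, and the divergence shrinks by the same factor. The numbers below are illustrative, not the ZEMAX design values:

```python
# Keplerian two-lens beam expander, first-order (paraxial) relations
f1, f2 = 20.0, 100.0        # illustrative focal lengths in mm (assumed, not the design)
M = f2 / f1                 # transverse magnification -> 5x expansion
theta_in = 1.0e-3           # assumed input divergence half-angle (rad)
theta_out = theta_in / M    # divergence is reduced by the expansion factor
beam_in = 1.0               # assumed input beam diameter (mm)
beam_out = M * beam_in      # expanded output beam diameter (mm)
```

This inverse relation between beam size and divergence is why a 5X expander narrows the far-field divergence fivefold.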
The Weibull distribution belongs to the Generalized Extreme Value (GEV) family (it is the Type-III extreme value distribution), and it plays a crucial role in modeling extreme events in various fields, such as hydrology, finance, and environmental sciences. Bayesian methods play a strong, decisive role in estimating the parameters of the GEV distribution due to their ability to incorporate prior knowledge and to handle small sample sizes effectively. In this research, we compare several shrinkage Bayesian estimation methods based on the squared-error and the linear exponential (LINEX) loss functions. They were adopted and compared by the Monte Carlo simulation method. The performance of these methods is assessed based on their accuracy and computational efficiency in estimating
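The two loss functions compared above lead to different Bayes estimators: the posterior mean under squared-error loss, and −(1/a)·ln E[e^(−aθ) | data] under the LINEX loss L(Δ) = e^(aΔ) − aΔ − 1. A sketch on stand-in posterior draws (the Gamma posterior here is illustrative, not the one derived in the research):

```python
import numpy as np

rng = np.random.default_rng(4)
# stand-in posterior sample for a parameter theta (illustrative Gamma posterior)
posterior = rng.gamma(shape=20.0, scale=0.1, size=10_000)

# Bayes estimate under squared-error loss: the posterior mean
theta_se = posterior.mean()

# Bayes estimate under LINEX loss with asymmetry a > 0 (penalizes overestimation)
a = 1.0
theta_linex = -np.log(np.mean(np.exp(-a * posterior))) / a
```

For a > 0 the LINEX estimate sits below the posterior mean, reflecting the asymmetric penalty on overestimation; as a → 0 the two estimates coincide.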