The paired sample t-test is a classical test statistic used to test the difference between two means in paired data, but it is not robust against violations of the normality assumption. In this paper, alternative robust tests are suggested by combining Jackknife resampling with the Wilcoxon signed-rank test for small sample sizes and with the Wilcoxon signed-rank test based on the normal approximation for large sample sizes. Monte Carlo simulation experiments were employed to study the performance of these test statistics in terms of their type I error rates and power. All of these tests were applied to different sample sizes generated from three distributions: the bivariate normal distribution, the contaminated bivariate normal distribution, and the bivariate exponential distribution.
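The ingredients of such a jackknife-Wilcoxon combination can be illustrated with a short sketch. The snippet below is a minimal illustration under assumptions of my own (the function names, the use of the W+ statistic, and the way the jackknife standard error is used are not taken from the paper): it computes the Wilcoxon signed-rank statistic of the paired differences, its large-sample normal approximation, and leave-one-out (jackknife) replications of the statistic.

```python
import numpy as np
from scipy.stats import rankdata, norm

def signed_rank_stat(d):
    """Wilcoxon signed-rank statistic W+ (sum of ranks of positive differences)."""
    d = d[d != 0]
    ranks = rankdata(np.abs(d))
    return ranks[d > 0].sum(), len(d)

def wilcoxon_normal_approx(d):
    """Large-sample test: standardise W+ by its null mean and variance."""
    w, m = signed_rank_stat(d)
    mu = m * (m + 1) / 4
    sigma = np.sqrt(m * (m + 1) * (2 * m + 1) / 24)
    z = (w - mu) / sigma
    return w, 2 * norm.sf(abs(z))

def jackknife_se(d):
    """Leave-one-out replications of W+ and the usual jackknife standard error."""
    n = len(d)
    reps = np.array([signed_rank_stat(np.delete(d, i))[0] for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

# Example on simulated paired data (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = x + rng.normal(scale=0.5, size=20)
d = x - y
print(wilcoxon_normal_approx(d), jackknife_se(d))
```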
The aim of this study is to identify the effect of using two active learning strategies (the Jigsaw strategy and problem solving) in learning some balance beam skills in women's artistic gymnastics, as well as to identify the best of the three methods (the Jigsaw strategy, problem solving, and the traditional method) in learning these skills. The research used the experimental methodology, and the subjects were third-year students of the College of Physical Education and Sports Sciences, University of Baghdad; ten students were selected by lot for each of the three research groups. The Statistical Package for the Social Sciences (SPSS) was used to compute the means, standard deviations, the t-test, and the one-way analysis of variance…
The logistic regression model is considered one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X). Here the dependent variable is a binary response that takes two values (one when a specific event occurs and zero when it does not), such as injured and uninjured, or married and unmarried. A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate the binary-response logistic regression model by adopting the Jackknife…
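The estimation approach named in this abstract can be sketched roughly as follows. This is an illustrative example only: the simulated data, the scikit-learn implementation, and the penalty strength C are assumptions and not the authors' procedure; maximum likelihood corresponds to a very weak penalty (large C), while smaller C gives the ridge-penalised fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary-response data with two nearly collinear predictors (assumption)
rng = np.random.default_rng(1)
n, p = 100, 8
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + rng.normal(scale=0.05, size=n)   # induce multicollinearity
beta = np.array([1.5, -1.0, 0.8, 0, 0, 0, 0, 0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta))).astype(int)

def fit_coefs(X, y, C):
    """Ridge-penalised (L2) logistic regression; C is the inverse penalty strength."""
    model = LogisticRegression(penalty="l2", C=C, max_iter=1000)
    model.fit(X, y)
    return model.coef_.ravel()

# Jackknife: refit the model with each observation left out in turn
full = fit_coefs(X, y, C=1.0)
reps = np.array([fit_coefs(np.delete(X, i, axis=0), np.delete(y, i), C=1.0)
                 for i in range(n)])
bias = (n - 1) * (reps.mean(axis=0) - full)          # jackknife bias estimate
se = np.sqrt((n - 1) / n * ((reps - reps.mean(axis=0)) ** 2).sum(axis=0))
print(full, bias, se, sep="\n")
```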
Reduce the required time for measuring the permeability of clayey soils by using new manufactured cell
The particle-hole state densities have been calculated for 232Th in the case of an incident neutron with isospin T = T_Z, T = T_Z + 1, and T = T_Z + 2. The finite well depth, surface effect, isospin, and Pauli corrections are considered in the calculation of the state densities and then of the transition rates. The isospin correction function f_iso has been examined for different exciton configurations and at different excitation energies up to 100 MeV. The present results indicate that the included corrections have a stronger effect on the transition-rate behavior for the considered configurations at excitation energies above 30 MeV.
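For background, the uncorrected equidistant-spacing (Ericson) form of the particle-hole state density, on top of which finite-well-depth, surface, Pauli, and isospin corrections such as those mentioned above are usually applied, is the standard expression (quoted here only as context, not necessarily in the exact notation or form used in this work):

\[
\omega(p, h, E) \;=\; \frac{g^{\,n}\, E^{\,n-1}}{p!\, h!\, (n-1)!}, \qquad n = p + h,
\]

where p and h are the particle and hole numbers, g is the single-particle state density, and E is the excitation energy.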
In this study, three methods, the Williamson-Hall method, the size-strain plot, and the Halder-Wagner method, were used to analyze the X-ray diffraction lines and determine the crystallite size and lattice strain of nickel oxide nanoparticles, and the results of these methods were then compared with two other methods. The crystallite sizes calculated by the three methods are (0.42554) nm, (1.04462) nm, and (3.60880) nm, and the lattice strains are (0.56603), (1.11978), and (0.64606), respectively; they were compared with the results of the Scherrer method, (0.29598) nm and (0.34245), and the modified Scherrer method, (0.97497). Differences in the calculated results were observed for each of these methods in this study.
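The Scherrer and Williamson-Hall relations underlying such analyses can be sketched as follows. The peak positions and widths below are placeholders, and the shape factor K = 0.9 and the Cu K-alpha wavelength are common assumptions rather than values taken from this paper.

```python
import numpy as np

# Placeholder peak data: 2-theta positions (deg) and FWHM beta (rad) of NiO reflections
two_theta_deg = np.array([37.2, 43.3, 62.9, 75.4, 79.4])
beta_rad = np.array([0.0062, 0.0060, 0.0071, 0.0080, 0.0083])

K = 0.9                 # Scherrer shape factor (assumption)
lam = 0.15406           # Cu K-alpha wavelength in nm (assumption)
theta = np.radians(two_theta_deg / 2)

# Scherrer method: one size estimate per peak, D = K*lambda / (beta * cos(theta))
D_scherrer = K * lam / (beta_rad * np.cos(theta))
print("Scherrer sizes (nm):", D_scherrer)

# Williamson-Hall plot: beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta)
x = 4 * np.sin(theta)
y = beta_rad * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)        # linear fit of the W-H plot
D_wh = K * lam / intercept                    # crystallite size from the intercept
strain_wh = slope                             # lattice strain from the slope
print("Williamson-Hall: D =", D_wh, "nm, strain =", strain_wh)
```

The size-strain plot and Halder-Wagner methods follow the same idea of separating size and strain broadening, but use different weightings of the peak breadths.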
In this paper, the Mann-Kendall test was used to investigate the existence of possible deterministic and stochastic climatic trends at the Baghdad, Basrah, Mosul, and Al-Qaim stations. The statistical test was applied to the annual means of monthly temperatures for the period 1993-2009. The values of the S-statistic were (62, 44, 52, 64); comparing these values with the table of null probability values for S gives probabilities of (0.002, 0.026, 0.010, 0.002). These results are less than α for the 95% confidence level (α = 0.05), indicating a significant result at this level of confidence. It is concluded that an increasing trend is present at the 95% confidence level, and the variance of the S-statistic is calculated and it is com…
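The Mann-Kendall S-statistic and its large-sample normal approximation can be computed as in the sketch below. The series used here is synthetic, and the no-ties variance formula is the textbook form rather than anything specific to the stations above.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic, normal-approximation Z and p-value.
    Uses the no-ties variance formula Var(S) = n(n-1)(2n+5)/18."""
    x = np.asarray(x)
    n = len(x)
    # S = sum over all pairs i < j of sign(x_j - x_i)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)     # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * norm.sf(abs(z))              # two-sided p-value
    return s, z, p

# Example: a short annual series with a mild upward trend (synthetic)
series = np.array([22.1, 22.4, 22.3, 22.8, 23.0, 22.9, 23.4, 23.6])
print(mann_kendall(series))
```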
In this article, the lattice Boltzmann method with two relaxation times (TRT) for the D2Q9 model is used to obtain numerical results for 2D flow. The problem is set up to show the dissipation rate of the kinetic energy and its relationship with the enstrophy growth for a 2D dipole-wall collision. The investigation is carried out for normal collision and for oblique incidence. We demonstrate the accuracy of moment-based boundary conditions with slip and Navier-Maxwell slip conditions in simulating this flow. These conditions are under the effect of Burnett-order stress conditions that are consistent with the discrete Boltzmann equation. Stable results are found by using this kind of boundary condition where d…
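The TRT collision step on the D2Q9 lattice splits each population into symmetric and antisymmetric parts with respect to the opposite direction and relaxes them with two separate rates. The single-node sketch below illustrates only that split; the relaxation rates and the second-order equilibrium are generic choices, and the streaming step and the moment-based boundary conditions of the article are not shown.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and index of the opposite direction
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def equilibrium(rho, u):
    """Standard second-order D2Q9 equilibrium distribution."""
    cu = c @ u
    return w * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(u @ u))

def trt_collide(f, omega_plus, omega_minus):
    """TRT collision at one node: relax the symmetric (even) and antisymmetric
    (odd) parts of f - f_eq with two different rates."""
    rho = f.sum()
    u = (f[:, None] * c).sum(axis=0) / rho
    feq = equilibrium(rho, u)
    f_plus,  f_minus  = (f + f[opp]) / 2,  (f - f[opp]) / 2
    fe_plus, fe_minus = (feq + feq[opp]) / 2, (feq - feq[opp]) / 2
    return f - omega_plus * (f_plus - fe_plus) - omega_minus * (f_minus - fe_minus)

# Example: collide a node initialised at equilibrium with a small velocity
f0 = equilibrium(1.0, np.array([0.05, 0.0]))
print(trt_collide(f0, omega_plus=1.2, omega_minus=1.8))
```

In practice the two rates are usually linked through the so-called magic parameter, which is chosen to control the accuracy and stability of the scheme.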
Hartha Formation is a horizon in the overburden section of the X-oilfield that generates a lot of non-productive time (NPT) associated with drilling mud losses. This study was conducted to investigate the loss events in this formation and to provide geological interpretations based on datasets from nine wells in the field of interest. The interpretation was based on different analyses, including wireline logs, cuttings descriptions, image logs, and analog data. Seismic and coherency data were also used to formulate the geological interpretations and calibrate them against the loss events of the Hartha Fm.
The results revealed that the upper part of the Hartha Fm. was identified as an interval capable of creating potential…
This work presents the use of a laser diode in fiber distributed data interface (FDDI) networks. FDDI uses optical fiber as the transmission medium, which solves the problems resulting from EMI and noise and, in addition, increases the security of transmission. A network with a ring topology consisting of three computers was designed and implemented. The timed token protocol was used to achieve and control the communication process over the ring. Non-return-to-zero inverted (NRZI) modulation was carried out as part of the physical (PHY) sublayer. The optical system consists of a laser diode with a wavelength of 820 nm and a maximum output power of 2.5 mW as the source, an optical fiber as the channel, and a positive intrinsic negative (PIN) photodiode as the detector…
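NRZI encoding, used above in the PHY sublayer, maps each logical 1 to a transition of the line level and each 0 to no transition (in FDDI it is applied to the 4B/5B code groups before transmission on the fiber). The sketch below is a generic illustration of that rule, not code from this work.

```python
def nrzi_encode(bits, initial_level=0):
    """NRZI: a logical 1 toggles the line level, a logical 0 keeps it unchanged."""
    level = initial_level
    out = []
    for b in bits:
        if b == 1:
            level ^= 1           # transition encodes a 1
        out.append(level)        # no transition encodes a 0
    return out

def nrzi_decode(levels, initial_level=0):
    """Recover the bit stream by detecting transitions between successive levels."""
    prev = initial_level
    bits = []
    for lv in levels:
        bits.append(1 if lv != prev else 0)
        prev = lv
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
encoded = nrzi_encode(data)
assert nrzi_decode(encoded) == data   # round-trip check
print(encoded)
```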