The nuclear level density parameter was determined for 106 nuclei in the non-Equi-Spacing Model (non-ESM), the Equi-Spacing Model (ESM), and the Back-Shifted Energy-Dependent Fermi Gas (BSEDFG) model; the results are tabulated and compared with experimental work. No recognizable differences were found between our results and the experimental values. The calculated level density parameters were then used to compute the state density as a function of excitation energy for the 58Fe and 246Cm nuclei. The results are in good agreement with experimental results from earlier published work.
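For orientation, the Fermi-gas state density into which such level density parameters enter has the standard Bethe form (a textbook expression quoted for context, not taken from this work; a is the level density parameter and U the back-shifted excitation energy):

```latex
\omega(E) \;=\; \frac{\sqrt{\pi}}{12}\,
\frac{\exp\!\bigl(2\sqrt{aU}\bigr)}{a^{1/4}\,U^{5/4}},
\qquad U = E - \Delta ,
```

where Δ is the back-shift (pairing) energy. The exponential growth in √(aU) is why small changes in a matter strongly at high excitation energy.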
The nuclear charge density distributions ρ(r) and root mean square (RMS) radii of medium-mass nuclei such as 90Zr and 92Mo were calculated from elastic electron scattering, based on a modified shell model that assigns occupation probabilities to the surface orbits, with the 2p and 2s shells depleted and the 1g shell gaining occupancy. The occupation probabilities of these states differ noticeably from the predictions of the SSM. We have found an improvement in the determination of the ground-state charge density, and this improvement allows a more precise identification of the charge density distribution (CDD) difference between 92Mo and 90Zr, illustrating the influence of the extra protons.
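Schematically, occupation-number treatments of this kind build the charge density from single-particle radial wavefunctions weighted by occupation probabilities (a generic form shown for context; the symbols η and R are assumptions, not notation from this abstract):

```latex
\rho_{\mathrm{ch}}(r) \;=\; \frac{1}{4\pi} \sum_{n\ell} \eta_{n\ell}\; 2(2\ell+1)\,\bigl|R_{n\ell}(r)\bigr|^{2},
\qquad
\langle r^{2} \rangle^{1/2} \;=\; \left[ \frac{4\pi}{Z} \int_{0}^{\infty} \rho_{\mathrm{ch}}(r)\, r^{4}\, dr \right]^{1/2},
```

with the occupation probabilities η normalized so that 4π∫ρ_ch(r) r² dr = Z.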
The pre-equilibrium and equilibrium double differential cross sections are calculated at different energies using the Kalbach systematics approach in terms of the exciton model with the Feshbach, Kerman and Koonin (FKK) statistical theory. The angular distributions of nucleons and light nuclei emitted from 27Al target nuclei, at emission energies in the center-of-mass system, are considered using the Multistep Compound (MSC) and Multistep Direct (MSD) reaction mechanisms. The two-component exciton model with different corrections has been implemented in calculating the particle-hole state density, which feeds the transition rates of the possible reactions and, in turn, the calculation of the differential cross sections that include the MSC and MSD contributions.
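The particle-hole state density referred to here is commonly evaluated with the Williams formula (quoted as a standard expression, not from this abstract); for p particles, h holes, single-particle state density g, and excitation energy E:

```latex
\omega(p,h,E) \;=\; \frac{g\,\bigl(gE - A_{p,h}\bigr)^{\,n-1}}{p!\,h!\,(n-1)!},
\qquad n = p + h,
\qquad A_{p,h} = \frac{p^{2} + h^{2} + p - 3h}{4},
```

where A_{p,h} is the Pauli correction. Two-component versions split p and h into proton and neutron pieces with separate g values.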
In this paper, point estimation for the parameter of the Maxwell-Boltzmann distribution is investigated using a simulation technique. The parameter is estimated by two groups of methods: the first comprises non-Bayesian methods (the maximum likelihood estimator and the moment estimator), while the second comprises Bayesian methods using two different priors, the inverse chi-square and Jeffreys priors (the standard Bayes estimator and the Bayes estimator based on Jeffreys' prior). Comparisons among these methods were made using the mean square error criterion, with simulations run for different sample sizes.
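A minimal sketch of the non-Bayesian part of such a comparison (illustrative code, not the authors' implementation; the scale parameterization a, with density proportional to x² exp(−x²/2a²), is an assumption):

```python
import numpy as np

def maxwell_sample(a, n, rng):
    # A Maxwell-Boltzmann variate is the norm of a 3-D isotropic Gaussian
    # with per-axis standard deviation a.
    return a * np.linalg.norm(rng.standard_normal((n, 3)), axis=1)

def mle(x):
    # Maximizing the log-likelihood gives a_hat^2 = sum(x^2) / (3n).
    return np.sqrt(np.mean(x**2) / 3.0)

def moment(x):
    # E[X] = 2a*sqrt(2/pi), so invert the sample mean.
    return np.mean(x) / (2.0 * np.sqrt(2.0 / np.pi))

def mse_compare(a=1.0, n=20, reps=5000, seed=0):
    # Monte Carlo estimate of the mean square error of each estimator.
    rng = np.random.default_rng(seed)
    err_mle, err_mom = [], []
    for _ in range(reps):
        x = maxwell_sample(a, n, rng)
        err_mle.append((mle(x) - a) ** 2)
        err_mom.append((moment(x) - a) ** 2)
    return np.mean(err_mle), np.mean(err_mom)
```

Asymptotically the MLE variance is a²/(6n) versus roughly 0.178 a²/n for the moment estimator, so the MLE is expected to come out slightly ahead on MSE.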
A comparison of double informative and non-informative priors assumed for the parameter of the Rayleigh distribution is considered. Three different sets of double priors are included for the single unknown parameter of the Rayleigh distribution: the square-root inverted gamma (SRIG) with the natural conjugate family of priors, the SRIG with the non-informative prior, and the natural conjugate family of priors with the non-informative prior. The data are generated from the Rayleigh distribution for three cases with different sample sizes (small, medium, and large), and the Bayes estimators for the parameter are derived under a squared error loss function.
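For reference, writing the Rayleigh density in terms of a scale parameter σ (a standard parameterization assumed here, not quoted from the abstract), the Bayes estimator under squared-error loss is simply the posterior mean:

```latex
f(x \mid \sigma) \;=\; \frac{x}{\sigma^{2}}\, e^{-x^{2}/(2\sigma^{2})}, \quad x > 0,
\qquad
\hat{\sigma}_{\mathrm{Bayes}} \;=\; \mathbb{E}\bigl[\sigma \mid x_{1},\dots,x_{n}\bigr].
```

The likelihood depends on the data only through Σxᵢ², which is why inverted-gamma-type priors on σ² are conjugate and make these posterior means available in closed form.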
This paper compares the performance of traditional methods for estimating the parameter of the exponential distribution (the maximum likelihood estimator and the uniformly minimum variance unbiased estimator) with the Bayes estimator, both when the data satisfy the exponential assumption and when they depart from it owing to the presence of outliers (contaminated values). Monte Carlo simulation is employed, with the mean square error (MSE) adopted as the criterion of statistical comparison among the three estimators, for sample sizes ranging from small to large (n = 5, 10, 25, 50, 100) and different cases (with and without contamination).
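The effect of contamination on such a comparison can be sketched for the two classical estimators of the exponential rate (illustrative code under assumed settings, not the authors' study; the Bayes estimator is omitted since its prior is not specified here):

```python
import numpy as np

def estimators(x):
    # MLE of the rate: n / sum(x); UMVUE: (n - 1) / sum(x).
    n = len(x)
    s = x.sum()
    return n / s, (n - 1) / s

def mse_study(rate=1.0, n=25, reps=4000, contaminate=False, seed=1):
    # Monte Carlo MSE of both estimators, optionally with outliers.
    rng = np.random.default_rng(seed)
    err_mle, err_umvue = [], []
    for _ in range(reps):
        x = rng.exponential(1.0 / rate, size=n)
        if contaminate:
            # Inflate 10% of the observations tenfold to mimic outliers.
            k = max(1, n // 10)
            x[:k] *= 10.0
        m, u = estimators(x)
        err_mle.append((m - rate) ** 2)
        err_umvue.append((u - rate) ** 2)
    return np.mean(err_mle), np.mean(err_umvue)
```

Inflated observations enlarge Σx and pull both estimators toward zero, so the contaminated MSE is dominated by bias rather than variance.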
This paper is concerned with a Double Stage Shrinkage Bayesian (DSSB) estimator for lowering the mean squared error of the classical estimator θ̂ for the scale parameter θ of an exponential distribution in a region R around available prior knowledge θ0 about the actual value θ, taken as an initial estimate, as well as for reducing the cost of experimentation. In situations where the experiments are time consuming or very costly, a double-stage procedure can be used to reduce the expected sample size needed to obtain the estimator. This estimator is shown to have smaller mean squared error for certain choices of the shrinkage weight factor ψ(·) and of the acceptance region R. Expressions for
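The generic shrinkage construction underlying such estimators pulls the sample estimate toward the prior guess (a standard form shown for context; the notation, writing the scale parameter as θ, is an assumption):

```latex
\tilde{\theta} \;=\; \psi(\hat{\theta})\,\hat{\theta} \;+\; \bigl(1-\psi(\hat{\theta})\bigr)\,\theta_{0},
\qquad 0 \le \psi(\cdot) \le 1 ,
```

applied when the first-stage estimate falls in the acceptance region R; in a double-stage procedure, a second sample is drawn only when it does not, which is what reduces the expected sample size.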
This paper describes a new finishing process in which newly made magnetic abrasives are used to effectively finish brass plate, a material that is very difficult to polish by conventional machining processes. The Taguchi experimental design method was adopted to evaluate the effect of the process parameters on the improvement of surface roughness and hardness achieved by magnetic abrasive polishing. The process parameters are: the applied current to the inductor, the working gap between the workpiece and the inductor, the rotational speed, and the volume of powder. Analysis of variance (ANOVA) was performed using statistical software to identify the optimal conditions for better surface roughness and hardness. Regression models based on statistical m
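Taguchi analyses of this kind typically rank parameter levels by a signal-to-noise ratio; for a smaller-the-better response such as surface roughness, and a larger-the-better response such as hardness, the standard formulas are (quoted for context, not from this paper):

```latex
\mathrm{S/N}_{\text{smaller}} = -10 \log_{10}\!\Bigl(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} y_{i}^{2}\Bigr),
\qquad
\mathrm{S/N}_{\text{larger}} = -10 \log_{10}\!\Bigl(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} y_{i}^{-2}\Bigr),
```

where the yᵢ are the repeated response measurements at one combination of factor levels; the level with the highest S/N is taken as optimal.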
In this paper, wavelets are used to study multivariate fractional Brownian motion through the deviations of the random process, in order to obtain an efficient estimator of the Hurst exponent. Simulation experiments showed that the proposed estimator performs efficiently. The estimation exploits the stationarity of the detail coefficients of the wavelet transform, since the variance of these coefficients exhibits power-law behavior. Two wavelet filters (Haar and db5) are used to minimize the mean square error of the model.
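The wavelet log-variance idea can be illustrated for a single path (a sketch, not the authors' estimator: it uses a hand-rolled Haar transform and ordinary Brownian motion, for which the true Hurst exponent is 0.5):

```python
import numpy as np

def haar_details(x, levels):
    # Orthonormal Haar DWT: at each level, split the signal into
    # pairwise averages (kept for the next level) and differences.
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return details

def hurst_wavelet(x, levels=8, fit=range(2, 8)):
    # For fBm, Var(d_j) ~ 2^{j(2H+1)}, so the slope of log2-variance
    # versus level j estimates 2H + 1.
    details = haar_details(x, levels)
    js = np.array(list(fit))
    logvar = np.array([np.log2(np.var(details[j - 1])) for j in js])
    slope = np.polyfit(js, logvar, 1)[0]
    return (slope - 1.0) / 2.0

rng = np.random.default_rng(42)
bm = np.cumsum(rng.standard_normal(2**16))  # Brownian motion, H = 0.5
H = hurst_wavelet(bm)
```

The lowest level is excluded from the fit because discretization of the path distorts the finest-scale variance.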