Copula modeling is widely used in modern statistics. Boundary bias is one of the problems encountered in nonparametric estimation, and kernel estimators, the most common nonparametric tools, suffer from it. In this paper, the copula density function is estimated using the probit-transformation nonparametric method in order to remove the boundary bias from which kernel estimators suffer. A simulation study compares three nonparametric estimators of the copula density, and a new method is proposed that outperforms the others across five copula families, different sample sizes, different levels of correlation between the copula variables, and different parameter values.
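As a brief illustration of the probit-transformation idea described above, the following sketch estimates a bivariate copula density by mapping pseudo-observations to the real plane, applying a standard kernel estimator there, and transforming back. The data-generating Gaussian copula, the sample size, and scipy's default bandwidth are illustrative assumptions, not the settings used in the paper.

```python
# A minimal sketch of probit-transformation kernel estimation of a copula density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate from a Gaussian copula (illustrative choice) via a bivariate normal.
rho = 0.5
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=500)
u = stats.norm.cdf(z)                      # pseudo-observations in (0, 1)^2

# Probit transform: map the unit square back to the real plane, where the
# kernel estimator no longer suffers from boundary bias.
s = stats.norm.ppf(u)                      # shape (n, 2)
kde = stats.gaussian_kde(s.T)              # bivariate KDE on the transformed scale

def copula_density(u1, u2):
    """Back-transform the KDE to a copula density on (0, 1)^2."""
    x, y = stats.norm.ppf(u1), stats.norm.ppf(u2)
    return kde([x, y])[0] / (stats.norm.pdf(x) * stats.norm.pdf(y))

print(copula_density(0.3, 0.7))            # estimated c(0.3, 0.7)
```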
In this article, we develop a new loss function, obtained as a modification of the linear exponential (LINEX) loss function by weighting it. We derive estimators of the scale parameter, reliability, and hazard functions based on upper record values of the Lomax distribution (LD). To study the small-sample performance of the proposed loss function through a Monte Carlo simulation, we compare the maximum likelihood estimator, the Bayesian estimator under LINEX loss, and the Bayesian estimator under squared-error (SE) loss. The results show that the modified method is the best for estimating the scale parameter, reliability, and hazard functions.
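For orientation, the sketch below shows how a Bayes estimator under the standard LINEX loss is computed from posterior draws and compared with the squared-error (posterior-mean) estimator. The gamma "posterior" sample is a placeholder; the paper's weighted LINEX form and the Lomax record-value posterior are not reproduced here.

```python
# LINEX loss: L(d, theta) = exp(a*(d - theta)) - a*(d - theta) - 1.
import numpy as np

rng = np.random.default_rng(1)
theta_draws = rng.gamma(shape=3.0, scale=0.7, size=10_000)  # hypothetical posterior draws

def bayes_linex(draws, a):
    """Bayes estimator under LINEX loss: -(1/a) * log E[exp(-a*theta)]."""
    return -np.log(np.mean(np.exp(-a * draws))) / a

post_mean = theta_draws.mean()             # Bayes estimator under squared-error loss
print(post_mean, bayes_linex(theta_draws, a=1.0), bayes_linex(theta_draws, a=-1.0))
```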
Optimization calculations are carried out to find the optimum properties of a combined quadrupole lens, consisting of electrostatic and magnetic lenses, that produces an achromatic lens. The modified bell-shaped field model is used, and the calculation is performed by solving the equation of motion and finding the transfer matrices in the convergence and divergence planes; these matrices are then used to obtain lens properties such as the magnification and the aberration coefficients. To find the optimum values of the chromatic and spherical aberration coefficients, the effects of both the lens excitation parameter (n) and the effective length of the lens are taken into account as effective parameters in the optimization process.
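The sketch below illustrates the transfer-matrix step in the simplest setting: a hard-edge (rectangular) quadrupole field rather than the modified bell-shaped model used in the paper, with illustrative values for the field-gradient parameter k and the effective length L.

```python
# Quadrupole transfer matrices in the convergence and divergence planes
# (hard-edge field model; k and L are illustrative values).
import numpy as np

def quadrupole_matrices(k, L):
    """Return (convergence-plane, divergence-plane) 2x2 transfer matrices."""
    t = k * L
    m_conv = np.array([[np.cos(t),      np.sin(t) / k],
                       [-k * np.sin(t), np.cos(t)]])
    m_div  = np.array([[np.cosh(t),     np.sinh(t) / k],
                       [k * np.sinh(t), np.cosh(t)]])
    return m_conv, m_div

m_c, m_d = quadrupole_matrices(k=20.0, L=0.05)   # k in 1/m, L in m
f_conv = -1.0 / m_c[1, 0]                        # focal length estimate in the convergence plane
print(m_c, m_d, f_conv, sep="\n")
```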
A computational investigation is carried out in the field of charged-particle optics with the aid of numerical analysis methods. The work is concerned with the design of a symmetrical double pole-piece magnetic lens. The axial magnetic flux density distribution is described by an exponential model, and the paraxial-ray equation is solved to obtain the trajectories of particles in the suggested field. From the first and second derivatives of the axial potential distribution, optical properties such as the focal length and the aberration coefficients (radial distortion coefficient and spiral distortion coefficient) are determined. Finally, the pole-piece profiles capable of producing the suggested field distribution are determined.
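As a rough illustration of the ray-tracing step, the sketch below integrates the paraxial-ray equation r'' + (e B(z)^2 / (8 m V)) r = 0 for an assumed axial field and reads off a focal length from the exit slope of a parallel incoming ray. The field amplitude B0, half-width a, and beam voltage V are illustrative, and a Gaussian-shaped B(z) stands in for the exponential model used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

E_CHARGE, E_MASS = 1.602e-19, 9.109e-31    # C, kg
B0, a, V = 0.01, 0.01, 10e3                # T, m, volts (assumed values)

def B(z):
    return B0 * np.exp(-(z / a) ** 2)      # stand-in axial flux density model

def paraxial(z, y):
    r, rp = y
    return [rp, -E_CHARGE * B(z) ** 2 / (8 * E_MASS * V) * r]

# Ray entering parallel to the axis well before the lens field.
sol = solve_ivp(paraxial, [-5 * a, 5 * a], [1.0e-3, 0.0])
rp_exit = sol.y[1, -1]
focal_length = -1.0e-3 / rp_exit           # projected focal length from the exit slope
print(focal_length)
```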
In this work, a planar diode magnetron sputtering device was designed and fabricated. The device consists of two aluminum discs of 8 cm diameter and 5 mm thickness, and the distance between the two electrodes was set to 2 cm, 3 cm, 4 cm, and 5 cm. A double probe of tungsten wire, 0.1 mm in diameter and 1.2 mm in length, was designed and constructed to investigate the electron temperature and the electron and ion densities at different distances between the cathode and the anode. The probes were placed at the center of the plasma, between the anode and the cathode. The results of this work show that when the distance between the cathode and the anode is increased, the electron temperature decreases. Also, the electron density increases with the increasing
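For context on the double-probe diagnostic mentioned above, the sketch below fits the symmetric double-probe characteristic I(V) = I_sat * tanh(eV / (2 k_B T_e)) to extract the electron temperature. The synthetic "measured" data and the noise level are illustrative; they are not the probe measurements reported in the work.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_probe(v, i_sat, te_ev):
    # Symmetric double-probe characteristic with T_e expressed in eV.
    return i_sat * np.tanh(v / (2.0 * te_ev))

v = np.linspace(-30, 30, 61)                                   # bias voltage sweep (V)
i_meas = double_probe(v, 2.0e-3, 3.0) \
         + np.random.default_rng(2).normal(0, 5e-5, v.size)    # synthetic current data (A)

(i_sat_fit, te_fit), _ = curve_fit(double_probe, v, i_meas, p0=[1e-3, 1.0])
print(f"ion saturation current ~ {i_sat_fit:.2e} A, Te ~ {te_fit:.2f} eV")
```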
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayesian (NB) classifiers. This paper investigates the performance of these two classification methods using the Car Evaluation dataset. A model was built with each algorithm and the results were compared. Our experimental results indicate that the BNN classifier yields higher accuracy than the NB classifier, but it is less efficient because it is time-consuming and difficult to analyze due to its black-box nature.
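The sketch below reproduces the shape of such a comparison with scikit-learn, using MLPClassifier as a backpropagation neural network and CategoricalNB as the Naïve Bayesian classifier. A synthetic categorical dataset stands in for the UCI Car Evaluation data, and the network architecture and train-test split are illustrative choices.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.integers(0, 4, size=(1728, 6))          # 6 categorical attributes, 4 levels each
y = (X.sum(axis=1) > 9).astype(int)             # toy acceptability label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

nb = CategoricalNB().fit(X_tr, y_tr)            # Naive Bayes on integer-coded categories

enc = OneHotEncoder(handle_unknown="ignore").fit(X_tr)
bnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
bnn.fit(enc.transform(X_tr), y_tr)              # backpropagation network on one-hot inputs

print("NB accuracy :", accuracy_score(y_te, nb.predict(X_te)))
print("BNN accuracy:", accuracy_score(y_te, bnn.predict(enc.transform(X_te))))
```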
In this paper, we use frequentist and Bayesian approaches to the linear regression model to predict future unemployment rates in Iraq. Parameters are estimated by the ordinary least squares method and, for the Bayesian approach, by the Markov chain Monte Carlo (MCMC) method; the calculations are carried out in R. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), are used to compare the performance of the estimates. The analysis shows that the Bayesian linear regression model performs better and can be used as an alternative to the frequentist approach, and the results indicate that unemployment rates will continue to increase over the next two decades.
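The sketch below mirrors that comparison in miniature: ordinary least squares against a random-walk Metropolis sampler for the same linear trend model, scored by RMSE and MAD on held-out points. The simulated rate series, the flat priors, and the proposal scales are illustrative assumptions; they do not reproduce the Iraqi data or the R/MCMC setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(30, dtype=float)
y = 8.0 + 0.25 * t + rng.normal(0, 0.8, t.size)      # synthetic rate series
X = np.column_stack([np.ones_like(t), t])
X_tr, y_tr, X_te, y_te = X[:24], y[:24], X[24:], y[24:]

# Frequentist fit: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Bayesian fit: random-walk Metropolis with flat priors on beta and log sigma.
def log_post(beta, log_sigma):
    resid = y_tr - X_tr @ beta
    return -y_tr.size * log_sigma - 0.5 * np.sum(resid**2) / np.exp(2 * log_sigma)

theta = np.array([*beta_ols, 0.0])                   # start at OLS, log sigma = 0
draws = []
for _ in range(20_000):
    prop = theta + rng.normal(0, [0.2, 0.01, 0.05])  # proposal step sizes (assumed)
    if np.log(rng.uniform()) < log_post(prop[:2], prop[2]) - log_post(theta[:2], theta[2]):
        theta = prop
    draws.append(theta.copy())
beta_bayes = np.mean(draws[5_000:], axis=0)[:2]      # posterior mean after burn-in

def rmse(pred): return np.sqrt(np.mean((y_te - pred) ** 2))
def mad(pred):  return np.median(np.abs(y_te - pred))

for name, b in [("OLS", beta_ols), ("Bayes", beta_bayes)]:
    print(name, rmse(X_te @ b), mad(X_te @ b))
```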
BMMAM Saleh, European Academic Research, 2016.