Entropy, defined as a measure of uncertainty, is transformed using the cumulative distribution function and the reliability function of the Burr Type-XII distribution. For data that suffer from volatility, a probability distribution is built on the failure function of a sample once the conditions of a probability distribution are satisfied. The formula of the new probability distribution, obtained by applying the entropy transformation to the continuous Burr Type-XII distribution, is derived; the new function is tested and found to satisfy the conditions of a probability density. Its mean and cumulative distribution function are then derived so that data can be generated for simulation experiments. The parameters of the distribution extracted from the failure-function formula are estimated by the maximum likelihood method, White's method, and a mixed estimation method, and the estimators are compared by the mean squared error (MSE) criterion in a simulation study, so as to identify the best estimator across different sample sizes and values of the scale and shape parameters. The results reveal that the mixed estimator is best for the shape parameter, while White's estimator is best for the scale parameter.
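The paper's transformed density and the White and mixed estimators are not reproduced here; as a rough illustration of the simulation protocol only, the following Python sketch fits the two shape parameters of a standard Burr Type-XII sample by maximum likelihood and scores the estimator by MSE over repeated samples (the true parameter values, sample size, and number of replications are illustrative assumptions).

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
c_true, d_true = 2.0, 3.0   # assumed shape parameters of the standard Burr XII
n, reps = 50, 1000          # assumed sample size and number of simulation runs

est = np.empty((reps, 2))
for i in range(reps):
    x = stats.burr12.rvs(c_true, d_true, size=n, random_state=rng)
    # maximum likelihood fit; location and scale fixed to isolate the shapes
    c_hat, d_hat, _, _ = stats.burr12.fit(x, floc=0, fscale=1)
    est[i] = (c_hat, d_hat)

mse = ((est - [c_true, d_true]) ** 2).mean(axis=0)
print(f"MSE(c) = {mse[0]:.4f}, MSE(d) = {mse[1]:.4f}")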
The root-mean-square proton, neutron, matter, and charge radii, energy levels, inelastic longitudinal form factors, reduced transition probabilities from the ground state to the first-excited 2+ state of the even-even isotopes, quadrupole moments, quadrupole deformation parameters, and occupation numbers are computed for the calcium isotopes A = 42, 44, 46, 48, 50 using the fp model space and the FPBM interaction. The 40Ca nucleus is regarded as the inert core for all isotopes in this model space, with the valence nucleons moving throughout the fp-shell model space comprising the 1f7/2, 2p3/2, 1f5/2, and 2p1/2 orbits. This model space is used to present calculations with the FPBM interaction
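For context, the quadrupole deformation parameter quoted in such calculations is conventionally extracted from the reduced transition probability through the rotational-model relation below; treating B(E2) in units of e^2 fm^4 with the radius convention R_0 = 1.2 A^{1/3} fm is an assumption about the paper's choice.

\[
\beta_2 \;=\; \frac{4\pi}{3ZR_0^2}\,\sqrt{\frac{B(E2;\,0^+_1 \rightarrow 2^+_1)}{e^2}},
\qquad R_0 = 1.2\,A^{1/3}\ \text{fm}.
\]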
This study used the soil map of the LD7 project to interpret the distribution and shapes of map units, taking the compaction index as a descriptor of map-unit shape. The compaction indices of the map units covered a wide and varied range: the maximum value was 0.892 and the minimum was 0.010, both for the MF9 map unit. Because MF9 shows such a wide range of compaction indices, the indices were analyzed statistically by cluster analysis to group similar ranges together and ease the use of their values. The MF9 unit was therefore considered a key map unit of the LD7 project soils, which may be used to predict the existence of other map units in areas of
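A hedged sketch of the workflow just described: compute a compaction (shape) index for each map-unit delineation and group the values by hierarchical cluster analysis. The circularity form 4πA/P², the hypothetical areas and perimeters, and the two-group cut are all assumptions for illustration; the paper may define the compaction index differently.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def compaction_index(area, perimeter):
    # 4*pi*A / P**2: equals 1 for a circle, tends to 0 for elongated shapes
    return 4.0 * np.pi * area / perimeter ** 2

# hypothetical delineations of one map unit (areas in m^2, perimeters in m)
areas      = np.array([1.2e4, 3.5e4, 8.0e3, 2.1e4, 5.6e4])
perimeters = np.array([6.5e2, 9.0e2, 1.1e3, 7.2e2, 1.0e3])

k = compaction_index(areas, perimeters)
# Ward-linkage clustering of the index values into two groups of similar range
groups = fcluster(linkage(k.reshape(-1, 1), method="ward"),
                  t=2, criterion="maxclust")
print(dict(zip(np.round(k, 3), groups)))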
Mixture experiments suffer from high correlation and linear multicollinearity between the explanatory variables, because the components are constrained to sum to one and interact with one another in the model; this strengthens the linear relationships among the explanatory variables, as shown by the variance inflation factor (VIF). L-pseudo components are used to reduce the dependence between the components of the mixture.
To estimate the parameters of the mixture model, we used methods that accept some increase in bias in exchange for reduced variance, namely the ridge regression method and the least absolute shrinkage and selection operator (LASSO) method.
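A minimal sketch under assumed data: diagnose the multicollinearity with the variance inflation factor, then fit the ridge and LASSO estimators, mirroring the approach the abstract describes (the simulated mixture, penalty values, and coefficients are illustrative assumptions).

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n = 60
comp = rng.dirichlet([2.0, 2.0, 2.0], size=n)   # mixture components, sum to 1
# two components plus their interaction; the unit-sum constraint makes these
# regressors strongly correlated
X = np.column_stack([comp[:, 0], comp[:, 1], comp[:, 0] * comp[:, 1]])
y = X @ np.array([3.0, 1.5, 4.0]) + rng.normal(0.0, 0.1, n)

def vif(X, j):
    # VIF_j = 1/(1 - R_j^2) from regressing column j on the other columns
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    r2 = 1.0 - (resid ** 2).sum() / ((X[:, j] - X[:, j].mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

print([round(vif(X, j), 1) for j in range(X.shape[1])])
print(Ridge(alpha=1.0).fit(X, y).coef_)    # biased but lower-variance fit
print(Lasso(alpha=0.01).fit(X, y).coef_)   # can shrink coefficients to zero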
In this paper, the wind speed data measured every 5 minutes during 2012 at a height of 10 m at Tweitha are statistically analyzed to assess the time of wind-turbine electrical power generation. After collecting the Tweitha wind data and calculating the mean wind speed, the cumulative Weibull diagram and the probability density function were plotted; the cumulative Weibull distribution together with the turbine cut-in and furling wind speeds can then be used as mathematical input parameters to estimate the hours of electrical power generation of a wind turbine during one day or one year. At the Tweitha site the average wind speed was found to be v = 1.76 m/s, so five different wind turbines were selected to calculate the hours of electrical generation.
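A hedged sketch of the estimate just described: fit a Weibull distribution to the measured wind speeds, then use its CDF between the turbine cut-in and furling speeds to estimate generation hours per year. The synthetic series and the turbine speed limits below are illustrative assumptions, not Tweitha measurements or a specific turbine's ratings.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# stand-in for a year of 5-minute readings; mean near the reported 1.76 m/s
v = stats.weibull_min.rvs(1.6, scale=2.0, size=105_120, random_state=rng)

k, _, c = stats.weibull_min.fit(v, floc=0)   # Weibull shape k and scale c (m/s)
v_cutin, v_furling = 3.0, 25.0               # assumed turbine speed limits

# P(v_cutin <= V <= v_furling) with F(v) = 1 - exp(-(v/c)^k)
p_gen = np.exp(-(v_cutin / c) ** k) - np.exp(-(v_furling / c) ** k)
print(f"k = {k:.2f}, c = {c:.2f} m/s, generation hours/year ~ {8760 * p_gen:.0f}")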
This work aims to design a system able to diagnose two types of tumors in the human brain (benign and malignant) using the curvelet transform and a probabilistic neural network. The proposed method proceeds in stages: preprocessing using a Gaussian filter, segmentation using fuzzy c-means, and feature extraction using the curvelet transform. The extracted features are then used to train and test the probabilistic neural network. The proposed screening technique successfully detected brain cancer from MRI images with a recognition accuracy of almost 100%.
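A structural sketch of the described pipeline, not the authors' code: curvelet_features is a hypothetical placeholder (a practical curvelet transform requires a dedicated package such as CurveLab bindings), and simple thresholding stands in for the fuzzy c-means segmentation step.

import numpy as np
from scipy.ndimage import gaussian_filter

def curvelet_features(img, scales=(2, 4, 8)):
    # placeholder: band-energy statistics from an FFT pyramid, NOT a true
    # curvelet transform
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spec.shape
    return np.array([spec[h // 2 - h // s : h // 2 + h // s,
                          w // 2 - w // s : w // 2 + w // s].mean()
                     for s in scales])

img = np.random.rand(128, 128)            # stand-in for one MRI slice
smooth = gaussian_filter(img, sigma=1.0)  # 1) preprocessing
mask = smooth > smooth.mean()             # 2) crude stand-in for fuzzy c-means
feats = curvelet_features(smooth * mask)  # 3) feature extraction
print(feats)  # 4) features like these would train/test the probabilistic NN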
Excessive skewness, which sometimes occurs in data, is an obstacle to assuming a normal distribution. Recent studies have therefore been active in studying the skew-normal distribution (SND), which fits skewed data: it contains the normal distribution as a special case and adds a skewness parameter (α) that gives the normal distribution more flexibility. When estimating the parameters of the SND we face non-linear likelihood equations, and solving them directly by the method of maximum likelihood (ML) estimation yields inaccurate and unreliable solutions. To solve this problem, two methods can be used: the genetic algorithm (GA) and the iterative reweighting algorithm (IR).
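For reference, the standard skew-normal density with skewness parameter α (the location-scale form follows by the usual substitution) is

\[
f(x;\alpha) \;=\; 2\,\phi(x)\,\Phi(\alpha x),
\]

where φ and Φ denote the standard normal density and distribution function; α = 0 recovers the normal distribution.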
The aim of this study is to estimate the parameters and the reliability function of the Kumaraswamy distribution with its two positive parameters (a, b > 0), a continuous probability distribution that shares many characteristics with the beta distribution while offering some extra advantages.
The shape of the density of this distribution and its most important characteristics are explained, and the two parameters (a, b) and the reliability function are estimated using the maximum likelihood method (MLE) and Bayes methods. Simulation experiments are conducted to examine the behaviour of the estimation methods for different sample sizes, depending on the mean squared error criterion; the results show that the Bayes method is better
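For reference, the Kumaraswamy density, distribution function, and the reliability function estimated in the study are

\[
f(x;a,b) = ab\,x^{a-1}\left(1-x^{a}\right)^{b-1}, \qquad 0 < x < 1,
\]
\[
F(x;a,b) = 1-\left(1-x^{a}\right)^{b}, \qquad R(t) = \left(1-t^{a}\right)^{b},
\]

with a, b > 0; R(t) is the probability of survival beyond time t.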
The phenomenon of informal building has spread recently in Iraqi residential areas in general, and in Baghdad in particular, owing to the urgent housing need on the one hand and the lack of commitment to building controls on the other. Building that does not comply with the controls and legislation governing housing in Iraq leads to heterogeneity in both building densities and plot areas, and to disorder in the urban fabric and townscape of those areas. The research problem is identified as the absence of a clear vision of the general aspects of the phenomenon of informal building in the residential street scene, and of the role of designed housing projects as a substitute for informal building in built-up residential areas.
Flexible molecular docking is a computational method of structure-based drug design used to evaluate the binding interactions between a receptor and a ligand and to identify the ligand conformation within the receptor pocket. Various molecular docking programs are currently in wide use; therefore, establishing the accuracy and performance of these programs has significant value. In this comparative study, the performance and accuracy of three widely used non-commercial docking programs (AutoDock Vina, 1-Click Docking, and UCSF DOCK) were evaluated by investigating the predicted binding affinity and binding conformation for the same set of small molecules (HIV-1 protease inhibitors) against a single protein target, the HIV-1 protease enzyme.
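A hedged sketch of one common way such binding-conformation agreement is quantified: the heavy-atom root-mean-square deviation (RMSD) between two poses of the same ligand. Identical atom ordering is assumed, and the coordinates are synthetic stand-ins; the study's actual evaluation protocol may differ.

import numpy as np

def pose_rmsd(coords_a, coords_b):
    # RMSD between two N x 3 coordinate arrays (angstroms in, angstroms out)
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

pose_a = np.random.rand(20, 3) * 10                  # stand-in docked pose
pose_b = pose_a + np.random.normal(0, 0.5, (20, 3))  # pose from another program
print(f"pose RMSD = {pose_rmsd(pose_a, pose_b):.2f} A")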