The analysis of survival and reliability is among the central topics of vital statistics today because of its importance in demographic, medical, industrial, and engineering fields. This research generated random samples from the Generalized Gamma (GG) probability distribution using the Inverse Transformation Method (ITM). Because the cumulative distribution function of the GG involves the incomplete gamma integral, classical estimation is difficult, so a numerical approximation method had to be employed before the survival function could be estimated. The survival function was then estimated by Monte Carlo simulation. The entropy method was used for estimation and fitting, alongside the classical method. The best estimation method was identified using two comparison criteria: the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE). Sample sizes of n = 18, 30, 50, and 81 were selected, where n = 18 represents five-year age groups for the phenomenon under study and n = 81 represents single-year age groups, and the experiment was replicated 500 times.
The simulation results showed that the maximum likelihood method is best for small and medium samples; it was applied to the five-year age-group data, which suffer from disturbances and irregularities, of the Iraq Household Socio-Economic Survey (IHSES II, 2012). The entropy method outperformed for large samples; it was applied to single-year age groups obtained mathematically from the five-year data using Sprague's interpolation formula. The Sprague multipliers are used to derive the numbers of deaths and of population by single years of age within a given five-year age group, from the deaths and population counts of that group and its neighboring five-year groups; this was carried out in Excel. Single-year age-group data were used for accuracy, so that no age at risk of extinction goes undetected.
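The pipeline described above can be sketched in a few lines: inverse-transform sampling from the Generalized Gamma requires numerically inverting the regularized incomplete gamma function, after which the survival function is estimated empirically over Monte Carlo replications and scored with RMSE and MAPE. This is an illustration with assumed parameter values (a, p, k and the evaluation grid), not the paper's code or data.

```python
# Inverse Transformation Method for the Generalized Gamma GG(a, p, k):
# its CDF is F(x) = P(k, (x/a)^p), the regularized lower incomplete
# gamma, so F^{-1} needs a numerical routine (gammaincinv).
import numpy as np
from scipy.special import gammainc, gammaincinv

rng = np.random.default_rng(0)
a, p, k = 2.0, 1.5, 2.0            # illustrative scale/shape parameters

def gg_sample(n):
    u = rng.uniform(size=n)         # ITM: x = F^{-1}(u)
    return a * gammaincinv(k, u) ** (1.0 / p)

def gg_survival(x):
    return 1.0 - gammainc(k, (x / a) ** p)   # true S(x) = 1 - F(x)

# Monte Carlo estimate of the survival function on a grid, replicated
# R = 500 times (as in the abstract), compared to the true S(x).
grid = np.linspace(0.5, 5.0, 10)
true_S = gg_survival(grid)
R, n = 500, 50
est = np.empty((R, grid.size))
for r in range(R):
    s = gg_sample(n)
    est[r] = [(s > t).mean() for t in grid]  # empirical survival

rmse = np.sqrt(((est - true_S) ** 2).mean())
mape = (np.abs(est - true_S) / true_S).mean()
print(round(rmse, 3), round(mape, 3))
```

The same RMSE/MAPE comparison would then be repeated for each estimation method and sample size to rank them.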
Iraqi EFL teachers face problems in teaching the "English for Iraq" series to primary public-school pupils. In this paper, the researchers identify the main problems faced by these teachers and try to find solutions to them. To achieve the aim of the study, a list of questions was asked, and from the teachers' responses the researchers formed an idea of the main problems, which relate to textbook material, parents, learners, environment, and technology. The researchers therefore adapted a questionnaire, with some changes and modifications, to achieve the purpose of the study. The questionnaire uses a five-point scale (strongly agree, agree, undecided, disagree, strongly disagree). To achieve face validity, the
Among metaheuristic algorithms, population-based algorithms are explorative search algorithms, superior to local search algorithms at exploring the search space for globally optimal solutions. However, the primary downside of such algorithms is their low exploitative capability, which prevents the expansion of the search-space neighborhood toward more optimal solutions. The firefly algorithm (FA) is a population-based algorithm that has been widely used in clustering problems. However, FA is limited by premature convergence when no neighborhood search strategies are employed to improve the quality of clustering solutions in the neighborhood region while exploring the global regions of the search space. On the
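For orientation, the population-based search that FA performs can be sketched minimally: each firefly is attracted toward brighter (better) fireflies with an attractiveness that decays with distance, plus a random step. This is the standard continuous-optimization form on a toy objective, not the paper's clustering variant; the parameter values are assumptions.

```python
# Minimal firefly algorithm (FA) sketch for continuous minimization.
# Brightness corresponds to lower cost; attractiveness beta decays
# with squared distance; alpha scales the random exploration step.
import numpy as np

rng = np.random.default_rng(1)

def firefly_minimize(f, dim, n=15, iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0):
    X = rng.uniform(-5, 5, size=(n, dim))       # firefly population
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:             # j is brighter than i
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                    fit[i] = f(X[i])
        alpha *= 0.97                           # anneal the random step
    best = fit.argmin()
    return X[best], fit[best]

x_best, f_best = firefly_minimize(lambda x: np.sum(x ** 2), dim=2)
print(f_best)
```

A clustering variant would encode candidate cluster centers in each firefly and use the within-cluster error as the cost function.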
Great scientific progress has led to a widespread accumulation of information in large databases, so it is important to revise and compile this vast amount of data in order to extract hidden information, or to classify data according to its interrelations, so that it can be exploited for technical purposes.
Data mining (DM) is appropriate in this area, and this research applies the K-Means algorithm for clustering data in practice, so that the effect of changing the sample size (n) and the number of clusters (K) on the results can be observed.
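A minimal sketch of the K-Means (Lloyd's) algorithm referenced above, in pure NumPy: assign each point to its nearest center, recompute centers, and measure the within-cluster error, which is the quantity that responds to the choice of K. The data here are synthetic, not the study's data.

```python
# Lloyd's K-Means: alternate assignment and center-update steps,
# then report the within-cluster sum of squared errors (SSE).
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, K, iters=50):
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)               # nearest-center assignment
        for k in range(K):
            pts = X[labels == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)   # center update
    sse = ((X - centers[labels]) ** 2).sum()
    return labels, centers, sse

# two well-separated blobs, n = 200
X = np.vstack([rng.normal(0, 0.5, (100, 2)),
               rng.normal(5, 0.5, (100, 2))])
_, _, sse2 = kmeans(X, K=2)
_, _, sse1 = kmeans(X, K=1)
print(sse1 > sse2)
```

Repeating this over several values of n and K is exactly the kind of experiment the abstract describes: SSE drops sharply once K matches the true group structure.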
In this study, dead and live anaerobic biomass was used for the biosorption of Pb(II), Cr(III), and Cd(II) ions from a synthetic wastewater. Biosorption was investigated by batch adsorption experiments. It was found that the biosorption capacities were significantly affected by the biosorbent dosage. The process follows the Langmuir isotherm model (regression coefficients 0.995, 0.99, and 0.987 for Pb(II), Cr(III), and Cd(II) ions, respectively, onto dead anaerobic biomass), indicating uniform distribution over the biomass surface. The experimental uptake capacities were 51.56, 29.2, and 28 mg/g for Pb(II), Cr(III), and Cd(II), respectively, onto dead anaerobic biomass, compared with 35, 13.6, and 11.8 mg/g for Pb(II), Cr(III), and Cd(II), respectively, onto live
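The Langmuir fit reported above is typically done on the linearized form Ce/q = Ce/q_max + 1/(b·q_max), regressing Ce/q on Ce so that the slope gives 1/q_max. The sketch below uses synthetic equilibrium data generated from assumed q_max and b values, not the study's measurements.

```python
# Linearized Langmuir isotherm fit: recover q_max and b from
# equilibrium concentration (Ce) and uptake (q) data.
import numpy as np

q_max_true, b_true = 50.0, 0.3              # assumed (mg/g, L/mg)
Ce = np.array([5.0, 10, 20, 40, 80, 160])   # equilibrium conc. (mg/L)
q = q_max_true * b_true * Ce / (1 + b_true * Ce)  # Langmuir uptake

# regression of Ce/q against Ce: slope = 1/q_max, intercept = 1/(b*q_max)
slope, intercept = np.polyfit(Ce, Ce / q, 1)
q_max = 1 / slope
b = 1 / (intercept * q_max)
print(round(q_max, 2), round(b, 2))
```

The regression coefficient of this line is what the abstract reports (0.995, 0.99, 0.987) as evidence that the Langmuir model fits.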
This paper attempts to model the rate of penetration (ROP) for an Iraqi oil field with the aid of mud-logging data. Data from the Umm Radhuma formation were selected for this modeling; they include weight on bit, rotary speed, flow rate, and mud density. A statistical approach was applied to these data to improve ROP modeling. As a result, an empirical linear ROP model was developed with good fitness when compared with actual data. Nonlinear regression analyses of different forms were also attempted, and the results showed that the power model has good predictive capability relative to the other forms.
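The two model forms compared above can be sketched with ordinary least squares: the linear model is fitted directly, while the power model becomes linear after taking logarithms of both sides. The predictors match the abstract (weight on bit, rotary speed, flow rate, mud density), but the data below are synthetic with assumed ranges and a made-up generating law, not the field data.

```python
# Linear vs. power-law ROP regression on synthetic mud-logging data.
import numpy as np

rng = np.random.default_rng(3)
n = 60
WOB = rng.uniform(5, 25, n)     # weight on bit (assumed range)
RPM = rng.uniform(60, 180, n)   # rotary speed
Q   = rng.uniform(300, 700, n)  # flow rate
MW  = rng.uniform(8.5, 12, n)   # mud density
ROP = 2.0 * WOB**0.8 * RPM**0.5 / MW * (1 + 0.05 * rng.normal(size=n))

# linear model: ROP = a0 + a1*WOB + a2*RPM + a3*Q + a4*MW
Xlin = np.column_stack([np.ones(n), WOB, RPM, Q, MW])
coef_lin, *_ = np.linalg.lstsq(Xlin, ROP, rcond=None)

# power model: log ROP = b0 + b1*log WOB + ...  (linear in log space)
Xpow = np.column_stack([np.ones(n), np.log(WOB), np.log(RPM),
                        np.log(Q), np.log(MW)])
coef_pow, *_ = np.linalg.lstsq(Xpow, np.log(ROP), rcond=None)

def r2(y, yhat):
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

r2_lin = r2(ROP, Xlin @ coef_lin)
r2_pow = r2(np.log(ROP), Xpow @ coef_pow)
print(round(r2_lin, 3), round(r2_pow, 3))
```

Comparing the two goodness-of-fit values on real data is how one would reach the abstract's conclusion that the power form predicts well.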
This research studies panel data models with mixed random parameters, which contain two types of parameters, one random and the other fixed. The random parameter arises from differences in the marginal slopes of the cross-sections, while the fixed parameter arises from differences in the fixed intercepts; the random errors of each cross-section exhibit heteroscedasticity in addition to first-order serial correlation. The main objective of this research is to use efficient methods suited to panel data in the case of small samples, and to achieve this goal the feasible generalized least squares
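The error structure described above (first-order serial correlation) is the classic setting for feasible generalized least squares: fit OLS, estimate the AR(1) coefficient from the residuals, quasi-difference the data, and refit. The sketch below shows this for a single synthetic series rather than a full panel model; all values are assumptions.

```python
# FGLS for a regression with AR(1) disturbances (Cochrane-Orcutt style).
import numpy as np

rng = np.random.default_rng(4)
n, rho_true = 200, 0.6
x = rng.normal(size=n)
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):                        # AR(1) errors
    e[t] = rho_true * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ beta_ols                         # step 1: OLS residuals
rho = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])   # step 2: estimate rho

# step 3: quasi-difference both sides and re-estimate (FGLS)
ys = y[1:] - rho * y[:-1]
Xs = X[1:] - rho * X[:-1]
beta_fgls, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(np.round(beta_fgls, 2), round(rho, 2))
```

In the panel setting, the same idea is combined with a per-section variance estimate to handle the heteroscedasticity across cross-sections.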
Encryption of data translates data into another form or symbol so that only people with access to the secret key, or a password, can read it. Encrypted data are generally referred to as ciphertext, while unencrypted data are known as plaintext. Entropy can be used as a measure of the number of bits needed to code the data of an image: as the pixel values of an image are dispersed over more gray levels, the entropy increases. The aim of this research is to compare the CAST-128 method with a proposed adaptive key against the RSA encryption method for video frames, to determine the more accurate method with the highest entropy. The first method is achieved by applying CAST-128 and
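The entropy measure described above is the Shannon entropy of the gray-level histogram, H = -Σ p_i log2 p_i over the 256 levels of an 8-bit image; a well-encrypted frame should push H toward the 8-bit maximum. A minimal sketch on synthetic frames (not the paper's video data):

```python
# Shannon entropy of an 8-bit image from its gray-level histogram.
import numpy as np

def image_entropy(img):
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                       # skip empty gray levels
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(5)
flat = np.full((64, 64), 128, dtype=np.uint8)            # one gray level
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # spread levels
print(round(image_entropy(flat), 2), round(image_entropy(noisy), 2))
```

A flat frame has entropy 0 bits, while a frame with pixels spread over all 256 levels approaches 8 bits, which is why entropy serves as the comparison criterion between the two ciphers.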
... Show MoreObjective(s): This study aims to assess health related quality of life among Iraqi patients with chronic viral hepatitis
B and C also to find out the relationship between health related quality of life and patients demographic
characteristic and to design a new measurement scale for assessing QoL among viral hepatitis B and C patients
which can be suitable to be adopted for Iraqi patients
Methodology: A descriptive quantitative study is carried out at Gastroenterology and Hepatology Teaching
Hospital from February, 1st, 2011 to August 30th 2011, Anon probability (purposive sample) of (100) chronic viral
hepatitis B and C persons , who were clients of Gastroenterology and Hepatology Teaching Hospital / outpatient
clin