Poverty is a substantial phenomenon that shapes the future of societies and governments and the way they deal with education, health, and the economy. Poverty often takes multidimensional forms through education and health. This research studies multidimensional poverty in Iraq using penalized regression methods to analyze big data sets from demographic surveys collected by the Central Statistical Organization in Iraq. We choose a classical penalized regression method, Ridge Regression, and a second penalized method, the Smooth Integration of Counting and Absolute Deviation (SICA), to analyze big data sets related to the different forms of poverty in Iraq. Euclidean Distance (ED) was used to compare the two methods, and the research concludes that the SICA method outperforms the Ridge estimator under big data conditions.
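The shrinkage idea behind the Ridge estimator can be shown in a minimal single-predictor sketch, where the penalty term enters the closed-form slope. The data and penalty value below are illustrative assumptions, not the paper's survey data or its SICA implementation.

```python
# Minimal sketch of ridge shrinkage for one predictor through the origin.
# Illustrative data only; not the Iraqi survey data used in the study.

def ols_slope(x, y):
    # ordinary least squares slope: sum(x*y) / sum(x^2)
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def ridge_slope(x, y, lam):
    # the ridge penalty lam enters the denominator, shrinking the slope
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 8.1]

b_ols = ols_slope(x, y)
b_ridge = ridge_slope(x, y, lam=5.0)
# the ridge estimate is pulled toward zero relative to OLS
print(b_ols, b_ridge)
```

The same shrinkage principle extends to the multivariate case compared in the study, where SICA replaces the quadratic penalty with a non-convex one that also performs variable selection.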
With the revolutionary expansion of the Internet, worldwide information growth increases the application of communication technology, and the rapid growth of data volume raises the need for secure, robust, and reliable techniques built on effective algorithms. Many algorithms and techniques are available for data security. This paper presents a cryptosystem that combines several substitution cipher algorithms with the circular queue data structure. The two substitution techniques, the Homophonic Substitution Cipher and the Polyalphabetic Substitution Cipher, are merged in a single circular queue with four different keys for each of them, producing eight different outputs for …
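The polyalphabetic stage combined with a circular queue of keys can be sketched as follows. The four keys and the plaintext are illustrative assumptions; the paper's actual key schedule and its homophonic stage are not reproduced here.

```python
from collections import deque

# Hedged sketch: a polyalphabetic (Vigenere-style) substitution whose keys
# are drawn from a circular queue. Keys and plaintext are illustrative.

def encrypt(plaintext, key):
    # shift each letter by the corresponding key letter (A=0 ... Z=25)
    out = []
    for i, ch in enumerate(plaintext):
        shift = ord(key[i % len(key)]) - ord('A')
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(out)

keys = deque(["LEMON", "APPLE", "GRAPE", "PEACH"])  # circular queue of keys

def encrypt_round(plaintext):
    key = keys[0]
    keys.rotate(-1)          # advance the circular queue for the next block
    return encrypt(plaintext, key)

print(encrypt_round("ATTACKATDAWN"))  # → LXFOPVEFRNHR, then the queue rotates
```

Rotating the queue after each block means successive blocks are enciphered under different keys, which is the mechanism that multiplies the number of distinct outputs.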
This paper presents a hybrid approach for solving the null values problem; it hybridizes rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model. A large set of complete data, called learning data, is used to find the decision rule sets that are then used in solving the incomplete data problem. The intelligent swarm algorithm is used for feature selection: the bees algorithm serves as a heuristic search algorithm combined with rough set theory as the evaluation function. Another feature selection algorithm, ID3, is also presented; it works as a statistical algorithm instead of an intelligent one. A comparison between the two approaches is made in terms of their performance for null values estimation.
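The core idea of learning rules from complete records and applying them to fill nulls can be sketched with a toy table. The attributes and values below are illustrative assumptions; the paper's rough-set reducts and bees-algorithm search are not reproduced.

```python
from collections import Counter

# Hedged sketch: use complete records (the "learning data") to fill a null
# by majority vote among records matching on the known attributes.
# Toy data only; not the paper's rough-set/bees-algorithm pipeline.

complete = [
    {"outlook": "sunny", "humidity": "high", "play": "no"},
    {"outlook": "sunny", "humidity": "normal", "play": "yes"},
    {"outlook": "rain", "humidity": "high", "play": "yes"},
    {"outlook": "sunny", "humidity": "high", "play": "no"},
]

def fill_null(record, target, data):
    # match on every known attribute, then take the majority target value
    matches = [
        row[target] for row in data
        if all(row[k] == v for k, v in record.items()
               if k != target and v is not None)
    ]
    return Counter(matches).most_common(1)[0][0] if matches else None

incomplete = {"outlook": "sunny", "humidity": "high", "play": None}
print(fill_null(incomplete, "play", complete))  # → no
```

Feature selection in the paper serves exactly this step: it chooses which attributes the matching should be conditioned on.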
Generally, direct measurement of the soil compression index (Cc) is expensive and time-consuming. To save time and effort, indirect methods to obtain Cc may be an inexpensive option. Usually, the indirect methods are based on a correlation with easier-to-measure descriptive variables such as liquid limit, soil density, and natural water content. This study used the ANFIS and regression methods to obtain Cc indirectly. To achieve the aim of this investigation, 177 undisturbed samples were collected from the cohesive soil in Sulaymaniyah Governorate in Iraq. Results of this study indicated that ANFIS models outperformed the regression method in estimating Cc, with R² of 0.66 and 0.48 for ANFIS and regression, respectively.
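As an example of the correlation-based route such studies benchmark against, the classical Terzaghi and Peck relation estimates Cc from the liquid limit alone. This is a textbook correlation used for illustration, not the regression model fitted in the paper.

```python
# Illustrative indirect estimate of Cc from the liquid limit (LL, in %)
# using the classical Terzaghi & Peck correlation Cc = 0.009 * (LL - 10).
# Shown for context only; not the study's fitted regression or ANFIS model.

def cc_from_liquid_limit(ll_percent):
    return 0.009 * (ll_percent - 10.0)

print(cc_from_liquid_limit(50.0))  # → 0.36
```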
Software-Defined Networking (SDN) has transformed network management by detaching the control plane from the data forwarding plane, resulting in unparalleled flexibility and efficiency in network administration. However, the heterogeneity of traffic in SDN presents challenges in meeting Quality of Service (QoS) demands and efficiently managing network resources. SDN traffic flows are often divided into elephant flows (EFs) and mice flows (MFs). EFs, distinguished by their large packet volumes and long durations, account for a small share of total flows but require disproportionate network resources, causing congestion and delays for the smaller MFs. MFs, on the other hand, have short lifetimes and are latency-sensitive, but they account for …
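The EF/MF split described above is often operationalized as a threshold rule on flow size and duration. The thresholds and sample flows below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: a simple threshold rule separating elephant flows (EFs)
# from mice flows (MFs). Thresholds are assumed for illustration only.

EF_BYTES = 10 * 1024 * 1024   # assumed: >= 10 MiB marks an elephant flow
EF_SECONDS = 10.0             # assumed: >= 10 s duration

def classify_flow(total_bytes, duration_s):
    if total_bytes >= EF_BYTES and duration_s >= EF_SECONDS:
        return "EF"
    return "MF"

flows = [
    (50 * 1024 * 1024, 120.0),  # long bulk transfer -> EF
    (4 * 1024, 0.05),           # short query -> MF
]
print([classify_flow(b, d) for b, d in flows])  # → ['EF', 'MF']
```

Once flows are labeled, an SDN controller can route EFs and MFs differently, which is the resource-management problem the abstract raises.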
In this research, several estimators of the hazard function are introduced using a nonparametric method, namely kernel functions for censored data with varying bandwidths and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic, and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most cases, the results show that the local bandwidth is the best for all the …
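The basic kernel-smoothing step underlying such estimators can be sketched with the Epanechnikov kernel, one of the four kernels compared. The sample, bandwidth, and evaluation point are illustrative; the censoring adjustment and boundary correction used in the study are omitted.

```python
# Minimal sketch of an Epanechnikov kernel smoother (one of the four
# boundary kernels the study compares). Illustrative data; the censored-data
# hazard version and boundary correction are not reproduced here.

def epanechnikov(u):
    # K(u) = 0.75 * (1 - u^2) on [-1, 1], zero outside
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kernel_density(x, sample, h):
    # standard kernel estimate: (1 / (n*h)) * sum K((x - Xi) / h)
    return sum(epanechnikov((x - xi) / h) for xi in sample) / (len(sample) * h)

sample = [1.2, 1.9, 2.1, 2.4, 3.0, 3.3]
print(kernel_density(2.0, sample, h=1.0))
```

A local bandwidth replaces the fixed h with a value that varies with x, which is the distinction the simulations evaluate.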
Achieving an accurate and optimal rate of penetration (ROP) is critical for a cost-effective and safe drilling operation. While different techniques have been used to achieve this goal, each approach has limitations, prompting researchers to seek better solutions. This study's objective is to combine the Bourgoyne and Young (BYM) ROP equations with Bagging Tree regression in a southern Iraqi field. Although the BYM equations are widely used to estimate drilling rates, they need more specific drilling parameters to capture different ROP complexities. The Bagging Tree algorithm, a random forest variant, addresses these limitations by blending domain knowledge …
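Bagging itself is simple to state: bootstrap-resample the training data, fit one weak learner per resample, and average their predictions. The sketch below uses a one-feature decision stump as the weak learner and made-up weight-on-bit/ROP pairs; the study's full Bagging Tree model on BYM drilling parameters is not reproduced.

```python
import random

# Hedged sketch of bagging with decision stumps. Toy data only; not the
# study's Bagging Tree regression on BYM drilling parameters.

def fit_stump(xs, ys):
    # pick the split threshold minimizing squared error
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    if best is None:                      # degenerate bootstrap sample
        m = sum(ys) / len(ys)
        return lambda x: m
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def bagged_predict(xs, ys, x_new, n_trees=25, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]   # bootstrap resample
        stump = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(stump(x_new))
    return sum(preds) / len(preds)       # average over the ensemble

wob = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]     # illustrative weight-on-bit
rop = [10.0, 12.0, 15.0, 22.0, 25.0, 27.0]  # illustrative ROP values
print(bagged_predict(wob, rop, x_new=5.5))
```

Averaging over resampled learners is what reduces the variance of the single-tree fit, which is the property the study exploits on top of the BYM parameterization.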
The particle-hole state densities have been calculated for 232Th in the case of an incident neutron with isospin values T = T_Z, T = T_Z + 1, and T = T_Z + 2. The finite well depth, surface effect, isospin, and Pauli corrections are considered in the calculation of the state densities and then the transition rates. The isospin correction function f_iso has been examined for different exciton configurations and at different excitation energies up to 100 MeV. The present results indicate that the included corrections have a stronger effect on the transition-rate behavior for the examined exciton configurations above 30 MeV excitation energy.
In this research we assume that the number of radiation particles emitted by time 𝑡 follows a Poisson distribution with parameter 𝜃𝑡, where 𝜃 > 0 is the intensity of radiation. We conclude that the time of the first emission is exponentially distributed with parameter 𝜃, while the time of the k-th emission (𝑘 = 2, 3, 4, …) is gamma distributed with parameters (𝑘, 𝜃). We used real data to show that the Bayes estimator 𝜃* for 𝜃 is more efficient than 𝜃̂, the maximum likelihood estimator for 𝜃, using the derived variances of both estimators as a statistical indicator of efficiency.
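The distributional facts stated above can be checked by simulation: the k-th emission time of a Poisson process with intensity 𝜃 is the sum of k independent Exp(𝜃) gaps, i.e. Gamma(k, 𝜃), so the mean first-emission time should be near 1/𝜃. The intensity and sample size below are illustrative, not the paper's real data.

```python
import random

# Hedged sketch: simulate emission times of a Poisson process with
# intensity theta and check that the mean first-emission time is close
# to 1/theta. Illustrative parameters, not the paper's data set.

def emission_times(theta, k, rng):
    # k-th emission time = sum of k Exp(theta) inter-emission gaps,
    # which is exactly the Gamma(k, theta) distribution
    t, times = 0.0, []
    for _ in range(k):
        t += rng.expovariate(theta)
        times.append(t)
    return times

rng = random.Random(42)
theta = 2.0
first = [emission_times(theta, 3, rng)[0] for _ in range(20000)]
print(sum(first) / len(first))  # should be near 1/theta = 0.5
```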
The Bayesian estimation of reliability in the stress(Y)–strength(X) model, which describes the life of a component with strength X and stress Y (the component fails if and only if, at any time, the applied stress exceeds its strength), has been studied; the reliability R = P(Y < X) can then be considered a measure of the component's performance. In this paper, a Bayesian analysis is carried out for R when the two variables X and Y are independent Weibull random variables with common shape parameter α, in order to study the effect of each of the two different scale parameters β and λ, respectively, using three different loss functions [weighted, quadratic, and entropy] under two different prior functions [Gamma and extension of Jeffreys] …
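For independent Weibulls with a common shape α, written with rate parameters so that F(t) = 1 − exp(−β t^α) for X and F(t) = 1 − exp(−λ t^α) for Y, the reliability has the closed form R = λ / (λ + β), which a quick Monte Carlo check confirms. The parameter values below are illustrative assumptions, not the paper's priors or data.

```python
import random

# Hedged sketch: Monte Carlo check of R = P(Y < X) for independent Weibulls
# with common shape alpha, against the closed form lam / (lam + beta) under
# the rate parameterization F(t) = 1 - exp(-rate * t^alpha). Illustrative
# parameters only.

alpha = 2.0            # common shape parameter
beta, lam = 1.0, 3.0   # rate parameters of strength X and stress Y

# random.weibullvariate takes a scale, so convert rate -> scale
scale_x = beta ** (-1.0 / alpha)
scale_y = lam ** (-1.0 / alpha)

rng = random.Random(7)
n = 50000
hits = sum(
    rng.weibullvariate(scale_y, alpha) < rng.weibullvariate(scale_x, alpha)
    for _ in range(n)
)
print(hits / n, lam / (lam + beta))  # Monte Carlo estimate vs closed form 0.75
```

The Bayesian analysis in the paper places priors on β and λ and estimates this same R under the three loss functions.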