Excessive skewness, which sometimes occurs in data, is an obstacle to assuming a normal distribution. Recent studies have therefore focused on the skew-normal distribution (SND), which accommodates skewed data: it generalizes the normal distribution through an additional skewness parameter (α) that gives it greater flexibility. When estimating the parameters of the SND by maximum likelihood (ML), the likelihood equations are non-linear, and their direct solutions can be inaccurate and unreliable. To address this problem, two methods based on maximum likelihood can be used: the genetic algorithm (GA) and the iterative reweighting (IR) algorithm. A Monte Carlo simulation was conducted with different skewness levels and sample sizes, and the results were compared. It was concluded that estimating the SND model with the GA is best for small and medium sample sizes, while for large samples the IR algorithm is best. The study was also applied to real data to estimate the parameters and compare the methods using the AIC, BIC, MSE, and Def criteria.
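The non-linear likelihood problem described above can be illustrated with a generic numerical ML fit of the skew-normal distribution in SciPy. This is a hedged sketch of the estimation problem only, not the paper's GA or IR algorithm; the true parameter values below are illustrative assumptions.

```python
# Sketch: numerical maximum-likelihood fit of a skew-normal distribution.
# skewnorm.fit solves the non-linear score equations numerically, which is
# exactly the step the abstract says GA and IR are used to stabilise.
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(0)
true_alpha, true_loc, true_scale = 4.0, 0.0, 1.0   # alpha = skewness parameter
data = skewnorm.rvs(true_alpha, loc=true_loc, scale=true_scale,
                    size=500, random_state=rng)

# The score equations have no closed-form solution, so the fit is iterative;
# heuristic or reweighting solvers (GA, IR) target the same optimum.
alpha_hat, loc_hat, scale_hat = skewnorm.fit(data)
print(alpha_hat, loc_hat, scale_hat)
```

With larger samples the numerical fit becomes more reliable, which is consistent with the abstract's finding that the simpler IR scheme suffices for large n.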
Introduction and Aim: The Krüppel-like factor 14 (KLF14) gene plays an important role in metabolic illnesses and is also involved in the regulation of many other biological processes. This study's objective was to determine whether the KLF14 single-nucleotide polymorphism (SNP) rs972283 was linked to an increased risk of peptic ulcer disease in the investigated population. Materials and Methods: Participants in this study included 71 people diagnosed with peptic ulcers and 50 healthy controls. To genotype the KLF14 SNP rs972283, an amplification refractory mutation system-polymerase chain reaction (ARMS-PCR) was carried out, and the PCR results were
In this article, we present a quasi-contraction mapping approach for the D iteration, and we prove that this iteration has the same convergence rate as the modified SP iteration. On the other hand, we prove that the D iteration approach for quasi-contraction maps is faster than certain current leading iteration methods such as Mann and Ishikawa. A numerical example is also given.
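The classical schemes the article compares against can be sketched numerically. This is a hedged illustration on a simple contraction T(x) = x/2 + 1 (fixed point 2); the D iteration itself follows the authors' definition and is not reproduced here, and the step sizes are illustrative assumptions.

```python
# Sketch: Mann and Ishikawa iterations for a contraction mapping.
def T(x):
    return x / 2 + 1          # contraction with fixed point x* = 2

def mann(x0, alpha=0.5, steps=100):
    # x_{n+1} = (1 - alpha) x_n + alpha T(x_n)
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

def ishikawa(x0, alpha=0.5, beta=0.5, steps=100):
    # y_n     = (1 - beta)  x_n + beta  T(x_n)
    # x_{n+1} = (1 - alpha) x_n + alpha T(y_n)
    x = x0
    for _ in range(steps):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * x + alpha * T(y)
    return x

print(mann(10.0), ishikawa(10.0))   # both converge to the fixed point 2
```

Comparing iterate sequences against the known fixed point is the standard way such convergence-rate claims are checked numerically.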
Convergence speed is the most important feature of the Back-Propagation (BP) algorithm, and many improvements have been proposed since its introduction to speed up the convergence phase. In this paper, a new modified BP algorithm called Speeding up Back-Propagation Learning (SUBPL) is proposed and compared to standard BP. Several data sets were used in experiments to verify the improvement achieved by SUBPL.
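For reference, the baseline that SUBPL modifies can be sketched as a minimal standard BP loop. The SUBPL modification itself is not specified in the abstract, so this hedged example shows only ordinary gradient back-propagation on XOR; the network size and learning rate are illustrative assumptions.

```python
# Sketch: standard back-propagation on XOR, one hidden layer, squared error.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr, losses = 1.0, []

for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)       # backward pass
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(losses[0], losses[-1])                  # training error should drop
```

Speed-up variants such as SUBPL typically change the weight-update step in this loop; counting epochs until the loss falls below a threshold is the usual way their convergence speed is compared.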
1- The study showed a significant rise in the proportion of the labor force in agricultural activity among the detailed economic activities in 1997, at a rate of 28.9%; it then decreased to 18.8% in 2011, which is attributable to the deterioration of agriculture and the transition to other economic activities.
2- The highest percentage of male participation in 1997 was obtained by activity (A), which represents agriculture, at 30.0%, while the highest percentage of female participation was in activity (M), which represents education, at 47.9%. In 2011, the highest concentration of males was in activity (L), at 23.1%, while
This paper shows how to estimate the parameters of the generalized exponential Rayleigh (GER) distribution by three estimation methods. The first is the maximum likelihood estimation method, the second is the moment employing estimation method (MEM), and the third is the rank set sampling estimation method (RSSEM). A simulation technique is used with all of these estimation methods to find the parameters of the generalized exponential Rayleigh distribution. Finally, the mean squared error criterion is used to compare the estimation methods and determine which is best.
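The simulation design described above can be sketched generically. Since the GER density is not given in the abstract, the standard Rayleigh distribution is used here as a stand-in; this hedged example only illustrates the Monte Carlo MSE comparison of two estimators (MLE vs. method of moments), not the paper's actual GER formulas or its RSSEM method.

```python
# Sketch: Monte Carlo MSE comparison of two estimators of the Rayleigh scale.
import numpy as np

rng = np.random.default_rng(2)
sigma, n, reps = 2.0, 30, 2000
mle_err, mom_err = [], []

for _ in range(reps):
    x = rng.rayleigh(scale=sigma, size=n)
    sigma_mle = np.sqrt(np.sum(x ** 2) / (2 * n))   # closed-form Rayleigh MLE
    sigma_mom = x.mean() / np.sqrt(np.pi / 2)       # from E[X] = sigma*sqrt(pi/2)
    mle_err.append((sigma_mle - sigma) ** 2)
    mom_err.append((sigma_mom - sigma) ** 2)

print(np.mean(mle_err), np.mean(mom_err))   # averaged squared errors (MSE)
```

The same loop structure applies to the GER case once its density and estimators are substituted; the estimator with the smaller averaged squared error is preferred under the MSE criterion.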
The exponential distribution is one of the most common distributions in studies and scientific research, with wide application in reliability, engineering, and the analysis of survival functions; researchers have therefore studied its characteristics extensively.
In this research, the survival function of the truncated exponential distribution is estimated using the maximum likelihood method, the first and second Bayes methods, the least squares method, and the jackknife method, the last of which depends primarily on the maximum likelihood method and then on the first Bayes method. The methods are then compared using simulation; to accomplish this, samples of different sizes were adopted by the researcher
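The maximum likelihood step, on which the study's other estimators build, can be sketched for a right-truncated exponential. This is a hedged illustration: the truncation point, true rate, and sample size below are assumptions, and the Bayes, least-squares, and jackknife variants are not reproduced.

```python
# Sketch: ML estimation of the rate of an exponential truncated to (0, T),
# then a plug-in estimate of the survival function S(t) = exp(-lam * t).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
lam_true, T, n = 1.5, 2.0, 200

# Sample from the truncated exponential by inverse-CDF transform.
u = rng.uniform(size=n)
x = -np.log(1 - u * (1 - np.exp(-lam_true * T))) / lam_true

# Setting the score of the truncated-exponential log-likelihood to zero gives
#   1/lam - T / (exp(lam*T) - 1) = mean(x),
# a non-linear equation solved numerically here.
xbar = x.mean()
score = lambda lam: 1 / lam - T / np.expm1(lam * T) - xbar
lam_hat = brentq(score, 1e-6, 100.0)

S = lambda t: np.exp(-lam_hat * t)      # estimated survival function
print(lam_hat, S(1.0))
```

The score equation has no closed-form solution, which is why iterative root-finding (or the study's alternative estimators) is needed.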
One of the serious problems in any wireless communication system using a multi-carrier modulation technique like Orthogonal Frequency Division Multiplexing (OFDM) is its Peak-to-Average Power Ratio (PAPR). PAPR limits the transmission power due to the limited dynamic range of the analog-to-digital and digital-to-analog converters (ADC/DAC) and the power amplifiers at the transmitter, which in turn limits the maximum achievable rate.
This issue is especially important for mobile terminals, which must sustain a long battery lifetime. Reducing PAPR can therefore be regarded as an important step toward efficient and affordable mobile communication services.
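The quantity under discussion can be measured directly on a simulated OFDM symbol. This hedged sketch maps QPSK subcarriers to the time domain with an IFFT and computes the ratio of peak to mean instantaneous power; the subcarrier count is an illustrative assumption, and no PAPR-reduction scheme is applied.

```python
# Sketch: PAPR of a single OFDM symbol with QPSK-modulated subcarriers.
import numpy as np

rng = np.random.default_rng(4)
N = 64                                    # number of subcarriers (assumed)
bits = rng.integers(0, 2, (N, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

x = np.fft.ifft(qpsk) * np.sqrt(N)        # time-domain OFDM symbol
power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(round(papr_db, 2), "dB")
```

The worst case grows with N (up to 10·log10(N) dB when all subcarriers add coherently), which is why large dynamic range is demanded of the DAC and power amplifier.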
Data-centric techniques, such as data aggregation via a modified fuzzy clustering algorithm based on a Voronoi diagram, called the modified Voronoi Fuzzy Clustering Algorithm (VFCA), are presented in this paper. In the modified algorithm, the sensed area is divided into a number of Voronoi cells by applying a Voronoi diagram, and these cells are clustered by the fuzzy C-means (FCM) method to reduce the transmission distance. An appropriate cluster head (CH) is then elected for each cluster; three parameters are used for this election: energy, the distance between the CH and its neighboring sensors, and packet-loss values. Furthermore, data aggregation is employed in each CH to reduce the amount of data transmitted
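The clustering and election steps can be sketched as follows. This is a hedged illustration, not the paper's VFCA: the Voronoi-cell partition and packet-loss term are omitted, and the node counts, energies, and score weights are illustrative assumptions.

```python
# Sketch: fuzzy C-means over sensor positions, then a cluster-head election
# combining residual energy and distance to the cluster center.
import numpy as np

rng = np.random.default_rng(5)
nodes = rng.uniform(0, 100, (60, 2))      # sensor coordinates (assumed field)
energy = rng.uniform(0.5, 1.0, 60)        # residual energy per node
c, m, iters = 4, 2.0, 50                  # clusters, fuzzifier, FCM iterations

U = rng.dirichlet(np.ones(c), size=60)    # fuzzy membership matrix (rows sum to 1)
for _ in range(iters):
    Um = U ** m
    centers = (Um.T @ nodes) / Um.sum(0)[:, None]
    d = np.linalg.norm(nodes[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    # Standard FCM membership update: u_ij proportional to d_ij^(-2/(m-1)).
    U = d ** (-2 / (m - 1))
    U /= U.sum(axis=1, keepdims=True)

labels = U.argmax(1)
for j in range(c):
    members = np.where(labels == j)[0]
    if members.size == 0:
        continue
    dist = np.linalg.norm(nodes[members] - centers[j], axis=1)
    # Illustrative weighting: favour high energy, penalise distance.
    score = 0.7 * energy[members] - 0.3 * dist / (dist.max() + 1e-9)
    ch = members[score.argmax()]          # elected cluster head
    print(f"cluster {j}: head node {ch}")
```

In the paper's scheme a packet-loss term would enter the election score alongside energy and distance, and the FCM step would run over Voronoi cells rather than raw node positions.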