Excessive skewness in data is an obstacle to assuming a normal distribution. Recent studies have therefore turned to the skew-normal distribution (SND), which accommodates skewed data by extending the normal distribution with an additional skewness parameter (α), giving it greater flexibility. Estimating the parameters of the SND leads to non-linear likelihood equations, and solving them directly by the method of maximum likelihood (ML) yields inaccurate and unreliable solutions. To address this problem, two methods based on the ML principle can be used: the genetic algorithm (GA) and the iterative reweighting (IR) algorithm. A Monte Carlo simulation was run with different skewness levels and sample sizes, and the results were compared. It was concluded that estimating the SND model with the GA is best for small and medium sample sizes, whereas for large samples the IR algorithm is best. The study was also applied to real data to estimate the parameters and compare the methods using the AIC, BIC, MSE, and Def criteria.
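As a minimal illustration of the estimation problem the abstract describes (not the paper's own code), the sketch below draws a skewed sample and fits the three SND parameters, shape α, location ξ, and scale ω, by numerical maximum likelihood using SciPy; the true parameter values here are arbitrary assumptions chosen for the demonstration.

```python
import numpy as np
from scipy.stats import skewnorm

# Illustrative sketch: ML fitting of the skew-normal distribution.
# True parameters (alpha=5, xi=0, omega=2) are assumptions for this demo.
rng = np.random.default_rng(0)
sample = skewnorm.rvs(a=5, loc=0, scale=2, size=2000, random_state=rng)

# skewnorm.fit maximizes the likelihood numerically, since the score
# equations of the SND are non-linear and have no closed-form solution.
alpha_hat, xi_hat, omega_hat = skewnorm.fit(sample)
print(alpha_hat, xi_hat, omega_hat)
```

The numerical optimizer behind `skewnorm.fit` can stall on small samples, which is the motivation the abstract gives for trying GA- and IR-based alternatives.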
The current research deals with a vital subject contributing to the success of Iraqi industrial companies in general, and of the Iraqi Cement State Company in particular: market knowledge. The company is one of the most important industrial companies, serving to meet the local market's need for cement without resorting to imports. The research problem was the limited understanding of the role market knowledge plays in identifying the tendencies and desires of competitors, which in turn affects the company's ability to achieve competitive advantages. The research aims to determine the extent to which the Iraqi Cement State Company has adopted the concept of market knowledge and employed it in achieving competitive advantage through its dimensions (cost, quality, and del
Semi-parametric model analysis is one of the most interesting subjects in recent studies because it yields efficient model estimation. The problem arises when the response variable takes one of two values, either 0 (no response) or 1 (response), which leads to the logistic regression model.
We compare two estimation methods, the Bayesian method and . The results were then compared using the MSE criterion.
A simulation was used to study the empirical behavior of the logistic model with different sample sizes and variances. The results show that the Bayesian method is better than the at small sample sizes.
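As a hedged sketch of the kind of simulation the abstract describes (not the paper's code), the example below fits a logistic regression by iteratively reweighted least squares (IRLS, i.e. Newton-Raphson on the log-likelihood) on simulated binary data and scores the fit with a mean-squared-error criterion; the sample size and true coefficients are assumptions chosen for the demo.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """ML coefficients for P(y=1|x) = 1/(1+exp(-X @ beta)) via IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)                       # diagonal working weights
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
    return beta

# Simulate one replication: intercept -0.5, slope 2.0 (assumed values).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([-0.5, 2.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta_hat = irls_logistic(X, y)
mse = np.mean((1.0 / (1.0 + np.exp(-X @ beta_hat)) - y) ** 2)
print(beta_hat, mse)
```

A full Monte Carlo study would repeat this over many replications, sample sizes, and variances, averaging the MSE criterion across runs.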
Progress in computer networks and the emergence of new technologies in this field have helped to produce new protocols and frameworks that provide new network-based services. E-government services, a modernized version of conventional government, have been created through the steady evolution of technology together with societies' growing need for numerous services. Government services are deeply tied to citizens' daily lives; therefore, it is important to evolve with technological developments: it is necessary to move from traditional methods of managing government work to cutting-edge technical approaches that improve the effectiveness of government systems in providing services to citizens. Blockchain technology is amon
Introduction: The stringent response is a bacterial adaptation mechanism triggered by stress conditions, including nutrient limitation. This response helps bacteria survive under harsh conditions, such as those encountered during infection. A key feature of the stringent response is the synthesis of the alarmone (p)ppGpp, which influences various bacterial phenotypes. In several bacterial species, stringent response activation significantly affects biofilm formation and maintenance. Methods: Clinical specimens were collected from multiple hospitals in Baghdad, Iraq. Staphylococcus aureus was identified using conventional biochemical tests. The PCR technique was applied to detect mecA, icaA, and icaD genes, while the Vitek 2 compac
Heavy oil is classified as an unconventional oil resource because it is difficult to recover in its natural state and difficult to transport and market. Upgrading heavy oil has a positive technical and economic impact, especially when it becomes competitive with conventional oils from a marketing perspective. Development of the Qaiyarah heavy oil field was neglected over the last five decades, mainly because of the low quality of its crude oil, reflected in its high viscosity and density, which was and still is a major challenge to putting the field on Iraq's main production stream. The low quality of the crude's properties led to lower oil prices in the global markets
The liver is an important organ in the body that can be affected by many drugs and toxins. Hepatotoxins can cause oxidative stress that leads to activation of inflammatory cells and liver damage. Drug-induced bile duct injuries are related to drug toxicity, and multiple drugs are known to cause the development of liver granulomas. Carbamazepine (CBZ), among other antiepileptic drugs, is believed to cause hepatic injury. In this study we investigated the effect of CBZ (20 mg/kg/day) on the livers of female mice after 14 and 30 days of treatment. The histological findings showed that CBZ can cause alterations in liver components such as bile duct proliferation, biliary hypertrophy, ductopenia, and inflammatory cell infiltration
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have small or inadequate datasets for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is typically fed a significant amount of labeled data from which to learn representations automatically. Ultimately, more data generally produces a better DL model, though performance is also application-dependent. This issue is the main barrier for