Multilocus haplotype analysis of candidate variants with genome-wide association study (GWAS) data may provide evidence of association with disease even when the individual loci themselves do not. Unfortunately, when a large number of candidate variants are investigated, identifying risk haplotypes can be very difficult. To meet this challenge, a number of approaches have been put forward in recent years. However, most of them are not directly linked to the disease penetrances of haplotypes and thus may not be efficient. To fill this gap, we propose a mixture model-based approach for detecting risk haplotypes. Under the mixture model, haplotypes are clustered directly according to their estimated disease penetrances. A theoretical justification of the above model is provided. Furthermore, we introduce a hypothesis test for the haplotype inheritance patterns that underpin this model. The performance of the proposed approach is evaluated by simulations and real data analysis. The results show that the proposed approach outperforms an existing multiple-testing method.
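The clustering step can be pictured with a small sketch. Below is a minimal, illustrative two-component binomial-mixture EM that groups haplotypes by an estimated penetrance proxy (the case fraction among observed copies); the haplotype labels, the counts, and the binomial-mixture form are assumptions made for demonstration only, not the model described above.

```python
# Hedged illustration only: a two-component binomial-mixture EM that clusters
# haplotypes by an estimated penetrance proxy (case fraction among carriers).
# The counts, haplotype labels, and the binomial-mixture form are assumptions
# for demonstration, not the paper's exact model.
import numpy as np
from scipy.stats import binom

# k[h] = copies of haplotype h observed in cases, n[h] = total copies observed
haps = ["h1", "h2", "h3", "h4", "h5"]            # hypothetical haplotypes
k = np.array([120, 95, 40, 210, 33])
n = np.array([200, 180, 150, 300, 140])

w = np.array([0.5, 0.5])                         # mixing weights (low/high risk)
p = np.array([0.3, 0.7])                         # component-wise penetrance proxies

for _ in range(200):                             # EM iterations
    # E-step: responsibility of each component for each haplotype
    lik = np.vstack([binom.pmf(k, n, pc) for pc in p])   # shape (2, H)
    r = w[:, None] * lik
    r /= r.sum(axis=0, keepdims=True)
    # M-step: update weights and component penetrances
    w = r.mean(axis=1)
    p = (r @ k) / (r @ n)

risk_comp = np.argmax(p)                         # component with higher penetrance
risk_haps = [h for h, resp in zip(haps, r[risk_comp]) if resp > 0.5]
print("estimated component penetrances:", p)
print("haplotypes assigned to the risk cluster:", risk_haps)
```

Haplotypes whose posterior weight for the higher-penetrance component exceeds 0.5 are reported as candidate risk haplotypes in this toy setting.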
Background: Because many factors play a role in the development of late lower arch crowding, the objective of the current study is to perform a vertical analysis for subjects with late lower dental arch crowding. This study is the first attempt to perform a vertical analysis for Iraqi subjects with late lower arch crowding in order to determine whether a vertical discrepancy exists in such patients. Subjects and methods: Eighty subjects ranging between 18 and 25 years of age were selected according to specific inclusion criteria from patients attending the Orthodontic Department of the College of Dentistry, Baghdad University. The 80 patients were divided into two groups (crowding and normal) of 40 patients each (20 males and 20 females). A study cast ...
The thermal performance of three solar collectors with absorber plates having 3 mm perforations, 6 mm perforations, and no perforations, respectively, was assessed experimentally. The experimental tests were conducted in Baghdad during January and February 2017. Five airflow rates ranging between 0.01 and 0.1 m³/s were used, with the airflow rate held constant during each test day. The variation of air temperature difference, useful energy, absorber plate temperature, and collector efficiency was recorded every 15 minutes. The experimental data indicate that increasing the number of small-diameter absorber-plate perforations is more efficient than increasing the hole diameter of the absorber plate with decr ...
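As a rough illustration of how the recorded quantities combine, the sketch below computes the useful energy gain and thermal efficiency of an air collector from an airflow rate and temperature rise; the air properties, irradiance, and collector area are assumed values, not measurements from this study.

```python
# Hedged sketch: how the recorded quantities relate to collector efficiency.
# The numerical values (air density, cp, solar irradiance, collector area)
# are illustrative assumptions, not data from the experiments.
RHO_AIR = 1.2          # kg/m^3, air density (assumed)
CP_AIR = 1006.0        # J/(kg*K), specific heat of air (assumed)

def useful_energy(vol_flow_m3s, t_out_c, t_in_c):
    """Useful energy gain Q_u = rho * V_dot * c_p * (T_out - T_in), in watts."""
    return RHO_AIR * vol_flow_m3s * CP_AIR * (t_out_c - t_in_c)

def collector_efficiency(q_useful_w, irradiance_w_m2, area_m2):
    """Thermal efficiency eta = Q_u / (G * A_c)."""
    return q_useful_w / (irradiance_w_m2 * area_m2)

# Example with assumed readings: 0.05 m^3/s airflow, 12 K temperature rise,
# 700 W/m^2 irradiance on a 2 m^2 absorber plate.
q_u = useful_energy(0.05, 34.0, 22.0)
print(f"Q_u = {q_u:.0f} W, efficiency = {collector_efficiency(q_u, 700.0, 2.0):.2%}")
```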
This paper delves into some significant performance measures (PMs) of a bulk arrival queueing system with constant batch size b, in which the arrival and service rates are fuzzy parameters. In a bulk arrival queueing system, customers arrive at the system in groups of constant size before entering service individually. This leads to a new tool obtained with the aid of generating-function methods. The corresponding traditional bulk queueing system model is more convenient under an uncertain environment. The α-cut approach is applied together with the conventional Zadeh's extension principle (ZEP) to transform the fuzzy queues with triangular membership functions (Mem. Fs) into a family of conventional b...
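The α-cut idea can be sketched as follows: the triangular fuzzy arrival and service rates are cut at a level α, and a crisp performance measure is evaluated over the resulting intervals to obtain its lower and upper bounds at that level. The triangular numbers below and the stand-in measure (mean number in a simple M/M/1 system) are assumptions for illustration; the paper's bulk-queue measures with batch size b would take its place.

```python
# Hedged sketch of the alpha-cut approach with a grid-based extension
# principle.  The fuzzy rates and the stand-in performance measure are
# illustrative assumptions, not the measures derived in the paper.
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lower, upper] of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tri
    return a + alpha * (b - a), c - alpha * (c - b)

def mean_in_system(lam, mu):
    """Stand-in crisp performance measure: L = rho / (1 - rho) for a stable queue."""
    rho = lam / mu
    return rho / (1.0 - rho)

fuzzy_lambda = (2.0, 3.0, 4.0)      # assumed triangular arrival rate
fuzzy_mu = (8.0, 9.0, 10.0)         # assumed triangular service rate

for alpha in (0.0, 0.5, 1.0):
    lam_lo, lam_hi = alpha_cut(fuzzy_lambda, alpha)
    mu_lo, mu_hi = alpha_cut(fuzzy_mu, alpha)
    # Extension principle over the cut: search the box for min/max of the measure
    lams = np.linspace(lam_lo, lam_hi, 50)
    mus = np.linspace(mu_lo, mu_hi, 50)
    vals = np.array([mean_in_system(l, m) for l in lams for m in mus])
    print(f"alpha={alpha:.1f}:  L in [{vals.min():.3f}, {vals.max():.3f}]")
```

At α = 1 the cut collapses to the modal (crisp) rates, so the interval shrinks to a single value, while α = 0 gives the widest bounds.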
In real situations, observations and measurements are not exact numbers but are more or less imprecise, i.e., fuzzy. In this paper, we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimators are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and Expectation-Maximization (EM) algorithms. In addition, the obtained estimates of the parameters and reliability function are compared numerically through a Monte Carlo simulation study ...
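For orientation, the sketch below applies Newton-Raphson iteration to the profile likelihood of an inverse Weibull distribution with ordinary (crisp) data; the fuzzy-data likelihood used in the paper is not reproduced, the sample is synthetic, and the parameterization F(x) = exp(-α x^(-β)) is only one common convention.

```python
# Hedged sketch: Newton-Raphson maximum-likelihood fitting of an inverse
# Weibull distribution, F(x) = exp(-alpha * x**-beta), using crisp data.
# The sample below is synthetic; the paper's fuzzy-data likelihood is omitted.
import numpy as np

rng = np.random.default_rng(0)
true_alpha, true_beta = 2.0, 1.5
# Inverse-Weibull sample via the inverse CDF: x = (alpha / -ln U)**(1/beta)
u = rng.uniform(size=500)
x = (true_alpha / -np.log(u)) ** (1.0 / true_beta)

n, logx = len(x), np.log(x)

def profile_score_and_slope(beta):
    """Score of the profile log-likelihood in beta (alpha profiled out) and its derivative."""
    s0 = np.sum(x ** -beta)
    s1 = np.sum(x ** -beta * logx)
    s2 = np.sum(x ** -beta * logx ** 2)
    g = n * s1 / s0 + n / beta - logx.sum()
    dg = n * (s1 ** 2 - s0 * s2) / s0 ** 2 - n / beta ** 2
    return g, dg

beta = 1.0                                  # starting value (assumed)
for _ in range(50):                         # Newton-Raphson iterations
    g, dg = profile_score_and_slope(beta)
    step = g / dg
    beta = max(beta - step, 1e-6)           # guard against overshoot below zero
    if abs(step) < 1e-10:
        break

alpha = n / np.sum(x ** -beta)              # closed-form alpha given beta
reliability = 1.0 - np.exp(-alpha * 10.0 ** -beta)   # e.g. R(10) = P(X > 10)
print(f"beta_hat={beta:.3f}, alpha_hat={alpha:.3f}, R(10)={reliability:.3f}")
```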
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate data for training DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework is fed a significant amount of labeled data to automatically learn representations. Ultimately, a larger amount of data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for ...
University campuses in Iraq are substantial energy consumers, with consumption increasing significantly during periods of high temperatures, underscoring the necessity to enhance their energy performance. Energy simulation tools offer valuable insights for evaluating and improving the energy efficiency of buildings. This study focuses on simulating passive architectural design for three selected buildings at Al-Khwarizmi College of Engineering (AKCOE) to examine the effectiveness of their cooling systems. DesignBuilder software was employed, and climatic data for one year in Baghdad were collected to assess the influence of passive architectural strategies on the thermal performance of the targeted buildings. The simulations revealed that the ...