The haplotype association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, instead of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. It starts with inferring haplotypes from genotypes, followed by a haplotype co-classification and marginal screening for disease-associated haplotypes. Unfortunately, phasing uncertainty may have a strong effect on the haplotype co-classification and therefore on the accuracy of predicting risk haplotypes. Here, to address the issue, we propose an alternative approach: in Stage 1, we select potential risk genotypes instead of co-classifying the inferred haplotypes; in Stage 2, we infer risk haplotypes from the genotypes selected in the previous stage. The performance of the proposed procedure is assessed by simulation studies and a real data analysis. Compared to the existing multiple Z-test procedure, we find that the power of genome-wide association studies can be increased by using the proposed procedure. This research was supported by a grant from the Iraq Government.
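As a rough illustration of what a Stage-1 genotype screen could look like, the sketch below tests each multilocus genotype category for an uneven case/control split and keeps the significant ones as candidate risk genotypes. The function name, the 2x2-table construction, and the significance cutoff are illustrative assumptions, not the authors' implementation.

```python
# Illustrative Stage-1 genotype screen (a sketch, not the paper's code):
# each multilocus genotype category is tested for uneven case/control
# frequencies; the skewed categories become candidate risk genotypes
# passed on to Stage-2 haplotype inference.
from collections import Counter
from scipy.stats import chi2_contingency

def select_risk_genotypes(case_genotypes, control_genotypes, alpha=0.05):
    """Return genotype categories whose case/control split is significant."""
    case_counts = Counter(case_genotypes)
    ctrl_counts = Counter(control_genotypes)
    n_case, n_ctrl = len(case_genotypes), len(control_genotypes)
    selected = []
    for g in set(case_counts) | set(ctrl_counts):
        k_case, k_ctrl = case_counts[g], ctrl_counts[g]
        table = [[k_case, n_case - k_case],
                 [k_ctrl, n_ctrl - k_ctrl]]
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            selected.append(g)
    return selected
```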
The region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, instead of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of the sparse distribution, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype co-classification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, this haplotype is labeled …
Multilocus haplotype analysis of candidate variants with genome-wide association study (GWAS) data may provide evidence of association with disease even when the individual loci themselves do not. Unfortunately, when a large number of candidate variants are investigated, identifying risk haplotypes can be very difficult. To meet the challenge, a number of approaches have been put forward in recent years. However, most of them are not directly linked to the disease penetrances of haplotypes and thus may not be efficient. To fill this gap, we propose a mixture model-based approach for detecting risk haplotypes. Under the mixture model, haplotypes are clustered directly according to their estimated disease penetrances …
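A generic stand-in for the idea, not the paper's exact model: a two-component binomial mixture in which haplotype h is observed n[h] times, k[h] of them in cases, "risk" haplotypes share a higher case rate p1, and "null" haplotypes a baseline rate p0. The EM iteration and initial values below are assumptions for the sketch.

```python
# Hedged sketch: cluster haplotypes by estimated penetrance with a
# two-component binomial mixture fitted by EM.
import numpy as np
from scipy.stats import binom

def em_risk_haplotypes(k, n, n_iter=200):
    k, n = np.asarray(k, float), np.asarray(n, float)
    w, p0, p1 = 0.1, 0.4, 0.7            # initial guesses (assumed)
    for _ in range(n_iter):
        # E-step: posterior probability that each haplotype is a risk one
        f1 = w * binom.pmf(k, n, p1)
        f0 = (1 - w) * binom.pmf(k, n, p0)
        r = f1 / (f1 + f0)
        # M-step: re-estimate mixing weight and the two case rates
        w = r.mean()
        p1 = (r * k).sum() / (r * n).sum()
        p0 = ((1 - r) * k).sum() / ((1 - r) * n).sum()
    return r, (w, p0, p1)                # r > 0.5 flags likely risk haplotypes
```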
Background: Timely diagnosis of periodontal disease is crucial for restoring healthy periodontal tissue and improving patients' prognosis. There is growing interest in using salivary biomarkers as a noninvasive screening tool for periodontal disease. This study aimed to investigate the diagnostic efficacy of two salivary biomarkers, lactate dehydrogenase (LDH) and total protein, for periodontal disease by assessing their sensitivity in relation to clinical periodontal parameters. Furthermore, the study aimed to explore the impact of systemic disease, age, and sex on the accuracy of these biomarkers in the diagnosis of periodontal health. Materials and methods: A total of 145 participants were categorized into three groups based …
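For reference, the kind of diagnostic-accuracy calculation the abstract alludes to is sketched below: sensitivity and specificity of a biomarker above a cutoff, judged against a clinical periodontal diagnosis. The cutoff and variable names are illustrative, not values from the study.

```python
# Minimal sensitivity/specificity sketch for a thresholded salivary
# biomarker against a boolean clinical reference diagnosis.
import numpy as np

def diagnostic_accuracy(biomarker, diseased, cutoff):
    """biomarker: measured levels; diseased: boolean clinical reference."""
    biomarker = np.asarray(biomarker)
    diseased = np.asarray(diseased, bool)
    positive = biomarker >= cutoff
    tp = np.sum(positive & diseased)
    fn = np.sum(~positive & diseased)
    tn = np.sum(~positive & ~diseased)
    fp = np.sum(positive & ~diseased)
    sensitivity = tp / (tp + fn)     # fraction of diseased flagged
    specificity = tn / (tn + fp)     # fraction of healthy cleared
    return sensitivity, specificity
```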
The research aims to demonstrate the dual use of analysis to predict financial failure according to the Altman model and of stress tests, so as to achieve integration in banking risk management and to judge the bank's ability to withstand crises, especially in light of its low rating under the Altman model and the possibility of its failure in the future, thus confirming or refuting the research hypothesis. The research reached a set of conclusions, the most important of which are that the bank, according to the Altman model, is threatened with failure in the near future, as it falls within the red zone under the model's classification, and that it will incur losses if it is exposed to crises in the future according to the stress-test analysis …
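For context, the classic (1968) Altman Z-score referred to by the "red zone" language is reproduced below; the abstract does not say which Altman variant the study used, so the coefficients and zone thresholds shown are the standard published ones, not values taken from the paper.

```python
# Original Altman Z-score: Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5.
# Conventional zones: Z < 1.81 distress ("red"), 1.81-2.99 grey, > 2.99 safe.
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def altman_zone(z):
    return "red (distress)" if z < 1.81 else "grey" if z <= 2.99 else "safe"
```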
Background: Reduced glomerular filtration rate is associated with increased morbidity in patients with coronary artery disease. Objectives: To analyze declining eGFR and mortality risks in patients with chronic kidney disease who have had coronary artery disease, including risk factors. Patients and methods: The study included 160 patients between the ages of 16 and 87 years. Glomerular filtration rate was estimated (eGFR) using the Modification of Diet in Renal Disease equation and was categorized into the ranges <60 mL/min/1.73 m² and ≥60 mL/min/1.73 m². Baseline risk factors were analyzed by category of eGFR. The studied patients in the emergency department were investigated using Cox proportional hazards models adjusting for traditional risk factors …
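The four-variable MDRD study equation the abstract cites is shown below as commonly published; an IDMS-traceable variant replaces the 186 constant with 175, and the abstract does not say which was used, so treat the constant as an assumption.

```python
# MDRD study equation: eGFR = 186 x SCr^-1.154 x age^-0.203,
# x 0.742 if female, x 1.212 if Black. Result in mL/min/1.73 m^2.
def egfr_mdrd(serum_creatinine_mg_dl, age_years, female, black):
    egfr = 186.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def egfr_category(egfr):
    # the study's two categories: reduced (<60) vs preserved (>=60)
    return "<60" if egfr < 60 else ">=60"
```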
Background: The highest concentrations of blood glucose during the day are usually found postprandially. Postprandial hyperglycemia (PPH) is likely to promote or aggravate fasting hyperglycemia. Evidence in recent years suggests that PPH may play an important role in functional and structural disturbances in different body organs, particularly the cardiovascular system. Objective: To evaluate the effect of PPH as a risk factor for coronary heart disease in type 2 diabetic patients. Methods: Sixty-three type 2 diabetic patients were included in this study. All had controlled fasting blood glucose, with HbA1c correlation. They were all followed for a five-month period (from May to October 2008).
The logistic regression model is one of the oldest and most common regression models. It is a statistical method used to describe and estimate the relationship between a dependent random variable and explanatory random variables. Several methods are used to estimate this model, including the bootstrap method, an estimation method based on the principle of sampling with replacement: a resample of (n) elements is drawn, randomly and with replacement, from the (N) observations of the original data. It is a computational method used to determine the accuracy of estimated statistics, and for this reason it was used to find more accurate estimates. The ma…
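A hedged sketch of the bootstrap idea described above, applied to logistic regression: resample n rows with replacement from the original data, refit the model, and use the spread of the refitted coefficients as the measure of accuracy. The number of replicates B and the fitting routine are assumptions, not the paper's settings.

```python
# Bootstrap standard errors and percentile intervals for logistic
# regression coefficients (generic illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_logistic(X, y, B=1000, seed=0):
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    n = len(y)
    coefs = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)       # n rows, with replacement
        if len(np.unique(y[idx])) < 2:         # skip one-class resamples
            continue
        model = LogisticRegression().fit(X[idx], y[idx])
        coefs.append(model.coef_.ravel())
    coefs = np.array(coefs)
    # bootstrap SEs and 95% percentile intervals per coefficient
    return coefs.std(axis=0), np.percentile(coefs, [2.5, 97.5], axis=0)
```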
The purpose of this article was to identify and assess the importance of risk factors in the tendering phase of construction projects. The construction project cannot succeed without the identification and categorization of these risk elements. In this article, a questionnaire on likelihood and impact was designed and distributed to a panel of specialists to analyze risk factors. The risk matrix was also used to research, explore, and identify the risks that influence the tendering phase of construction projects. The probability and impact values assigned to a risk are used to calculate the risk's score, and a risk matrix is created by combining the probability and impact criteria. To determine the main risk elements for the tender phase of …
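A small sketch of the risk-matrix scoring the abstract describes: each risk's score is its probability rating times its impact rating, and the matrix cells group risks by that score. The 1-5 scales, thresholds, and example risks below are illustrative assumptions.

```python
# Probability x impact risk scoring, with assumed 1-5 scales and levels.
def risk_score(probability, impact):
    """probability, impact: ratings on a 1-5 scale."""
    return probability * impact

def risk_level(score):
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a risk rated probability 4, impact 5 -> score 20, "high".
risks = {"late tender documents": (4, 5), "price escalation": (3, 3)}
matrix = {name: (risk_score(p, i), risk_level(risk_score(p, i)))
          for name, (p, i) in risks.items()}
```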
The paired-sample t-test for testing the difference between two means in paired data is not robust against violation of the normality assumption. In this paper, some alternative robust tests are suggested, using the bootstrap method on its own and combining the bootstrap method with the W.M test. Monte Carlo simulation experiments were employed to study the performance of the test statistics of each of these three tests in terms of type I error rates and power rates. The three tests were applied to different sample sizes generated from three distributions: the bivariate normal distribution, the bivariate contaminated normal distribution, and the bivariate exponential distribution.
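A hedged sketch of one standard bootstrap construction for this problem (not necessarily the paper's exact test): resample the paired differences with replacement after recentering them to mean zero, and locate the observed mean difference within the resulting bootstrap null distribution.

```python
# Bootstrap alternative to the paired t-test on differences d = x - y.
import numpy as np

def bootstrap_paired_test(x, y, B=10000, seed=0):
    rng = np.random.default_rng(seed)
    d = np.asarray(x, float) - np.asarray(y, float)   # paired differences
    obs = d.mean()
    d0 = d - obs                                      # impose H0: mean = 0
    boot = np.array([rng.choice(d0, size=d0.size, replace=True).mean()
                     for _ in range(B)])
    # two-sided p-value: how often a null resample mean is as extreme
    return (np.abs(boot) >= abs(obs)).mean()
```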