Region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, rather than individual variants, with the disease. Such an analysis typically involves a list of unphased multi-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the sparse distribution, a two-stage approach has been proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes and then co-classified; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, it is labeled a risk haplotype. Unfortunately, the in-silico reconstruction of haplotypes may produce a proportion of false haplotypes that hamper the detection of rare but true haplotypes. Here, to address this issue, we propose an alternative approach: in Stage 1, we cluster genotypes instead of inferred haplotypes and estimate the risk genotypes based on a finite mixture model; in Stage 2, we infer risk haplotypes from the risk genotypes identified in the previous stage. To estimate the finite mixture model, we propose an EM algorithm with a novel data partition-based initialization. The performance of the proposed procedure is assessed by simulation studies and a real data analysis. Compared with the existing multiple Z-test procedure, we find that the power of genome-wide association studies can be increased by using the proposed procedure.
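As a rough illustration of the Stage 1 idea, the sketch below fits a two-component finite mixture by EM to per-genotype case/control counts, using a simple partition of the observed case fractions as the starting point. The binomial formulation, the median-split initialization, and all function and variable names are assumptions made for exposition; they are not the authors' exact model or initialization.

```python
# Minimal sketch: EM for a two-component binomial mixture over genotypes.
# Each genotype g has n_case[g] case carriers out of n_total[g] carriers;
# "risk" genotypes are assumed to share an elevated case probability p1,
# non-risk genotypes a baseline p0 (an illustrative simplification).
import numpy as np
from scipy.stats import binom

def em_risk_genotypes(n_case, n_total, n_iter=200, tol=1e-8):
    n_case = np.asarray(n_case, dtype=float)
    n_total = np.asarray(n_total, dtype=float)
    frac = n_case / n_total
    # partition-based start: split genotypes at the median case fraction
    hi = frac >= np.median(frac)
    pi = hi.mean()                                # mixing weight of risk component
    p1, p0 = frac[hi].mean(), frac[~hi].mean()    # component case probabilities
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior probability that each genotype is a risk genotype
        l1 = pi * binom.pmf(n_case, n_total, p1)
        l0 = (1 - pi) * binom.pmf(n_case, n_total, p0)
        r = l1 / (l1 + l0)
        # M-step: update mixing weight and component case probabilities
        pi = r.mean()
        p1 = (r * n_case).sum() / (r * n_total).sum()
        p0 = ((1 - r) * n_case).sum() / ((1 - r) * n_total).sum()
        ll = np.log(l1 + l0).sum()
        if ll - ll_old < tol:
            break
        ll_old = ll
    return r, p1, p0, pi

# toy usage: genotypes with high posterior r would be passed on to Stage 2
r, p1, p0, pi = em_risk_genotypes([12, 3, 15, 2], [20, 18, 22, 19])
print(np.round(r, 3), round(p1, 3), round(p0, 3))
```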
Data scarcity is a major challenge when training deep learning (DL) models, which demand large amounts of data to achieve exceptional performance. Unfortunately, many applications have only small or inadequate datasets for training DL frameworks. Labeled data are usually produced by manual annotation, which typically requires human annotators with extensive background knowledge, and this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data in order to learn representations automatically; in general, more data yields a better DL model, although performance also depends on the application. This issue is the main barrier for
Atrial fibrillation is associated with an elevated risk of stroke. The simplest stroke risk assessment schemes are the CHADS2 and CHA2DS2-VASc scores. Aspirin and oral anticoagulants are recommended for stroke prevention in such patients.
The aim of this study was to assess the status of CHADS2 and CHA2DS2-VASc scores in Iraqi atrial fibrillation patients and to report the current status of stroke prevention with either warfarin or aspirin in relation to these scores.
This prospective cross-sectional study was carried out at Tikrit, Samarra, Sharqat, Baquba, and AL-Numaan hospitals from July 2017 to October 2017. CHADS2
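For reference, the snippet below sketches how the two scores named above are tallied, using the standard published point weights; the function signatures, field names, and the example patient are hypothetical and are not taken from this study.

```python
# Minimal sketch of the standard CHADS2 and CHA2DS2-VASc point tallies.
# Binary inputs are 0/1 flags; "age" is in years.
def chads2(chf, htn, age, diabetes, prior_stroke_tia):
    # CHF 1, hypertension 1, age >= 75 -> 1, diabetes 1, prior stroke/TIA 2
    return chf + htn + (1 if age >= 75 else 0) + diabetes + 2 * prior_stroke_tia

def cha2ds2_vasc(chf, htn, age, diabetes, prior_stroke_tia, vascular, female):
    # adds vascular disease 1, female sex 1, age 65-74 -> 1, age >= 75 -> 2
    score = chf + htn + diabetes + vascular + female + 2 * prior_stroke_tia
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    return score

# hypothetical example: a 70-year-old woman with hypertension and diabetes
print(chads2(0, 1, 70, 1, 0), cha2ds2_vasc(0, 1, 70, 1, 0, 0, 1))  # 2 and 4
```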
In this study, we briefly review the ARIMA(p, d, q), EWMA, and DLM (dynamic linear modelling) procedures in order to accommodate the autocorrelation structure of the data. We consider recursive estimation and prediction algorithms based on Bayes and Kalman filtering (KF) techniques for correlated observations. We investigate the effect on the MSE of these procedures and compare them using generated data.
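As a hedged illustration of the recursive (Kalman-filter) estimation referred to above, the sketch below runs the standard filtering recursions for a local-level DLM with known variances; the model choice, variance values, and simulated series are assumptions for exposition rather than the data or models compared in the study.

```python
# Kalman-filter recursions for a local-level DLM:
#   y_t = mu_t + v_t,  v_t ~ N(0, V);   mu_t = mu_{t-1} + w_t,  w_t ~ N(0, W).
import numpy as np

def local_level_filter(y, V=1.0, W=0.1, m0=0.0, C0=1e6):
    m, C = m0, C0
    filtered, forecasts = [], []
    for yt in y:
        # prediction (time update)
        a, R = m, C + W
        forecasts.append(a)
        # correction (measurement update)
        Q = R + V
        K = R / Q                      # Kalman gain
        m = a + K * (yt - a)
        C = (1 - K) * R
        filtered.append(m)
    return np.array(filtered), np.array(forecasts)

# example on autocorrelated data generated as a random walk plus noise
rng = np.random.default_rng(0)
mu = np.cumsum(rng.normal(0, 0.3, 200))
y = mu + rng.normal(0, 1.0, 200)
m, f = local_level_filter(y, V=1.0, W=0.09)
print("one-step forecast MSE:", np.mean((y[1:] - f[1:]) ** 2))
```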
Data compression offers an attractive approach to reducing communication costs by using the available bandwidth effectively, so it makes sense to pursue research on algorithms that use the available network most efficiently. It is also important to consider security, since the data being transmitted is vulnerable to attacks. The basic aim of this work is to develop a module that combines compression and encryption on the same set of data, performing the two operations simultaneously. This is achieved by embedding encryption into compression algorithms, since cryptographic ciphers and entropy coders bear a certain resemblance in the sense of secrecy. First, in the secure compression module, the given text is p
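The abstract is cut off before the module itself is described, so the snippet below only sketches a plain compress-then-encrypt baseline (zlib followed by Fernet from the third-party cryptography package); it is not the embedded compression-encryption scheme proposed in the work, and all names are illustrative. It does illustrate why compression must precede encryption: ciphertext is effectively incompressible.

```python
# Baseline sketch: compress first, then encrypt, then reverse the steps.
import zlib
from cryptography.fernet import Fernet

def secure_compress(text: str, key: bytes) -> bytes:
    compressed = zlib.compress(text.encode("utf-8"), 9)   # entropy coding step
    return Fernet(key).encrypt(compressed)                # authenticated encryption

def secure_decompress(token: bytes, key: bytes) -> str:
    compressed = Fernet(key).decrypt(token)
    return zlib.decompress(compressed).decode("utf-8")

key = Fernet.generate_key()
blob = secure_compress("some repetitive text " * 100, key)
assert secure_decompress(blob, key).startswith("some repetitive")
```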
Human serum albumin (HSA) nanoparticles have been widely used as versatile drug delivery systems for improving the efficiency and pharmaceutical properties of drugs. The present study aimed to design HSA nanoparticles encapsulating the hydrophobic anticancer pyridine derivative 2-((2-([1,1'-biphenyl]-4-yl)imidazo[1,2-a]pyrimidin-3-yl)methylene)hydrazine-1-carbothioamide (BIPHC). The HSA-BIPHC nanoparticles were synthesized using a desolvation process. Atomic force microscopy (AFM) analysis showed that the average size of the HSA-BIPHC nanoparticles was 80.21 nm. The entrapment efficiency, loading capacity, and production yield were 98.11%, 9.77%, and 91.29%, respectively. An in vitro release study revealed that HSA-BIPHC nan
In this research we study a variance component model, which is one of the most important models widely used in data analysis. It is a type of multilevel model and is considered a linear model. There are three types of linear variance component models: the fixed-effect, the random-effect, and the mixed-effect linear variance component model. In this paper we examine the mixed-effect linear variance component model with a one-way random effect; the mixed model combines fixed and random effects in the same model, containing the overall mean parameter (μ) and the treatment effect (τi), which has
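A conventional way to write the one-way mixed model sketched above is shown below; the error term, variance symbols, and index ranges are standard notation added for completeness and are assumptions rather than the paper's exact formulation.

```latex
% One-way mixed (random-effects) variance component model
y_{ij} = \mu + \tau_i + \varepsilon_{ij},
\qquad \tau_i \sim N\!\left(0, \sigma_{\tau}^{2}\right),
\quad \varepsilon_{ij} \sim N\!\left(0, \sigma^{2}\right),
\qquad i = 1,\dots,a,\; j = 1,\dots,n_i,
```

where μ is the fixed overall mean and τi is the random treatment effect.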
In brief, the research problem is that there is no system or model for evaluating the financial performance of municipal departments, so neither an individual nor an institution can judge whether the financial work has succeeded or failed. Assessing financial performance is therefore a necessity, and without it the work remains incomplete in some respects. Hence this study examines, diagnoses, and analyzes financial data from a sample of municipal departments in order to develop a model to assess the financ