Region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, rather than individual variants, with the disease. Such an analysis typically involves a list of unphased multi-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of sparse distributions, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype coclassification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, it is labeled a risk haplotype. Unfortunately, the in-silico reconstruction of haplotypes may produce a proportion of false haplotypes that hamper the detection of rare but true haplotypes. Here, to address this issue, we propose an alternative approach: in Stage 1, we cluster genotypes instead of inferred haplotypes and estimate the risk genotypes based on a finite mixture model; in Stage 2, we infer risk haplotypes from the risk genotypes identified in the previous stage. To estimate the finite mixture model, we propose an EM algorithm with a novel data-partition-based initialization. The performance of the proposed procedure is assessed by simulation studies and a real data analysis. Compared with the existing multiple Z-test procedure, we find that the power of genome-wide association studies can be increased by using the proposed procedure.
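The abstract does not specify the mixture components; a minimal sketch, assuming a finite mixture of independent per-locus genotype distributions fitted by EM, with responsibilities seeded by a coarse data partition (sorting samples by minor-allele load) rather than random starts:

```python
import numpy as np

def em_genotype_mixture(G, K, n_iter=100):
    """Cluster unphased genotypes (n x L integer matrix of 0/1/2 minor-allele
    counts) into K groups with a finite mixture of independent per-locus
    genotype distributions, fitted by EM. Initialization comes from a simple
    data partition (sorting samples by total minor-allele load) instead of
    random starts."""
    n, L = G.shape
    # Partition-based initialization: split samples into K blocks by
    # total minor-allele count, one block per component.
    order = np.argsort(G.sum(axis=1))
    resp = np.zeros((n, K))
    for k, block in enumerate(np.array_split(order, K)):
        resp[block, k] = 1.0
    onehot = np.eye(3)[G]  # n x L x 3 indicator of genotype 0/1/2
    for _ in range(n_iter):
        # M-step: mixing weights and per-locus genotype probabilities.
        pi = resp.sum(axis=0) / n                          # (K,)
        theta = np.einsum('nk,nlg->klg', resp, onehot) + 1e-6
        theta /= theta.sum(axis=2, keepdims=True)          # K x L x 3
        # E-step: posterior responsibilities (log domain for stability).
        loglik = np.einsum('nlg,klg->nk', onehot, np.log(theta))
        logpost = np.log(pi + 1e-12) + loglik
        logpost -= logpost.max(axis=1, keepdims=True)
        resp = np.exp(logpost)
        resp /= resp.sum(axis=1, keepdims=True)
    return pi, theta, resp
```

Components whose genotype distribution differs markedly between cases and controls would then be flagged as carrying risk genotypes for Stage 2.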
The experimental proton resonance data for the reaction p + 48Ti have been used to calculate and evaluate the level density by employing the Gaussian Orthogonal Ensemble (GOE) version of random matrix theory (RMT), the Constant Temperature (CT) model, and the Back-Shifted Fermi Gas (BSFG) model at fixed spin-parity and at different proton energies. The results of the GOE model are found to agree with the others, while the level density calculated using the BSFG model gave lower values, with a stronger dependence on spin than on parity, owing to the limitations of its parameters (the level density parameter a, the energy-shift parameter E1, and the spin cut-off parameter σc). In the CT model, the level density results depend mainly on two parameters (the nuclear temperature T and the ground-state back-shift energy E0), which are …
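For reference, the standard closed forms of the two phenomenological models compared above; a minimal sketch with placeholder parameter values, not the values fitted to the p + 48Ti data:

```python
import numpy as np

def rho_ct(E, T, E0):
    """Constant Temperature model: rho(E) = exp((E - E0)/T) / T,
    with E, E0, T in MeV."""
    return np.exp((E - E0) / T) / T

def rho_bsfg(E, J, a, E1, sigma):
    """Back-Shifted Fermi Gas model (valid for E > E1):
        rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a^(1/4)*U^(5/4)),
    with U = E - E1, projected onto spin J via
        f(J) = (2J+1)/(2 sigma^2) * exp(-(J+1/2)^2 / (2 sigma^2))."""
    U = E - E1
    total = np.exp(2.0 * np.sqrt(a * U)) / (
        12.0 * np.sqrt(2.0) * sigma * a**0.25 * U**1.25)
    f_J = (2*J + 1) / (2 * sigma**2) * np.exp(-(J + 0.5)**2 / (2 * sigma**2))
    return total * f_J

# Illustrative values only (MeV units): not the fitted parameters.
print(rho_ct(E=6.0, T=1.4, E0=-0.5))
print(rho_bsfg(E=6.0, J=1.5, a=6.0, E1=0.3, sigma=3.0))
```

The sketch makes the parameter sensitivity visible: CT depends only on (T, E0), while BSFG mixes energy and spin dependence through (a, E1, σc).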
In this research, the Artificial Neural Network (ANN) technique was applied in an attempt to predict the water levels and some of the water quality parameters of the Tigris River at five different sites in Wasit Governorate. These predictions are useful in the planning, management, and evaluation of the water resources in the area. Spatial data along a river system or catchment area usually have missing measurements at some locations, hence an accurate prediction model to fill these missing values is essential.
The selected sites for water quality data prediction were the Sewera, Numania, Kut u/s, Kut d/s, and Garaf observation sites. At these five sites, models were built to predict the water level and water quality parameters.
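A minimal sketch of the ANN approach using a generic feed-forward regressor; the feature set, network size, and data below are hypothetical stand-ins, since the abstract does not specify the architecture or inputs:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: measurements at neighboring sites used to predict
# the water level at a target site, mimicking the gap-filling use case.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g. upstream level, discharge, EC, temperature
y = X @ np.array([0.8, 0.3, -0.2, 0.1]) + rng.normal(0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```

In practice the trained network would be applied at locations or dates with missing observations to reconstruct the record.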
In this article we study a single stochastic-process model for evaluating asset pricing and stock returns: a Lévy model based on a Brownian subordinator, namely the Normal Inverse Gaussian (NIG) process. The article aims to estimate the parameters of this model using the method of moments (MME) and maximum likelihood (MLE), and then to employ those parameter estimates in studying the stock returns and evaluating the asset pricing of both the United Bank and the Bank of the North, whose data were taken from the Iraq Stock Exchange.
The results showed a preference for MLE over MME based on the mean squared error criterion of comparison.
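A minimal sketch of the NIG fitting step, assuming SciPy's parameterization of the distribution and simulated returns in place of the banks' actual series:

```python
import numpy as np
from scipy import stats

# Simulated daily log-returns standing in for the banks' return series
# (the actual Iraq Stock Exchange data are not reproduced here).
rng = np.random.default_rng(1)
returns = stats.norminvgauss.rvs(a=2.0, b=0.3, loc=0.0, scale=0.01,
                                 size=1000, random_state=rng)

# Maximum-likelihood fit of the NIG distribution.
a_hat, b_hat, loc_hat, scale_hat = stats.norminvgauss.fit(returns)
print("MLE estimates:", a_hat, b_hat, loc_hat, scale_hat)

# Moment-based check: compare model-implied and sample mean/variance,
# the kind of criterion a method-of-moments (MME) fit would match.
m, v = stats.norminvgauss.stats(a_hat, b_hat, loc=loc_hat,
                                scale=scale_hat, moments='mv')
print("model mean/var:", m, v, "  sample:", returns.mean(), returns.var())
```

Comparing the two estimators by the mean squared error of the fitted returns mirrors the comparison criterion named in the abstract.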
A novel robust finite-time disturbance observer (RFTDO) based on an independent-output finite-time composite control (FTCC) scheme is proposed for air-conditioning system temperature and humidity regulation. The variable air volume (VAV) system is represented by two first-order mathematical models for the temperature and humidity dynamics. In the temperature loop, an RFTDO for temperature (RFTDO-T) and an FTCC for temperature (FTCC-T) are designed to estimate and reject the lumped disturbances of the temperature subsystem. In the humidity loop, a robust FTCC for humidity (FTCC-H) and an RFTDO for humidity (RFTDO-H) are likewise designed to estimate and reject the lumped disturbances of the humidity subsystem. Based on Lyapunov theory, …
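A minimal sketch of the estimate-and-reject structure on the first-order temperature loop; this uses a basic linear disturbance observer with illustrative gains and coefficients, not the paper's finite-time RFTDO/FTCC design:

```python
import numpy as np

# Simplified illustration of one loop:
#   dT/dt = -a*T + b*u + d(t)
# A linear disturbance observer estimates the lumped disturbance d,
# and the control subtracts that estimate to reject it.
a, b, dt = 0.1, 0.05, 0.1     # plant coefficients and step (illustrative)
T, z, d_hat = 25.0, 0.0, 0.0  # temperature, observer state, estimate
L_gain, T_ref = 2.0, 22.0     # observer gain and setpoint (illustrative)

for step in range(2000):
    t = step * dt
    d = 0.5 * np.sin(0.05 * t)                 # lumped disturbance
    u = (-5.0 * (T - T_ref) - d_hat) / b       # reject estimated disturbance
    T += dt * (-a * T + b * u + d)             # plant update (Euler)
    # Observer: d_hat = z + L*T,  z' = -L*(-a*T + b*u + d_hat)
    z += dt * (-L_gain * (z + L_gain * T) - L_gain * (-a * T + b * u))
    d_hat = z + L_gain * T

print("final temperature:", T, " disturbance estimate:", d_hat)
```

The humidity loop has the same structure with its own coefficients; the finite-time versions replace the linear observer and feedback with nonlinear terms that guarantee convergence in finite time.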
This research dealt with auditing bank credit risks in accordance with international auditing standards. It aims to develop procedures and design a credit risk audit program in accordance with international auditing standards, and to demonstrate their impact on the truthfulness and fairness of the financial statements, as well as on the bank's overall performance and continuity in the banking sector. Its importance lies in relying on international auditing standards to assess and measure bank credit risk and its impact on the financial situation, as well as on the ability to predict financial failure. A set of conclusions has been reached, the most important of which is that the bank faces difficulties in measuring credit risk in accordance with international auditing standards.
Each phenomenon involves several variables. By studying these variables, we find a mathematical formula for the joint distribution, and the copula is a useful tool for measuring the amount of correlation. The survival function was used to measure the relationship of age with the level of creatinine in a person's blood. The SPSS program was used to extract the influential variables from a group of variables using factor analysis; then the Clayton copula function, which builds joint bivariate distributions from the marginal distributions, was applied: the bivariate distribution was calculated, and the survival function value was then computed for a sample of size 50 drawn from Yarmouk Hospital.
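A minimal sketch of coupling two marginal survival functions with a Clayton copula; the exponential margins and θ value are illustrative, not the ones fitted to the hospital data:

```python
import numpy as np

def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta),
    theta > 0; dependence (and lower-tail clustering) grows with theta."""
    return np.maximum(u**(-theta) + v**(-theta) - 1.0, 0.0)**(-1.0 / theta)

def joint_survival(x, y, S1, S2, theta):
    """Bivariate survival built by coupling the marginal survival
    functions S1, S2 with a Clayton copula: S(x, y) = C(S1(x), S2(y))."""
    return clayton_copula(S1(x), S2(y), theta)

# Illustrative exponential margins (not the margins fitted in the paper).
S_age = lambda x: np.exp(-x / 60.0)     # survival in age (years)
S_creat = lambda y: np.exp(-y / 1.2)    # survival in creatinine level (mg/dL)

# Joint probability that age exceeds 50 and creatinine exceeds 1.0.
print(joint_survival(50.0, 1.0, S_age, S_creat, theta=2.0))
```

For the Clayton family, Kendall's tau equals θ/(θ+2), which gives a direct read of the correlation strength once θ is estimated.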
Big data of different types, such as text and images, is rapidly generated by the internet and other applications. Dealing with this data using traditional methods is impractical, since it arrives in various sizes and types and with differing processing-speed requirements. Data analytics has therefore become an important tool, because only meaningful information is analyzed and extracted, which makes it essential for big data applications. This paper presents several innovative methods that use data analytics techniques to improve the analysis process and data management. Furthermore, the paper discusses how the revolution in data analytics based on artificial intelligence algorithms might provide …