A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we encounter in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters, and observation membership was inferred and assessed for each of them. The results showed that the flexible mixture model outperformed the others in most simulation scenarios according to the integrated mean square error and the integrated classification error.
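As an illustration of the kind of model compared in this abstract (not the authors' implementation), the following sketch fits a two-component mixture of linear regressions by EM on simulated data; the component count, the simulated coefficients, and all variable names are assumptions made for the example.

```python
# Minimal sketch: two-component mixture of linear regressions fitted by EM.
# Illustrative only; data, initialization, and settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from two linear-regression components
n = 400
x = rng.uniform(-2, 2, n)
z = rng.random(n) < 0.5                        # latent component labels
y = np.where(z, 1.0 + 2.0 * x, -1.0 - 1.5 * x) + rng.normal(0, 0.4, n)
X = np.column_stack([np.ones(n), x])           # design matrix with intercept

K = 2
beta = rng.normal(size=(K, 2))                 # per-component regression coefficients
sigma = np.ones(K)                             # per-component residual std deviations
mix = np.full(K, 1.0 / K)                      # mixing proportions

for _ in range(200):                           # EM iterations
    # E-step: responsibilities from Gaussian residual densities
    resid = y[:, None] - X @ beta.T                               # shape (n, K)
    dens = np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = mix * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted least squares and variance update per component
    for k in range(K):
        w = r[:, k]
        beta[k] = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
    mix = r.mean(axis=0)

print("mixing proportions:", np.round(mix, 2))
print("coefficients (intercept, slope) per component:\n", np.round(beta, 2))
print("hard component assignments of first 10 points:", r.argmax(axis=1)[:10])
```

The hard assignments from the final responsibilities are the kind of membership inference that a classification-error criterion would assess.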
In this study, microwave and conventional methods were used to extract pectin from dried grapefruit and orange peels and to estimate its yield and degree of esterification. Water acidified with nitric acid to pH 1.5 was used. In the conventional method, pectin was extracted from grapefruit and orange peels at 85, 90, 95, and 100 °C for 1 h. The results showed that grapefruit peels yielded 12.82, 17.05, 18.47, and 15.89% pectin, respectively, while the corresponding values for orange peels were 5.96, 6.74, 7.41, and 8.00%. In the microwave method, the extraction times were 90, 100, 110, and 120 seconds; grapefruit peels yielded 13.86, 16.57, 18.69, and 17.87%, respectively, while the corresponding values for orange peels were 6.53, 6.68, 7.2
In this paper, some commonly used hierarchical clustering techniques have been compared. A comparison was made between the agglomerative hierarchical clustering technique and the k-means family of techniques, which includes standard k-means, a variant of k-means, and bisecting k-means. Although hierarchical clustering is considered one of the best clustering methods, its use is limited by its time complexity. The results, which are based on an analysis of the characteristics of the clustering algorithms and the nature of the data, showed that the bisecting k-means technique performed best among the methods used.
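For readers who want a concrete starting point, the sketch below runs the three families of techniques named in the abstract on synthetic data and scores them with the adjusted Rand index; this is an illustrative setup, not the paper's experiment, and BisectingKMeans requires scikit-learn 1.1 or later.

```python
# Illustrative comparison of agglomerative, k-means, and bisecting k-means clustering
# on synthetic blobs; not the paper's data or evaluation criteria.
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering, KMeans, BisectingKMeans  # sklearn >= 1.1
from sklearn.metrics import adjusted_rand_score

X, y_true = make_blobs(n_samples=1500, centers=4, cluster_std=1.2, random_state=0)

models = {
    "agglomerative": AgglomerativeClustering(n_clusters=4),
    "k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
    "bisecting k-means": BisectingKMeans(n_clusters=4, random_state=0),
}

for name, model in models.items():
    labels = model.fit_predict(X)
    # Adjusted Rand index against the known generating labels
    print(f"{name:>18}: ARI = {adjusted_rand_score(y_true, labels):.3f}")
```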
The current research aims to examine the effectiveness of a training program, based on the Picture Exchange Communication System, for children with autism and their mothers in confronting some basic disorders in a sample of children with autism. The study sample consisted of (16) children with autism and their mothers from different centers in Taif and Tabuk. The researcher used the quasi-experimental approach, in which two groups were employed: an experimental group and a control group. The children's ages ranged from (6-9) years. In addition, the following tools were used: a checklist for estimating basic disorders in children with autism aged (6-9) years, and a training program for children with autism
Tax justice is one of the important principles that any tax system seeks to achieve, because it is a fundamental pillar of taxation; its absence has negative repercussions on the tax accounting process on the one hand and lowers tax revenues on the other. Therefore, countries have sought, through their tax systems, to adopt methods for estimating taxable income that achieve this justice, depending on the awareness of their communities. The method adopted differs from one country to another according to that society's knowledge and understanding of taxation and its role in economic, social, and political life. In Iraq, the General Authority for Taxation methods of estimating taxable inc
This study concerns the estimation of a simultaneous-equations system for the Tobit model, in which the dependent variables are limited, and this affects the choice of a good estimator. Therefore, we use new estimation methods different from the classical methods, which, if used in such a case, would produce biased and inconsistent estimators: the Nelson-Olson method and the two-stage limited dependent variables (2SLDV) method, in order to obtain estimators that possess the properties of a good estimator.
That is, the parameters will be estim
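As a hedged illustration of the basic building block the abstract refers to, the sketch below estimates a single-equation Tobit model (left-censored at zero) by maximum likelihood on simulated data; it is not the Nelson-Olson or 2SLDV simultaneous-equations procedure itself, and all names and values are assumptions made for the example.

```python
# Minimal sketch: maximum-likelihood estimation of a single-equation Tobit model,
# left-censored at zero. Illustrative only; simulated data and settings are assumed.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y_star = X @ np.array([0.5, 1.0]) + rng.normal(scale=1.0, size=n)  # latent variable
y = np.maximum(y_star, 0.0)                                        # observed, censored at 0

def neg_loglik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    censored = y <= 0
    ll = np.where(
        censored,
        stats.norm.logcdf(-xb / sigma),                  # censored observations
        stats.norm.logpdf((y - xb) / sigma) - log_sigma  # uncensored observations
    )
    return -ll.sum()

res = optimize.minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
print("beta:", np.round(beta_hat, 3), "sigma:", round(sigma_hat, 3))
```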
In light of developments in computer science and modern technologies, the impersonation crime rate has increased. Consequently, face recognition technology and biometric systems have been employed for security purposes in a variety of applications, including human-computer interaction, surveillance systems, etc. Building an advanced, sophisticated model to tackle impersonation-related crimes is essential. This study proposes classification Machine Learning (ML) and Deep Learning (DL) models, utilizing Viola-Jones, Linear Discriminant Analysis (LDA), Mutual Information (MI), and Analysis of Variance (ANOVA) techniques. The two proposed facial classification systems are J48 with the LDA feature extraction method as input, and a one-dimen
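As a rough, hedged sketch of one of the pipelines described (LDA feature extraction feeding a tree classifier), the example below uses scikit-learn's DecisionTreeClassifier as a stand-in for J48 and the already-cropped Olivetti faces in place of a Viola-Jones detection stage; it is not the study's system, and the dataset is downloaded on first use.

```python
# Illustrative pipeline: LDA feature extraction followed by a decision tree
# (a stand-in for J48) on a small face-recognition dataset.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

faces = fetch_olivetti_faces()   # 400 pre-cropped face images, 40 subjects
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0
)

# LDA projects the flattened images onto at most (n_classes - 1) dimensions,
# and the tree classifies in that reduced space.
clf = make_pipeline(LinearDiscriminantAnalysis(), DecisionTreeClassifier(random_state=0))
clf.fit(X_train, y_train)
print("test accuracy:", round(clf.score(X_test, y_test), 3))
```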
This paper discusses computer-related neologisms in the Russian language. The problems of studying computer terminology are constantly aggravated as computer technology is introduced into all walks of life. The study identifies the ways these words are formed, the origins of the computer terms, and the possibilities for their usage in Russian. The Internet is considered a worldwide tool of communication used extensively by students, housewives, and professionals alike. The Internet is a heterogeneous environment consisting of various hardware and software configurations that need to be configured to support the languages used. The development of Internet content and services is essential for expanding Internet usage. Some of the
Canonical correlation analysis is one of the common methods for analyzing data and determining the relationship between two sets of variables under study, as it depends on analyzing the variance-covariance matrix or the correlation matrix. Researchers use many methods to estimate the canonical correlation (CC); some are biased by outliers, while others are resistant to such values; in addition, there are criteria for checking the efficiency of the estimation methods.
In our research, we dealt with robust estimation methods that depend on the correlation matrix in the analysis process in order to obtain a robust canonical correlation coefficient, namely the method of Biwe
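To make the idea concrete, the sketch below computes canonical correlations from an estimated covariance matrix twice, once with the classical estimate and once with a robust one; the Minimum Covariance Determinant estimator is used here purely as a stand-in for the robust estimator studied in the paper, and the simulated data and names are assumptions for the example.

```python
# Illustrative sketch: canonical correlations from classical vs. robust covariance estimates.
import numpy as np
from scipy.linalg import inv
from sklearn.covariance import EmpiricalCovariance, MinCovDet

def canonical_correlations(S, p):
    """Canonical correlations between the first p variables and the rest,
    computed from a (p+q) x (p+q) covariance or correlation matrix S."""
    S11, S12 = S[:p, :p], S[:p, p:]
    S21, S22 = S[p:, :p], S[p:, p:]
    M = inv(S11) @ S12 @ inv(S22) @ S21       # eigenvalues are squared canonical correlations
    eigvals = np.sort(np.linalg.eigvals(M).real)[::-1]
    return np.sqrt(np.clip(eigvals, 0, 1))

rng = np.random.default_rng(2)
n, p, q = 300, 2, 3
X = rng.normal(size=(n, p))
Y = X @ rng.normal(size=(p, q)) + rng.normal(size=(n, q))   # two related variable sets
Z = np.hstack([X, Y])
Z[:15] += 20 * rng.normal(size=(15, p + q))                 # inject outliers

classical = EmpiricalCovariance().fit(Z).covariance_
robust = MinCovDet(random_state=0).fit(Z).covariance_       # stand-in robust estimator
print("classical CC:", np.round(canonical_correlations(classical, p), 3))
print("robust CC:   ", np.round(canonical_correlations(robust, p), 3))
```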
Multilocus haplotype analysis of candidate variants with genome-wide association study (GWAS) data may provide evidence of association with disease even when the individual loci themselves do not. Unfortunately, when a large number of candidate variants are investigated, identifying risk haplotypes can be very difficult. To meet this challenge, a number of approaches have been put forward in recent years. However, most of them are not directly linked to the disease penetrances of the haplotypes and thus may not be efficient. To fill this gap, we propose a mixture model-based approach for detecting risk haplotypes. Under the mixture model, haplotypes are clustered directly according to their estimated d
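As a loose, hedged illustration of the general idea (not the authors' model, which clusters haplotypes on their estimated penetrances directly), the sketch below clusters made-up per-haplotype disease rates with a two-component Gaussian mixture and flags the higher-risk cluster.

```python
# Illustrative sketch: cluster crude per-haplotype disease-rate estimates with a
# two-component mixture and flag the high-risk cluster. Data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

n_haps = 40
true_risk = rng.random(n_haps) < 0.2                     # a few true risk haplotypes
penetrance = np.where(true_risk, 0.35, 0.10)             # disease probability per haplotype
counts = rng.integers(50, 200, n_haps)                   # carriers observed per haplotype
cases = rng.binomial(counts, penetrance)
est_rate = (cases / counts).reshape(-1, 1)               # crude per-haplotype estimate

gm = GaussianMixture(n_components=2, random_state=0).fit(est_rate)
labels = gm.predict(est_rate)
risk_cluster = gm.means_.argmax()                        # cluster with the higher mean risk
print("haplotypes flagged as risk:", np.where(labels == risk_cluster)[0])
print("truly risk haplotypes:    ", np.where(true_risk)[0])
```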