Orthogonal polynomials and their moments play a significant role in image processing and computer vision. One such family is the discrete Hahn polynomials (DHaPs), which are used for compression and feature extraction. However, at high moment orders they suffer from numerical instability. This paper proposes a fast approach for computing high-order DHaPs. The work exploits multithreading to calculate the Hahn polynomial coefficients: to make full use of the available processing capabilities, the independent calculations are divided among threads, and a distribution method is provided to balance the processing burden across them. The proposed methods are tested for various DHaP parameter values, polynomial sizes, and numbers of threads. Compared with the unthreaded case, the results show a speed-up that grows with the polynomial size, reaching a maximum of 5.8 for a polynomial size and order of 8000 × 8000 (matrix size). Furthermore, continually raising the number of threads is not a consistent way to enhance performance; beyond some point the improvement falls below the maximum. The number of threads achieving the highest improvement depends on the size, ranging from 8 to 16 threads for a 1000 × 1000 matrix and from 32 to 160 threads for the 8000 × 8000 case.
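A minimal sketch of the thread-distribution idea described above, under stated assumptions: the routine `hahn_row()` is a hypothetical placeholder standing in for the paper's recurrence-based coefficient computation, and the striped (round-robin) assignment of orders to threads is one plausible balancing scheme, not necessarily the paper's distribution method. CPython threads here only illustrate the partitioning; speed-ups like those reported would require a runtime without a global interpreter lock or numerical kernels that release it.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def hahn_row(n, N, a, b):
    # Placeholder: stands in for evaluating the order-n Hahn polynomial
    # at points x = 0..N-1 (not the paper's actual recurrence).
    x = np.arange(N, dtype=float)
    row = np.ones(N)
    for _ in range(n):                    # dummy work growing with order n
        row = row * (x - n) / (N + a + b + 1)
    return row

def hahn_matrix_threaded(N, a=0.0, b=0.0, num_threads=8):
    H = np.empty((N, N))
    def worker(tid):
        # Striped distribution: thread tid handles orders tid,
        # tid + num_threads, ... so cheap and expensive orders are mixed,
        # giving each thread a roughly equal total workload.
        for n in range(tid, N, num_threads):
            H[n, :] = hahn_row(n, N, a, b)
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(worker, range(num_threads)))
    return H

if __name__ == "__main__":
    H = hahn_matrix_threaded(1000, a=10.0, b=10.0, num_threads=16)
    print(H.shape)
```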
To obtain a mixed model with high significance and accurate predictive power, a method is needed that selects the most important variables to include in the model, especially when the data under study suffer from multicollinearity as well as high dimensionality. The research aims to compare methods of choosing the explanatory variables and estimating the parameters of the regression model, namely Bayesian ridge regression (unbiased) and the adaptive lasso regression model, using simulation. Mean squared error (MSE) was used to compare the methods.
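A minimal simulation sketch in the spirit of this comparison: Bayesian ridge versus an adaptive-lasso-style estimator on deliberately collinear data, scored by MSE of the estimated coefficients. The data design, adaptive weights, and penalty values are illustrative assumptions, not the paper's exact setup; the adaptive lasso is realised here via the standard feature-rescaling trick around an ordinary lasso.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, Lasso, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 100, 50
base = rng.normal(size=(n, 1))
X = base + 0.1 * rng.normal(size=(n, p))   # highly collinear columns
beta = np.zeros(p); beta[:5] = 2.0          # sparse true coefficients
y = X @ beta + rng.normal(size=n)

br = BayesianRidge().fit(X, y)

# Adaptive lasso via rescaling: weight each column by an initial ridge
# estimate, fit an ordinary lasso, then map coefficients back.
w = np.abs(Ridge(alpha=1.0).fit(X, y).coef_) + 1e-6
lasso = Lasso(alpha=0.1).fit(X * w, y)
ada_coef = lasso.coef_ * w

print("Bayesian ridge coefficient MSE:", mean_squared_error(beta, br.coef_))
print("Adaptive lasso coefficient MSE:", mean_squared_error(beta, ada_coef))
```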
The aim of the research is to measure the efficiency of the companies in the industrial sector listed on the Iraqi Stock Exchange, by directing these companies' resources (inputs) towards achieving the greatest possible returns (outputs), or by reducing those resources while maintaining the level of returns, so as to achieve the efficiency of these companies. To achieve the objectives of the research, the Demerjian et al. model was used to measure the efficiency of companies and the factors influencing them. The researchers reached a number of conclusions, the most important of which is that 66.6% of the companies in the research sample do not possess relatively high efficiency and that the combined factors (the nat…
Cost is the essence of any production process: it is one of the requirements for the continuity of activity, for increasing the profitability of the economic unit, and for supporting its competitive position in the market. There should therefore be overall control to reduce cost without compromising product quality. To achieve this, management needs detailed, credible, and reliable cost information to be measured, collected, and understood, and must analyse the causes of deviations and obstacles it faces and search for the factors that trigger their emergence.
Optimum perforation location selection is an important study for improving well production and hence the reservoir development process, especially for unconventional high-pressure formations such as those under study. Reservoir geomechanics is one of the key factors in finding the optimal perforation location. This study aims to detect the optimum perforation location by investigating the changes in geomechanical properties and wellbore stress for high-pressure formations, and by studying how the behaviour of the different stress types differs between normal and abnormal formations. The calculations are achieved by building a one-dimensional mechanical earth model using data from four deep abnormal wells located in southern Iraqi oil fields. The magni…
This paper applies the philosophy of Darwinian selection as a synthesis method, the genetic algorithm (GA), and includes a new merit function with a simpler form than those used in other works, for designing one kind of multilayer optical filter: the high-reflection mirror. We investigate solutions to several practical problems. The work produces high-reflection mirror designs with good performance and a reduced number of layers, which makes it possible to control the effect of layer-thickness errors on the final product. Such a solution can be reached in a much shorter time by controlling the length of the chromosome and choosing optimal genetic operators. Res…
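A minimal, hedged sketch of a GA for high-reflection mirror design of the kind described: the chromosome encodes layer thicknesses and the fitness is the reflectance at a design wavelength, computed with the standard thin-film characteristic-matrix method. The refractive indices, wavelength, and GA operators below are illustrative assumptions, not the paper's merit function or settings.

```python
import numpy as np

rng = np.random.default_rng(1)
LAM = 550e-9                       # design wavelength (assumed)
N_H, N_L, N_SUB, N_AIR = 2.35, 1.46, 1.52, 1.0
N_LAYERS = 12                      # alternating high/low-index stack

def reflectance(thicknesses):
    # Characteristic matrix of the stack at normal incidence.
    M = np.eye(2, dtype=complex)
    for i, d in enumerate(thicknesses):
        n = N_H if i % 2 == 0 else N_L
        delta = 2 * np.pi * n * d / LAM
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, N_SUB])
    r = (N_AIR * B - C) / (N_AIR * B + C)
    return abs(r) ** 2

pop = rng.uniform(20e-9, 200e-9, size=(40, N_LAYERS))
for gen in range(200):
    scores = np.array([reflectance(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]   # truncation selection
    kids = parents.copy()
    cut = rng.integers(1, N_LAYERS, size=20)       # one-point crossover
    for k in range(0, 20, 2):
        kids[k, cut[k]:], kids[k + 1, cut[k]:] = (parents[k + 1, cut[k]:].copy(),
                                                  parents[k, cut[k]:].copy())
    # Gaussian mutation, clipped so thicknesses stay physical.
    kids = np.clip(kids + rng.normal(0, 2e-9, kids.shape), 1e-9, None)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([reflectance(c) for c in pop])]
print("best reflectance:", reflectance(best))
```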
This paper proposes two hybrid feature subset selection approaches based on the combination (union or intersection) of both supervised and unsupervised filter approaches before using a wrapper, aiming to obtain low-dimensional features with high accuracy and interpretability and low time consumption. Experiments with the proposed hybrid approaches have been conducted on seven high-dimensional feature datasets. The classifiers adopted are support vector machine (SVM), linear discriminant analysis (LDA), and k-nearest neighbour (KNN). Experimental results demonstrate the advantages and usefulness of the proposed methods for feature subset selection in high-dimensional spaces in terms of the number of selected features and time spe…
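A minimal sketch of the union/intersection-then-wrapper idea, under stated assumptions: mutual information as the supervised filter, variance as the unsupervised filter, and forward sequential selection with KNN as the wrapper. The paper's actual filter and wrapper choices, datasets, and k values may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import (mutual_info_classif,
                                        SequentialFeatureSelector)
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=10, random_state=0)
k = 20
# Supervised filter: top-k features by mutual information with the label.
sup = set(np.argsort(mutual_info_classif(X, y, random_state=0))[-k:])
# Unsupervised filter: top-k features by variance (label-free).
unsup = set(np.argsort(X.var(axis=0))[-k:])

candidates = sorted(sup | unsup)          # union; use (sup & unsup) for intersection
knn = KNeighborsClassifier(n_neighbors=5)
wrapper = SequentialFeatureSelector(knn, n_features_to_select=10,
                                    direction="forward", cv=5)
wrapper.fit(X[:, candidates], y)
selected = [candidates[i] for i in np.where(wrapper.get_support())[0]]
print("final features:", selected)
```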
The current research aims to analyse the questions of the literary criticism textbook for the preparatory stage according to Bloom's taxonomy. The research corpus consists of (34) exercises and (45) questions. The researcher used the question-analysis method and prepared a preliminary list of criteria intended to measure the exercises, selected on the basis of Bloom's taxonomy and the extant literature on the topic. The scales were presented to a jury of experts and specialists in curricula and methods of teaching the Arabic language and obtained complete agreement; the list was thus adopted as a reliable instrument in this…
In light of developments in computer science and modern technologies, the rate of impersonation crime has increased. Consequently, face recognition technology and biometric systems have been employed for security purposes in a variety of applications, including human-computer interaction and surveillance systems. Building an advanced, sophisticated model to tackle impersonation-related crime is therefore essential. This study proposes classification machine learning (ML) and deep learning (DL) models, utilising Viola-Jones, linear discriminant analysis (LDA), mutual information (MI), and analysis of variance (ANOVA) techniques. The two proposed facial classification systems are J48 with the LDA feature extraction method as input, and a one-dimen…
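A minimal sketch of one of the pipelines named above: LDA feature extraction feeding a tree classifier. Here scikit-learn's `DecisionTreeClassifier` (CART) stands in for Weka's J48, the Olivetti faces dataset is an assumed stand-in for the study's data, and the Viola-Jones detection step is omitted; all parameters are illustrative.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

faces = fetch_olivetti_faces()                     # stand-in face dataset
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0)

model = make_pipeline(
    LinearDiscriminantAnalysis(n_components=30),   # supervised projection
    DecisionTreeClassifier(random_state=0),        # J48-like classifier
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```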