Objectives. The current study aimed to predict the combined mesiodistal crown widths of the maxillary and mandibular canines and premolars from the combined mesiodistal crown widths of the maxillary and mandibular incisors and first molars. Materials and Methods. This retrospective study utilized 120 dental models from Iraqi Arab young adult subjects with normal dental relationships. The mesiodistal crown widths of all teeth (except the second molars) were measured at the level of the contact points using digital electronic calipers. The relation between the combined mesiodistal crown widths of the maxillary and mandibular incisors and first molars and the combined mesiodistal crown widths of the maxillary and mandibular canines and premolars was assessed using Pearson's correlation coefficient. Based on this relation, regression equations were developed to predict the combined widths of the maxillary and mandibular canines and premolars; the predicted mesiodistal crown sum widths were then compared with the actual ones using a paired-sample t-test. Results. The predicted mesiodistal crown sum widths did not differ significantly from the actual ones. Conclusions. The combined mesiodistal widths of the maxillary and mandibular canines and premolars can be predicted successfully from the combined mesiodistal widths of the maxillary and mandibular incisors and first molars with a high degree of accuracy, exceeding 86%.
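As a rough illustration of the statistical workflow described above (Pearson correlation, a linear regression equation, and a paired-sample t-test), the following minimal Python sketch uses hypothetical width measurements; it is not the study's data or analysis script.

```python
# Minimal sketch (not the study's actual analysis): deriving a linear
# regression equation to predict the combined canine + premolar width
# from the combined incisor + first-molar width, then comparing the
# predicted and measured sums with a paired t-test.
import numpy as np
from scipy import stats

# Hypothetical measurements (mm), for illustration only.
incisor_molar_sum = np.array([54.1, 55.3, 53.8, 56.0, 54.9, 55.7])
canine_premolar_sum = np.array([43.2, 44.0, 42.9, 44.8, 43.7, 44.3])

# Pearson correlation between the two combined widths.
r, p_corr = stats.pearsonr(incisor_molar_sum, canine_premolar_sum)

# Simple linear regression: predicted = intercept + slope * measured sum.
slope, intercept, r_value, p_value, std_err = stats.linregress(
    incisor_molar_sum, canine_premolar_sum)
predicted = intercept + slope * incisor_molar_sum

# Paired t-test: does the predicted sum differ from the actual sum?
t_stat, p_paired = stats.ttest_rel(predicted, canine_premolar_sum)
print(f"r={r:.3f}, equation: y = {intercept:.2f} + {slope:.2f}x, "
      f"paired t-test p={p_paired:.3f}")
```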
Association rule mining (ARM) is a fundamental and widely used data mining technique for extracting useful information from data. Traditional ARM algorithms degrade computational efficiency by mining too many association rules that are not relevant to a given user. Recent research in ARM investigates metaheuristic algorithms that search for only a subset of high-quality rules. In this paper, a modified discrete cuckoo search algorithm for association rule mining (DCS-ARM) is proposed for this purpose. The effectiveness of the algorithm is tested against a set of well-known transactional databases. Results indicate that the proposed algorithm outperforms existing metaheuristic methods.
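The paper's DCS-ARM details are not reproduced here, but the sketch below illustrates the usual fitness ingredients in metaheuristic ARM: scoring a candidate rule X -> Y by support and confidence over a toy transactional database. The item names and the weighting scheme are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's DCS-ARM implementation):
# scoring a candidate association rule X -> Y by support and confidence,
# the usual ingredients of a fitness function in metaheuristic ARM.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

def support(itemset, db):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

def rule_fitness(antecedent, consequent, db, alpha=0.5):
    """Weighted sum of support and confidence for the rule X -> Y."""
    supp_xy = support(antecedent | consequent, db)
    supp_x = support(antecedent, db)
    confidence = supp_xy / supp_x if supp_x else 0.0
    return alpha * supp_xy + (1 - alpha) * confidence

# A metaheuristic such as a discrete cuckoo search would perturb candidate
# rules (e.g. via Levy-flight-like item swaps) and keep the fittest ones.
print(rule_fitness({"bread"}, {"milk"}, transactions))
```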
Used automobile oil was filtered to remove solid material and dehydrated by vacuum distillation under moderate pressure to remove water, gasoline, and light components; the dehydrated waste oil was then subjected to extraction with liquid solvents. Two solvents, n-butanol and n-hexane, were used to extract base oil from the used automobile oil so that the expensive base oil can be reused.
Base oil recovered with the n-butanol solvent gave an 88.67% reduction in carbon residue, a 75.93% reduction in ash content, 93.73% oil recovery, 95% solvent recovery, and a viscosity index of 100.62 at a 5:1 solvent-to-used-oil ratio and an extraction temperature of 40 °C, while using the n-hexane solvent gives (6
The complexity of multimedia content is increasing significantly in the current world. This leads to an urgent demand for developing highly effective systems to satisfy human needs. To this day, the handwritten signature is considered an important means of evidencing identity in banks and businesses, so many works have tried to develop methods for its recognition. This paper introduces an efficient technique for offline signature recognition based on extracting local features from the Haar wavelet subbands and their energy. Three different sets of features are obtained by partitioning the signature image into non-overlapping blocks of different sizes. The CEDAR signature database is used as a dataset f
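As a rough sketch of the kind of local features described (block-wise energies of Haar wavelet subbands), the following Python snippet uses PyWavelets on a placeholder image; the block size and feature layout are assumptions, not the paper's exact pipeline.

```python
# Minimal sketch, not the paper's exact pipeline: block-wise energy of
# Haar wavelet subbands as local features for an offline signature image.
# The block size and the random placeholder image are assumptions.
import numpy as np
import pywt

def haar_block_energy(image, block=16):
    """Return the energies of the LL, LH, HL, HH subbands of each block."""
    features = []
    h, w = image.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            cA, (cH, cV, cD) = pywt.dwt2(patch, "haar")
            features.extend(np.sum(np.square(band))
                            for band in (cA, cH, cV, cD))
    return np.asarray(features)

# Example with a random image standing in for a CEDAR signature sample.
signature = np.random.rand(128, 256)
print(haar_block_energy(signature).shape)
```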
Over the past few years, ear biometrics has attracted a lot of attention. It is a trusted biometric for the identification and recognition of humans due to its consistent shape and rich texture variation. The ear presents an attractive solution since it is visible, ear images are easily captured, and the ear structure remains relatively stable over time. In this paper, a comprehensive review of prior research was conducted to establish the efficacy of utilizing ear features for individual identification through both manually crafted features and deep-learning approaches. The objective of this model is to present the accuracy rates of person identification systems based on either manually crafted features such as D
The area of character recognition has received considerable attention from researchers all over the world during the last three decades. This research explores the best sets of feature extraction techniques, studies the accuracy of well-known classifiers for Arabic numerals using two statistical methods, and makes a comparative study between them. The first method, a linear discriminant function, yields results with an accuracy as high as 90% of the original grouped cases correctly classified. The second method uses a proposed algorithm; the results show its efficiency, achieving recognition accuracies of 92.9% and 91.4%, which is higher than the first method.
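The exact feature sets are not shown here, but a minimal sketch of the first method's idea, classifying numeral feature vectors with a linear discriminant function, could look like the following scikit-learn snippet; the feature vectors are synthetic placeholders rather than real Arabic numeral features.

```python
# Minimal sketch, not the study's implementation: classifying Arabic
# numeral feature vectors with a linear discriminant function using
# scikit-learn. The feature vectors here are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))       # 20 extracted features per digit image
y = rng.integers(0, 10, size=500)    # digit labels 0-9

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, lda.predict(X_test)))
```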
Sound forecasts are essential elements of planning, especially for dealing with seasonality, sudden changes in demand levels, strikes, large fluctuations in the economy, and price-cutting manoeuvres by competitors. Forecasting can help decision makers manage these problems by identifying which technologies are appropriate for their needs. The proposed forecasting model extracts the trend and cyclical components individually by developing the Hodrick–Prescott filter technique. Then, the fitted models of these two real components are estimated to predict the future behaviour of electricity peak load. Accordingly, the optimal model for the periodic component is estimated using spectrum analysis and Fourier mod
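A minimal sketch of the trend/cycle decomposition step, using the standard Hodrick–Prescott filter from statsmodels on a synthetic monthly peak-load series, is shown below; the paper develops its own variant of the technique, so this only illustrates the baseline filter under assumed data.

```python
# Minimal sketch, assuming a synthetic monthly peak-load series:
# decomposing the series into trend and cyclical components with the
# standard Hodrick-Prescott filter (not the paper's developed variant).
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Synthetic peak-load data standing in for the real series.
t = np.arange(120)
load = 1000 + 2.5 * t + 80 * np.sin(2 * np.pi * t / 12) \
       + np.random.normal(0, 20, 120)
series = pd.Series(load, index=pd.date_range("2010-01", periods=120, freq="MS"))

# hpfilter returns (cycle, trend); lambda = 129600 is common for monthly data.
cycle, trend = hpfilter(series, lamb=129600)
print(trend.tail())
print(cycle.tail())
```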
The feature extraction step plays a major role in proper object classification and recognition. This step depends mainly on correct object detection in the given scene, and object detection algorithms may introduce noise that affects the final object shape. A novel approach is introduced in this paper for filling the holes in the detected object, for better object detection and correct feature extraction. The method is based on the definition of a hole as a background (black) pixel surrounded by a connected boundary region, and it tries to find a connected contour region surrounding the background pixel using a roadmap racing algorithm. The method shows good results for 2D objects (a sketch of standard hole filling follows the keywords below).
Keywords: object filling, object detection, objec
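The roadmap racing algorithm itself is not reproduced here; the sketch below instead applies SciPy's standard binary_fill_holes, which fills background pixels enclosed by a connected object boundary (the same hole definition given above), as a point of comparison.

```python
# Sketch of hole filling on a binary object mask. The paper uses a
# roadmap racing algorithm; this example instead applies the standard
# scipy.ndimage.binary_fill_holes, which fills background pixels that
# are enclosed by a connected object boundary.
import numpy as np
from scipy import ndimage

mask = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 0, 1, 0],   # the centre 0 is a hole
                 [0, 1, 1, 1, 0],
                 [0, 0, 0, 0, 0]], dtype=bool)

filled = ndimage.binary_fill_holes(mask)
print(filled.astype(int))
```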
teen sites in Baghdad are made. The sites are divided into two groups, one in Karkh and the other in Rusafa. The underground conditions were assessed by drilling vertical holes, called exploratory borings, into the ground, obtaining disturbed and undisturbed soil samples, and testing these samples in a laboratory (civil engineering laboratory, University of Baghdad). The tests on disturbed samples included grain size analysis followed by soil classification, Atterberg limits, and chemical tests (organic content, sulphate content, gypsum content, and chloride content). The tests on undisturbed samples included the consolidation test (from which the following parameters can be obtained: initial void ratio eo, compression index Cc, swel
Investigating human mobility patterns is a highly interesting field in the 21st century, and it attracts vast attention from multidisciplinary scientists in physics, economics, social science, computer science, engineering, etc., based on the concept that relates human mobility patterns to their communications. Hence, the necessity for a rich repository of data has emerged. The most powerful solution is the use of GSM network data, which provides millions of Call Detail Records obtained from urban regions. However, the available data still have shortcomings, because they give spatio-temporal information only at the moments of mobile communication activity. In th
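As an illustration of how such records are typically organized for mobility analysis, the sketch below groups call detail records into per-user, time-ordered cell sequences; the simplified CDR schema (user_id, timestamp, cell_id) is an assumption, not the paper's actual data format.

```python
# Illustrative sketch, assuming a simplified CDR schema (user_id,
# timestamp, cell_id): grouping call detail records into per-user,
# time-ordered location sequences, the basic step in CDR-based
# mobility analysis. Field names are assumptions, not the paper's.
import pandas as pd

cdr = pd.DataFrame({
    "user_id":   [1, 1, 2, 1, 2],
    "timestamp": pd.to_datetime([
        "2020-01-01 08:00", "2020-01-01 12:30", "2020-01-01 09:15",
        "2020-01-01 18:45", "2020-01-01 20:00"]),
    "cell_id":   ["A", "B", "C", "A", "C"],
})

# Time-ordered sequence of visited cells per user.
trajectories = (cdr.sort_values("timestamp")
                   .groupby("user_id")["cell_id"]
                   .apply(list))
print(trajectories)
```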
This paper is concerned with introducing and studying the first new approximation operators using a mixed degree system and the second new approximation operators using a mixed degree system, which are the core concepts of this paper. In addition, the approximations of graphs obtained using the second lower and second upper operators are more accurate than those obtained using the first lower and first upper operators, since the first accuracy is less than the second accuracy. For this reason, we study in detail the properties of the second lower and second upper operators in this paper. Furthermore, we summarize the results for the properties of the approximation operators second lower and second upper when the graph G is arbitrary, serial 1, serial 2, reflexive, symmetric, tra