Background: This study was conducted to evaluate hard palate bone density and thickness during the 3rd and 4th decades of life and their relationships with body mass index (BMI) and body composition, to allow more accurate mini-implant placement. Materials and method: Computed tomographic (CT) images were obtained for 60 patients (30 males and 30 females) aged 20-39 years. Hard palate bone density and thickness were measured at 20 sites at the intersections of five anteroposterior and four mediolateral reference lines, at 6 mm and 3 mm intervals from the incisive foramen and the mid-palatal suture, respectively. A diagnostic scale operating on the bioelectric impedance analysis principle was used to measure body weight; percentages of body fat, water, and muscle; bone mass; and basal and active metabolic rates. Results: No significant difference was found in the overall bone density and thickness of the hard palate between the 3rd and 4th decades. Gender should be considered with regard to bone thickness. Cortical bone density and thickness tended to decrease posteriorly, while cancellous bone density tended to increase posteriorly. In the mediolateral direction, no specific patterns were observed. Cortical bone density increased with increasing BMI. The relationships of bone density and thickness with most scale measurements were not significant. Conclusion: With regard to bone density, mini-implants for orthodontic anchorage can be effectively placed in most areas of the hard palate. With regard to bone thickness, however, care should be taken when planning their placement in the hard palate. A new classification for the bone thickness of the hard palate has been developed.
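As an illustration of the measurement grid described in the methods, the sketch below generates the 20 site coordinates, assuming the five lines at 6 mm intervals run posterior to the incisive foramen and the four lines at 3 mm intervals run lateral to the mid-palatal suture (the exact placement of the first line relative to each landmark is an assumption):

```python
# Minimal sketch of the 20-site measurement grid described above
# (hypothetical coordinates): five levels at 6 mm intervals posterior to the
# incisive foramen crossed with four levels at 3 mm intervals lateral to the
# mid-palatal suture.

AP_STEP_MM = 6   # spacing between anteroposterior measurement levels
ML_STEP_MM = 3   # spacing between mediolateral measurement levels

def measurement_sites(n_ap=5, n_ml=4):
    """Return (y, x) coordinates in mm of the grid intersections:
    y measured posteriorly from the incisive foramen,
    x measured laterally from the mid-palatal suture."""
    sites = []
    for i in range(1, n_ap + 1):
        for j in range(1, n_ml + 1):
            sites.append((i * AP_STEP_MM, j * ML_STEP_MM))
    return sites

sites = measurement_sites()
print(len(sites), "sites, e.g.:", sites[:4])
```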
The partitioning technique has been used to analyze the four-electron system into six pairwise electronic wave functions for the beryllium atom in its excited state (1s² 2s 3s) and its like ions (B⁺, C²⁺) using Hartree-Fock wave functions. The aim of this work is to study the atomic scattering form factor f(s) and the nuclear magnetic shielding constant. The results are obtained numerically using the Mathcad software.
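A minimal numerical sketch of how a scattering form factor of the form f(s) = ∫ 4π r² ρ(r) sin(sr)/(sr) dr can be evaluated by quadrature is given below; the hydrogen-like 1s density used here is only an illustrative stand-in for the Hartree-Fock densities used in the study, and Python stands in for the Mathcad implementation:

```python
# Minimal sketch of evaluating the atomic scattering form factor
#   f(s) = integral of 4*pi*r^2 * rho(r) * sin(s*r)/(s*r) dr
# by simple numerical quadrature. The hydrogen-like 1s density below is an
# illustrative stand-in, not the Hartree-Fock density used in the study.

import numpy as np

def radial_density_1s(r, Z=4.0):
    """One-electron density of a hydrogen-like 1s orbital (illustrative only)."""
    psi = (Z ** 1.5 / np.sqrt(np.pi)) * np.exp(-Z * r)
    return psi ** 2

def form_factor(s, rho=radial_density_1s, r_max=20.0, n=4000):
    r = np.linspace(1e-6, r_max, n)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(s*r/pi) = sin(s*r)/(s*r)
    integrand = 4.0 * np.pi * r ** 2 * rho(r) * np.sinc(s * r / np.pi)
    dr = r[1] - r[0]
    return float(np.sum(integrand) * dr)   # rectangle-rule quadrature

for s in (0.0, 0.5, 1.0, 2.0):
    print(f"f({s}) = {form_factor(s):.4f}")   # f(0) ~ 1 for a normalized density
```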
Research problem:
The problem of the current research can be framed by the following question: What is the effect of teaching with the six thinking hats strategy on the academic achievement of second-grade intermediate students in the subject of Family Education? The importance of the research: the research gains its importance from the following:
1. To the researcher's knowledge, this research is the first of its kind to address the teaching of Family Education using the six thinking hats; the researcher hopes it will fill a gap in the educational field and serve other studies in home economics subjects.
2. It keeps pace with modern educational trends and strategies.
3. It highlights this teaching strategy's role in the field of creative thinking.
Compressing speech reduces data storage requirements and shortens the time needed to transmit digitized speech over long-haul links such as the Internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (one- and two-dimensional) on speech compression, and compares their performance in terms of compression.
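For the DWT path, a minimal sketch of wavelet-based speech compression by coefficient thresholding with PyWavelets is shown below; it illustrates the general approach only and does not reproduce the paper's GHM-derived MCT transform or its fast algorithms:

```python
# Minimal sketch of DWT-based speech compression by coefficient thresholding
# (PyWavelets). Generic illustration only, not the paper's MCT transform.

import numpy as np
import pywt

def dwt_compress(signal, wavelet="db4", level=4, keep_ratio=0.10):
    """Keep only the largest `keep_ratio` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    threshold = np.quantile(np.abs(flat), 1.0 - keep_ratio)
    return [pywt.threshold(c, threshold, mode="hard") for c in coeffs]

def dwt_reconstruct(kept, wavelet="db4"):
    return pywt.waverec(kept, wavelet)

# Usage with a synthetic "speech" frame (8 kHz, 1 s)
fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)
rec = dwt_reconstruct(dwt_compress(speech))[: len(speech)]
snr = 10 * np.log10(np.sum(speech ** 2) / np.sum((speech - rec) ** 2))
print(f"SNR after keeping 10% of coefficients: {snr:.1f} dB")
```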
Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key between the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
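Neural key exchange of this kind is typically realized with tree parity machines; the sketch below is a minimal illustration under that assumption (K hidden units, N inputs each, bounded integer weights, Hebbian updates applied only when the two outputs agree), not the paper's exact configuration:

```python
# Minimal sketch of neural key exchange by mutual learning, assuming the usual
# tree parity machine (TPM) setup: K hidden units, N inputs, weights in [-L, L],
# Hebbian updates applied only when the two parties' outputs agree.

import numpy as np

K, N, L = 3, 10, 3
rng = np.random.default_rng(0)

def tpm_output(w, x):
    """Return overall output tau and hidden-unit signs sigma."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return int(np.prod(sigma)), sigma

def hebbian_update(w, x, sigma, tau):
    """Update only the hidden units that agree with the overall output."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))   # party A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))   # party B's secret weights

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))    # public random input
    tauA, sigA = tpm_output(wA, x)
    tauB, sigB = tpm_output(wB, x)
    if tauA == tauB:                        # learn only on agreement
        hebbian_update(wA, x, sigA, tauA)
        hebbian_update(wB, x, sigB, tauB)
    steps += 1

print(f"synchronized after {steps} exchanged outputs; shared key = final weights")
```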
Shadow detection and removal is an important task when dealing with color outdoor images. Shadows are generated by a local and relative absence of light. They are, first of all, a local decrease in the amount of light that reaches a surface and, secondly, a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis. However, some factors affect the detection result because of the complexity of the circumstances. In this paper, a segmentation-based method is presented to detect shadows in an image, and a function concept is used to remove the shadow from the image.
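As a point of reference only, the sketch below shows a common lightness-thresholding baseline for shadow masking and a crude relighting step; it is not the segmentation-and-function method proposed in the paper, and the file names are hypothetical:

```python
# Minimal baseline sketch: mark dark pixels (LAB lightness well below the image
# mean) as shadow, then crudely relight them. Generic illustration only.

import cv2
import numpy as np

def shadow_mask(bgr_image):
    """Mark pixels whose lightness is well below the image mean as shadow."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_channel = lab[:, :, 0].astype(np.float32)
    threshold = l_channel.mean() - l_channel.std()
    return (l_channel < threshold).astype(np.uint8) * 255

def remove_shadow(bgr_image, mask, gain=1.6):
    """Crudely relight shadow pixels by scaling their intensity."""
    out = bgr_image.astype(np.float32)
    out[mask > 0] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage (file paths are hypothetical)
img = cv2.imread("outdoor.jpg")
mask = shadow_mask(img)
cv2.imwrite("shadow_mask.png", mask)
cv2.imwrite("shadow_removed.png", remove_shadow(img, mask))
```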
The penalized least squares method is a popular approach for dealing with high-dimensional data, where the number of explanatory variables is larger than the sample size. Penalized least squares offers high prediction accuracy and performs estimation and variable selection at once. It yields a sparse model, that is, a model with a small number of variables, which can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used, giving a robust penalized least squares method and a robust penalized estimator.
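One concrete way to obtain such a robust penalized estimator is to pair a Huber loss with an L1 penalty; the sketch below does this with scikit-learn's SGDRegressor on synthetic high-dimensional data containing a few outliers, as an illustration rather than the paper's exact estimator:

```python
# Minimal sketch of a robust penalized estimator: Huber loss (robust to
# outliers) combined with an L1 penalty (sparsity). Illustrative only.

import numpy as np
from sklearn.linear_model import Lasso, SGDRegressor

rng = np.random.default_rng(1)
n, p = 50, 100                         # p > n: high-dimensional setting
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 3.0                         # sparse true coefficients
y = X @ beta + rng.normal(size=n)
y[:3] += 30.0                          # a few outlying observations

lasso = Lasso(alpha=0.1).fit(X, y)                       # squared loss + L1 penalty
robust = SGDRegressor(loss="huber", epsilon=1.35,
                      penalty="l1", alpha=0.01,
                      max_iter=5000).fit(X, y)           # Huber loss + L1 penalty

print("non-zero Lasso coefficients :", int(np.sum(lasso.coef_ != 0)))
print("non-zero robust coefficients:", int(np.sum(np.abs(robust.coef_) > 1e-3)))
```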
Water/oil emulsions are considered the most refractory mixtures to separate because of the interference of the two immiscible liquids, water and oil. This research presents a study of dewatering a water/kerosene emulsion using a hydrocyclone. The effects of feed flow rate (3, 5, 7, 9, and 11 L/min), inlet water concentration of the emulsion (5%, 7.5%, 10%, 12.5%, and 15% by volume), and split ratio (0.1, 0.3, 0.5, 0.7, and 0.9) on the separation efficiency and pressure drop were studied. Dimensional analysis using the Pi theorem was applied for the first time to model the hydrocyclone based on the experimental data. The maximum separation efficiency, at a split ratio of 0.1, was 94.3% at 10% inlet water concentration.
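The sketch below only illustrates how a power-law correlation of the kind produced by Pi-theorem analysis can be fitted by least squares in log space; the observations and the resulting exponents are hypothetical placeholders, not the paper's data or model:

```python
# Minimal sketch of fitting a power-law correlation of the form
#   E = a * Q^b * C^c * R^d
# (E efficiency, Q feed flow rate, C inlet water fraction, R split ratio),
# as typically done after a Pi-theorem analysis. Data are hypothetical.

import numpy as np

# hypothetical (Q [L/min], C [-], R [-], E [-]) observations
data = np.array([
    [3,  0.050, 0.1, 0.90],
    [5,  0.075, 0.3, 0.86],
    [7,  0.100, 0.5, 0.82],
    [9,  0.125, 0.7, 0.78],
    [11, 0.150, 0.9, 0.74],
])
Q, C, R, E = data.T

# linearize: ln E = ln a + b ln Q + c ln C + d ln R, then least squares
A = np.column_stack([np.ones_like(Q), np.log(Q), np.log(C), np.log(R)])
coef, *_ = np.linalg.lstsq(A, np.log(E), rcond=None)
ln_a, b, c, d = coef
print(f"E = {np.exp(ln_a):.3f} * Q^{b:.3f} * C^{c:.3f} * R^{d:.3f}")
```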
Diabetes is an increasingly common chronic disease, affecting millions of people around the world. Diabetes diagnosis, prediction, proper treatment, and management are essential. Machine learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and the prediction of its consequences, such as hypo/hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, K-nearest neighbors (KNN), and the Random Forest. We conducted two experiments: the first used all 12 features of the dataset, where the Random Forest outperformed the others with 98.8% accuracy; the second used only five attributes.
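A minimal sketch of the three-classifier comparison is shown below, assuming the dataset is available as a CSV file with a class label column (the file name and column name are hypothetical):

```python
# Minimal sketch of comparing MLP, KNN and Random Forest on a diabetes dataset
# exported to CSV. File name and "CLASS" label column are hypothetical.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("iraqi_diabetes.csv")               # hypothetical path
X, y = df.drop(columns=["CLASS"]), df["CLASS"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

models = {
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```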
Eye detection is used in many applications, such as pattern recognition, biometrics, surveillance systems, and many other systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image based on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, any observed geometric shape is perceptually "meaningful" if its expected number of occurrences in an image with a random distribution is very small. To achieve this goal, the Gestalt principle states that humans see things either by grouping similar elements or by recognizing patterns. In general, according to the Gestalt principle, humans see things through general grouping of similar elements.
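The Helmholtz principle is usually operationalized as an a contrario test: a configuration is meaningful if its expected number of chance occurrences (number of false alarms, NFA) under a random model is very small. The sketch below illustrates this test with illustrative parameter values; it is not the paper's detector:

```python
# Minimal sketch of the Helmholtz (a contrario) meaningfulness test:
# NFA = (number of tested configurations) * P[at least k hits by chance].
# A configuration is declared meaningful if NFA < 1. Parameters are illustrative.

from scipy.stats import binom

def nfa(n_tests, n_points, k_observed, p_random):
    """binom.sf(k-1, n, p) = P[X >= k] under the random (background) model."""
    return n_tests * binom.sf(k_observed - 1, n_points, p_random)

# e.g. 1e5 candidate shapes tested; 40 of 50 edge pixels align with a candidate
# eye contour; each alignment has chance probability 1/16 under the random model
score = nfa(n_tests=1e5, n_points=50, k_observed=40, p_random=1 / 16)
print(f"NFA = {score:.2e} -> meaningful" if score < 1 else f"NFA = {score:.2e}")
```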