Objective: The study aimed to screen prepubertal children for idiopathic scoliosis at an early stage, and to identify the relationship between idiopathic scoliosis and demographic data such as age, sex, body mass index, heavy backpacks, and heart and lung diseases.
Methodology: A descriptive study of a screening program for prepubertal children in primary schools in Baghdad was conducted from 24 February to the end of October 2010. A non-probability (purposive) sample of 510 prepubertal children was chosen from primary schools on both sides of the city, in the Al-Karkh and Al-Russafa sectors. Data were collected through a specially constructed questionnaire comprising 24 multiple-choice items, together with researcher observation. The validity of the questionnaire was established through a panel of experts in the field of the study, and its reliability through a pilot study. The data were analyzed using descriptive statistics (frequencies and percentages) and inferential statistics (chi-square).
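The chi-square analysis mentioned above can be illustrated with a minimal sketch. This is a generic Pearson chi-square statistic for a contingency table, not the study's actual data; the example table is invented for illustration.

```python
import numpy as np

def chi_square(observed):
    """Pearson chi-square statistic for an r x c contingency table
    (rows = groups, columns = outcome categories)."""
    obs = np.asarray(observed, dtype=float)
    row = obs.sum(axis=1, keepdims=True)        # row totals
    col = obs.sum(axis=0, keepdims=True)        # column totals
    expected = row @ col / obs.sum()            # counts expected under independence
    return ((obs - expected) ** 2 / expected).sum()

# Hypothetical 2x2 table: e.g. scoliosis sign (yes/no) vs heavy backpack (yes/no)
stat = chi_square([[10, 20], [20, 10]])
```

The statistic is then compared against the chi-square distribution with (r-1)(c-1) degrees of freedom to judge significance.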
Results: The study revealed that most of the prepubertal children had idiopathic scoliosis; most of the sample (88.4%) were aged 10-12 years, and most were boys. There was a highly significant association with low body mass index and with carrying a school backpack, but no significant association with age, gender, or lung and heart diseases. There was also a highly significant association between the children's idiopathic scoliosis signs and the researcher's observation of body features, and Adam's forward bending test likewise showed a highly significant association with idiopathic scoliosis. The results indicate that idiopathic scoliosis deformities were significantly more amenable to early detection than the other spinal deformities (kyphosis and kyphoscoliosis).
Recommendation: The researchers recommend that the Ministry of Health activate the scoliosis screening program within school health service programs, and that the Ministry of Education involve its teachers in the screening and training program.
In this paper, the methods of weighted residuals: the Collocation Method (CM), the Least Squares Method (LSM), and the Galerkin Method (GM), are used to solve the thin film flow (TFF) equation. The weighted-residual methods were implemented to obtain approximate solutions to the TFF equation, and the accuracy of the results was checked by calculating the maximum error remainder functions (MER). Moreover, the outcomes were compared with the 4th-order Runge-Kutta method (RK4), and good agreement was achieved. All the evaluations were carried out using the computer algebra system Mathematica® 10.
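To illustrate the collocation idea on a stand-in problem (not the TFF equation itself, whose form is not given here): take y' + y = 0 with y(0) = 1, use a polynomial trial function that satisfies the initial condition, and force the residual R(x) = y' + y to vanish at a set of collocation points.

```python
import numpy as np

def collocate(points=(1/3, 2/3)):
    """Collocation sketch for y' + y = 0, y(0) = 1 on [0, 1].
    Trial: y(x) = 1 + c1*x + c2*x**2 (satisfies y(0) = 1).
    Residual: R(x) = y' + y = 1 + c1*(1 + x) + c2*(2x + x**2).
    Setting R = 0 at each collocation point gives a linear system for c1, c2."""
    A = np.array([[1 + x, 2 * x + x**2] for x in points])  # coefficients of c1, c2
    b = np.full(len(points), -1.0)                          # constant term moved right
    c1, c2 = np.linalg.solve(A, b)
    return lambda x: 1 + c1 * x + c2 * x**2

y = collocate()
# Exact solution is exp(-x); even this 2-parameter trial tracks it closely.
```

LSM and GM differ only in how the residual is driven to zero: they minimize its weighted integral over the domain instead of pinning it at discrete points.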
Unconventional techniques called "quick-look techniques" have been developed to present well-log calculations so that they can be scanned easily to identify zones that warrant more detailed analysis. These techniques, generated by service companies at the well site, are among the most useful because they provide the information needed to make decisions quickly when time is of the essence. The techniques used in this paper are:
- Apparent water resistivity (Rwa)
- Rxo/Rt
These two methods were used to evaluate the Nasiriyah oil field formations (well NS-3) to identify the hydrocarbon-bearing formations. A compu
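As background, the apparent-water-resistivity quick-look can be sketched from Archie's relation: Rwa = Rt · φ^m / a, where Rt is true resistivity, φ is porosity, and a, m are Archie constants. The values below are illustrative, not from well NS-3.

```python
def rwa(rt, phi, a=1.0, m=2.0):
    """Apparent water resistivity from Archie's relation: Rwa = Rt * phi**m / a.
    In a water-bearing zone Rwa approaches the true formation-water resistivity Rw;
    zones where Rwa is several times larger than Rw suggest hydrocarbons."""
    return rt * phi**m / a

# Hypothetical log reading: Rt = 10 ohm-m, porosity = 20%
value = rwa(10.0, 0.2)
```

The Rxo/Rt overlay works similarly: in water zones Rxo/Rt tracks Rmf/Rw, and departures flag moveable hydrocarbons.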
A digital camera with a built-in light unit is useful under low illumination but not under high illumination. As the intensity varies, the image quality does not remain good: the image becomes dark or low in intensity, so the contrast and intensity cannot be adjusted without losing information in the bright and dark regions. In this study we examined uniform illumination of images under tungsten light while varying the intensity. The results show that tungsten light gives nearly equal intensity across the three color bands (RGB) and the luminance band (L). The results depend on statistical properties, represented by voltage, power, and intensity, and on the effect of these parameters on the digital
Amplitude variation with offset (AVO) analysis is an efficient tool for hydrocarbon detection and for the identification of elastic rock properties and fluid types. It has been applied in the present study using reprocessed pre-stack 2D seismic data (1992, Caulerpa) from the north-west of the Bonaparte Basin, Australia. The AVO response along the 2D pre-stack seismic data in the Laminaria High, NW shelf of Australia, was also investigated. Three hypotheses were proposed to investigate the AVO behaviour of the amplitude anomalies, in which three different factors (fluid substitution, porosity, and thickness (wedge model)) were tested. The AVO models with the synthetic gathers were analysed using log information to find which of these is the
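A common first-pass AVO model, offered here only as background and not necessarily the exact workflow of this paper, is the two-term Shuey approximation of the reflection coefficient versus incidence angle: R(θ) ≈ A + B·sin²θ, where A is the intercept and B the gradient.

```python
import numpy as np

def shuey_two_term(A, B, theta_deg):
    """Two-term Shuey approximation: R(theta) ~ A + B * sin(theta)**2.
    A (intercept) and B (gradient) are cross-plotted to classify AVO anomalies."""
    theta = np.radians(theta_deg)
    return A + B * np.sin(theta) ** 2

# Illustrative values only: intercept 0.1, gradient -0.2, evaluated at 0-30 degrees
r = shuey_two_term(0.1, -0.2, np.array([0.0, 15.0, 30.0]))
```

Fluid substitution changes A and B, which is why intercept-gradient behaviour helps discriminate the hypotheses mentioned above.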
This paper presents the design of an algorithm that represents the design stages of a fixturing system, serving to increase the flexibility and automation of fixturing-system planning for uniform polyhedral parts. The system requires a manufacturing-feature-recognition algorithm to describe inputs such as the workpiece configuration, and a database system representing the production plan and the existing fixturing system. A knowledge-based system was also developed to find the best fixturing analysis for the workpiece: workpiece setup, workpiece constraints, and the arrangement of contacts on the workpiece.
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic
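To make the clustering step concrete, here is a minimal k-means sketch (the hard-assignment counterpart of FCM, which instead gives each point fractional membership in every cluster). The data below are synthetic, not the brain tumor dataset.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]   # random initial centroids
    for _ in range(iters):
        # squared distance of every point to every centroid -> nearest index
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):                   # converged
            break
        centroids = new
    return labels, centroids

# Two synthetic, well-separated groups of points
X = np.vstack([np.zeros((5, 2)), 10 * np.ones((5, 2))])
labels, centroids = kmeans(X, 2)
```

FCM replaces the hard `argmin` assignment with membership weights u_ij raised to a fuzzifier exponent, so each centroid is a weighted mean over all points.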
Traffic classification is the task of categorizing traffic flows into application-aware classes such as chat, streaming, VoIP, etc. Most network-traffic identification systems are based on features; these features may be static signatures, port numbers, statistical characteristics, and so on. Although current methods of data-flow classification are effective, they still lack inventive approaches to meet the needs of vital points such as real-time traffic classification, low power consumption, Central Processing Unit (CPU) utilization, etc. Our novel Fast Deep Packet Header Inspection (FDPHI) traffic classification proposal employs a 1-Dimension Convolution Neural Network (1D-CNN) to automatically learn more representational c
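The core operation of a 1D-CNN on packet-header bytes can be sketched as a bank of sliding kernels followed by a ReLU. This is a generic illustration in NumPy, not the FDPHI architecture itself; the input bytes and kernel values are invented.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid-mode 1D convolution of a byte sequence with a bank of kernels,
    followed by ReLU -- the basic feature-extraction step of a 1D-CNN.
    x: (seq_len,) input; kernels: (n_kernels, kernel_size)."""
    k = kernels.shape[1]
    out_len = (len(x) - k) // stride + 1
    windows = np.stack([x[i * stride : i * stride + k] for i in range(out_len)])
    return np.maximum(windows @ kernels.T, 0.0)   # shape (out_len, n_kernels)

# Hypothetical header bytes and a single "rising edge" kernel
x = np.arange(5.0)                     # stand-in for normalized header bytes
kernels = np.array([[-1.0, 0.0, 1.0]])
features = conv1d(x, kernels)
```

A full classifier would stack several such layers, pool, and finish with a small dense layer over the traffic classes.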