The study relied on 2006 data on the Iraqi health sector, gathered in cooperation with the Ministry of Health and the Central Bureau of Statistics and Information Technology (2007). The data included estimates of the population distribution of Baghdad province and of the country as a whole, based on the 1997 population distribution, together with an evaluation of the health sector covering health institutions, health staff, and other health services. The research aims were: to measure the amount and size of growth (increase or decrease) in health services and to compare the figures recorded for Iraq with those for Baghdad; to evaluate the effectiveness of the distribution of health supplies and services (physical and human) relative to the population distribution and the corresponding health indicators; and to assess the results against the international indicators of the World Health Organization (WHO), using analysis of variance carried out in the Statgraphics package. Significance tests of the differences between expected and actual results showed large, substantial, and statistically significant differences for all health-adequacy indicators at the 0.05 level. Comparing the health-adequacy indicators with the international WHO indicators reveals a clear imbalance at all levels.
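The abstract names analysis of variance as the testing tool but does not reproduce the indicator figures. As a minimal, self-contained sketch of that technique only (the groups below are illustrative, not the study's data), a one-way ANOVA F statistic can be computed as:

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of numeric groups."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # Between-group sum of squares: how far each group mean is from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

A large F relative to the F(k-1, n-k) critical value at the 0.05 level indicates a significant difference between groups, which is the kind of test the study applied to its adequacy indicators.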
The research, titled (Perceived Organizational Support in High Performance), examines the extent of the impact of perceived organizational support, as an explanatory variable, on high performance, as a response variable, in order to identify mechanisms that would enable the colleges of the University of Baghdad to exploit perceived organizational support in achieving the required high performance and pursuing their goals. The researcher relied on the descriptive-analytical approach in carrying out the research. An intentional sample of (70) persons was selected from the senior leadership of the colleges, represented by (deans, assistant deans, and heads of departments) …
This paper presents a fast and robust approach to English text encryption and decryption based on the Pascal matrix. The technique encrypts Arabic text, English text, or both, and shows the result of applying the method to the plain text (original message): the intelligible plain text is transformed into unintelligible ciphertext in order to secure information against unauthorized access and information theft. An encryption scheme usually uses a pseudo-random encryption key generated by an algorithm; here, all of this is done using the Pascal matrix. Encryption and decryption are implemented in MATLAB as the programming language, with Notepad++ used to write the input text.
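The abstract does not give the exact scheme, so the following is only a plausible sketch of Pascal-matrix text encryption (in Python rather than the MATLAB mentioned above): character codes are multiplied by the lower-triangular Pascal matrix, and decryption uses its exact integer inverse, whose entries are the same binomial coefficients with alternating signs.

```python
from math import comb

def pascal_matrix(n):
    # Lower-triangular Pascal matrix: P[i][j] = C(i, j) for j <= i, else 0
    return [[comb(i, j) if j <= i else 0 for j in range(n)] for i in range(n)]

def pascal_inverse(n):
    # Its exact integer inverse: entries (-1)^(i-j) * C(i, j)
    return [[(-1) ** (i - j) * comb(i, j) if j <= i else 0 for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def encrypt(text):
    # Map characters to code points, then mix them with the Pascal matrix
    v = [ord(c) for c in text]
    return matvec(pascal_matrix(len(v)), v)

def decrypt(cipher):
    # Multiply by the inverse matrix to recover the original code points exactly
    v = matvec(pascal_inverse(len(cipher)), cipher)
    return "".join(chr(x) for x in v)
```

Because the arithmetic is exact (integer), the round trip is lossless; `ord`/`chr` also cover Arabic code points, matching the bilingual claim above.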
Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all data to the graph concept frame (CFG). As is well known, DBSCAN groups points of the same kind into clusters, while points falling outside the behavior of any cluster are treated as noise or anomalies. DBSCAN can thus detect abnormal points that lie beyond a set threshold from any cluster (extreme values). However, not all anomalies are of this kind, abnormal, unusual, or far from a specific group; there is also data that does not recur but is considered abnormal relative to the known group. The analysis showed that DBSCAN using the …
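To make the cluster-versus-noise behavior described above concrete, here is a minimal pure-Python DBSCAN sketch (the CFG conversion itself is not reproduced; the parameter names and test points are illustrative only). Points whose eps-neighborhood is too sparse, and that no core point reaches, end up labeled -1, i.e. flagged as anomalies:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point (cluster id >= 0, -1 = noise)."""
    n = len(points)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def region(i):
        # Indices of all points within eps of point i (including i itself)
        return [j for j in range(n) if dist(points[i], points[j]) <= eps]

    labels = [None] * n
    cid = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = region(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # provisionally noise (may become border later)
            continue
        labels[i] = cid                 # i is a core point: start a new cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:         # noise reached from a core point -> border
                labels[j] = cid
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = region(j)
            if len(jn) >= min_pts:      # j is a core point too: keep expanding
                seeds.extend(jn)
        cid += 1
    return labels
```

Running this on four nearby points plus one distant point labels the distant point -1, which is exactly the distance-based notion of anomaly the abstract says DBSCAN handles, as opposed to the non-recurring-but-nearby cases it misses.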
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayesian (NB). This paper investigates the performance of these two classification methods using the Car Evaluation dataset. A model was built for each algorithm and the results were compared. Our experimental results indicated that the BNN classifier yields higher accuracy than the NB classifier, but it is less efficient because it is time-consuming and difficult to analyze due to its black-box implementation.
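The paper's models and the Car Evaluation data are not reproduced here, but the Naïve Bayesian side of the comparison can be sketched in a few lines of pure Python for categorical features with add-one smoothing; the toy rows below are illustrative stand-ins, not the actual dataset:

```python
import math
from collections import Counter, defaultdict

def train_nb(X, y):
    """Fit a categorical Naive Bayes model: class priors plus per-feature value counts."""
    class_counts = Counter(y)
    value_counts = defaultdict(Counter)   # (feature index, class) -> Counter of values
    feature_values = defaultdict(set)     # feature index -> set of observed values
    for row, label in zip(X, y):
        for f, v in enumerate(row):
            value_counts[(f, label)][v] += 1
            feature_values[f].add(v)
    return class_counts, value_counts, feature_values

def predict_nb(model, row):
    """Pick the class maximizing log P(class) + sum of log P(value | class)."""
    class_counts, value_counts, feature_values = model
    total = sum(class_counts.values())
    best_class, best_score = None, float("-inf")
    for c, n in class_counts.items():
        score = math.log(n / total)
        for f, v in enumerate(row):
            k = len(feature_values[f])    # add-one (Laplace) smoothing denominator
            score += math.log((value_counts[(f, c)][v] + 1) / (n + k))
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

Unlike the black-box BNN the abstract mentions, every factor in this model is an inspectable count, which is the interpretability trade-off the comparison highlights.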
The majority of systems dealing with natural language processing (NLP) and artificial intelligence (AI) can assist in making automated and automatically-supported decisions. However, these systems may face challenges and difficulties, or find it confusing, in identifying the information (characterization) required to elicit a decision by extracting or summarizing relevant information from large text documents or colossal content. When such documents are obtained online, for instance from social networking or social media sites, those sites undergo a remarkable increase in textual content. The main objective of the present study is to conduct a survey and show the latest developments in the implementation of text-mining techniques …
Missing data is one of the problems that may occur in regression models. This problem is usually handled by the deletion mechanism available in statistical software, but this method weakens statistical inference because deletion reduces the sample size. In this paper, the Expectation Maximization algorithm (EM), the Multicycle Expectation-Conditional Maximization algorithm (MC-ECM), Expectation-Conditional Maximization Either (ECME), and Recurrent Neural Networks (RNN) are used to estimate multiple regression models when explanatory variables have some missing values. Experimental datasets were generated using the Visual Basic programming language, with missing values in the explanatory variables following a missing-at-random general pattern and s…
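None of the estimation code is given in the abstract; as a toy illustration of the EM idea only (a single variable rather than the full regression models compared above), the E-step fills each missing value with its expectation under the current parameter estimate and the M-step re-estimates the parameters from the completed data:

```python
def em_impute_mean(data, n_iter=20):
    """Toy EM for a normal sample with missing values (None entries):
    E-step fills missing values with the current mean estimate,
    M-step re-estimates mean and variance from the completed sample."""
    observed = [x for x in data if x is not None]
    mu = sum(observed) / len(observed)   # initialize from observed values
    n = len(data)
    for _ in range(n_iter):
        # E-step: complete the data under the current parameter
        filled = [x if x is not None else mu for x in data]
        # M-step: maximize the complete-data likelihood (sample mean)
        mu = sum(filled) / n
    var = sum((x - mu) ** 2 for x in filled) / n
    return mu, var
```

The point of the sketch is the alternation itself; the MC-ECM and ECME variants named above refine how the M-step is split into conditional maximizations, and no sample-size reduction occurs because nothing is deleted.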
In data transmission, a change in a single bit of the received data may lead to misunderstanding or to disaster. Every bit in the transmitted information has high priority, especially in fields such as the receiver's address. The ability to detect every single-bit change is therefore a key issue in the data-transmission field.
The ordinary single-parity detection method can detect an odd number of errors efficiently, but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, showed better results but still failed to cope with an increasing number of errors.
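The failure mode just described is easy to demonstrate: a single parity bit catches any odd number of flipped bits, but an even number of flips leaves the overall parity unchanged. A minimal sketch:

```python
def parity_bit(bits):
    # Even parity: 1 if the message has an odd number of 1s, so the
    # message plus parity bit always contains an even number of 1s
    return sum(bits) % 2

def detect_error(bits, parity):
    # An error is detected when the recomputed parity disagrees
    return (sum(bits) % 2) != parity

msg = [1, 0, 1, 1]
p = parity_bit(msg)

one_flip = [0, 0, 1, 1]   # first bit flipped: detected
two_flips = [0, 1, 1, 1]  # first two bits flipped: parity unchanged, missed
```

Here `detect_error(one_flip, p)` is True while `detect_error(two_flips, p)` is False, which is exactly the even-error blind spot that motivates the stronger schemes discussed below.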
Two novel methods are suggested to detect binary bit-change errors when transmitting data over a noisy medium. These methods are: the 2D-Checksum method …
In this paper, simulation studies and applications of the New Weibull-Inverse Lomax (NWIL) distribution are presented. In the simulation studies, different sample sizes (30, 50, 100, 200, 300, and 500) were considered, with 1,000 replications per experiment. NWIL is a fat-tailed distribution, and its higher moments are not easily derived except through approximations. Nevertheless, the estimates show high precision with low variances. Finally, the usefulness of the NWIL distribution is illustrated by fitting two data sets.
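The NWIL density itself is not given in this abstract, so the code below sketches only the simulation design it describes (a grid of sample sizes, repeated draws, and the bias and variance of the resulting estimates); the exponential distribution and the mean estimator stand in as placeholders, not as the paper's actual model:

```python
import random
import statistics

def simulation_study(sampler, estimator, true_value,
                     sizes=(30, 50, 100, 200, 300, 500), reps=1000, seed=1):
    """Monte Carlo design: for each sample size, draw `reps` independent samples,
    apply `estimator` to each, and record (bias, variance) of the estimates."""
    rng = random.Random(seed)
    results = {}
    for n in sizes:
        est = [estimator([sampler(rng) for _ in range(n)]) for _ in range(reps)]
        results[n] = (statistics.mean(est) - true_value, statistics.variance(est))
    return results

# Placeholder run: exponential(1) samples, estimating the mean (true value 1.0),
# with a reduced grid so the demo stays fast.
out = simulation_study(lambda r: r.expovariate(1.0), statistics.mean, 1.0,
                       sizes=(30, 100), reps=200)
```

The pattern to look for in `out` is the one the abstract reports for NWIL: as the sample size grows, the variance of the estimates shrinks, i.e. precision improves.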
Preserving manuscripts and maintaining national identity are means of writing history, so states seek to protect them through a set of legal rules that criminalize their abuse, as well as through the development of national and international legal mechanisms. Algeria was a colonized state …