Crime is unlawful activity of any kind and is punishable by law. Crime affects a society's quality of life and economic development. With the large rise in crime globally, there is a need to analyse crime data in order to bring down the crime rate, and to encourage the police and the public to take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid crime pattern analysis and thus support the Boston Police Department's crime prevention efforts. Geographical location has been adopted as a factor in our model because it is influential in several situations, whether travelling to a specific area or living in it, and it helps people distinguish between secure and insecure environments. Geo-location, combined with new approaches and techniques, can be extremely useful in crime investigation. The aim is a comparative study of three supervised learning algorithms, where the dataset was used to train and test each model. Three machine learning classifiers, Decision Tree, Naïve Bayes and Logistic Regression, have been applied to the Boston city crime dataset to predict the type of crime that occurs in an area. The outputs of these methods are compared to find the model that best fits this type of data with the best performance. From the results obtained, the Decision Tree achieved the highest result compared to Naïve Bayes and Logistic Regression.
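As a rough illustration of the comparison described above, the following sketch trains the same three classifier families on a synthetic multi-class dataset standing in for the Boston crime records. This is not the study's code: the feature columns are random placeholders for encoded attributes such as district, hour, and day of week.

```python
# Hypothetical sketch of the three-classifier comparison (not the study's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the crime records: 3 crime-type classes
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {scores[name]:.3f}")
```

On real data, the same loop would simply be fed the encoded crime attributes instead of the synthetic matrix, and accuracy could be replaced by precision/recall per crime type.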
The aim of this study is to estimate the parameters and the reliability function of the Kumaraswamy distribution with two positive parameters (a, b > 0), a continuous probability distribution that shares many characteristics with the beta distribution while having extra advantages.
The shape of the density function of this distribution and its most important characteristics are explained, and the two parameters (a, b) and the reliability function are estimated using the maximum likelihood (MLE) and Bayes methods. Simulation experiments are conducted to examine the behaviour of the estimation methods for different sample sizes, based on the mean squared error criterion; the results show that the Bayes method is better.
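The maximum-likelihood step can be sketched as follows. This is an illustrative example, not the paper's implementation: it draws a sample from a Kumaraswamy(a, b) distribution by inverse-CDF sampling, maximises the log-likelihood with scipy, and evaluates the reliability function R(t) = (1 - t^a)^b at the estimates.

```python
# Illustrative MLE sketch for the Kumaraswamy distribution (not the paper's code).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def kumaraswamy_sample(a, b, size, rng):
    """Inverse-CDF draw: F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def neg_log_like(params, x):
    """Negative log-likelihood of the pdf a*b*x^(a-1)*(1 - x^a)^(b-1)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf          # keep the optimiser inside the valid region
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                   + (b - 1) * np.log1p(-x ** a))

x = kumaraswamy_sample(2.0, 3.0, 500, rng)      # true a = 2, b = 3
res = minimize(neg_log_like, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
a_hat, b_hat = res.x

def reliability(t):
    """Estimated reliability function R(t) = (1 - t^a)^b."""
    return (1.0 - t ** a_hat) ** b_hat

print(a_hat, b_hat, reliability(0.5))
```

A simulation study like the paper's would repeat this over many replications and sample sizes and average the squared errors of the estimates.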
The objective of the study is to diagnose the reality of the relationship between fluctuations in world oil prices and their reflection in the trends of government spending on the various economic sectors.
The research found that public expenditures contribute to the increase of national consumption through the state's purchase of consumer goods for the performance of its duties and the payment of wages to public-sector employees, and thus have a direct impact on national consumption.
The results of the standard tests showed that there is no cointegration between oil price fluctuations and government expenditure on the security sector.
The objectives of this research are to determine the actual cropping structure of the greenhouses of the Al-Watan association in order to establish the optimal use of the available economic resources and to reach an optimal crop structure for the farm that maximises profit and gross and net farm incomes. Linear programming is used to choose the optimal farm plan with the highest net income, and to identify efficient farm production plans with optimal (income - deviation) trade-offs (E-A) for the association, which take into account the margin of risk derived from each plan, using the MOTAD model, one of the alternative linear programming models.
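The linear-programming step can be illustrated with a toy farm plan. The crop count, income coefficients, and resource limits below are invented for the example; scipy's linprog chooses the areas that maximise net income subject to land and labour constraints.

```python
# Toy farm-plan LP (all coefficients invented for illustration).
from scipy.optimize import linprog

# Illustrative net income per unit area: crop 1 = 400, crop 2 = 300
c = [-400, -300]          # negate: linprog minimises, we want to maximise
A_ub = [[1, 1],           # land:   x1 + x2   <= 10 area units
        [3, 2]]           # labour: 3x1 + 2x2 <= 24 worker-days
b_ub = [10, 24]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("areas:", res.x, "net income:", -res.fun)
```

A MOTAD extension would add rows penalising the absolute deviations of income across observed years, tracing out the (E-A) frontier as the risk weight varies.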
Face recognition is a crucial biometric technology used in various security and identification applications. Ensuring accuracy and reliability in facial recognition systems requires robust feature extraction and secure processing methods. This study presents an accurate facial recognition model using a feature extraction approach within a cloud environment. First, the facial images undergo preprocessing, including grayscale conversion, histogram equalization, Viola-Jones face detection, and resizing. Then, features are extracted using a hybrid approach that combines Linear Discriminant Analysis (LDA) and the Gray-Level Co-occurrence Matrix (GLCM). The extracted features are encrypted using the Data Encryption Standard (DES) for security.
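The GLCM texture step alone can be sketched in plain numpy (the LDA projection and DES encryption stages are omitted, and the image is a random stand-in for a preprocessed face crop, so this is only a schematic of the co-occurrence computation):

```python
# Minimal GLCM texture-feature sketch (stand-in data; LDA and DES omitted).
import numpy as np

def glcm_features(img, levels=8):
    """Horizontal-offset GLCM with simple contrast/homogeneity features."""
    img = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1          # count horizontally adjacent grey-level pairs
    glcm /= glcm.sum()           # normalise counts to joint probabilities
    r, c = np.indices(glcm.shape)
    contrast = np.sum(glcm * (r - c) ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(r - c)))
    return contrast, homogeneity

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(32, 32))   # stand-in for a face crop
print(glcm_features(face))
```

A full pipeline would typically compute the GLCM at several offsets and angles and concatenate the resulting statistics with the LDA projection before encryption.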
Gross domestic product (GDP) is an important measure of the size of an economy's production. Economists use it to determine the extent of decline and growth in countries' economies; it is also used to rank countries and compare them with each other. The research aims to describe and analyse GDP for the public and private sectors during the period from 1980 to 2015 and then forecast GDP in subsequent years up to 2025. To achieve this goal, two methods were used: the first, linear and nonlinear regression; the second, time series analysis with Box-Jenkins models. The statistical packages Minitab17 and GRETLW32 were used to extract the results, and the two methods were then compared.
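The two regression approaches can be contrasted on a made-up series. The figures below are synthetic and only illustrate how a linear and a nonlinear (here quadratic) trend would be fitted and compared by mean squared error, with a forecast to 2025 from each:

```python
# Linear vs nonlinear trend fit on a synthetic GDP-like series (illustrative only).
import numpy as np

years = np.arange(1980, 2016)
t = years - 1980
rng = np.random.default_rng(2)
# Synthetic series with a curved trend plus noise (not the study's data)
gdp = 50 + 1.5 * t + 0.08 * t ** 2 + rng.normal(0, 2, t.size)

mse = {}
for degree, label in [(1, "linear"), (2, "quadratic")]:
    coef = np.polyfit(t, gdp, degree)            # least-squares trend fit
    mse[label] = np.mean((gdp - np.polyval(coef, t)) ** 2)
    forecast_2025 = np.polyval(coef, 2025 - 1980)
    print(f"{label}: MSE = {mse[label]:.2f}, 2025 forecast = {forecast_2025:.1f}")
```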
The need to create an optimal water quality management process has motivated researchers to pursue the development of prediction models. One widely used forecasting model is the seasonal autoregressive integrated moving average (SARIMA) model. In the present study, a SARIMA model was developed in R software to fit a time series of monthly fluoride content collected from six stations on the Tigris River for the period from 2004 to 2014. The adequate SARIMA model, with the least Akaike's information criterion (AIC) and mean squared error (MSE), was found to be SARIMA (2,0,0)(0,1,1). The model parameters were identified and diagnosed to derive the forecasting equations at each selected location. The correlation coefficient
Despite the absence of a clear, uniform textbook for teaching ESP in different departments and colleges, in both scientific and humanistic studies, practitioners in those departments and colleges have to teach translation as one of the important requirements for passing the English language exam. The lack of defined translation activities is a noticeable problem; the problem of teaching translation is therefore diagnosed as the students' inability to comprehend texts in English and their lack of other translation knowledge and skills.
The study aims to suggest a translation strategy and then find out its effect on ESP learners' achievement in translation. A sample of 50 students