Crime is unlawful activity of any kind and is punishable by law. Crimes affect a society's quality of life and economic development. With crime rising sharply worldwide, there is a need to analyze crime data to bring down the crime rate, enabling the police and the public to take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid crime pattern analysis and thus support the Boston police department's crime prevention efforts. Geographical location has been adopted as a factor in our model because it is influential in several situations, whether one is traveling to a specific area or living in it, and it helps people distinguish between secure and insecure environments. Geo-location, combined with new approaches and techniques, can be extremely useful in crime investigation. The aim is a comparative study of three supervised learning algorithms, in which datasets are used to train and test the models. Decision Tree, Naïve Bayes, and Logistic Regression classifiers have been applied to the Boston city crime dataset to predict the type of crime that occurs in an area. The outputs of these methods are compared to find the model that best fits this type of data with the best performance. From the results obtained, the Decision Tree achieved the highest performance compared to Naïve Bayes and Logistic Regression.
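Of the three classifiers compared above, Naïve Bayes is the simplest to illustrate. Below is a minimal, self-contained sketch of a Gaussian Naïve Bayes classifier in pure Python; it is an illustration of the technique only, not the authors' implementation, and the feature values and class labels in the usage example are hypothetical.

```python
import math
from collections import defaultdict

def train_gnb(X, y):
    """Fit per-class prior, mean, and variance for Gaussian Naive Bayes."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    stats = {}
    for c, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        # small epsilon keeps the variance strictly positive
        variances = [
            sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
            for col, m in zip(zip(*rows), means)
        ]
        stats[c] = (len(rows) / len(X), means, variances)
    return stats

def predict_gnb(stats, x):
    """Return the class with the highest log-posterior for sample x."""
    best, best_lp = None, float("-inf")
    for c, (prior, means, variances) in stats.items():
        lp = math.log(prior)
        for v, m, var in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

In practice, libraries such as scikit-learn provide all three compared classifiers behind a uniform fit/predict interface, which makes the kind of side-by-side comparison described above straightforward.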
This work implements an electroencephalogram (EEG) signal classifier. The implemented method uses orthogonal polynomials (OP) to convert the EEG signal samples to moments. A sparse filter (SF) reduces the number of converted moments to increase the classification accuracy. A support vector machine (SVM) then classifies the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods on two datasets. Each dataset is divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods, with best accuracies of 95.6% and 99.5% on the two datasets, respectively.
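The evaluation protocol described above (an 80/20 train/test split plus 5-fold cross-validation on the training portion) can be sketched in a few lines of pure Python. This is an illustrative sketch of the protocol only, not the authors' code; the data here is a placeholder list.

```python
def train_test_split(data, test_frac=0.2):
    """Hold out the last test_frac of the data for final testing."""
    cut = int(len(data) * (1 - test_frac))
    return data[:cut], data[cut:]

def k_fold_indices(n, k=5):
    """Yield (train_indices, validation_indices) for each of k folds."""
    indices = list(range(n))
    for i in range(k):
        start = i * n // k
        stop = (i + 1) * n // k
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, val
```

Each fold trains on k-1 parts and validates on the remaining part, so every training sample is used for validation exactly once; the held-out 20% is touched only for the final accuracy figure.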
The research aims to identify the tax policy strategy adopted in Iraq after the change of the tax system in 2003, and then to compare the two strategies using corporate data, both when companies were charged progressive tax rates and after the change of the system, when tax rates became fixed. It then indicates the changes in tax proceeds and examines the dimensions of the adopted tax policy: is it a tax-reform strategy or a strategy to attract investment?
The research starts from the problem of the Iraqi tax system's exposure to several changes in tax strategy from 2003 until now, which has affected the technical organization of taxes, particularly tax exemption. And these many amendments
For many years controlled shot peening was considered a surface treatment. It is now clear that the performance of controlled shot peening in terms of fatigue depends on the balance between its beneficial effects (compressive residual stress and work hardening) and its detrimental effects (surface damage and roughness).
The overall aim of this paper is to study the effects of aggressive shot peening on the fatigue life of 7075-T6 aluminum alloy. The fatigue life reduction factor (LRF) due to aggressive shot peening was established, and empirical relations were proposed to describe the behavior of LRF, roughness, and fatigue life. The benefits of shot peening in terms of fatigue life depend on the shot peening time (SPT).
Testing is a vital phase in software development, and having the right amount of test data is important for speeding up the process. Because test-case generation is a combinatorial optimization challenge, exhaustive testing may not always be practicable; limited resources, costs, and schedules also impede the testing process. Combinatorial testing (CT) can be described as a basic strategy for creating new test cases. CT has been discussed by several scholars, who have established alternative tactics depending on the interactions between parameters. Thus, an investigation into current CT methods was undertaken in order to better understand their capabilities and limitations. In this study, 97 publications were evaluated
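The parameter-interaction idea behind CT is easiest to see in its 2-way (pairwise) form: a test suite is pairwise-adequate when, for every pair of parameters, every combination of their values appears in at least one test case. The following pure-Python sketch measures that coverage; it is illustrative only and is not drawn from any of the surveyed publications.

```python
from itertools import combinations, product

def pairwise_coverage(suite, domains):
    """Fraction of all 2-way parameter-value interactions covered by suite.

    domains[i] is the list of possible values for parameter i;
    each test case in suite is a tuple with one value per parameter.
    """
    # every (param i, value vi), (param j, value vj) pair that must appear
    required = set()
    for (i, di), (j, dj) in combinations(enumerate(domains), 2):
        for vi, vj in product(di, dj):
            required.add(((i, vi), (j, vj)))
    covered = set()
    for case in suite:
        covered |= set(combinations(enumerate(case), 2)) & required
    return len(covered) / len(required)
```

For three boolean parameters, exhaustive testing needs 8 cases, but the 4-case suite (0,0,0), (0,1,1), (1,0,1), (1,1,0) already reaches full pairwise coverage, which is exactly the economy CT exploits.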
The researchers of the present study have conducted a genre analysis of two political debates between American presidential nominees in the 2016 and 2020 elections. The study analyzes the cognitive construction of political debates to evaluate the typical moves and strategies politicians use to express their communicative intentions and to reveal the linguistic manifestations of those moves and strategies. To achieve the study's aims, the researchers adopt Bhatia's (1993) framework of cognitive construction, supported by van Eemeren's (2010) pragma-dialectic framework. The study demonstrates that both presidents adhere to this genre structuring to further their political agendas. For a positive and promising image
Early detection of brain tumors is critical for enhancing treatment options and extending patient survival. Magnetic resonance imaging (MRI) provides more detailed information, such as greater contrast and clarity, than any other scanning method. Manually segmenting brain tumors from the many MRI images collected in clinical practice for cancer diagnosis is a difficult and time-consuming task. Brain tumors can be detected in MRI scans using algorithms and machine learning technologies, making the process easier for doctors, because MRI images can appear healthy even when the person has a tumor or a malignancy. Recently, deep learning techniques based on deep convolutional neural networks have been used to analyze medical images
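The building block of the deep convolutional networks mentioned above is the 2-D convolution, which slides a small learned kernel over the image to produce a feature map. A minimal pure-Python sketch of that single operation follows; it is illustrative only, and real MRI pipelines use optimized libraries and many stacked layers.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation) over a 2-D list."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for r in range(out_h):
        row = []
        for c in range(out_w):
            # element-wise product of the kernel and the image patch
            row.append(sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            ))
        out.append(row)
    return out
```

A network learns the kernel weights from labeled scans; stacking many such layers lets it build up from edges and textures to tumor-specific patterns.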
This paper presents a method to classify colored textural images of skin tissues. Since medical images have high heterogeneity, developing a reliable skin-cancer detection process is difficult, and a mono-fractal dimension is not sufficient to classify images of this nature. Multifractal-based feature vectors are suggested here as an alternative and more effective tool, and multiple color channels are used to obtain more descriptive features. Two multifractal-based sets of features are suggested. The first set measures the local roughness property, while the second set measures the local contrast property. A combination of all the features extracted from the three color models gives the highest classification accuracy of 99.4%
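The mono-fractal dimension that the paper argues is insufficient is commonly estimated by box counting: count the boxes N(s) of size s that a structure occupies at several scales and fit the slope of log N(s) against log(1/s). The sketch below illustrates that baseline idea on a 2-D point set; it is a hedged illustration, not the authors' multifractal method.

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal dimension of 2-D points by box counting:
    the least-squares slope of log N(s) versus log(1/s)."""
    samples = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        samples.append((math.log(1 / s), math.log(len(boxes))))
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    den = sum((x - mean_x) ** 2 for x, _ in samples)
    return num / den
```

Points along a line give a dimension near 1 and a filled region gives a dimension near 2; a multifractal analysis generalizes this by computing a whole spectrum of such exponents, which is what makes it more descriptive for heterogeneous textures.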
Abstract
The current research aims to analyze the questions of the literary criticism textbook for the preparatory stage according to Bloom's taxonomy. The research sample consists of (34) exercises and (45) questions. The researcher used the method of question analysis and prepared a preliminary list of criteria intended to measure the exercises, selected based on Bloom's taxonomy and the extant literature related to the topic. The criteria were exposed to a jury of experts and specialists in curricula and methods of teaching the Arabic language and obtained complete agreement. Thus, the list was adapted to become a reliable instrument in this