Crime is a threat to any nation’s security administration and jurisdiction. Crime analysis therefore becomes increasingly important, as it estimates the time and place of crimes from collected spatial and temporal data. However, traditional techniques, such as paperwork, investigative judgment, and statistical analysis, are not efficient enough to predict accurately where and when a crime will take place. When machine learning and data mining methods were deployed in crime analysis, prediction accuracy increased dramatically. In this study, various types of crime analysis and prediction using several machine learning and data mining techniques are surveyed, based on the accuracy reported in previous work, with the aim of producing a concise review of the use of these algorithms in crime prediction. This review is expected to help present such techniques to crime researchers and to support future research on developing these techniques for crime analysis, by presenting crime definitions, prediction-system challenges, and classifications together with a comparative study. The literature shows that supervised learning approaches have been used for crime prediction in more studies than other approaches, and that Logistic Regression is the most powerful method for predicting crime.
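To make the supervised-learning setting concrete, the following is a minimal sketch of a logistic-regression crime classifier of the kind the survey discusses. The features (hour of day, distance from the city centre), the labelling rule, and all of the data are illustrative assumptions invented for this example, not taken from any of the surveyed studies.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "crime" records: (hour of day, distance from centre) -> label.
# The rule "late hour and central location implies crime" is a linearly
# separable toy assumption chosen purely for illustration.
data = []
for _ in range(200):
    hour = random.uniform(0.0, 24.0)
    dist = random.uniform(0.0, 5.0)
    x1, x2 = hour / 24.0, dist / 5.0          # normalise features to [0, 1]
    label = 1 if x1 - x2 > 0.3 else 0
    data.append(((x1, x2), label))

# Plain batch gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    gw, gb = [0.0, 0.0], 0.0
    for (x, y) in data:
        err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    n = len(data)
    w = [w[0] - lr * gw[0] / n, w[1] - lr * gw[1] / n]
    b -= lr * gb / n

correct = sum((sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1)
              for (x, y) in data)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

In a real spatio-temporal study the features would come from historical incident records and accuracy would be measured on held-out data, but the fitting step is the same.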
The key objective of the study is to understand the best processes that are currently used in managing talent in Australian higher education (AHE) and design a quantitative measurement of talent management processes (TMPs) for the higher education (HE) sector.
Three qualitative multi-method approaches commonly used in empirical studies, namely brainstorming, focus group discussions, and semi-structured individual interviews, were considered. Twenty
This study examined the effect of a new nickel(II) complex with the formula [NiL2(H2O)2]·2.5EtOH, where L = bis[5-(p-nitrophenyl)-4-phenyl-1,2,4-triazole-3-dithiocarbamato hydrazide] (diaqua nickel(II) with 2.5 ethanol of crystallisation), and of the anti-cancer drug cyclophosphamide (CP), on the specific activity of two liver enzymes (GOT, GPT) in liver and kidney tissues and on the creatinine level in the kidney, using an in vivo system in female mice. The results showed inhibition of GPT and GOT enzyme activity in the liver for both the nickel(II) complex and the cyclophosphamide drug. Mice were treated with three doses (90, 180, 320 µg/mouse) for three days for each group. The liver showed the highest rate of GPT inhibition, about 97.43%, at 180 µg/mouse regarding the ki
The financial markets are one of the sectors whose data is characterized by continuous movement most of the time; the data changes constantly, so its trends are difficult to predict. This creates a need for methods, means, and techniques to support decision-making, and it pushes investors and analysts in the financial markets to use a variety of different methods to predict the direction of market movement. To reach the goal of making decisions about different investments, the support vector machine algorithm and the CART regression tree algorithm are used to classify the stock data in order to determine
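The core of the CART algorithm mentioned above is choosing the split that best separates the classes. The following sketch illustrates that single step on toy data; the feature (yesterday's return), the labels, and all values are invented for illustration and are not from the abstract's actual dataset or model.

```python
# Illustrative sketch of the CART splitting idea: pick the threshold on one
# feature that minimises the weighted Gini impurity of the two resulting sides.
def gini(labels):
    """Gini impurity of a binary label list: 2 * p * (1 - p)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Return (threshold, weighted_gini) of the best split x <= t."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: yesterday's return (%) vs. whether today closed up (1) or down (0).
returns = [-2.1, -1.3, -0.4, 0.2, 0.9, 1.5, 2.3, 3.0]
up_down = [0, 0, 0, 0, 1, 1, 1, 1]
threshold, impurity = best_split(returns, up_down)
print(threshold, impurity)  # 0.2 0.0 (a perfect split on this toy data)
```

A full CART tree applies this search recursively to each side of the split; an SVM instead fits a maximum-margin boundary, which is why the two are often compared on the same classification task.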
A Survey Study Of A Sample Of The Public Of Baghdad Governorate
The current study aimed to identify the most prominent psychological and behavioral repercussions of elderly people's exposure to news of the Corona pandemic and to determine the mechanisms of their exposure. A purposive sample was drawn from both sides of Baghdad (Al-Karkh and Al-Rasafa), and simple random sampling was adopted to choose the places where the questionnaire was distributed within the purposive sample.
The research reached several conclusions: TV news is still a primary source of information; most of the sample switch between stations to see more information about the pandemic; and the presentation of conflicting views confuses the elderly. There
The rapid development of telemedicine services and the requirements for exchanging medical information between physicians, consultants, and health institutions have made the protection of patients’ information an important priority for any future e-health system. The protection of medical information, including the cover (i.e. the medical image), has requirements that differ slightly from those for protecting other information. The cover must be preserved carefully because of its importance on the receiving side, as medical staff use this information to provide a diagnosis and save a patient's life. If the cover is tampered with, the goal of telemedicine fails. Therefore, this work provides an in
A protective shield against neutrons and gamma rays was designed using alternating layers of water and iron with pre-fixed dimensions, in order to study the possibility of attenuating both neutrons and gamma rays. The ANISN code was prepared and adapted for the shield calculation using radiation-dose calculations. Two groups of cross-sections were used for each of neutrons and gamma rays, relying on the one-dimensional transport equation solved by the discrete-ordinates method. By transforming cross-section values into values independent of the number of groups, the memory size required for the applied code was reduced, and the results obtained were in agreement with those of standard accepted document samples of cross-sections, this a
This study aimed to measure accounting conservatism, and a limited set of factors affecting it, in the annual financial reports of insurance companies listed on the Amman Stock Exchange during the period from 2005 to 2016; these factors were firm age, firm debt, and firm size.
Using the market-value-to-book-value (MV/BV) model of Beaver and Ryan (2000), the level of accounting conservatism was measured. The study found that the insurance companies listed on the ASE exercise accounting conservatism when preparing their financial reports. When testing the effect of the factors (the age of the
A skip list data structure is essentially a simulation of a binary search tree. Skip list algorithms are simpler and faster and use less space. Conceptually, this data structure uses parallel sorted linked lists. Searching in a skip list is faster than searching in a regular sorted linked list. Because a skip list is a two-dimensional data structure, it is implemented using a two-dimensional network of nodes with four pointers. The search, insert, and delete operations each take expected O(log n) time. The skip list can be modified to implement the order-statistic operations RANK and SEARCH BY RANK while maintaining the same expected time. Keywords: skip list, parallel linked list, randomized algorithm, rank.
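The operations described above can be sketched as follows. This is a minimal classic skip list with forward-pointer towers and coin-flip level selection; it is a simplification of the four-pointer, doubly-linked variant the abstract describes, and all names and constants are illustrative.

```python
import random

random.seed(42)
MAX_LEVEL = 8   # cap on tower height
P = 0.5         # coin-flip probability of growing a tower by one level

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)   # one pointer per level

class SkipList:
    def __init__(self):
        self.head = Node(None, MAX_LEVEL)     # sentinel head node
        self.level = 0

    def _random_level(self):
        lvl = 0
        while random.random() < P and lvl < MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        # Descend from the top level, moving right while keys are smaller.
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        # Record, at each level, the last node before the insertion point.
        update = [self.head] * (MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

sl = SkipList()
for k in [3, 7, 1, 9, 5]:
    sl.insert(k)
print(sl.search(7), sl.search(4))  # True False
```

Because each tower's height is geometric with parameter P, the expected number of pointers followed by a search is O(log n), which is what makes the structure competitive with balanced binary search trees.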