Crime is a threat to any nation's security administration and jurisdiction. Crime analysis therefore becomes increasingly important, because it assigns the time and place of incidents based on collected spatial and temporal data. However, older techniques, such as paperwork, investigative judges, and statistical analysis, are not efficient enough to predict accurately the time and location at which crimes take place. When machine learning and data mining methods were deployed in crime analysis, however, analysis and prediction accuracy increased dramatically. In this study, various types of criminal analysis and prediction using several machine learning and data mining techniques are surveyed and introduced, based on the accuracy percentages reported in previous work, with the aim of producing a concise review of the use of these algorithms in crime prediction. It is expected that this review will help present such techniques to crime researchers and support future research that develops these techniques for crime analysis, by presenting crime definitions, prediction-system challenges, and classifications together with a comparative study. The literature shows that supervised learning approaches have been used in more crime-prediction studies than other approaches, and that Logistic Regression is the most powerful method for predicting crime.
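As a minimal sketch of the kind of supervised model the survey highlights, the snippet below trains a logistic regression classifier on spatio-temporal features. The feature names, city coordinates, and labels are illustrative assumptions, not data from any surveyed study.

```python
# Hedged sketch: logistic regression on synthetic spatio-temporal features.
# All values below (coordinates, label rule) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
hour = rng.integers(0, 24, n)           # hour of day of each incident
lat = rng.normal(33.3, 0.05, n)         # hypothetical city latitude
lon = rng.normal(44.4, 0.05, n)         # hypothetical city longitude
y = (hour >= 18).astype(int)            # toy label: evening incidents

X = np.column_stack([hour, lat, lon])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```

On real crime records the features would be the recorded time and geocoded location, and the accuracy percentages compared in the survey would come from held-out test sets like the one above.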
Poly(3-hydroxybutyrate) (PHB) is a typical microbial bio-polyester reserve material, known as a "green plastic", which is produced under controlled conditions as an intracellular product of the secondary metabolism of diverse gram-negative/positive bacteria and various extremophilic archaea. Although PHB has properties that make it very attractive, it is too expensive to compete with conventional, non-biodegradable plastics. This research examined the feasibility of using watermelon-derived media as an alternative substrate for PHB synthesis under stress conditions. The results, including the extraction of most nutrients, indicated that watermelon seeds contain a high content of nutrients that makes them a promisi
Data Driven Requirement Engineering (DDRE) represents a vision for a shift from the static traditional methods of requirements engineering to dynamic, data-driven, user-centred methods. Given the data available and the increasingly complex requirements of software systems, whose functions must adapt to changing needs to gain the trust of their users, an approach embedded in a continuous software engineering process is needed. This need drives the emergence of new challenges in the discipline of requirements engineering. The problem addressed in this study was that discrepancies in the data hampered the needs-elicitation process, so that the developed software ultimately contained discrepancies and could not meet the need
In this paper, the tree regression model and the negative binomial regression model are compared. These models represent two types of statistical methods: the first, a non-parametric statistic, is tree regression, which aims to divide the data set into subgroups; the second, a parametric statistic, is negative binomial regression, which is usually used with medical data, especially with large sample sizes. The methods were compared according to the mean squared error (MSE), using simulation experiments and taking different sample
Image compression is a suitable technique for reducing the storage space of an image, increasing the available storage on a device, and speeding up transmission. In this paper, a new idea for image compression is proposed to improve the performance of the Absolute Moment Block Truncation Coding (AMBTC) method, which depends on Weber's law condition to distinguish uniform blocks (i.e., blocks with low, constant detail) from non-uniform blocks in the original image. All elements in the bitmap of each uniform block are then set to zero. After that, a lossless method, Run Length encoding, is used to further compress the bits that represent the bitmaps of these uniform blocks. Via this simple idea, the result is improving
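A minimal sketch of the scheme described above is shown below. The Weber-fraction threshold is an illustrative assumption; the paper's actual threshold and block size are not given here.

```python
# Hedged sketch of AMBTC with a Weber's-law uniformity test and RLE.
# The threshold WEBER_T is an assumption, not the paper's value.
import numpy as np

WEBER_T = 0.03  # assumed Weber-fraction threshold for "uniform" blocks

def ambtc_block(block):
    """AMBTC of one block; uniform blocks get an all-zero bitmap."""
    mean = block.mean()
    # Weber's law condition: contrast small relative to background intensity
    if mean > 0 and (block.max() - block.min()) / mean < WEBER_T:
        return mean, mean, np.zeros(block.shape, dtype=np.uint8)
    bitmap = (block >= mean).astype(np.uint8)
    hi = block[bitmap == 1].mean()
    lo = block[bitmap == 0].mean() if (bitmap == 0).any() else hi
    return hi, lo, bitmap

def run_length_encode(bits):
    """Lossless run-length coding of a flattened bitmap."""
    runs, prev, count = [], bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            prev, count = b, 1
    runs.append((int(prev), count))
    return runs

block = np.full((4, 4), 120.0)      # a toy uniform 4x4 block
hi, lo, bm = ambtc_block(block)
print(run_length_encode(bm.ravel()))
```

Because the zeroed bitmaps of uniform blocks produce a single long run, the run-length stage compresses exactly the redundancy the Weber test exposes.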
Gypseous soils are spread across several regions of the world, including Iraq, where they cover more than 28.6% [1] of the surface of the country. These soils, with their high gypsum content, cause various problems in construction and strategic projects. As water flows through the soil mass, the permeability and chemical composition of these soils vary over time owing to the solubility and leaching of gypsum. In this study, a soil of 36% gypsum content was taken from a location about 100 km (62 mi) southwest of Baghdad, sampled from a depth of 0.5 - 1 m below the natural ground surface, and mixed with 3%, 6%, and 9% of Copolymer and Styrene-butadiene Rubber to improve t
Adsorption techniques are widely used to remove certain classes of pollutants from wastewater, and phenolic compounds represent one of the problematic groups. Na-Y zeolite has been synthesized from locally available Iraqi kaolin clay. Characterization of the prepared zeolite was carried out by XRD and by surface-area measurement using N2 adsorption. Both the synthetic Na-Y zeolite and the kaolin clay were tested for adsorption of 4-Nitro-phenol in batch-mode experiments. Maximum removal efficiencies of 90% and 80% were obtained using the prepared zeolite and the kaolin clay, respectively. Kinetics and equilibrium adsorption isotherms were investigated, and the investigations showed that both the Langmuir and Freundlich isotherms fit the experimental data quite well. On the
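The isotherm fitting mentioned above can be sketched with nonlinear least squares. The equilibrium data points below are illustrative placeholders, not the paper's measured 4-Nitro-phenol values.

```python
# Hedged sketch: fitting Langmuir and Freundlich isotherms to
# illustrative (not measured) equilibrium data with scipy.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # equilibrium conc., mg/L (assumed)
qe = np.array([8.1, 15.2, 22.4, 29.8, 34.9])  # adsorbed amount, mg/g (assumed)

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: monolayer capacity qmax, affinity KL."""
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: capacity KF, intensity exponent 1/n."""
    return KF * Ce ** (1 / n)

pL, _ = curve_fit(langmuir, Ce, qe, p0=[40, 0.1])
pF, _ = curve_fit(freundlich, Ce, qe, p0=[5, 2])
print("Langmuir  qmax=%.1f  KL=%.3f" % tuple(pL))
print("Freundlich KF=%.2f  n=%.2f" % tuple(pF))
```

Comparing the residuals (or R^2) of the two fitted curves is how one judges, as the abstract does, that both isotherms describe the data well.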
The assessment of data quality from different sources can be considered a key challenge in supporting effective geospatial data integration and promoting collaboration in mapping projects. This paper presents a methodology for assessing positional and shape quality for authoritative large-scale data, such as Ordnance Survey (OS) UK data and General Directorate for Survey (GDS) Iraq data, and for Volunteered Geographic Information (VGI), such as OpenStreetMap (OSM) data, with the intention of assessing possible integration. It is based on the measurement of discrepancies among the datasets, addressing positional accuracy and shape fidelity, using standard procedures and also directional statistics. Line feature comparison has been und
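One way the directional statistics mentioned above can be applied to line-feature comparison is sketched below: the bearings of matched polyline segments are compared via a circular mean. The coordinates are hypothetical, and this is a generic directional-statistics sketch, not the paper's exact procedure.

```python
# Hedged sketch: circular (directional-statistics) comparison of the
# segment bearings of two matched polylines from different datasets.
import numpy as np

def bearings(line):
    """Azimuths (radians, from north) of each segment of an (x, y) polyline."""
    d = np.diff(line, axis=0)
    return np.arctan2(d[:, 0], d[:, 1])

def mean_direction(angles):
    """Circular mean of a set of angles, robust to wrap-around at ±pi."""
    return np.arctan2(np.sin(angles).sum(), np.cos(angles).sum())

# Hypothetical matched road centrelines; the second is a pure translation
osm  = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 19.0]])
auth = np.array([[0.0, 1.0], [10.0, 11.0], [20.0, 20.0]])

diff = bearings(osm) - bearings(auth)
print("mean bearing discrepancy (deg):", np.degrees(mean_direction(diff)))
```

A pure positional shift leaves all bearings unchanged, so the mean bearing discrepancy here is zero: directional statistics isolate shape and orientation discrepancies from positional offsets.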
This research aims to investigate the color distribution of a huge sample of 613654 galaxies from the Sloan Digital Sky Survey (SDSS). Those galaxies are at a redshift of 0.001 - 0.5 and have magnitudes of g = 17 - 20. Five subsamples of galaxies at redshifts of (0.001 - 0.1), (0.1 - 0.2), (0.2 - 0.3), (0.3 - 0.4) and (0.4 - 0.5) were extracted from the main sample. The color distributions (u-g), (g-r) and (u-r) were produced and analysed using a Matlab code for the main sample as well as all five subsamples. Then bimodal Gaussian fits to the color distributions were carried out using minimum chi-square in Microsoft Office Excel. The results showed that the color distributions of the main sample and
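The minimum chi-square bimodal fit described above can be sketched as follows. The mixture parameters and bin count are illustrative assumptions, not the SDSS measurements, and Python stands in for the paper's Matlab/Excel workflow.

```python
# Hedged sketch: bimodal Gaussian fit to a colour histogram by
# minimum chi-square, on synthetic (not SDSS) colour data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Toy colour sample: a "blue" and a "red" population (assumed parameters)
colors = np.concatenate([rng.normal(1.2, 0.2, 3000),
                         rng.normal(2.4, 0.3, 2000)])
counts, edges = np.histogram(colors, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

def bimodal(x, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

def chi2(p):
    model = bimodal(centers, *p)
    err = np.sqrt(np.maximum(counts, 1))   # Poisson error per bin
    return np.sum(((counts - model) / err) ** 2)

res = minimize(chi2, x0=[300, 1.2, 0.2, 150, 2.4, 0.3], method="Nelder-Mead")
a1, m1, s1, a2, m2, s2 = res.x
print(f"blue peak at {m1:.2f}, red peak at {m2:.2f}")
```

The two fitted peak positions separate the blue and red galaxy populations, which is the bimodality the abstract tracks across redshift subsamples.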