With the escalation of cybercriminal activities, the demand for forensic investigations into these crimes has grown significantly. However, the concept of systematic pre-preparation for potential forensic examinations during the software design phase, known as forensic readiness, has only recently gained attention. Against the backdrop of surging urban crime rates, this study aims to conduct a rigorous and precise analysis and forecast of crime rates in Los Angeles, employing advanced Artificial Intelligence (AI) technologies. This research combines diverse datasets encompassing crime history, various socio-economic indicators, and geographical locations to attain a comprehensive understanding of how crimes manifest within the city. Leveraging sophisticated AI algorithms, the study focuses on scrutinizing subtle periodic patterns and uncovering relationships among the collected datasets. Through this comprehensive analysis, the research endeavors to pinpoint crime hotspots, detect fluctuations in frequency, and identify underlying causes of criminal activities. Furthermore, the research evaluates the efficacy of the AI model in generating productive insights and providing accurate predictions of future criminal trends. These predictive insights could reshape the strategies of law enforcement agencies, enabling them to adopt proactive and targeted approaches. Emphasizing ethical considerations, this research ensures the continued feasibility of AI use while safeguarding individuals' constitutional rights, including privacy. The outcomes of this research are expected to furnish actionable intelligence for law enforcement, policymakers, and urban planners, aiding in the identification of effective crime prevention strategies. By harnessing the potential of AI, this research contributes to the promotion of proactive strategies and data-driven models in crime analysis and prediction, offering a promising avenue for enhancing public security in Los Angeles and other metropolitan areas.
A robust sign language recognition system can break down the barriers that separate hearing and speaking members of society from those who cannot speak. A novel fast recognition system with low computational cost for the digits of American Sign Language (ASL) is introduced in this research. Different image processing techniques are used to optimize and extract the shape of the hand fingers in each sign. The feature extraction stage includes determining an optimal threshold on a statistical basis, then recognizing the gap area in the zero sign and calculating the height of each finger in the other digits. The classification stage depends on the gap area in the zero sign and the number of opened fingers in the other signs as well as
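The finger-height cue described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the per-finger column windows (`col_ranges`), the height measure, and the `min_height` cutoff are all assumed names and simplifications.

```python
import numpy as np

def finger_heights(binary_hand, col_ranges):
    """For each (start, stop) column window -- a hypothetical per-finger
    region -- return the finger height: the number of rows from the
    topmost foreground pixel down to the bottom of the mask."""
    heights = []
    rows = binary_hand.shape[0]
    for start, stop in col_ranges:
        strip = binary_hand[:, start:stop]
        fg_rows = np.where(strip.any(axis=1))[0]
        heights.append(int(rows - fg_rows[0]) if fg_rows.size else 0)
    return heights

def count_open_fingers(heights, min_height):
    """A finger counts as 'open' when its measured height exceeds a
    threshold -- the classification cue the abstract describes."""
    return sum(1 for h in heights if h >= min_height)
```

On a toy 10x10 mask with one tall and one short finger, `count_open_fingers` with a mid-range threshold reports a single open finger.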
This investigation aims to study some properties of lightweight aggregate concrete reinforced with mono or hybrid fibers of different sizes and types. The lightweight aggregate considered was Light Expanded Clay Aggregate, while the adopted fibers included hooked, straight, polypropylene, and glass fibers. Eleven lightweight concrete mixes were considered: one plain concrete mix (without fibers), two reinforced mixes with a single fiber type (hooked or straight), six reinforced mixes with double hybrid fibers, and two reinforced mixes with triple hybrid fibers. Hardened concrete properties were investigated in this study.
The health of the roadway pavement surface is considered one of the major issues for safe driving. Pavement surface condition is usually characterized by the micro- and macro-texture, which enhance the friction between the pavement surface and vehicle tires while providing proper drainage of heavy rainfall water. Measurement of surface texture is not yet standardized, and many different techniques are implemented by road agencies around the world based on the availability of equipment, skilled technicians, and funds. An attempt has been made in this investigation to model the surface macro-texture measured by the sand patch method (SPM), and the surface micro-texture measured by the outflow time (OFT) and British pendulum
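For reference, the sand patch method yields a mean texture depth (MTD) from the volume of sand spread into a circular patch, MTD = 4V / (πD²) (the standard volumetric relation, e.g. ASTM E965); a minimal helper:

```python
import math

def mean_texture_depth(volume_mm3, diameter_mm):
    """Mean texture depth (mm) from the sand patch method:
    MTD = 4V / (pi * D^2), where V is the volume of sand spread on the
    surface (mm^3) and D is the diameter of the circular patch (mm)."""
    return 4.0 * volume_mm3 / (math.pi * diameter_mm ** 2)
```

For example, 25 000 mm³ of sand spread into a 200 mm diameter patch gives an MTD of about 0.80 mm.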
Facial defects resulting from neoplasms, congenital or acquired malformations, or trauma can be restored with a facial prosthesis using different materials and retention methods to achieve a life-like appearance and function. A nasal prosthesis can re-establish the aesthetic form and anatomic contours of mid-facial defects, often more effectively than surgical reconstruction, as the nose is a relatively immobile structure. For successful results, factors such as harmony, texture, color matching, and blending of the tissue interface with the prosthesis are important. The aim of this study is to describe the non-surgical rehabilitation with a nasal prosthesis of an Iraqi patient who underwent rhinectomy as a result of squamous cell carcinoma of the
Two unsupervised classifiers for optimum multithresholding are presented: fast Otsu and k-means. These nonparametric methods provide an efficient procedure to separate the regions (classes) by selecting optimum levels, either on the gray levels of the image histogram (the Otsu classifier) or on the gray levels of the image intensities (the k-means classifier), which represent the threshold values of the classes. To compare the experimental results of these classifiers, the computation time is recorded, along with the number of iterations needed for the k-means classifier to converge to the optimum class centers. The variation in the recorded computation time for the k-means classifier is discussed.
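The two classifiers can be sketched as follows. This is a minimal illustration of the general techniques, not the paper's optimized "fast" variant: the Otsu routine shows the two-class case (the multithreshold case searches over tuples of levels), and the k-means routine is a plain 1-D implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu on the histogram: pick the gray level that
    maximizes the between-class variance (two classes shown here)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * prob[:t]).sum() / w0
        m1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on the intensities; the resulting sorted centers
    (or the midpoints between them) act as threshold values."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = values[labels == j].mean()
    return np.sort(centers)
```

On a bimodal image both methods place the separation between the two intensity modes; their relative computation time is what the abstract compares.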
Numerical simulations are carried out to assess the quality of circular and square apodized apertures for observing extrasolar planets. On a logarithmic scale, the normalized point spread function (PSF) of these apertures shows a sharp decline in the radial frequency components, reaching 10^-36 and 10^-34 respectively, demonstrating promising results. This decline is associated with an increase in the full width of the PSF; a trade-off must therefore be made between this full width and the radial frequency components to overcome the problem of imaging extrasolar planets.
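The trade-off can be reproduced numerically: the PSF is the squared magnitude of the Fourier transform of the aperture, and tapering the aperture suppresses far sidelobes while broadening the central lobe. The sketch below is illustrative only; the Gaussian taper, grid size, and radii are assumptions, not the paper's apodization profile.

```python
import numpy as np

def psf(aperture, pad=4):
    """Normalized point spread function: squared magnitude of the
    Fourier transform of the (zero-padded) aperture function."""
    n = aperture.shape[0] * pad
    field = np.fft.fftshift(np.fft.fft2(aperture, s=(n, n)))
    p = np.abs(field) ** 2
    return p / p.max()

# Circular aperture on a grid, with and without a Gaussian apodizer.
N = 64
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(x, y)
circ = (r <= N // 4).astype(float)            # hard-edged aperture
apod = circ * np.exp(-(r / (N // 8)) ** 2)    # Gaussian-tapered aperture

p_hard, p_apod = psf(circ), psf(apod)
n = p_hard.shape[0]
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
far = np.hypot(xx, yy) > n // 8               # region outside the core
# Apodization lowers the far sidelobes at the cost of a wider core.
```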
In this research two algorithms are applied: the Fuzzy C-Means (FCM) algorithm and the hard K-means (HKM) algorithm, in order to determine which performs better. The two algorithms are applied to a dataset collected from the Ministry of Planning on water turbidity in five areas of Baghdad, to identify which of these areas have the least turbid (clearest) water and which months of the year show the least turbidity in each specified area.
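The key difference between the two algorithms is that FCM assigns each sample a graded membership in every cluster, whereas hard K-means makes an exclusive assignment. A minimal 1-D FCM sketch (the quantile initialization and fuzzifier m = 2 are illustrative choices, not the paper's settings):

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100):
    """Fuzzy C-Means on a 1-D sample.  u[i, j] is the membership of
    point i in cluster j; centers are membership-weighted means."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers)
```

Applied to turbidity-like readings with two regimes, the returned centers land on the two levels; comparing these soft assignments against hard K-means labels is the kind of evaluation the abstract describes.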
Segmentation is the process of partitioning a digital image into different parts depending on texture, color, or intensity, and can be used in different fields to segment and isolate an area of interest. In this work, images of the Moon were obtained through observations at the Department of Astronomy and Space, College of Science, University of Baghdad, using telescopes and a widely used CCD camera. Different segmentation methods were used to segment lunar craters. Craters are formed when celestial objects such as asteroids and meteorites crash into the surface of the Moon. Thousands of craters appear on the Moon's surface, ranging in size from a meter to many kilometers; they provide insights into the age and ge
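An intensity-based segmentation of the kind described can be sketched as thresholding followed by connected-component labeling. This is a generic illustration, not the paper's method; real crater detection typically adds shape and edge analysis on top of such a labeling step.

```python
import numpy as np

def label_regions(mask):
    """Label 4-connected foreground regions in a boolean mask with a
    simple flood fill; returns (labels, count)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < h and 0 <= b < w and mask[a, b] and labels[a, b] == 0:
                        labels[a, b] = count
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return labels, count

def segment_regions(image, threshold):
    """Intensity thresholding followed by labeling: each connected
    bright (or, with the test inverted, dark) region is a candidate
    segment, e.g. a crater rim or floor."""
    return label_regions(image > threshold)
```

On a synthetic frame with two bright patches, the routine returns two labeled regions.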