With the escalation of cybercriminal activities, the demand for forensic investigations into these crimes has grown significantly. However, the concept of systematic pre-preparation for potential forensic examinations during the software design phase, known as forensic readiness, has only recently gained attention.

Against the backdrop of surging urban crime rates, this study aims to conduct a rigorous and precise analysis and forecast of crime rates in Los Angeles, employing advanced Artificial Intelligence (AI) technologies. This research amalgamates diverse datasets encompassing crime history, various socio-economic indicators, and geographical locations to attain a comprehensive understanding of how crimes manifest within the city. Leveraging sophisticated AI algorithms, the study focuses on scrutinizing subtle periodic patterns and uncovering relationships among the collected datasets. Through this comprehensive analysis, the research endeavors to pinpoint crime hotspots, detect fluctuations in frequency, and identify underlying causes of criminal activities. Furthermore, the research evaluates the efficacy of the AI model in generating productive insights and providing accurate predictions of future criminal trends. These predictive insights are poised to reshape the strategies of law enforcement agencies, enabling them to adopt proactive and targeted approaches. Emphasizing ethical considerations, this research ensures the continued feasibility of AI use while safeguarding individuals' constitutional rights, including privacy. The outcomes of this research are anticipated to furnish actionable intelligence for law enforcement, policymakers, and urban planners, aiding in the identification of effective crime prevention strategies. By harnessing the potential of AI, this research contributes to the promotion of proactive strategies and data-driven models in crime analysis and prediction, offering a promising avenue for enhancing public security in Los Angeles and other metropolitan areas.
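As a hedged illustration of the hotspot-identification step mentioned above, the sketch below clusters incident coordinates with DBSCAN. The abstract does not name a specific algorithm; the dataset, coordinates, and parameter values here are hypothetical placeholders, not the study's pipeline.

```python
# A minimal sketch of density-based hotspot detection, assuming
# DBSCAN over incident coordinates. All data below are synthetic.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical incident coordinates (latitude, longitude) near downtown LA.
rng = np.random.default_rng(0)
hotspot_center = np.array([34.05, -118.25])
incidents = hotspot_center + rng.normal(scale=0.01, size=(200, 2))

# eps is in degrees here (roughly 500 m); a production pipeline would
# project to metric coordinates or use haversine distance instead.
clusters = DBSCAN(eps=0.005, min_samples=10).fit(incidents)
n_hotspots = len(set(clusters.labels_)) - (1 if -1 in clusters.labels_ else 0)
print(f"Detected {n_hotspots} candidate hotspot(s)")
```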
String matching is one of the essential problems in computer science, and a variety of computer applications provide string matching services to their end users. The remarkable growth in the amount of data created and stored by modern computational devices motivates researchers to develop ever more powerful methods for coping with this problem. In this research, the Quick Search string matching algorithm is implemented in a multi-core environment using OpenMP directives, which can be employed to reduce the overall execution time of the program. English text, protein, and DNA data types are utilized to examine the effect of parallelizing the Quick Search string matching algorithm on multi-core systems.
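As a minimal sketch of the core algorithm (the study's implementation uses C-style OpenMP parallelization, which is not reproduced here), the serial Python version below shows the Quick Search shift logic:

```python
# A minimal serial sketch of the Quick Search (Sunday) algorithm.
# The shift table is keyed on the character immediately *after* the
# current window, which is what distinguishes Quick Search from
# Boyer-Moore-style bad-character shifts.
def quick_search(text: str, pattern: str) -> list[int]:
    """Return all start indices at which `pattern` occurs in `text`."""
    n, m = len(text), len(pattern)
    # shift[c] = m - (index of last occurrence of c in pattern);
    # characters absent from the pattern default to m + 1.
    shift = {c: m - i for i, c in enumerate(pattern)}
    matches, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            matches.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return matches

print(quick_search("GATTACAGATTACA", "GATTACA"))  # -> [0, 7]
```

A parallel version would typically split the text into chunks that overlap by m - 1 characters, so that no occurrence spanning a chunk boundary is missed; each OpenMP thread then scans its chunk independently.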
Combating the COVID-19 epidemic has emerged as one of the most pressing healthcare challenges the world has ever seen. COVID-19 cases must be diagnosed accurately and quickly so that patients receive proper medical treatment and the pandemic is contained. Chest radiography imaging approaches have proven more successful at detecting coronavirus than the reverse transcription polymerase chain reaction (RT-PCR) approach. Transfer learning is well suited to categorizing patterns in medical images, since the number of available medical images is limited. This paper presents a hybrid convolutional neural network (CNN) and recurrent neural network (RNN) architecture for the diagnosis of COVID-19 from chest X-rays. The deep transfer methods used were VGG19, DenseNet121, …
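A hedged sketch of one way such a CNN-RNN hybrid can be wired up in Keras is shown below. VGG19 is named in the abstract; the LSTM head, layer sizes, and binary output are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal CNN-RNN hybrid sketch: frozen VGG19 features, with the 7x7
# spatial grid fed to an LSTM as a 49-step sequence. Head design and
# hyperparameters are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: freeze the CNN feature extractor

model = models.Sequential([
    base,                                   # -> (7, 7, 512) feature map
    layers.Reshape((49, 512)),              # spatial grid as a sequence
    layers.LSTM(128),                       # RNN head over the sequence
    layers.Dense(1, activation="sigmoid"),  # e.g. COVID-19 vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```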
Human interaction technology based on motion capture (MoCap) systems is a vital tool for human kinematics analysis, with applications in clinical settings, animation, and video games. We introduce a new method for analyzing and estimating dorsal spine movement using a MoCap system. The data captured by the MoCap system are processed and analyzed to estimate the motion kinematics of three primary regions: the shoulders, the spine, and the hips. This work contributes a non-invasive and anatomically guided framework that enables region-specific analysis of spinal motion, which could serve as a clinical alternative to invasive measurement techniques. The hierarchy of our model consists of five main levels: motion capture system settings, marker data, …
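As a minimal sketch of the kind of region-level kinematics such a pipeline computes, the snippet below derives a joint angle from three marker positions. The marker names and coordinates are hypothetical; the paper's own marker set and model hierarchy are not reproduced here.

```python
# A minimal sketch: angle at a mid marker between two body segments,
# computed from 3D marker positions. Inputs are synthetic placeholders.
import numpy as np

def segment_angle(p_top, p_mid, p_bottom):
    """Angle (degrees) at p_mid between segments p_mid->p_top and p_mid->p_bottom."""
    u = np.asarray(p_top) - np.asarray(p_mid)
    v = np.asarray(p_bottom) - np.asarray(p_mid)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical shoulder, mid-spine, and hip markers (metres).
print(segment_angle([0.0, 0.10, 1.5], [0.0, 0.0, 1.1], [0.0, 0.05, 0.7]))
```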
The purpose of this paper is to discriminate between the poems of individual poets based on the characteristics and attributes of the Arabic letters. The Arabic letters are divided into four categories, and letter frequencies are recorded in a multidimensional contingency table in which each dimension has two or more levels; the contingency coefficient is then calculated.
The sample consists of six poets from different historical eras, with five poems per poet. The method was implemented in MATLAB; its classification efficiency is 53% over the whole sample and between 90% and 95% for each poet's own poems.
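A minimal sketch of the contingency-coefficient computation described above is given below, assuming letter categories versus poets as the table dimensions. The counts are fabricated placeholders purely to show the calculation (the paper itself uses MATLAB).

```python
# Pearson's contingency coefficient C = sqrt(chi2 / (chi2 + n)) over a
# letter-category frequency table. Counts below are illustrative only.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical letter-category frequencies (rows = poets, cols = categories).
table = np.array([[120, 80, 45, 30],
                  [ 95, 90, 60, 25]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
C = np.sqrt(chi2 / (chi2 + n))  # Pearson's contingency coefficient
print(f"chi2 = {chi2:.2f}, C = {C:.3f}")
```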
Phase change materials (PCMs) such as paraffin wax can be used to store or release large amounts of energy at the temperature at which their solid–liquid phase change occurs. Paraffin wax used in latent heat thermal energy storage (LHTES) has low thermal conductivity. In this study, the thermal conductivity of paraffin wax is enhanced by adding different mass concentrations (1 wt.%, 3 wt.%, and 5 wt.%) of TiO2 nanoparticles of approximately 10 nm diameter. The phase change temperature is found to vary with the addition of TiO2 nanoparticles to the paraffin wax. The thermal conductivity of the composites is found to decrease with increasing temperature. The increase in thermal conductivity …
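The abstract reports measured conductivity enhancement; as a purely illustrative aside (not the paper's method), the classical Maxwell effective-medium model below estimates how composite conductivity scales with nanoparticle loading. All property values are typical literature figures, not the paper's data.

```python
# Maxwell effective-medium sketch for a nanoparticle-in-wax composite.
# Property values are generic literature figures for illustration.
def maxwell_k_eff(k_m, k_p, phi):
    """Effective conductivity of spheres (k_p) in a matrix (k_m) at volume fraction phi."""
    return k_m * (k_p + 2*k_m + 2*phi*(k_p - k_m)) / (k_p + 2*k_m - phi*(k_p - k_m))

def wt_to_vol_fraction(w, rho_p, rho_m):
    """Convert particle mass fraction to volume fraction."""
    return (w / rho_p) / (w / rho_p + (1 - w) / rho_m)

k_wax, k_tio2 = 0.25, 8.4          # W/(m.K), typical values
rho_wax, rho_tio2 = 900.0, 4230.0  # kg/m^3, typical values
for w in (0.01, 0.03, 0.05):       # the 1, 3, 5 wt.% loadings studied
    phi = wt_to_vol_fraction(w, rho_tio2, rho_wax)
    print(f"{w:.0%} -> phi = {phi:.4f}, "
          f"k_eff = {maxwell_k_eff(k_wax, k_tio2, phi):.3f} W/(m.K)")
```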
... Show MoreThe finishing operation of the electrochemical finishing technology (ECF) for tube of steel was investigated In this study. Experimental procedures included qualitative
and quantitative analyses for surface roughness and material removal. Qualitative analyses utilized finishing optimization of a specific specimen in various design and operating conditions; value of gap from 0.2 to 10mm, flow rate of electrolytes from 5 to 15liter/min, finishing time from 1 to 4min and the applied voltage from 6 to 12v, to find out the value of surface roughness and material removal at each electrochemical state. From the measured material removal for each process state was used to verify the relationship with finishing time of work piece. Electrochemi
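Measured material removal in electrochemical processes is conventionally checked against Faraday's law of electrolysis, a standard first-order model; a hedged sketch follows. The current value is a hypothetical placeholder, since the abstract reports voltages (6 to 12 V), not currents.

```python
# Theoretical dissolved mass via Faraday's law: m = I * t * M / (z * F).
# The 10 A current is an assumed illustrative value.
def faraday_mass_removed(current_a, time_s, molar_mass_g, valence):
    """Theoretical dissolved mass (grams) for a given current and time."""
    F = 96485.0  # Faraday constant, C/mol
    return current_a * time_s * molar_mass_g / (valence * F)

# Iron dissolving as Fe2+ over a 4 min finishing pass at an assumed 10 A.
print(f"{faraday_mass_removed(10.0, 4 * 60, 55.85, 2):.3f} g")
```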
In this study, gold nanoparticles were synthesized in a single-step biosynthetic method using an aqueous leaf extract of Thymus vulgaris L., which acts as both reducing and capping agent. The nanoparticles were characterized using UV-visible spectroscopy, X-ray diffraction (XRD), and FTIR. The as-prepared gold nanoparticles (AuNPs) exhibited a surface plasmon resonance centered at 550 nm. The XRD pattern showed four strong, intense peaks, indicating the crystalline nature and face-centered cubic structure of the gold nanoparticles. The average crystallite size of the AuNPs was 14.93 nm. Field emission scanning electron microscopy (FESEM) was used to …
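Crystallite sizes of this kind are typically obtained from XRD peak broadening via the Scherrer equation; the sketch below shows that computation. The peak width and angle are hypothetical inputs chosen to give a size in the reported range, not the paper's measured values.

```python
# Scherrer equation: D = K * lambda / (beta * cos(theta)),
# with beta the peak FWHM in radians and theta the Bragg angle.
import numpy as np

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from an XRD peak (Cu K-alpha by default)."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical Au (111) reflection: 2theta ~ 38.2 deg, FWHM ~ 0.56 deg.
print(f"D = {scherrer_size_nm(0.56, 38.2):.2f} nm")  # ~15 nm
```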
Interface evaluation has been the subject of extensive study and research in human-computer interaction (HCI). According to specialists in the field, it is a crucial tool for promoting the idea that user engagement with computers should resemble the casual conversations and interactions between individuals. Researchers in HCI initially focused on making various computer interfaces more usable, thus improving the user experience. This study's objective was to evaluate and enhance the user interface of the University of Baghdad's online academic management system using effectiveness, time-based efficiency, and satisfaction rates derived from a task-based questionnaire process. We made a variety of interfaces …
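A minimal sketch of the two quantitative metrics named above follows, assuming the standard ISO-style definitions: effectiveness as the task completion rate, and time-based efficiency as goals achieved per unit time averaged over users and tasks. The task results are fabricated placeholders.

```python
# Effectiveness and time-based efficiency over a users x tasks matrix.
# completed[i][j] = 1 if user i finished task j; times_s holds task times.
import numpy as np

completed = np.array([[1, 1, 0],
                      [1, 1, 1]])
times_s = np.array([[35.0, 50.0, 90.0],
                    [28.0, 41.0, 62.0]])

effectiveness = completed.mean() * 100                # % of tasks completed
time_based_efficiency = (completed / times_s).mean()  # goals per second

print(f"Effectiveness: {effectiveness:.1f}%")
print(f"Time-based efficiency: {time_based_efficiency:.4f} goals/s")
```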
In aspect-based sentiment analysis (ABSA), implicit aspect extraction is a fine-grained task that aims to extract the hidden aspects in the in-context meaning of online reviews. Previous methods have shown that handcrafted rules interpolated into a neural network architecture are promising for this task. In this work, we reduce the need for crafted rules, which must otherwise be laboriously re-articulated for each new training domain or text dataset, by proposing a new architecture based on multi-label neural learning. The key idea is to capture the semantic regularities of the explicit and implicit aspects using word embedding vectors and to interpolate them as a front layer in a Bidirectional Long Short-Term Memory (Bi-LSTM) network. First, we …
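A hedged Keras sketch of the multi-label Bi-LSTM idea is shown below. The vocabulary size, embedding dimension, aspect count, and the trainable embedding layer are illustrative assumptions; the paper interpolates pre-trained word-embedding regularities as the front layer.

```python
# Multi-label Bi-LSTM sketch: an embedding front layer feeding a
# bidirectional LSTM, with one independent sigmoid output per aspect.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, EMB_DIM, MAX_LEN, N_ASPECTS = 20000, 100, 80, 12

model = models.Sequential([
    layers.Embedding(VOCAB, EMB_DIM),
    layers.Bidirectional(layers.LSTM(64)),
    # Sigmoid (not softmax): each aspect label is predicted independently,
    # which is what makes this multi-label rather than multi-class.
    layers.Dense(N_ASPECTS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True)])

# Dummy batch of token IDs to build and exercise the model.
dummy = np.random.randint(0, VOCAB, size=(2, MAX_LEN))
print(model.predict(dummy).shape)  # -> (2, N_ASPECTS)
```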