Image recognition is one of the most important applications of information processing. In this paper, a comparison of eight three-level image-recognition techniques based on the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT) is presented: stationary-stationary-stationary (sss), stationary-stationary-wavelet (ssw), stationary-wavelet-stationary (sws), stationary-wavelet-wavelet (sww), wavelet-stationary-stationary (wss), wavelet-stationary-wavelet (wsw), wavelet-wavelet-stationary (wws), and wavelet-wavelet-wavelet (www). These techniques are compared according to the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), compression ratio (CR), and the coding noise e(n) at the third level. The two techniques with the best results (sww and www) are selected, image recognition is then applied to both using the Euclidean distance and the Manhattan distance, and the outcomes are compared. It is concluded that the sww technique is better than the www technique in image recognition because it achieves a higher match performance (100%) for both the Euclidean and Manhattan distances.
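The final matching step described above can be sketched as a nearest-neighbor search under the two distance metrics. This is a minimal illustration, assuming the wavelet stage has already reduced each image to a feature vector; the function name and toy data are hypothetical, not from the paper.

```python
import numpy as np

def match_features(query, database):
    """Return the best-matching reference index under both metrics.

    query    : 1-D feature vector of the test image
    database : 2-D array, one feature vector per reference image
    """
    diffs = database - query
    euclidean = np.sqrt((diffs ** 2).sum(axis=1))   # L2 distance
    manhattan = np.abs(diffs).sum(axis=1)           # L1 distance
    return int(euclidean.argmin()), int(manhattan.argmin())

# toy example: three reference vectors, the query is closest to index 1
db = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
q = np.array([1.1, 0.9])
print(match_features(q, db))  # → (1, 1)
```

A 100% match rate, as reported for sww, means both metrics return the correct reference index for every test image.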
The research problem arose from the researchers' sense of the importance of Digital Intelligence (DI), a basic requirement that helps students engage in the digital world and use technology and digital techniques in a disciplined way, since students' ideas are highly susceptible to influence at this stage in light of modern technology. The research aims to determine the level of DI among university students using Artificial Intelligence (AI) techniques. To this end, the researchers built a measure of DI. In its final form, the measure consisted of (24) items distributed among (8) main skills, and the validity and reliability of the tool were confirmed. It was applied to a sample of 139 male and female students who were chosen
Surface modeling using the Bezier technique is one of the more important tools in computer-aided geometric design (CAD). The aim of this work is to design and implement a multi-patch Bezier free-form surface. The technique contributes effectively to technology domains and to the ship, aircraft, and car industries, and is also widely used in mold making. This work includes the synthesis of these patches in a method that allows the control points to be shared for merging the patches, and the confluence of patches along sides of similar degree despite the degree variation per patch. The model has been implemented to represent the surface. The interior data of the desired surfaces designed by M
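A single tensor-product Bezier patch can be evaluated with the de Casteljau algorithm, applied first along one parameter direction and then the other. This is a generic sketch of the standard construction, not the paper's implementation; the bilinear example patch is hypothetical.

```python
import numpy as np

def de_casteljau(points, t):
    """Evaluate a 1-D Bezier curve at parameter t by repeated interpolation."""
    pts = np.array(points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_patch(grid, u, v):
    """Evaluate a tensor-product Bezier patch at (u, v).

    grid : (m+1) x (n+1) x 3 array of control points.
    Collapse each row of control points in u, then the result in v.
    """
    rows = np.array([de_casteljau(row, u) for row in grid])
    return de_casteljau(rows, v)

# simplest case: a bilinear patch spanning the unit square at z = 0
grid = np.array([[[0, 0, 0], [1, 0, 0]],
                 [[0, 1, 0], [1, 1, 0]]], dtype=float)
print(bezier_patch(grid, 0.5, 0.5))  # centre of the square
```

Merging patches as the abstract describes amounts to making adjacent patches share their boundary row of control points, which guarantees positional (C0) continuity along the common side.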
The major objective of this study is to establish a network of Ground Control Points (GCPs) that can be used as a reference for any engineering project. A Total Station (Nikon Nivo 5.C), an optical level, and a Garmin navigator GPS were used to perform traversing. The traverse was measured using nine points covering the selected area irregularly. Near the Civil Engineering Department at Baghdad University, Al-Jadiriya, an attempt has been made to assess the accuracy of the GPS by comparing its data with those obtained from the Total Station. The average error of this method is 3.326 m, with the highest coefficient of determination (R2) of 0.077 observed in Northing. While in
Attention-Deficit Hyperactivity Disorder (ADHD), a neurodevelopmental disorder affecting millions of people globally, is defined by symptoms of hyperactivity, impulsivity, and inattention that can significantly affect an individual's daily life. The diagnostic process for ADHD is complex, requiring a combination of clinical assessments and subjective evaluations. However, recent advances in artificial intelligence (AI) techniques have shown promise in predicting ADHD and providing an early diagnosis. In this study, we explore the application of two AI techniques, K-Nearest Neighbors (KNN) and Adaptive Boosting (AdaBoost), to predicting ADHD using the Python programming language. The classification accuracies obtained w
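In Python, both classifiers are available in scikit-learn, so a comparison like the one described can be set up in a few lines. This sketch uses a synthetic stand-in dataset, since the study's ADHD data is not given in the abstract; the parameter values shown are illustrative defaults, not the authors' settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# synthetic binary-classification data standing in for the ADHD dataset
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

print("KNN accuracy     :", accuracy_score(y_te, knn.predict(X_te)))
print("AdaBoost accuracy:", accuracy_score(y_te, ada.predict(X_te)))
```

Comparing the two held-out accuracies is the essence of the study's evaluation; on real questionnaire data, feature scaling would typically be added for KNN, since it is distance-based.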
Localization is an essential requirement in wireless sensor networks (WSNs) and relies on several types of measurements. This paper focuses on positioning in 3-D space using time-of-arrival (TOA)-based distance measurements between the target node and a number of anchor nodes. Centralized localization is assumed, and either RF, acoustic, or UWB signals are used for the distance measurements. The problem is treated with iterative gradient descent (GD), and an iterative GD-based algorithm for localizing moving sensors in a WSN is proposed. To localize a node in 3-D space, at least four anchors are needed; in this work, however, five anchors are used for better accuracy. In GD localization of a moving sensor, the algo
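The core GD step can be illustrated by minimizing the squared range-residual cost over the unknown 3-D position. This is a generic sketch of TOA-based GD localization with five anchors, assuming noiseless ranges; the anchor layout, learning rate, and iteration count are illustrative, not taken from the paper.

```python
import numpy as np

def gd_localize(anchors, dists, x0, lr=0.05, iters=500):
    """Gradient descent on f(x) = sum_i (||x - a_i|| - d_i)^2
    for an unknown 3-D position x, given anchor positions and TOA ranges."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                      # (n, 3) offsets to anchors
        r = np.linalg.norm(diff, axis=1)        # current estimated ranges
        grad = 2 * ((r - dists) / r) @ diff     # gradient of the cost
        x -= lr * grad
    return x

# five anchors: one more than the 3-D minimum of four, for better accuracy
anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                    [0, 0, 10], [10, 10, 10]], dtype=float)
target = np.array([3.0, 4.0, 5.0])
dists = np.linalg.norm(anchors - target, axis=1)  # ideal TOA-derived ranges
print(gd_localize(anchors, dists, x0=[5, 5, 5]))  # converges near the target
```

For a moving sensor, the previous position estimate is the natural initial guess `x0` for the next time step, which is what makes the iterative GD formulation attractive for tracking.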
This booklet contains the basic data and graphs for COVID-19 in Iraq during the first three months of the pandemic (24 February to 19 May 2020). It was prepared to help researchers working on this health problem.
Recently, a new secure steganography algorithm has been proposed, namely, the secure Block Permutation Image Steganography (BPIS) algorithm. The algorithm consists of five main steps: convert the secret message into a binary sequence, divide the binary sequence into blocks, permute each block using a key-based randomly generated permutation, concatenate the permuted blocks into a permuted binary sequence, and then use a plane-based least-significant-bit (LSB) approach to embed the permuted binary sequence into a BMP image file. The performance of the algorithm was given a preliminary evaluation by estimating the peak signal-to-noise ratio (PSNR) of the stego image for a limited number of experiments comprising hiding
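The five steps above can be sketched on a flat 8-bit pixel array. This is a simplified illustration, not the BPIS implementation: it uses a single key-seeded permutation for all blocks and plain sequential LSB embedding rather than the paper's plane-based approach on BMP files.

```python
import random
import numpy as np

def bpis_embed(message: bytes, pixels: np.ndarray, key: int, block: int = 8):
    """Toy sketch of the BPIS pipeline: message -> bit sequence ->
    fixed-size blocks -> key-based permutation of each block ->
    LSB embedding of the concatenated permuted stream."""
    bits = [int(b) for byte in message for b in f"{byte:08b}"]
    bits += [0] * (-len(bits) % block)            # pad to a whole block
    rng = random.Random(key)                      # key-seeded PRNG
    perm = rng.sample(range(block), block)        # permutation derived from key
    stream = [bits[i - i % block + perm[i % block]] for i in range(len(bits))]
    out = pixels.copy()
    for i, bit in enumerate(stream):              # overwrite least-significant bits
        out[i] = (out[i] & 0xFE) | bit
    return out

cover = np.arange(64, dtype=np.uint8)             # stand-in "image"
stego = bpis_embed(b"Hi", cover, key=42)
print(np.abs(stego.astype(int) - cover.astype(int)).max())  # at most 1: only LSBs change
```

Because only the least-significant bit of each carrier byte is altered, per-pixel distortion is bounded by 1, which is why the stego-image PSNR stays high in such evaluations.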
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. In the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes – a task that has become critically important in a world increasingly reliant on digital med