This study investigated the feasibility of using crushed waste glass in water filtration by means of a pilot plant constructed at the Al-Wathba water treatment plant in Baghdad. Different depths and grain sizes of crushed glass were used as mono and dual media with sand and porcelanite in the filtration process. The mathematical model of Tufenkji and Elimelech was used to evaluate the initial collection efficiency η of these filters. The results indicated that the collection efficiency varied inversely with the filtration rate. For the mono-media filters, the theoretical values ηth were greater than the practical values ηprac calculated from the experimental work. For the glass filter, ηprac was obtained by multiplying ηth by a factor of 0.945, whereas this factor was 0.714 for the sand filter. All the dual filters showed ηth less than ηprac, and the dual filter of 35 cm porcelanite over 35 cm glass showed the highest collection efficiency. To obtain ηprac for the dual glass-and-sand filter, ηth is multiplied by 1.374, while for the dual porcelanite-and-glass filters the factors were 1.168 and 1.204.
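For reference, the empirical correction reported above can be written compactly as a filter-specific factor applied to the theoretical single-collector efficiency, with the values taken directly from the results quoted in the abstract:

\[
\eta_{\mathrm{prac}} = f\,\eta_{\mathrm{th}},\qquad
f \approx
\begin{cases}
0.945 & \text{glass (mono)}\\
0.714 & \text{sand (mono)}\\
1.374 & \text{glass and sand (dual)}\\
1.168,\ 1.204 & \text{porcelanite and glass (dual)}
\end{cases}
\]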
The use of credit cards for online purchases has increased significantly in recent years, but so have fraudulent activities, which cost businesses and consumers billions of dollars annually. Detecting fraudulent transactions is crucial for protecting customers and maintaining the integrity of the financial system. However, fraudulent transactions are far fewer than legitimate ones, which results in a data imbalance that degrades classification performance and biases model evaluation. This paper focuses on handling imbalanced data by proposing a new weighted oversampling method, wADASMO, to generate minor-class data (i.e., fraudulent transactions). The proposed method is based on …
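The abstract is cut off before the method details. Purely as context, a minimal sketch of the kind of minor-class oversampling such methods build on, using imbalanced-learn's standard SMOTE and ADASYN (not the proposed wADASMO), might look like this:

```python
# Baseline oversampling for an imbalanced fraud-detection setting.
# NOT the paper's wADASMO method; it only illustrates minor-class generation.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE, ADASYN

# Synthetic stand-in for a fraud dataset: ~1% positive (fraud) class.
X, y = make_classification(n_samples=10_000, weights=[0.99, 0.01], random_state=0)
print("original:", Counter(y))

for sampler in (SMOTE(random_state=0), ADASYN(random_state=0)):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))
```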
Transmitting and receiving data consume the most resources in Wireless Sensor Networks (WSNs), and the energy supplied by the battery is the most important resource affecting a sensor node's, and hence the WSN's, lifespan. Because sensor nodes run on a limited battery, energy saving is necessary. Data aggregation is a procedure for eliminating redundant transmissions; it provides fused information to the base stations, which improves energy effectiveness and increases the lifespan of energy-constrained WSNs. In this paper, a Perceptually Important Points Based Data Aggregation (PIP-DA) method for Wireless Sensor Networks is suggested to reduce redundant data before sending them to the base station …
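For readers unfamiliar with Perceptually Important Points, the sketch below shows the generic PIP selection idea (iteratively keeping the point farthest from the line joining its bounding PIPs). It is an illustration of the underlying technique, not the PIP-DA aggregation protocol itself:

```python
# Generic PIP selection by vertical distance; assumed parameters, not PIP-DA.
def select_pips(series, k):
    """Return indices of k perceptually important points of a 1-D series."""
    n = len(series)
    pips = [0, n - 1]                      # always keep the two endpoints
    while len(pips) < min(k, n):
        pips.sort()
        best_idx, best_dist = None, -1.0
        for left, right in zip(pips, pips[1:]):
            for i in range(left + 1, right):
                # vertical distance from the line joining the bounding PIPs
                slope = (series[right] - series[left]) / (right - left)
                expected = series[left] + slope * (i - left)
                dist = abs(series[i] - expected)
                if dist > best_dist:
                    best_idx, best_dist = i, dist
        if best_idx is None:
            break
        pips.append(best_idx)
    return sorted(pips)

# Example: reduce 10 sensor readings to 5 representative points.
readings = [20.1, 20.2, 20.1, 25.7, 25.9, 26.0, 21.3, 21.2, 21.4, 21.3]
print(select_pips(readings, 5))
```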
The aim of this paper is to measure the characteristic properties of the 3 m radio telescope installed on the Baghdad University campus. The measurements cover some of the fundamental parameters at 1.42 GHz, principally the system noise temperature, signal-to-noise ratio and sensitivity, half-power beam width, aperture efficiency, and effective area. These parameters are estimated from observations of different radio sources, such as Cas-A, the full Moon, the sky background, and solar drift scans. From the results of these observations, the parameters are found to be approximately 64 K, 1.2, 0.9 Jansky, 3.7°, 0.54, and 3.8 m², respectively. These parameter values have a vital effect on quantitative …
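As a rough consistency check, and assuming the standard relation between aperture efficiency and effective area (not stated explicitly in the abstract), the quoted figures agree for a 3 m dish:

\[
A_e = \eta_{\mathrm{ap}}\,A_{\mathrm{phys}} = 0.54 \times \pi\,(1.5\ \mathrm{m})^2 \approx 3.8\ \mathrm{m}^2 .
\]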
Recently, the spread of fake news and misinformation across most fields has resonated widely in societies. Combating this phenomenon by detecting misleading information manually is tedious, time-consuming, and impractical, so it is necessary to rely on artificial intelligence to solve the problem. This study therefore uses deep learning techniques to detect Arabic fake news based on an Arabic dataset called AraNews, which contains news articles covering multiple fields such as politics, economy, culture, and sports. A hybrid deep neural network has been proposed to improve accuracy. This network focuses on the properties of both the Text-Convolution Neural Network …
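The truncation cuts off the second branch of the hybrid network, so the sketch below is only one plausible reading: a convolutional text branch combined with a recurrent branch over a shared embedding. The layer sizes and the LSTM branch are assumptions for illustration, not the paper's architecture:

```python
# Hypothetical hybrid text classifier (Conv1D + BiLSTM branches), assumed layout.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, seq_len, embed_dim = 20_000, 200, 128   # assumed hyperparameters

inp = layers.Input(shape=(seq_len,), dtype="int32")
emb = layers.Embedding(vocab_size, embed_dim)(inp)

# Convolutional branch: local n-gram features.
conv = layers.Conv1D(128, 5, activation="relu")(emb)
conv = layers.GlobalMaxPooling1D()(conv)

# Recurrent branch: longer-range context.
rnn = layers.Bidirectional(layers.LSTM(64))(emb)

merged = layers.concatenate([conv, rnn])
out = layers.Dense(1, activation="sigmoid")(merged)  # fake vs. real

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```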
An intrusion detection system (IDS) is key to a comprehensive cybersecurity solution against any attack, and artificial intelligence techniques have been combined with the features of the IoT to improve security. In response, this research formulates an IDS technique driven by a modified random forest algorithm for IoT. To this end, the target is one-hot encoded, bootstrapping is performed with less redundancy, a hybrid feature-selection method is added to the random forest algorithm, and its ranking stage is modified. Three datasets were used in this research: IoTID20, UNSW-NB15, and IoT-23. The results are compared across the three datasets mentioned …
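A minimal scikit-learn sketch of the unmodified baseline this abstract builds on (one-hot encoding, feature selection, and a plain random forest) is shown below; the paper's modified bootstrapping and ranking stages are not reproduced, and the column names are assumptions:

```python
# Illustrative IDS pipeline; column names and k are assumed, not from the paper.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier

categorical = ["proto", "service"]                  # hypothetical columns
numeric = ["duration", "src_bytes", "dst_bytes"]

pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", "passthrough", numeric),
])

ids_pipeline = Pipeline([
    ("prep", pre),
    ("select", SelectKBest(mutual_info_classif, k=10)),  # k must not exceed encoded feature count
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
# Usage (X as a DataFrame with the columns above):
# ids_pipeline.fit(X_train, y_train); ids_pipeline.predict(X_test)
```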
Software-defined networking (SDN) presents novel security and privacy risks, including distributed denial-of-service (DDoS) attacks. In response to these threats, machine learning (ML) and deep learning (DL) have emerged as effective approaches for quickly identifying and mitigating anomalies. To this end, this research employs various classification methods, including support vector machines (SVMs), K-nearest neighbors (KNN), decision trees (DTs), a multilayer perceptron (MLP), and convolutional neural networks (CNNs), and compares their performance. The CNN exhibits the highest training accuracy at 97.808% yet the lowest prediction accuracy at 90.08%, whereas the SVM demonstrates the highest prediction accuracy of 95.5%. As such, an …
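The classical classifiers named above can be compared with a few lines of scikit-learn; the sketch below is a generic comparison harness (dataset loading, hyperparameters, and the CNN variant are omitted), not the paper's experimental setup:

```python
# Generic comparison of the classical classifiers listed in the abstract.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}

def compare(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
        pipe.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, pipe.predict(X_te)))
```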
The recent emergence of sophisticated large language models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital media …
Geomechanical modelling and simulation are introduced to accurately determine the combined effects of hydrocarbon production and changes in rock properties due to geomechanical effects. The reservoir geomechanical model is concerned with stress-related issues and with rock failure in compression, shear, and tension induced by changes in reservoir pore pressure due to depletion. In this paper, a rock mechanical model is constructed in geomechanical mode, and reservoir geomechanics simulations are run for a carbonate gas reservoir. The study begins with an assessment of the data, construction of 1D rock mechanical models along the well trajectory, generation of a 3D mechanical earth model, and running …
This work proposes a new video buffer framework (VBF) to achieve a favorable quality of experience (QoE) for video streaming in cellular networks. The proposed framework consists of three main parts: a client selection algorithm, a categorization method, and a distribution mechanism. The client selection algorithm, named the independent client selection algorithm (ICSA), is proposed to select the best clients, those with the least interfering effect on video quality, and to recognize clients' urgency based on their buffer occupancy level. In the categorization method, each frame in the video buffer is given a specific number for better estimation of the playout outage probability, so that the framework can efficiently handle the many frames arriving from different video …
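To make the buffer-occupancy idea concrete, the toy sketch below ranks clients by how close their buffers are to running out (lower occupancy means higher urgency). It is an assumed illustration of the general principle, not the ICSA algorithm:

```python
# Toy buffer-occupancy urgency ranking; illustrative only, not ICSA.
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    buffer_frames: int      # frames currently buffered
    buffer_capacity: int    # maximum frames the client buffer can hold

def urgency(c: Client) -> float:
    """Lower occupancy ratio -> closer to playout outage -> higher urgency."""
    return 1.0 - c.buffer_frames / c.buffer_capacity

clients = [Client(1, 40, 60), Client(2, 5, 60), Client(3, 25, 60)]
for c in sorted(clients, key=urgency, reverse=True):
    print(c.cid, round(urgency(c), 2))
```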