A fault is an error that affects system behaviour. A software metric is a value that indicates the degree to which software processes work properly and where faults are more likely to occur. In this research, we study the effects of removing redundancy and of log transformation based on threshold values for identifying fault-prone classes of software. The study also compares the metric values of the original dataset with those obtained after removing redundancy and applying log transformation. An e-learning dataset and a system dataset were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets, and from 1%-10% and 0%-4% after removing redundancy and log transformation, respectively. These results directly impacted the number of detected classes, which ranged between 1-20 and 1-7 for the original datasets and between 1-7 and 0-3 after removing redundancy and log transformation. The skewness of the dataset decreased after applying the proposed model. The classes classified as faulty need more attention in subsequent versions in order to reduce the fault ratio, or should be refactored to increase the quality and performance of the current version of the software.
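The preprocessing step is only summarized above; a minimal sketch of redundancy removal plus a log transform, assuming the metrics sit in a pandas DataFrame and using log(1 + x) so zero-valued metrics stay defined (the column name and toy values are hypothetical, not from the paper):

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

def log_transform_metrics(df, cols):
    out = df.drop_duplicates().copy()   # redundancy removal: drop duplicate rows
    for c in cols:
        out[c] = np.log1p(out[c])       # log(1 + x) keeps zero-valued metrics defined
    return out

df = pd.DataFrame({"wmc": [3, 3, 50, 7, 120, 400]})  # toy, right-skewed metric values
clean = log_transform_metrics(df, ["wmc"])
print(skew(df["wmc"]), skew(clean["wmc"]))           # skewness drops after the transform
```

Because software metrics are typically right-skewed, the log compresses large values, which is consistent with the reduced skewness the abstract reports.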
Deep learning convolutional neural networks have been widely used to recognize and classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before training the classification model. However, not every model produces good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such type, and accurate pronunciation is required when learning to read the Qur’an. Thus, processing the pronunciation data and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to …
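The abstract breaks off before the details, but padding for CNN voice input usually means bringing every clip to a common length; a minimal sketch, assuming zero-padding of raw 1-D audio arrays (the 16000-sample target length is an illustrative assumption, not the paper's value):

```python
import numpy as np

def pad_clip(clip, target_len=16000):
    if len(clip) >= target_len:
        return clip[:target_len]                    # truncate clips that are too long
    pad = np.zeros(target_len - len(clip))          # zero-pad clips that are too short
    return np.concatenate([clip, pad])

batch = [np.random.randn(n) for n in (12000, 16000, 20000)]
padded = np.stack([pad_clip(c) for c in batch])     # uniform shape ready for a CNN
print(padded.shape)                                 # (3, 16000)
```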
Conventional clustering algorithms are incapable of managing and analyzing the rapid growth of data generated from different sources. Parallel clustering is one of the robust solutions to this problem. The Apache Hadoop architecture is one of the ecosystems that provide the capability to store and process data in a distributed and parallel fashion. In this paper, a parallel model is designed to run the k-means clustering algorithm in the Apache Hadoop ecosystem by connecting three nodes, one as the server (name) node and the other two as client (data) nodes. The aim is to speed up the processing of the massive sc…
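Hadoop deployment details aside, parallel k-means has a characteristic map/reduce shape; a minimal single-machine sketch of that shape (NumPy arrays stand in for HDFS blocks, and in the paper's setup the mapper would run on the two data nodes while the reducer runs on the name node):

```python
import numpy as np

def mapper(points, centroids):
    # Emit (nearest-centroid index, point) pairs, as a Hadoop map task would.
    dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
    return zip(dists.argmin(axis=1), points)

def reducer(pairs, k, dim):
    # Aggregate per-centroid sums and counts, then recompute the centroids.
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for idx, p in pairs:
        sums[idx] += p
        counts[idx] += 1
    return sums / np.maximum(counts, 1)[:, None]

pts = np.random.rand(100, 2)
cents = pts[:3].copy()
for _ in range(10):                          # in practice, iterate until convergence
    cents = reducer(mapper(pts, cents), k=3, dim=2)
print(cents)
```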
Blockchain is an innovative technology that has gained interest across all sectors in the era of digital transformation; it manages transactions and saves them in a database. With increasing financial transactions and a rapidly developing society with growing businesses, many people seeking the dream of a financially independent life stray from large corporations and organizations to form startups and small businesses. Recently, the increasing demand on employees and institutions to prepare and manage contracts, papers, and the verification process, together with human error, led to the emergence of the smart contract. The smart contract has been developed to save time and provide more confidence while dealing, as well a…
Classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, Bayesian networks, etc., and have been used repeatedly in many fields. These algorithms suffer from the problem of imbalanced data, where some classes have far more instances than others. Imbalanced data result in poor performance and a bias toward the majority class at the expense of the others. In this paper, we propose three techniques based on Over-Sampling (O.S.) for processing an imbalanced dataset, redistributing it, and converting it into a balanced dataset. These techniques are Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border…
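The paper's Improved SMOTE variant is not described in this excerpt; as the baseline it builds on, here is standard SMOTE from the imbalanced-learn library applied to a synthetic 95/5 class split:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# A deliberately imbalanced two-class toy dataset (95% vs. 5%).
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
print("before:", Counter(y))

# SMOTE synthesizes new minority samples by interpolating between neighbors.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))    # both classes now equally represented
```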
One of the most important features of the Amazon Web Services (AWS) cloud is that a program can be run and accessed from any location, its results can be monitored remotely, many images can be stored, and computation is faster. This work proposes a face-detection classification model based on the AWS cloud, aiming to classify faces into two classes, a non-permission class and a permission class, by training on a real dataset collected from our cameras. The proposed cloud-based Convolutional Neural Network (CNN) system was used to share computational resources for Artificial Neural Networks (ANNs) to reduce redundant computation. The test system uses Internet of Things (IoT) services through our ca…
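The abstract gives no architecture details, so the following binary-classification CNN in Keras is only a plausible sketch (the layer sizes and the 64x64 RGB input are assumptions, not the paper's network):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # assumed input size for face crops
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),      # permission vs. non-permission
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```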
Future wireless systems aim to provide higher transmission data rates, improved spectral efficiency, and greater capacity. In this paper, a spectrally efficient two-dimensional (2-D) parallel code division multiple access (CDMA) system is proposed for generating and transmitting 2-D CDMA symbols through a 2-D inter-symbol interference (ISI) channel to increase transmission speed. The 3-D Hadamard matrix is used to generate the 2-D spreading codes required to spread the two-dimensional data for each user row-wise and column-wise. Quadrature amplitude modulation (QAM) is used as the data-mapping technique due to the increased spectral efficiency it offers. The new structure was simulated using MATLAB, and a comparison of performance for ser…
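Row-wise and column-wise Hadamard spreading can be illustrated compactly (Python here rather than the paper's MATLAB; the 4x4 block size, code indices, and BPSK-like symbols are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(4)                                  # orthogonal +/-1 spreading codes
row_code, col_code = H[1], H[2]                  # one user's row and column codes
pattern = np.outer(col_code, row_code)           # the user's 2-D chip pattern

data = np.random.choice([-1, 1], size=(4, 4))    # 2-D block of data symbols
spread = np.kron(data, pattern)                  # each symbol becomes a 4x4 chip block

# Despread one symbol: correlate its chip block with the user's pattern.
est = np.sum(spread[:4, :4] * pattern) / pattern.size
print(est == data[0, 0])                         # True: the symbol is recovered
```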
Nasiriyah oilfield is located in the southern part of Iraq and represents one of its promising oilfields. The Mishrif Formation is considered the main oil-bearing carbonate reservoir in Nasiriyah oilfield, containing heavy oil (API 25°). The study aimed to calculate and model the petrophysical properties and to build a three-dimensional geological model for the Mishrif Formation, thus estimating the oil reserve accurately and detecting the optimum locations for hydrocarbon production.
Fourteen vertical oil wells were adopted for constructing the structural and petrophysical models. The available well-log data, including density, neutron, sonic, gamma ray, self-potential, caliper, and resistivity logs, were used to calculate the …
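The sentence is cut off here, but a standard property computed from these logs is density porosity; a minimal sketch (the limestone matrix density of 2.71 g/cm3 and fluid density of 1.0 g/cm3 are textbook assumptions, not values from the paper):

```python
def density_porosity(rho_bulk, rho_matrix=2.71, rho_fluid=1.0):
    """Porosity from the bulk density log: phi = (rho_ma - rho_b) / (rho_ma - rho_f)."""
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

print(density_porosity(2.45))   # ~0.152, i.e. about 15% porosity
```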
Communication is one of the vast and rapidly growing fields of engineering, where increasing the efficiency of communication by overcoming external electromagnetic sources and noise is considered a challenging task. To achieve confidentiality for color-image transmission over noisy communication channels, a proposed algorithm is presented for image encryption using the AES algorithm, combined with error detection using a Cyclic Redundancy Check (CRC) to preserve the integrity of the encrypted data. The CRC value can be generated by two methods: serial and parallel CRC implementation. The proposed algorithm for the …
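A minimal sketch of the serial (bit-by-bit) CRC generation mentioned above, using the common CRC-8 polynomial 0x07 as an illustrative choice (the paper's polynomial and CRC width are not given in this excerpt):

```python
def crc8_serial(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):   # process one bit per step: the "serial" method
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

encrypted = b"\x3a\x5f\xc1"            # stand-in for AES ciphertext bytes
print(hex(crc8_serial(encrypted)))     # appended to the message to detect channel errors
```

The parallel method computes the same value by processing a whole word per clock cycle, which matters in hardware but yields an identical checksum.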
It is very difficult to obtain the value of rock strength along the wellbore. Rock strength values are used to perform different analyses, for example preventing wellbore failure, deciding on a completion design, and controlling sand production. In this study, sonic log data from wells Bu-50 and Bu-47 at the Buzurgan oil field were used. Five formations have been studied (Mishrif, Sadia, Middle Lower Kirkuk, Upper Kirkuk, and Jaddala). Firstly, the unconfined compressive strength (UCS) was calculated for each formation using a sonic log method. Then, the confined compressive rock strength was derived from the UCS by including the effect of pore and hydrostatic pressure for each formation. Evaluations th…
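The specific sonic-log correlation used is not stated in this excerpt; as one published example of the form such methods take, McNally's sandstone correlation UCS = 1200 * exp(-0.036 * DT), with UCS in MPa and DT in microseconds per foot, gives the general shape of the calculation:

```python
import math

def ucs_from_sonic(dt_us_per_ft: float) -> float:
    """Unconfined compressive strength (MPa) from sonic transit time (McNally form)."""
    return 1200.0 * math.exp(-0.036 * dt_us_per_ft)

for dt in (60, 80, 100):    # faster rock (lower DT) -> higher strength
    print(dt, round(ucs_from_sonic(dt), 1))
```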