In regression testing, test case prioritization (TCP) is a technique for ordering the available test cases. TCP techniques can improve fault detection performance, which is measured by the Average Percentage of Faults Detected (APFD). History-based TCP techniques prioritize test cases using historical execution data. The assignment of equal priority to multiple test cases is a common problem for most TCP techniques, but it has not been explored for history-based techniques; to resolve such ties, most researchers resort to random sorting of test cases. This study investigates equal priority in history-based TCP. The first objective is to implement different history-based TCP techniques; the second is to examine the problem of equal priority in these techniques; and the third is to evaluate random sorting as a solution to that problem. Datasets of historical test case records were collected from conventional and modern sources, the history-based TCP techniques were applied to them and checked for the equal-priority problem, and random sorting was then used to break ties. Finally, the results were analyzed in terms of APFD and execution time. The results indicate that history-based techniques suffer from the equal-priority problem just as other types of TCP techniques do. Moreover, random sorting does not produce optimal results when used to break ties; in fact, it deteriorates the results of history-based TCP techniques. Random sorting should therefore be used only when no other solution exists.
The choice of the best solution requires a cost-benefit analysis that takes into account the context and the solution under consideration.
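The prioritization-and-tie-breaking setup described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the APFD formula is the standard one, while the history score (past failure count) and the shuffle-before-stable-sort tie-breaking trick are illustrative assumptions.

```python
import random

def apfd(order, faults_detected, num_faults):
    """Average Percentage of Faults Detected for a given test order.

    order: list of test ids in execution order
    faults_detected: dict test_id -> set of fault ids that test reveals
    """
    n = len(order)
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_detected.get(test, ()):
            first_pos.setdefault(fault, pos)  # position of first detection
    return 1 - sum(first_pos.values()) / (n * num_faults) + 1 / (2 * n)

def prioritize_by_history(history, seed=None):
    """Order tests by past failure count; ties are broken randomly.

    history: dict test_id -> number of past failures (illustrative score).
    Shuffling first and then applying Python's stable sort randomizes
    only the order *within* each tie group -- the 'random sorting' idea.
    """
    rng = random.Random(seed)
    tests = list(history)
    rng.shuffle(tests)
    return sorted(tests, key=lambda t: -history[t])
```

A perfect order (every fault found immediately) pushes APFD toward 1, while deferring detections lowers it, which is why tie-breaking quality matters.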
One of the most important features of the Amazon Web Services (AWS) cloud is that a program can be run and accessed from any location: results can be monitored remotely, many images can be stored, and computation is faster. This work proposes a face detection and classification model based on the AWS cloud that classifies faces into two classes, a non-permission class and a permission class, by training on a real dataset collected from our cameras. The proposed cloud-based Convolutional Neural Network (CNN) system shares computational resources among Artificial Neural Networks (ANNs) to reduce redundant computation. The test system uses Internet of Things (IoT) services through our cameras.
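To make the CNN building block concrete, the sketch below shows a single convolution layer step (valid-mode cross-correlation, as CNN libraries implement "convolution"), a ReLU activation, and global average pooling. It is a generic illustration under assumed toy inputs, not the paper's AWS-hosted model or its architecture.

```python
def conv2d_valid(image, kernel):
    """2-D 'valid' cross-correlation of a single-channel image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise rectified linear activation."""
    return [[max(0.0, v) for v in row] for row in feature_map]

def global_avg(feature_map):
    """Global average pooling: one scalar feature per map."""
    values = [v for row in feature_map for v in row]
    return sum(values) / len(values)
```

A real model would stack many such layers and feed the pooled features to a two-way (permission / non-permission) output layer.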
This study employs wavelet transforms to address the issue of boundary effects. Additionally, it uses probit transform techniques, based on probit functions, to estimate the copula density function; the estimation depends on the empirical distribution function of the variables, and the density is estimated in a transformed domain. Recent research indicates that early implementations of this strategy may have been more efficient. Nevertheless, in this work we implemented two novel methodologies using the probit transform and the wavelet transform, and we then evaluated and contrasted them using three criteria: root mean square error (RMSE), the Akaike information criterion (AIC), and log-likelihood.
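The first step of this approach, mapping data through the empirical distribution function and then the probit (inverse normal CDF) function onto the unbounded real line, can be sketched as follows. This is a minimal stdlib illustration of the transform itself, with an assumed rank/(n+1) pseudo-observation convention; the paper's wavelet density estimation step is not shown.

```python
from statistics import NormalDist

def pseudo_observations(sample):
    """Empirical-CDF ranks rescaled to (0, 1) via rank / (n + 1)."""
    n = len(sample)
    ranks = {v: i + 1 for i, v in enumerate(sorted(sample))}
    return [ranks[v] / (n + 1) for v in sample]

def probit_transform(u):
    """Map copula data from (0, 1) to the real line, removing the
    bounded support that causes boundary effects in density estimation."""
    inv = NormalDist().inv_cdf
    return [inv(x) for x in u]
```

After this transform, a density estimator (wavelet-based, in the study) can work on an unbounded domain, and the estimate is mapped back by the change-of-variables formula.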
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method of concealing data within carrier objects such as text, can address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures.
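As a concrete (and deliberately simple) instance of text steganography, the sketch below hides a secret message in invisible zero-width Unicode characters appended to a cover text. This is a generic illustration, not the Arabic-specific scheme of the study, which would exploit script features such as kashida or diacritics.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bits 0 / 1

def embed(cover, secret):
    """Append the secret's bits as invisible zero-width characters."""
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return cover + payload  # renders identically to the cover text

def extract(stego):
    """Recover the secret by collecting the zero-width characters."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in stego if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")
```

The trade-off named in the abstract is visible here: text carriers have low embedding bandwidth (a few bits per invisible character), which is why language-specific features are attractive for increasing capacity.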
Blockchain has garnered attention as the most important new technology supporting recent digital transactions via e-government. The most critical challenge for public e-government systems is reducing bureaucracy and increasing the efficiency and performance of administrative processes, since blockchain technology can operate in a decentralized environment and execute transactions with a high level of security and transparency. The main objectives of this work are therefore to survey different proposed models for e-government system architecture based on blockchain technology and to examine how these models are validated. This work studies and analyzes research trends focused on blockchain.
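The tamper-evidence property that makes blockchain attractive for e-government records can be shown with a minimal hash chain. This is a toy sketch of the core mechanism only; it omits consensus, signatures, and networking, and is not any surveyed model's implementation.

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Create a block whose hash commits to its content and predecessor."""
    block = {"index": index, "data": data, "prev": prev_hash}
    body = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(body).hexdigest()
    return block

def valid_chain(chain):
    """A chain is valid iff every hash matches its block's content
    and every block points at its predecessor's hash."""
    for i, b in enumerate(chain):
        body = {k: b[k] for k in ("index", "data", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if b["hash"] != digest:
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Because each block's hash covers the previous hash, altering any administrative record invalidates every later block, which is the transparency guarantee the surveyed architectures build on.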
Free-Space Optical (FSO) communication can provide high-speed links when the effect of turbulence is not serious, and Space-Time Block Coding (STBC) is a good candidate to mitigate that effect. This paper proposes a hybrid of Optical Code Division Multiple Access (OCDMA) and STBC in FSO communication for last-mile solutions, where access to remote areas is complicated. The main weakness affecting an FSO link is atmospheric turbulence, and the purpose of employing STBC in OCDMA is to mitigate its effects. The current work evaluates the Bit Error Rate (BER) performance of OCDMA operating under the scintillation effect, which can be described by the gamma-gamma model.
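The diversity mechanism behind STBC can be illustrated with the classic two-transmitter Alamouti code, sketched below for a noiseless channel. This shows the standard Alamouti encoding and combining equations, not the paper's specific OCDMA/FSO system model; the channel gains are assumed known at the receiver.

```python
def alamouti_encode(s1, s2):
    """Alamouti STBC: (antenna 1, antenna 2) transmissions over two slots."""
    return [(s1, s2),
            (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining recovers both symbols from the two received slots.

    r1, r2: received samples in slots 1 and 2
    h1, h2: complex channel gains from the two transmit apertures
    """
    g = abs(h1) ** 2 + abs(h2) ** 2  # diversity gain term
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat
```

The combined estimates scale with |h1|^2 + |h2|^2, so a deep scintillation fade on one path is compensated by the other, which is exactly why STBC helps under gamma-gamma turbulence.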
This work implements an Electroencephalogram (EEG) signal classifier. The method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments. A Sparse Filter (SF) reduces the number of moments to increase classification accuracy, and a Support Vector Machine (SVM) classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets, each split into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that the proposed method exceeds the accuracy of the other methods, achieving best accuracies of 95.6% and 99.5% on the two datasets, respectively.
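The moment-extraction step can be sketched by building a discrete orthonormal polynomial basis and projecting the signal onto it. This is an illustrative construction (Gram-Schmidt on monomials over a uniform grid), not necessarily the specific orthogonal polynomial family the paper uses; the sparse filter and SVM stages are omitted.

```python
def orthonormal_poly_basis(n_points, degree):
    """Discrete polynomials x^k on a uniform grid, orthonormalized
    by Gram-Schmidt, so projections give uncorrelated moment features."""
    xs = [i / (n_points - 1) for i in range(n_points)]
    basis = []
    for k in range(degree + 1):
        v = [x ** k for x in xs]
        for b in basis:  # remove components along earlier polynomials
            c = sum(a * p for a, p in zip(v, b))
            v = [a - c * p for a, p in zip(v, b)]
        norm = sum(a * a for a in v) ** 0.5
        basis.append([a / norm for a in v])
    return basis

def op_moments(signal, basis):
    """Project the signal onto each basis polynomial -> moment features."""
    return [sum(s * p for s, p in zip(signal, b)) for b in basis]
```

A constant signal has energy only in the zeroth moment, showing how the moments separate signal shape information by polynomial degree before classification.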
Fingerprints are the most widely used biometric feature for person identification and verification. The fingerprint is easier to understand than other biometric types such as voice and face, and it can achieve a very high recognition rate. In this paper, a geometric rotation transform is applied to the fingerprint image to obtain a new level of features that represent the finger's characteristics and can be used for personal identification; local features are used for their ability to reflect the statistical behavior of fingerprint variation across the fingerprint image. The proposed fingerprint system contains three main stages: (i) preprocessing, (ii) feature extraction, and (iii) matching.
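The geometric rotation transform at the heart of the feature-extraction stage reduces to rotating coordinates about a center point. The sketch below shows that core operation on (x, y) feature coordinates; it is a generic illustration, and the choice of rotating minutiae-style point coordinates (rather than resampling pixels) is an assumption for brevity.

```python
import math

def rotate_points(points, degrees, center=(0.0, 0.0)):
    """Rotate (x, y) feature coordinates about a center point.

    Applies the standard 2-D rotation matrix
    [cos t, -sin t; sin t, cos t] relative to the center.
    """
    theta = math.radians(degrees)
    cx, cy = center
    rotated = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        rotated.append((cx + dx * math.cos(theta) - dy * math.sin(theta),
                        cy + dx * math.sin(theta) + dy * math.cos(theta)))
    return rotated
```

Comparing local statistics before and after such rotations yields features that characterize how the ridge pattern varies with orientation.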
This study presents an adaptive control scheme based on synergetic control theory for suppressing the vibration of building structures due to earthquakes. The key element of the proposed controller is a magneto-rheological (MR) damper, which supports the building. Based on Lyapunov stability analysis, an adaptive synergetic control (ASC) strategy was established under variation of the stiffness and viscosity coefficients of the vibrating building. The control and adaptive laws of the ASC were developed to ensure the stability of the controlled structure. The proposed controller addresses the suppression problem for a single-degree-of-freedom (SDOF) building model, and an earthquake control scenario was conducted and simulated.
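The synergetic design idea for an SDOF model can be sketched as follows: choose a macro-variable psi = v + lambda*x and force it to obey T*dpsi/dt + psi = 0, so psi (and then x) decays exponentially. This is a minimal non-adaptive sketch with assumed plant parameters and an ideal actuator; the paper's adaptive laws and MR damper model are not reproduced.

```python
def simulate_synergetic_sdof(m=1.0, c=0.5, k=2.0, lam=2.0, T=0.5,
                             x0=1.0, v0=0.0, dt=1e-3, steps=5000):
    """Explicit-Euler simulation of m*x'' + c*x' + k*x = u under
    synergetic control.  Macro-variable: psi = v + lam*x; the control
    law enforces the manifold dynamics T*dpsi/dt + psi = 0."""
    x, v = x0, v0
    psi0 = v0 + lam * x0
    for _ in range(steps):
        psi = v + lam * x
        # From psi' = a + lam*v = -psi/T and a = (u - c*v - k*x)/m:
        u = m * (-lam * v - psi / T) + c * v + k * x
        a = (u - c * v - k * x) / m
        x += v * dt
        v += a * dt
    return x, v, psi0, v + lam * x
```

Once psi reaches zero, the state slides along v = -lam*x, so the displacement itself decays at the designer-chosen rate lambda, independently of the plant's natural damping.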
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform (DWT) and a Spiking Neural Network (SNN). The system consists of three stages: the first preprocesses the data, the second extracts features based on the DWT, and the third performs classification based on the SNN. To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved high classification accuracy rates of 99.1% on the MADBase database and 99.9% on the MNIST database.
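The DWT feature-extraction stage can be illustrated with one level of the Haar transform, the simplest discrete wavelet: pairwise scaled sums give the approximation coefficients and pairwise differences give the details. The Haar choice here is an illustrative assumption, since the abstract does not name the wavelet family used.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients; the input length
    is assumed even.  Approximations capture coarse shape, details
    capture local change -- the features fed to the classifier stage.
    """
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail
```

For images, the same filtering is applied along rows and then columns, and the approximation sub-band is typically what gets passed to the network as a compact feature map.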
This paper reports on the laser emission properties of the BBQ dye in poly(methyl methacrylate) (PMMA). This host material combines the advantages of an organic environment for the dye with favorable thermo-optical and mechanical properties. A solid solution of BBQ dye in PMMA was prepared, and thin films were pumped by a nitrogen laser in an untuned laser cavity. The concentration and thickness were optimized to obtain high efficiency: the laser efficiency increased from 7% at a thickness of 1.5 μm to 16.5% at 3.5 μm, and from 1% to 10% when the concentration was increased from 1×10⁻⁵ M to 1×10⁻³ M.