A fault is an error that affects system behaviour. A software metric is a value that represents the degree to which software processes work properly and where faults are more likely to occur. In this research, we study the effects of redundancy removal and log transformation, based on threshold values, on identifying fault-prone classes of software. The study also compares the metric values of an original dataset with those obtained after redundancy removal and log transformation. An e-learning dataset and a system dataset were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets, and from 1%-10% and 0%-4% after redundancy removal and log transformation, respectively. These results directly affected the number of classes detected, which ranged between 1-20 and 1-7 for the original datasets and between 1-7 and 0-3 after redundancy removal and log transformation. The skewness of the data decreased after applying the proposed model. The classes identified as fault-prone need more attention in the next versions, either to reduce the fault ratio or to refactor them, thereby increasing the quality and performance of the current version of the software.
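The preprocessing described above can be sketched as follows. This is a minimal illustration that assumes "redundancy" means duplicate rows and that the log transformation is log(x + 1); the paper's exact pipeline and threshold values are not reproduced here:

```python
import math

def preprocess(rows):
    """Drop redundant (duplicate) rows, then log-transform every metric
    value; log(x + 1) keeps zero-valued metrics defined and reduces skewness."""
    unique = list(dict.fromkeys(tuple(r) for r in rows))
    return [[math.log(v + 1) for v in row] for row in unique]

def fault_prone(metric_value, threshold):
    """A class is flagged as fault-prone when its (transformed) metric
    value exceeds the chosen threshold."""
    return metric_value > threshold
```

Deduplicating before transforming keeps repeated classes from inflating the fault ratio, which matches the drop in ranges reported above.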
Like the digital watermark highlighted in previous studies, the quantum watermark aims to protect the copyright of an image and to validate its ownership using visible or invisible logos embedded in the cover image. In this paper, we propose a method to embed a logo image in a cover image based on quantum fields, where a certain amount of texture is encapsulated to encode the logo image before it is embedded in the cover image. The method also involves wavelet transforms, such as the Haar base transformation, and geometric transformations. This combination of methods achieves a high degree of security and robustness for the watermarking technique. The numerical results obtained from the experiment show that the values of Peak Sig
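As one concrete piece of the pipeline, the Haar base transformation mentioned above can be sketched as a single-level 2D decomposition. The sub-band names (LL, LH, HL, HH) and the average/difference convention are standard; the quantum encoding and embedding steps themselves are not reproduced here:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: average/difference along rows,
    then along columns, yielding the LL, LH, HL, HH sub-bands.
    Assumes even image dimensions."""
    a = np.asarray(img, dtype=float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2   # row averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2   # row differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh
```

A watermark is typically added to one sub-band (often LL for robustness, or a detail band for imperceptibility) before inverting the transform.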
In this paper, an algorithm that can embed more data than regular spatial-domain methods is introduced. The secret data are compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method. Laplace filters are used to determine effective hiding places; based on a threshold value, the places with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding, while at the same time increasing the security of the algorithm by hiding data in the places that have the highest edge values and are less noticeable.
The perform
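The two stages above can be sketched as follows. The Huffman table construction and the 4-neighbour Laplacian kernel are standard; the threshold value and the exact embedding rule are assumptions, not the paper's:

```python
import heapq
from collections import Counter
import numpy as np

def huffman_code(data):
    """Build a Huffman code table for the secret data (compression step)."""
    heap = [[w, [sym, ""]] for sym, w in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

def laplacian_sites(img, threshold):
    """Apply a 4-neighbour Laplacian filter and return the coordinates of
    pixels whose response exceeds the threshold (candidate hiding places)."""
    a = np.asarray(img, dtype=float)
    lap = np.zeros_like(a)
    lap[1:-1, 1:-1] = (a[:-2, 1:-1] + a[2:, 1:-1] + a[1:-1, :-2]
                       + a[1:-1, 2:] - 4 * a[1:-1, 1:-1])
    ys, xs = np.where(np.abs(lap) > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```

The Huffman bit string would then be written, for example into the least significant bits of the pixels returned by `laplacian_sites`, so payload lands on strong edges where changes are least noticeable.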
Hand gestures are currently considered one of the most accurate ways to communicate in many applications, such as sign language, robot control, virtual worlds, smart homes, and video games. Several techniques are used to detect and classify hand gestures, for instance gloves that contain several sensors, or computer vision. In this work, computer vision is utilized instead of gloves to control the robot's movement. That is because gloves need complicated electrical connections that limit user mobility, their sensors may be costly to replace, and gloves can spread skin illnesses between users. Based on computer vision, the MediaPipe (MP) method is used. This is a modern method that is discover
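To illustrate how MediaPipe's output can drive robot commands, here is a sketch that classifies a gesture from the 21 hand landmarks MediaPipe returns (fingertips at indices 8, 12, 16, 20, each with its PIP joint two indices earlier). The mapping from finger count to robot command is an invented example, not the paper's actual scheme:

```python
FINGER_TIPS = [8, 12, 16, 20]  # index, middle, ring, pinky tip landmarks

def count_extended_fingers(landmarks):
    """Count extended fingers from 21 (x, y) landmarks in image coordinates
    (y grows downward): a finger is extended when its tip lies above its
    PIP joint. The thumb is ignored in this simplified sketch."""
    return sum(1 for tip in FINGER_TIPS
               if landmarks[tip][1] < landmarks[tip - 2][1])

def gesture_to_command(n_fingers):
    """Hypothetical mapping from finger count to a robot command."""
    return {0: "stop", 1: "forward", 2: "backward",
            3: "left", 4: "right"}.get(n_fingers, "stop")
```

In a live system, the landmark list would come from MediaPipe's hand-tracking solution run on each camera frame; the rule above then needs no sensors or gloves.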
Glaucoma is one of the most dangerous eye diseases. It occurs as a result of an imbalance in the drainage and flow of the retinal fluid, which generates intraocular pressure, a significant risk factor for glaucoma. Intraocular pressure causes progressive damage to the optic nerve head, leading to vision loss in the advanced stages. Glaucoma gives no signs of disease in the early stages, so it is called "the silent thief of sight". Therefore, early diagnosis and treatment of retinal eye disease is extremely important to prevent vision loss. Many articles aim to analyze fundus retinal images and diagnose glaucoma. This review can be used as a guideline to help diagnose glaucoma. It presents 63 articles.
An experiment was carried out in the fields of the Agriculture College, Baghdad University, during the spring and autumn of 2015, using a randomized complete block design with three replications. In the first season, hybridization was established among three pure cultivars of cowpea (Vigna unguiculata L.), namely Ramshorn, California Black Eye, and Rahawya, in full diallel crosses according to Griffing's first method with a fixed model (3 parents + 3 diallel hybrids + 3 reciprocal hybrids), and a comparison experiment was conducted in the autumn season. The statistical analysis showed a significant difference among the parents and their hybrids for all the studied characters. Parent 1 was the highest for root nodule number, leaf number, pod
The Gumbel distribution has been treated with great care by researchers and statisticians. There are traditional methods to estimate the two parameters of the Gumbel distribution, known as maximum likelihood, the method of moments, and, more recently, the re-sampling method called the jackknife. However, these methods suffer from some mathematical difficulties when solved analytically. Accordingly, there are other non-traditional methods, like the principle of nearest neighbors, used in computer science and especially in artificial intelligence algorithms, including the genetic algorithm and the artificial neural network algorithm, among others that may be classified as meta-heuristic methods. Moreover, this principle of nearest neighbors has useful statistical features.
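Of the traditional estimators listed above, the method of moments has a closed form and is easy to sketch: for Gumbel(μ, β), the variance is π²β²/6 and the mean is μ + γβ, where γ ≈ 0.5772 is the Euler-Mascheroni constant, so the moment equations invert directly:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moments(sample):
    """Method-of-moments estimates for Gumbel(mu, beta):
    beta_hat = s * sqrt(6) / pi and mu_hat = mean - gamma * beta_hat,
    where s is the sample standard deviation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta
```

Maximum likelihood, by contrast, has no closed form for the Gumbel parameters and needs numerical root-finding, which is one of the analytical difficulties the abstract refers to.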
Recently there has been an urgent need to identify people's ages from their personal pictures, for use in personal and biometric security, human-computer interaction, information security, and law enforcement. However, in spite of advances in age estimation, it is still a difficult problem. This is because the facial aging process is determined not only by intrinsic factors, e.g. genetic factors, but also by external factors, e.g. lifestyle, expression, and environment. This paper utilizes a machine learning technique, the support vector machine (SVM), for intelligent age estimation from facial images on the FG-NET dataset. The proposed work consists of three phases: the first phase is image preprocessing, which includes four steps
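The SVM phase can be sketched with scikit-learn on a tiny synthetic feature set, used here as a stand-in for the extracted facial features; the real work uses FG-NET images and its own preprocessing, and the two coarse age classes below are illustrative only:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic 2-D "facial features" for two coarse age classes (0 = young, 1 = old).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])

# Feature scaling before an RBF-kernel SVM is standard practice.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)
pred = model.predict(np.array([[0.15, 0.15], [0.85, 0.85]]))
```

For exact age prediction rather than age groups, the same pipeline would use `SVR` (support vector regression) with a numeric age target.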
In this study, we briefly review the ARIMA(p, d, q), EWMA, and DLM (dynamic linear modelling) procedures in order to accommodate the autocorrelation (AC) structure of the data. We consider recursive estimation and prediction algorithms based on Bayes and Kalman filtering (KF) techniques for correlated observations. We investigate the effect on the MSE of these procedures and compare them using generated data.
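Of the three procedures, the EWMA recursion is the simplest to sketch; the smoothing constant λ and the start-at-first-observation convention below are common choices, not values from the study:

```python
def ewma(series, lam=0.2):
    """Exponentially weighted moving average:
    z_t = lam * x_t + (1 - lam) * z_{t-1}, with z_1 = x_1."""
    z = series[0]
    out = [z]
    for x in series[1:]:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out
```

The same predict-then-correct shape underlies the Kalman filter, where λ is replaced by a gain that is updated recursively from the model's noise variances.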
In this paper, we introduce three robust fuzzy estimators of a location parameter based on Buckley's approach, in the presence of outliers. These estimators were compared using the variance-of-fuzzy-numbers criterion, and all of them outperformed Buckley's estimate. Among them, the fuzzy median was best for small and medium sample sizes, while for large sample sizes the fuzzy trimmed mean was best.
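For intuition, here are the crisp (non-fuzzy) analogues of the two winning estimators; the paper's versions operate on fuzzy numbers via Buckley's approach, which this sketch does not reproduce:

```python
def median(sample):
    """Sample median: robust to outliers, with a 50% breakdown point."""
    s = sorted(sample)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def trimmed_mean(sample, trim=0.1):
    """Mean after discarding the smallest and largest trim-fraction of
    observations, which limits the influence of outliers."""
    s = sorted(sample)
    k = int(len(s) * trim)
    core = s[k:len(s) - k] if k else s
    return sum(core) / len(core)
```

On contaminated data such as [1, 2, 3, 4, 100], both estimators stay near the bulk of the sample while the ordinary mean is pulled toward the outlier.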