A fault is an error that affects system behaviour. A software metric is a value that represents the degree to which software processes work properly and where faults are more likely to occur. In this research, we study the effects of removing redundancy and of log transformation, based on threshold values, for identifying fault-prone classes of software. The study also compares the metric values of the original datasets with those obtained after removing redundancy and applying the log transformation. An e-learning dataset and a system dataset were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets, and from 1%-10% and 0%-4% after removing redundancy and log transformation, respectively. These results directly affected the number of detected classes, which ranged between 1-20 and 1-7 for the original datasets, and between 1-7 and 0-3 after removing redundancy and log transformation. The skewness of the datasets decreased after applying the proposed model. The classes classified as faulty need more attention in subsequent versions in order to reduce the fault ratio, or should be refactored to increase the quality and performance of the current version of the software.
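The abstract reports that a log transformation reduced the skewness of the metric datasets. As a minimal sketch of that effect (the metric values below are hypothetical, not from the study's datasets), a right-skewed sample can be log-transformed and its sample skewness compared before and after:

```python
import numpy as np

def skewness(x):
    """Sample skewness (Fisher-Pearson) of a 1-D array."""
    x = np.asarray(x, dtype=float)
    return np.mean(((x - x.mean()) / x.std()) ** 3)

# Hypothetical right-skewed software-metric values (e.g. lines of code per class).
metrics = np.array([5, 7, 8, 9, 10, 12, 15, 20, 40, 90, 250], dtype=float)

# log1p (i.e. log(1 + x)) handles zero-valued metrics safely.
transformed = np.log1p(metrics)

print(skewness(metrics))      # strongly positive: the raw values are right-skewed
print(skewness(transformed))  # much smaller magnitude after the transform
```

The transform compresses the long right tail, which is why threshold-based fault classification tends to behave more stably on the transformed values.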
Future technology requires progress beyond handwork and even beyond machine dependency, and the Brain-Computer Interface (BCI) provides the necessary processing. As this article describes, a BCI is a pathway between the signals created by a thinking human brain and a computer, which can translate the transmitted signals into action. BCI-processed brain activity is typically measured using EEG. Throughout this article, we intend to provide an accessible and up-to-date review of EEG-based BCI, concentrating on its technical aspects. In particular, we present the essential neuroscience background that describes how to build an EEG-based BCI, including how to evaluate which signal processing, software, and hardware techniques to use. Individu
Metaheuristics in the swarm intelligence (SI) class have proven to be efficient and have become popular methods for solving different optimization problems. Based on the usage of memory, metaheuristics can be classified into algorithms with memory and algorithms without memory (memory-less). The absence of memory in some metaheuristics leads to the loss of the information gained in previous iterations: the metaheuristics tend to divert from promising areas of the solution search space, which leads to non-optimal solutions. This paper aims to review memory usage and its effect on the performance of the main SI-based metaheuristics. An investigation has been performed on SI metaheuristics, memory usage and memory-less metaheuristics, memory char
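The memory distinction the abstract draws can be seen concretely in particle swarm optimization (PSO), a canonical SI metaheuristic: each particle's personal best (`pbest`) and the swarm-wide best (`gbest`) are exactly the memory that preserves information from previous iterations. The sketch below is a generic textbook PSO, not the specific algorithms reviewed in the paper; the coefficients and test function are illustrative choices.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=42):
    """Minimal PSO: pbest/gbest are the swarm's memory of past iterations."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # memory: each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # memory: swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity is pulled toward both remembered bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:                   # update per-particle memory
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:                  # update global memory
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso_minimize(sphere)
```

Deleting the `pbest`/`gbest` updates would make the search memory-less: particles would wander without being pulled back toward previously discovered promising regions, which is precisely the divergence the abstract describes.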
Silicon (Si)-based materials are sought in different engineering applications, including civil, mechanical, chemical, materials, energy and minerals engineering. Silicon and silicon dioxide are processed extensively by industry in granular form, for example to develop durable concrete, shock- and fracture-resistant materials, and biological, optical, mechanical and electronic devices that offer significant advantages over existing technologies. Here we focus on the constitutive behaviour of Si-based granular materials under mechanical shearing. In recent times, it has been widely recognised in the literature that the microscopic origin of shear strength in granular assemblies is associated with their
A superpixel can be defined as a group of pixels that share similar characteristics, which can be very helpful for image segmentation. It is generally a color-based segmentation, but can also use other features such as texture, statistics, etc. There are many algorithms available to segment superpixels, such as Simple Linear Iterative Clustering (SLIC) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The SLIC algorithm essentially relies on choosing N random or regular seed points covering the image to be segmented. In this paper, a split-and-merge algorithm was used instead, to avoid having to determine the seed points' locations and numbers as well as the other parameters. The overall results were better than those of the SL
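The seed-based assignment that the abstract attributes to SLIC can be sketched in a few lines: pixels are assigned to the nearest regular-grid seed under a combined color-plus-spatial distance, and seeds are then re-centred on their clusters. This is a toy grayscale illustration of the SLIC idea (the compactness weight `m`, the grid density, and the synthetic image are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def slic_like(image, n_seeds_per_axis=2, m=10.0, iters=5):
    """Toy SLIC-style superpixel assignment on a grayscale image.

    Pixels join the seed minimising D = |color difference| + (m/S) * spatial
    distance; seeds are then re-centred on their assigned pixels.
    """
    h, w = image.shape
    S = h // n_seeds_per_axis                      # grid spacing between seeds
    seeds = [(r, c, float(image[r, c]))            # (row, col, intensity)
             for r in range(S // 2, h, S)
             for c in range(S // 2, w, S)]
    labels = np.zeros((h, w), dtype=int)
    for _ in range(iters):
        for r in range(h):
            for c in range(w):
                dists = [abs(float(image[r, c]) - sv)
                         + (m / S) * np.hypot(r - sr, c - sc)
                         for sr, sc, sv in seeds]
                labels[r, c] = int(np.argmin(dists))
        new_seeds = []                             # re-centre each seed
        for k, (sr, sc, sv) in enumerate(seeds):
            ys, xs = np.nonzero(labels == k)
            if len(ys):
                new_seeds.append((ys.mean(), xs.mean(), image[ys, xs].mean()))
            else:
                new_seeds.append((sr, sc, sv))
        seeds = new_seeds
    return labels

# Synthetic image: four flat quadrants of distinct intensity.
img = np.zeros((16, 16))
img[:8, 8:] = 80
img[8:, :8] = 160
img[8:, 8:] = 240
labels = slic_like(img, n_seeds_per_axis=2)
```

Note how the result depends on where and how many seeds are placed; the paper's split-and-merge approach is motivated exactly by removing that dependence.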
Agent technology is widely used in most computerized systems. In this paper, agent technology has been applied to monitor a wear test for an aluminium-silicon alloy that is used in automotive parts and lightly loaded gears. In addition to wear-test monitoring, the effect of porosity on wear resistance has been investigated. To obtain a controlled amount of porosity, the specimens were made by a powder metallurgy process at various pressures (100, 200 and 600) MPa. The aim of this investigation is a proactive step to avoid failures caused by porosity. Dry wear tests were carried out by applying three reciprocating loads (1000, 1500 and 2000) g for three periods (10, 45 and 90) min. The weight difference a
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block, depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of information, while in smooth regions that contain no important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
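The block-adaptive idea can be sketched directly: split the image into blocks, score each block's "importance" (variance is a simple proxy; the paper's actual importance measure is not specified here), and quantize important blocks finely and smooth blocks coarsely. This is a minimal illustrative sketch, not the paper's method:

```python
import numpy as np

def adaptive_block_quantize(image, block=8, var_threshold=100.0):
    """Region-adaptive compression sketch: high-variance (important) blocks
    get a fine quantization step, smooth blocks a coarse one."""
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            b = image[r:r + block, c:c + block].astype(float)
            # Variance as a crude importance measure for the block.
            step = 4.0 if b.var() > var_threshold else 32.0
            out[r:r + block, c:c + block] = np.round(b / step) * step
    return out

rng = np.random.default_rng(0)
detailed = rng.integers(0, 256, (8, 8))   # busy block: quantized with step 4
smooth = np.full((8, 8), 100)             # flat block: quantized with step 32
img = np.hstack([detailed, smooth]).astype(float)
recon = adaptive_block_quantize(img)
```

Coarser steps need fewer bits per sample, so the smooth regions compress far more while the detailed regions keep a small reconstruction error, which is the trade-off the abstract describes.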
As a result of recent developments in highway research, as well as the increased use of vehicles, significant interest has been paid to the most current, effective, and precise Intelligent Transportation Systems (ITS). In the field of computer vision and digital image processing, the identification of specific objects in an image plays a crucial role in the interpretation of the image as a whole. Vehicle License Plate Recognition (VLPR) is challenging because of the variation in viewpoints, the multiple plate formats, and the non-uniform lighting conditions, shape, and color at the time of image acquisition; in addition, difficulties such as poor image resolution, blurred images, poor lighting, and low contrast, these
Intelligent systems can be used to build systems that simulate human behavior; one such system is lip reading. Lip reading is considered one of the hardest problems in image analysis, so machine learning is used to solve it and achieves remarkable results, especially when using deep neural networks, which dive deeply into the texture of any input. Microlearning is the new trend in e-learning: it is based on small pieces of information that make the learning process easier and more productive. In this paper, a proposed system for multi-layer lip reading is presented. The proposed system is based on micro content (letters) to perform the lip-reading process using deep learning and auto-correction mo