A fault is an error that affects system behaviour. A software metric is a value that represents how well software processes work and where faults are more likely to occur. In this research, we study the effects of removing redundancy and of log transformation based on threshold values for identifying fault-prone classes in software. The study also compares the metric values of an original dataset with those obtained after removing redundancy and applying the log transformation. E-learning and system datasets were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets and from 1%-10% and 0%-4% after removing redundancy and log transformation, respectively. These results directly affected the number of detected classes, which ranged between 1-20 and 1-7 for the original datasets and between 1-7 and 0-3 after removing redundancy and log transformation. The skewness of the datasets decreased after applying the proposed model. The classes identified as fault-prone need more attention in subsequent versions, either to reduce the fault ratio or to refactor them to improve the quality and performance of the current version of the software.
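To make the preprocessing pipeline concrete, here is a minimal Python sketch, assuming pandas, numpy, and scipy are available: duplicate rows are dropped to remove redundancy, the metric columns are log-transformed, and skewness is compared before and after. The column names, toy values, and the log1p variant are illustrative assumptions, not taken from the study.

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

# Hypothetical metric table; column names are illustrative, not from the study.
df = pd.DataFrame({
    "wmc":    [5, 5, 12, 40, 3, 40, 7],
    "cbo":    [2, 2, 9, 30, 1, 30, 4],
    "faults": [0, 0, 1, 5, 0, 5, 1],
})

# Step 1: remove redundancy (duplicate rows).
dedup = df.drop_duplicates()

# Step 2: log transformation; log1p handles zero-valued metrics safely.
transformed = dedup.copy()
for col in ["wmc", "cbo"]:
    transformed[col] = np.log1p(dedup[col])

# Skewness should decrease after the transformation, as the abstract reports.
print("skewness before:", skew(df["wmc"]))
print("skewness after: ", skew(transformed["wmc"]))
```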
Performance issues can appear anywhere in a computer system, and finding their root cause is troublesome because of the complexity of modern systems and applications. Microsoft provides several mechanisms that let its engineers understand what is happening inside all Windows versions, including Windows 10 Home, and the behaviour of any application running on them, whether Microsoft services or third-party applications. One of these mechanisms is Event Tracing for Windows (ETW), the core of logging and tracing in the Windows operating system, which traces the internal events of the system and its applications. This study goes deep into internal process activities to investigate …
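As a hedged illustration of the ETW workflow, the sketch below drives Windows' built-in logman tool from Python to start and stop a kernel-process trace session; the session name, provider choice, and output path are assumptions made for the example, not details from the study.

```python
import subprocess

SESSION = "demo-trace"                          # hypothetical session name
PROVIDER = "Microsoft-Windows-Kernel-Process"   # a stock ETW provider
ETL_FILE = "C:\\traces\\demo.etl"               # illustrative output path

# Start a real-time ETW session (-ets) writing events to an .etl file.
subprocess.run(
    ["logman", "start", SESSION, "-p", PROVIDER, "-o", ETL_FILE, "-ets"],
    check=True,
)

# ... run the workload under investigation here ...

# Stop the session; the .etl file can then be opened in Windows
# Performance Analyzer or converted to text with tracerpt.
subprocess.run(["logman", "stop", SESSION, "-ets"], check=True)
```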
This paper proposes a new encryption method. It combines two cipher algorithms, DES and AES, to generate hybrid keys. This combination strengthens the proposed W-method by generating highly randomized keys. The reliability of any encryption technique rests on two points. The first is key generation; our approach therefore merges 64 bits of DES with 64 bits of AES to produce 128 bits as a root key for the remaining 15 keys. This complexity raises the level of the ciphering process; moreover, each derivation shifts the key only one bit to the right. The second is the nature of the encryption process itself: it includes two keys and mixes one round of DES with one round of AES to reduce the processing time. The W-method deals with …
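The following Python sketch, using pycryptodome, illustrates the key-construction idea as described: 64 bits derived from DES are concatenated with 64 bits from AES to form a 128-bit root key, and the 15 remaining keys are produced by rotating one bit to the right. The seed material and the exact rotation rule are assumptions for illustration, not the published W-method specification.

```python
from Crypto.Cipher import AES, DES  # pip install pycryptodome

def rotate_right_1(key: int, bits: int = 128) -> int:
    """Rotate a 128-bit integer right by one bit."""
    return ((key >> 1) | ((key & 1) << (bits - 1))) & ((1 << bits) - 1)

# Hypothetical seed material; in practice this would come from a key-setup phase.
des_key, aes_key = b"8bytekey", b"16-byte-aes-key!"
plain64, plain128 = b"\x00" * 8, b"\x00" * 16

# 64 bits from one DES operation and 64 bits from one AES operation.
des_part = DES.new(des_key, DES.MODE_ECB).encrypt(plain64)       # 8 bytes
aes_part = AES.new(aes_key, AES.MODE_ECB).encrypt(plain128)[:8]  # 8 bytes

# 128-bit root key, then 15 derived keys via one-bit right rotation.
root = int.from_bytes(des_part + aes_part, "big")
keys = [root]
for _ in range(15):
    keys.append(rotate_right_1(keys[-1]))

print([hex(k) for k in keys[:3]])
```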
Twelve compounds containing a sulphur- or oxygen-based heterocyclic core, a 1,3-oxazole or 1,3-thiazole ring with hydroxy, methoxy, or methyl terminal substituents, were synthesized and characterized. The molecular structures of these compounds were confirmed by elemental analysis and various spectroscopic techniques. Their liquid crystalline behaviour was studied using hot-stage optical polarizing microscopy and differential scanning calorimetry. All compounds with a 1,4-disubstituted benzene core and an oxazole ring display a liquid crystalline smectic A (SmA) mesophase. The compounds with a 1,3- or 1,4-disubstituted benzene core and a thiazole ring exhibit exclusively enantiotropic nematic liquid crystal phases.
Traffic management at road intersections is a complex requirement that has been an important topic of research and discussion. Solutions have primarily focused on vehicular ad hoc networks (VANETs). Key issues in VANETs are high mobility, road-layout restrictions, frequent topology changes, failed network links, and timely delivery of data, all of which make routing packets to a particular destination problematic. To address these issues, a new dependable routing algorithm is proposed that utilizes a wireless communication system between vehicles in urban vehicular networks. This routing is position-based and is known as the maximum distance on-demand routing algorithm (MDORA). It aims to find an optimal route on a hop-by-hop basis …
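To illustrate the position-based, maximum-distance idea, here is a hedged Python sketch of one hop of next-hop selection: among neighbours inside the transmission range, pick the one that makes the most forward progress toward the destination. The greedy rule and parameter names are simplifying assumptions, not the full MDORA specification.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(current, destination, neighbors, tx_range):
    """Greedy, position-based choice: the in-range neighbor closest to the
    destination, i.e., the maximum forward distance covered this hop."""
    in_range = [n for n in neighbors if dist(current, n) <= tx_range]
    # Keep only neighbors that actually make progress toward the destination.
    progress = [n for n in in_range
                if dist(n, destination) < dist(current, destination)]
    return min(progress, key=lambda n: dist(n, destination)) if progress else None

# Illustrative coordinates (e.g., metres on an urban grid).
hop = next_hop(current=(0, 0), destination=(500, 0),
               neighbors=[(120, 30), (200, -10), (260, 5)], tx_range=250)
print(hop)  # -> (200, -10): farthest in-range neighbor toward the destination
```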
The concept of the active contour model has been extensively utilized in image segmentation and analysis. The technique has been effectively employed to identify contours in object recognition, computer graphics and vision, and biomedical image processing, for both ordinary images and medical images such as magnetic resonance imaging (MRI), X-ray, and ultrasound images. Kass, Witkin, and Terzopoulos introduced this energy-minimizing "Active Contour Model" (also known as the snake) in 1987. Being curves by nature, snakes are defined within an image field and can be set in motion by external forces derived from the image data and by internal forces arising from the curve itself. The present study …
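As a concrete, hedged example of the snake model described above, the sketch below uses scikit-image's active_contour to deform an initial circle toward object boundaries; the sample image, initialization, and weight values are illustrative assumptions rather than settings from the study.

```python
import numpy as np
from skimage import data
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Smooth a sample grayscale image so the external (image) forces are stable.
img = gaussian(data.coins(), sigma=2)

# Initial snake: a circle that the internal/external forces will deform.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([60 + 40 * np.sin(theta), 120 + 40 * np.cos(theta)])

# alpha penalizes stretching and beta penalizes bending (internal energy);
# w_edge weights the edge-attraction term (external energy).
snake = active_contour(img, init, alpha=0.015, beta=10, w_edge=1, gamma=0.001)
print(snake.shape)  # (200, 2) array of fitted contour coordinates
```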
Face recognition is one of the vital areas of modern computer vision, owing to the availability and accessibility of the technology and its commercial applications. Face recognition, briefly stated, is automatically recognizing a person from an image or a video frame. In this paper, an efficient face recognition algorithm is proposed that exploits wavelet decomposition to extract the most important and discriminative facial features, and the eigenface method to classify faces according to the minimum distance between feature vectors. The Faces94 database is used to test the method. Excellent recognition with minimum computation time is obtained, with accuracy reaching 100% while recognition time decreases …
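A hedged sketch of the pipeline described above: a single-level 2-D wavelet decomposition (PyWavelets) keeps the approximation sub-band as the feature image, PCA supplies the eigenfaces, and a test face is assigned to the training face at minimum Euclidean distance in eigenspace. The Haar wavelet, toy data, and component count are assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt  # pip install PyWavelets

rng = np.random.default_rng(0)
train = rng.random((10, 64, 64))               # toy stand-in for Faces94 images
test = train[3] + 0.01 * rng.random((64, 64))  # noisy copy of face 3

def wavelet_features(img):
    # Keep the low-frequency approximation sub-band (cA) as the feature image.
    cA, _ = pywt.dwt2(img, "haar")
    return cA.ravel()

X = np.stack([wavelet_features(f) for f in train])   # shape (10, 32*32)

# Eigenfaces: principal components of the mean-centred feature matrix.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
eigenfaces = Vt[:8]                                  # top 8 components

def project(feat):
    return eigenfaces @ (feat - mean)

# Classify by minimum distance between eigenspace projections.
train_proj = np.array([project(x) for x in X])
d = np.linalg.norm(train_proj - project(wavelet_features(test)), axis=1)
print("predicted identity:", int(np.argmin(d)))      # -> 3
```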