A fault is an error that affects system behaviour. A software metric is a value that represents the degree to which software processes work properly and where faults are more likely to occur. In this research, we study the effects of removing redundancy and of applying a log transformation, based on threshold values, for identifying fault-prone classes of software. The study also compares the metric values of the original dataset with those obtained after redundancy removal and log transformation. An e-learning dataset and a system dataset were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets, and from 1%-10% and 0%-4% after redundancy removal and log transformation, respectively. These results directly affected the number of detected classes, which ranged between 1-20 and 1-7 for the original datasets and between 1-7 and 0-3 after redundancy removal and log transformation. The skewness of the data decreased after applying the proposed model. The classified faulty classes need more attention in the next versions, either to reduce the fault ratio or to refactor them to increase the quality and performance of the current version of the software.
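The preprocessing pipeline the abstract describes (redundancy removal followed by a log transformation that reduces skewness) can be sketched as below. This is a minimal illustration, not the paper's exact procedure: the choice of log1p and the sample-skewness formula are assumptions.

```python
import math

def skewness(xs):
    """Sample (Fisher-Pearson) skewness of a list of metric values."""
    n = len(xs)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / std) ** 3 for x in xs) / n

def preprocess(rows):
    """Drop duplicate rows (redundancy removal), then log-transform each
    metric value with log(1 + x) to compress the long right tail."""
    unique = {tuple(r) for r in rows}
    return [[math.log1p(v) for v in r] for r in unique]
```

On a right-skewed metric such as lines of code, the log transform typically lowers the skewness, which is the effect the study reports.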
Semantic segmentation and its understanding is a demanding task, not only in computer vision but also in earth-science research. Semantic segmentation decomposes compound structures into single elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object. It is a method for automatically labeling and clustering point clouds. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data representation, and working with 3D data raises many challenges, such as the small number, low resolution, and limited accuracy of available three-dimensional datasets. Deep learning now is the po
Currently, with the huge increase in modern communication and network applications, the speed of transforming data and storing it in compact forms are pressing issues. An enormous number of images are stored and shared among people every moment, especially in the social media realm. Unfortunately, even with these marvelous applications, the limited size of transmitted data is still the main restriction, since essentially all of these applications use the well-known Joint Photographic Experts Group (JPEG) standard techniques; likewise, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with Different
Accurate emotion categorization is an important and challenging task in the computer vision and image processing fields. A facial emotion recognition system comprises three important stages: pre-processing and face area allocation, feature extraction, and classification. In this study, a new system is presented based on a set of geometric features (distances and angles) derived from the basic facial components, such as the eyes, eyebrows, and mouth, using analytical geometry calculations. For the classification stage, a feed-forward neural network classifier is used. For evaluation purposes, the standard "JAFFE" database was used as test material; it holds face samples for seven basic emotions. The results of the conducted tests indicate that the use of the suggested distances, angles
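The distance and angle features mentioned above can be computed with elementary analytical geometry, as sketched below. The landmark names and coordinates are hypothetical placeholders; a real system would obtain them from a face-landmark detector.

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_at(vertex, a, b):
    """Angle in degrees at `vertex`, formed by the rays toward a and b."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Hypothetical landmarks for illustration only.
landmarks = {"left_eye": (30, 40), "right_eye": (70, 40), "mouth": (50, 80)}
eye_dist = distance(landmarks["left_eye"], landmarks["right_eye"])
mouth_angle = angle_at(landmarks["mouth"], landmarks["left_eye"], landmarks["right_eye"])
```

Feature vectors built from such distances and angles are then fed to the classifier.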
Recently, personal recommender systems have spread quickly because of their role in helping users make decisions. Location-based recommender systems are one type of these systems; they work by sensing a person's location and suggesting the best services in his area. Unfortunately, systems that depend on explicit user ratings suffer from cold-start and sparsity problems. The proposed system depends on the current user position and on review analysis to recommend a hotel. The hybrid sentiment analyzer consists of a supervised sentiment analyzer as the first stage and a lexicon sentiment analyzer as the second. This system contributes beyond the plain sentiment analyzer by extracting the aspects that users have ment
Visible light communication (VLC) is an upcoming wireless technology for next-generation, high-speed data transmission. It has the potential for capacity enhancement due to its characteristically large bandwidth. Concerning signal processing and a suitable transceiver design for the VLC application, an amplification-based optical transceiver is proposed in this article. The transmitter consists of a driver and a laser diode as the light source, while the receiver contains a photodiode and a signal-amplifying circuit. The design model is proposed for its simplicity, replacing the trans-impedance and transconductance circuits of conventional modules with a simple amplification circuit and an interface converter. Th
A PC-based controller is an approach to controlling systems with real-time parameters by adjusting a selected manipulated variable to accomplish the control objectives. Shell-and-tube heat exchangers have been identified as process models that are inherently nonlinear and hard to control due to the unavailability of exact model descriptions. A PC with an analogue input/output card is used as the controller that brings the heat exchanger's hot stream to the desired temperature.
The control methodology uses a four-speed pump as the manipulated variable to control the temperature of the hot stream, cooling it to the desired temperature.
In this work, the dynamics of a cross-flow shell-and-tube heat exchanger are modeled from step changes in the cold water f
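A four-speed pump as the manipulated variable implies a discrete control law: the controller maps the temperature error to one of four pump settings. The sketch below illustrates the idea only; the error thresholds and speed labels are assumptions, not values from the paper.

```python
def select_pump_speed(temp_measured, temp_setpoint, speeds=(0, 1, 2, 3)):
    """Map the hot-stream temperature error to one of four pump speeds.
    A larger positive error (stream too hot) selects a faster
    cooling-water flow. Thresholds (in degrees) are illustrative."""
    error = temp_measured - temp_setpoint
    if error <= 0:
        return speeds[0]   # at or below setpoint: minimum flow
    if error <= 2:
        return speeds[1]
    if error <= 5:
        return speeds[2]
    return speeds[3]       # far above setpoint: maximum flow
```

In a real loop, this function would be called each sampling period with the temperature read through the analogue input/output card.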
Steganography conceals information by embedding data within cover media; its techniques fall into two main domains: spatial and frequency. This paper presents two distinct methods. The first operates in the spatial domain and utilizes the least significant bits (LSBs) to conceal a secret message. The second operates in the frequency domain and hides the secret message within the LSBs of the middle-frequency band of the discrete cosine transform (DCT) coefficients. Both methods enhance obfuscation by utilizing two layers of randomness, random pixel embedding and random bit embedding within each pixel, unlike other available methods that embed data in sequential order with a fixed amount.
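The spatial-domain idea, LSB embedding at key-seeded random pixel positions rather than in sequential order, can be sketched as below. This is a simplified illustration of the randomness layer, not the paper's exact scheme: the key-seeded shuffle stands in for whatever pseudo-random selection the authors use.

```python
import random

def embed_lsb(pixels, message_bits, key):
    """Write message bits into the LSBs of pixels visited in a
    key-seeded random order (random pixel embedding)."""
    out = list(pixels)
    order = list(range(len(out)))
    random.Random(key).shuffle(order)
    for bit, idx in zip(message_bits, order):
        out[idx] = (out[idx] & ~1) | bit   # overwrite least significant bit
    return out

def extract_lsb(pixels, n_bits, key):
    """Recover n_bits by revisiting pixels in the same key-seeded order."""
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)
    return [pixels[idx] & 1 for idx in order[:n_bits]]
```

Because each pixel changes by at most one grey level, the stego image stays visually close to the cover, while an attacker without the key cannot read the bits in order.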
This research includes the synthesis, characterization, and investigation of the liquid crystalline properties of new rod-shaped liquid crystal compounds, 1,4-phenylene bis(2-(5-(4-alkoxybenzylidene)-2,4-dioxothiazolidin-3-yl)acetate). Thiazolidine-2,4-dione (I) was prepared by the reaction of thiourea with chloroacetic acid and water in the presence of concentrated hydrochloric acid. The n-alkoxybenzaldehydes (II)n were synthesized by reacting 4-hydroxybenzaldehyde with n-alkyl bromides and potassium hydroxide, and then compound (I) was reacted with (II)n in the presence of piperidine to produce compounds (III)n. Also, hydroquinone was converted into the corresponding compound (IV) by refluxing with two moles of chloroacetyl chloride in pyr