A fault is an error that affects system behaviour. A software metric is a value that represents the degree to which software processes work properly and where faults are more likely to occur. In this research, we study the effects of redundancy removal and threshold-based log transformation on identifying fault-prone classes of software. The study also compares the metric values of the original datasets with those obtained after redundancy removal and log transformation. An e-learning dataset and a system dataset were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets, and from 1%-10% and 0%-4% after redundancy removal and log transformation, respectively. These results directly affected the number of detected classes, which ranged between 1-20 and 1-7 for the original datasets, and between 1-7 and 0-3 after redundancy removal and log transformation. The skewness of the data decreased after applying the proposed model. The classes identified as fault-prone need more attention in subsequent versions, either to reduce the fault ratio or to refactor them, so as to improve the quality and performance of the current version of the software.
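As an illustration of the transformation step described above, the short sketch below shows how a log transformation reduces the skewness of a right-skewed metric distribution. The metric values are synthetic placeholders (log-normal draws), not data from the study:

```python
import math
import random
import statistics

def skewness(xs):
    """Sample skewness: mean of ((x - mean) / sd)^3."""
    m = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * sd ** 3)

random.seed(0)
# Hypothetical class-level metric values with a long right tail,
# standing in for size/complexity metrics of software classes.
metrics = [math.exp(random.gauss(3.0, 1.0)) for _ in range(500)]

# log(x + 1) transformation, a common choice before fault-proneness analysis
transformed = [math.log(x + 1.0) for x in metrics]

print(f"skewness before: {skewness(metrics):.2f}")
print(f"skewness after:  {skewness(transformed):.2f}")
```

The transformed values are far less skewed, which is the effect the abstract reports for the proposed model.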
The aim of this research is to design educational software based on Web Quests and to measure its effectiveness in developing the information-search skills of students in the Department of Educational and Psychological Sciences. The research is experimental in nature, using pre-post measurement. The sample consisted of (91) male and female students from the second year in the Department of Educational and Psychological Sciences, divided into two groups: an experimental group of (47) students who studied with the educational software, and a control group of (44) students who followed the traditional method. The researchers prepared a list of information-search skills and they …
Many developments have taken place in Service Oriented Architecture models, but without detailed treatment of their technology and requirements. This paper presents a new Service Oriented Architecture (SOA) for Service Enterprises (SE) according to their demands. The goal is therefore to build a complete new architecture model for SOA methodologies, in line with current technology and business requirements, that could be used in a real enterprise environment. To do this, new types of services and a new model, called the Lego Model, are explained in detail, and the results of the proposed architecture model are analyzed. Consequently, the complications are reduced, supporting the business domains of the enterprise and helping to start associating SOA methodologies in their corporate s…
RESRAD is a computer model designed to estimate risks and radiation doses from residual radioactive materials in soil. Thirty-seven soil samples were collected from the area around the berms of the Al-Tuwaitha site, and two background samples were taken from an area about 3 km north of the site. The samples were measured by a gamma-ray spectrometry system using a high-purity germanium (HPGe) detector. The measurements showed three areas contaminated with ²³⁸U and ²³⁵U in the study area. Two scenarios were applied to each contaminated area to estimate the dose using the RESRAD (onsite) version 7.0 code. The total dose of the resident-farmer scenario for areas A, B and C was 0.854, 0.033 and 2.15×10⁻³ mSv·yr⁻¹, respectively. Whi…
In this study, we present different methods of estimating the fuzzy reliability of a two-parameter Rayleigh distribution: the maximum likelihood estimator, the median first-order statistics estimator, the quartile estimator, the L-moment estimator, and a mixed Thompson-type estimator. The mean squared error (MSE) is used as a criterion for comparing the considered methods, using simulation over different parameter values and sample sizes. The simulation results show that the fuzzy values perform better than the crisp values for all sample sizes, and that the fuzzy reliability estimates of the maximum likelihood method and the mixed Thompson method outperform the other methods in the sense of MSE, so that …
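The abstract compares several estimators by simulated MSE. As a minimal sketch of that workflow, the example below estimates Rayleigh reliability by maximum likelihood for the one-parameter (scale-only) special case, without the fuzziness layer, and measures its MSE by Monte Carlo; the parameter values, sample size, and replication count are illustrative assumptions:

```python
import math
import random

def rayleigh_sample(sigma, n, rng):
    # Inverse-CDF sampling: x = sigma * sqrt(-2 ln U), U in (0, 1]
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random())) for _ in range(n)]

def mle_sigma(xs):
    # Closed-form MLE of the Rayleigh scale: sigma^2 = sum(x^2) / (2n)
    return math.sqrt(sum(x * x for x in xs) / (2 * len(xs)))

def reliability(t, sigma):
    # R(t) = P(X > t) = exp(-t^2 / (2 sigma^2))
    return math.exp(-t * t / (2.0 * sigma * sigma))

rng = random.Random(42)
true_sigma, t, reps, n = 2.0, 1.5, 2000, 50  # assumed simulation settings

# Monte Carlo MSE of the plug-in reliability estimate
mse = 0.0
for _ in range(reps):
    xs = rayleigh_sample(true_sigma, n, rng)
    mse += (reliability(t, mle_sigma(xs)) - reliability(t, true_sigma)) ** 2
mse /= reps
print(f"MLE reliability MSE at t={t}: {mse:.5f}")
```

The paper's comparison repeats this loop for each estimator and each (parameter, sample-size) setting.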
Wireless channels are typically much noisier than wired links and are subject to fading due to multipath propagation, which results in inter-symbol interference (ISI) and hence a high error rate. Adaptive modulation is a powerful technique for improving the trade-off between spectral efficiency and bit error rate (BER). In order to adjust the transmission rate, channel state information (CSI) is required at the transmitter side.
In this paper, the performance enhancement obtained by using linear prediction along with channel estimation to track channel variations under adaptive modulation is examined. The simulation results show that channel estimation is sufficient for low Doppler frequency shifts (<30 Hz), while channel prediction is much more suited at …
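The combination of threshold-based adaptive modulation and prediction of the channel state can be sketched as follows; the SNR thresholds and the toy first-order predictor are illustrative assumptions, not the scheme or operating points used in the paper:

```python
# Illustrative SNR thresholds (dB) and rates (bits/symbol); assumed values.
THRESHOLDS = [
    (18.0, 6, "64-QAM"),
    (10.0, 4, "16-QAM"),
    (4.0, 2, "QPSK"),
    (0.0, 1, "BPSK"),
]

def select_modulation(snr_db):
    """Pick the highest-rate scheme whose SNR threshold is met."""
    for threshold, bits, name in THRESHOLDS:
        if snr_db >= threshold:
            return bits, name
    return 0, "no transmission"

def predict_snr(history):
    """Toy first-order linear extrapolation of the next SNR sample,
    a stand-in for the linear prediction used to track fast fading."""
    if len(history) < 2:
        return history[-1]
    return 2.0 * history[-1] - history[-2]

snr_track = [12.0, 11.0, 10.0]            # channel degrading over time
stale = select_modulation(snr_track[-1])  # decision on the last estimate
ahead = select_modulation(predict_snr(snr_track))  # decision on the prediction
print("stale estimate ->", stale)
print("predicted SNR  ->", ahead)
```

On a degrading channel the stale estimate still selects 16-QAM, while the predictor anticipates the drop below the 16-QAM threshold and backs off to QPSK, which is the kind of gain prediction offers when the channel changes between estimation and transmission.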
Channel estimation and synchronization are considered the most challenging issues in Orthogonal Frequency Division Multiplexing (OFDM) systems. OFDM is highly sensitive to synchronization errors, which reduce subcarrier orthogonality and lead to significant performance degradation. Synchronization errors cause two problems: symbol time offset (STO), which produces inter-symbol interference (ISI), and carrier frequency offset (CFO), which results in inter-carrier interference (ICI). The aim of this research is to simulate comb-type pilot-based channel estimation for an OFDM system, showing the effect of the number of pilots on channel estimation performance, and to propose a modified estimation method for STO with less numb…
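A minimal sketch of comb-type pilot channel estimation, assuming least-squares estimates at the pilot subcarriers followed by linear interpolation (one common choice); the subcarrier count, pilot positions, and smooth noiseless test channel are made-up values for illustration:

```python
def ls_estimate_at_pilots(rx, tx_pilots, pilot_idx):
    """Least-squares channel estimate H_hat = Y / X at each pilot subcarrier."""
    return {k: rx[k] / tx_pilots[k] for k in pilot_idx}

def interpolate(h_pilots, pilot_idx, n_sc):
    """Linearly interpolate the channel between comb pilots."""
    h = [0j] * n_sc
    for a, b in zip(pilot_idx, pilot_idx[1:]):
        for k in range(a, b + 1):
            w = (k - a) / (b - a)
            h[k] = (1 - w) * h_pilots[a] + w * h_pilots[b]
    return h

n_sc = 16
pilot_idx = [0, 5, 10, 15]                # comb pilots covering the band edges
true_h = [(1.0 + 0.05 * k) + 0.1j * k for k in range(n_sc)]  # assumed smooth channel
tx_pilots = {k: 1.0 + 1.0j for k in pilot_idx}               # known pilot symbols
rx = {k: true_h[k] * tx_pilots[k] for k in pilot_idx}        # noiseless pilot observations

h_hat = interpolate(ls_estimate_at_pilots(rx, tx_pilots, pilot_idx), pilot_idx, n_sc)
err = max(abs(h_hat[k] - true_h[k]) for k in range(n_sc))
print(f"max estimation error: {err:.2e}")
```

Increasing the number of pilots shrinks the interpolation gaps, which is the pilot-count effect the simulation in the paper studies.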
In this paper, the researcher suggests using a genetic algorithm to estimate the parameters of the Wiener degradation process, which is used to estimate the reliability of highly reliable products, since it is difficult to estimate their reliability using traditional techniques that depend only on product failure times. Monte Carlo simulation was applied to demonstrate the efficiency of the proposed method in estimating the parameters, and it was compared with maximum likelihood estimation. The results show that the genetic algorithm method is the best under the AMSE comparison criterion, then the reliab…
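The underlying model can be sketched as follows. This example simulates Wiener degradation increments and recovers the drift and diffusion parameters with the closed-form maximum likelihood estimates (the baseline the genetic algorithm is compared against), then evaluates reliability as the inverse-Gaussian first-passage survival probability; the parameter values, failure threshold, and sample size are illustrative assumptions:

```python
import math
import random

def simulate_increments(mu, sigma, dt, n, rng):
    """Wiener degradation increments: dX ~ N(mu*dt, sigma^2*dt)."""
    return [rng.gauss(mu * dt, sigma * math.sqrt(dt)) for _ in range(n)]

def mle(increments, dt):
    """Closed-form MLEs of drift and diffusion from equally spaced increments."""
    n = len(increments)
    mu_hat = sum(increments) / (n * dt)
    var_hat = sum((d - mu_hat * dt) ** 2 for d in increments) / (n * dt)
    return mu_hat, math.sqrt(var_hat)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reliability(t, mu, sigma, threshold):
    """P(degradation first crosses `threshold` after time t): inverse-Gaussian
    survival function for a Wiener process with positive drift."""
    s = sigma * math.sqrt(t)
    return (phi((threshold - mu * t) / s)
            - math.exp(2.0 * mu * threshold / sigma ** 2)
              * phi(-(threshold + mu * t) / s))

rng = random.Random(7)
inc = simulate_increments(mu=0.5, sigma=1.0, dt=1.0, n=200, rng=rng)
mu_hat, sigma_hat = mle(inc, dt=1.0)
print(f"mu_hat={mu_hat:.3f}  sigma_hat={sigma_hat:.3f}")
print(f"R(15) at threshold 10: {reliability(15.0, mu_hat, sigma_hat, 10.0):.3f}")
```

This is why degradation data are valuable for highly reliable products: reliability is computed from the estimated degradation parameters rather than from observed failures, which may be too rare to support traditional failure-time methods.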