In this paper, the error distribution function of the single index model is estimated by the empirical distribution function and by the kernel distribution function. The refined minimum average variance estimation (RMAVE) method is used to fit the single index model. Simulation experiments compare the two estimators of the error distribution function across different sample sizes; the results show that the kernel distribution function outperforms the empirical distribution function.
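To make the two estimators concrete, the following is a minimal sketch (not the paper's implementation) contrasting the empirical distribution function with a kernel-smoothed distribution function on synthetic residuals; the Gaussian kernel and the rule-of-thumb bandwidth are assumptions.

```python
import numpy as np
from scipy.stats import norm

def ecdf(residuals, x):
    """Empirical distribution function: fraction of residuals <= x."""
    residuals = np.asarray(residuals)
    return np.mean(residuals[:, None] <= np.asarray(x)[None, :], axis=0)

def kernel_cdf(residuals, x, h=None):
    """Kernel (smoothed) distribution function with a Gaussian kernel:
    F_hat(x) = (1/n) * sum_i Phi((x - e_i) / h)."""
    residuals = np.asarray(residuals)
    if h is None:  # simple rule-of-thumb bandwidth (an assumption)
        h = 1.06 * residuals.std(ddof=1) * len(residuals) ** (-1 / 5)
    return norm.cdf((np.asarray(x)[None, :] - residuals[:, None]) / h).mean(axis=0)

# Toy comparison on synthetic stand-ins for model residuals.
rng = np.random.default_rng(0)
e = rng.normal(0, 1, size=100)
grid = np.linspace(-3, 3, 7)
true_F = norm.cdf(grid)
print("ECDF   squared error:", np.mean((ecdf(e, grid) - true_F) ** 2))
print("Kernel squared error:", np.mean((kernel_cdf(e, grid) - true_F) ** 2))
```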
The estimation of the regular regression model requires several assumptions to be satisfied, such as linearity. One problem arises when the regression curve is partitioned into two (or more) parts that are then joined at threshold point(s); this situation is regarded as a violation of the linearity of regression. The multiphase regression model has therefore received increasing attention as an alternative approach that describes changes in the behavior of a phenomenon through threshold point estimation. The maximum likelihood estimator (MLE) has been used for both the model and the threshold point estimation. However, the MLE is not robust to violations such as the presence of outliers or heavy-tailed error distributions. The main goal of t
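As an illustration of threshold-point estimation in a two-phase model, here is a hedged sketch that profiles the threshold by grid search over least-squares fits; this is a generic profiling approach, not the paper's MLE-based procedure.

```python
import numpy as np

def two_phase_sse(x, y, tau):
    """Total SSE of a two-phase model with threshold tau:
    a separate line is fitted on {x <= tau} and on {x > tau}."""
    sse = 0.0
    for mask in (x <= tau, x > tau):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        r = y[mask] - X @ beta
        sse += r @ r
    return sse

def estimate_threshold(x, y):
    """Profile the threshold over interior sample points (grid search)."""
    candidates = np.sort(x)[3:-3]  # keep a few points in each regime
    return min(candidates, key=lambda t: two_phase_sse(x, y, t))

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x <= 4, 1 + 2 * x, 13 - x) + rng.normal(0, 0.5, 200)
print("estimated threshold:", estimate_threshold(x, y))  # near the true value 4
```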
This paper deals with constructing a mixed probability distribution from mixing exponential
In this paper, we present a multiple-bit error correction coding scheme based on an extended Hamming product code combined with type-II HARQ, using shared resources, for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption, respectively, with only a small increase in decoder delay compared to the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and a reduction of up to 58% in total power consumption compared to the other err
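For concreteness, the sketch below implements only the elementary building block: an extended Hamming (8,4) SECDED codeword with single-error correction and double-error detection. The full product-code construction, the HARQ protocol, and the hardware sharing are not shown.

```python
def hamming84_encode(d):
    """Encode 4 data bits as an extended Hamming(8,4) SECDED codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                     # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                     # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                     # covers positions 4, 5, 6, 7
    word = [p1, p2, d1, p3, d2, d3, d4]   # standard Hamming(7,4) layout
    word.append(sum(word) % 2)            # overall parity bit -> (8,4)
    return word

def hamming84_decode(w):
    """Return (data_bits, status), status in {'ok', 'corrected', 'double'}."""
    syndrome = 0
    for i, bit in enumerate(w[:7], start=1):   # XOR positions of set bits
        if bit:
            syndrome ^= i
    overall = sum(w) % 2
    if syndrome and overall:                    # single error: locate and fix
        w = list(w)
        w[syndrome - 1] ^= 1
        return [w[2], w[4], w[5], w[6]], "corrected"
    if syndrome and not overall:                # two errors: detect only
        return None, "double"
    return [w[2], w[4], w[5], w[6]], "ok"

cw = hamming84_encode([1, 0, 1, 1])
cw[3] ^= 1                                      # flip one bit "in transit"
print(hamming84_decode(cw))                     # data recovered, 'corrected'
```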
Rock engineers widely use the uniaxial compressive strength (UCS) of rocks in designing surface and underground structures. The procedure for measuring this rock strength has been standardized by both the International Society for Rock Mechanics (ISRM) and the American Society for Testing and Materials (ASTM) (Akram and Bakar, 2007). In this paper, an experimental study was performed to correlate the Point Load Index (Is(50)) and Pulse Wave Velocity (Vp) with the Unconfined Compressive Strength (UCS) of rocks. The effect of several parameters was studied. The point load test, Unconfined Compressive Strength (UCS) test, and Pulse Wave Velocity (Vp) measurement were used to test several rock samples of different diameters. The predicted e
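As a sketch of how such correlations are typically fitted, the following ordinary least-squares fit of UCS against Is(50) and Vp uses placeholder numbers, not the study's measurements.

```python
import numpy as np

# Placeholder measurements (illustrative only, not the study's data):
# point load index Is(50) in MPa, pulse wave velocity Vp in km/s, UCS in MPa.
is50 = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7])
vp   = np.array([3.2, 4.1, 2.9, 4.6, 3.8, 4.3])
ucs  = np.array([48.0, 79.0, 41.0, 95.0, 66.0, 86.0])

# Fit UCS = a*Is(50) + b*Vp + c by ordinary least squares.
X = np.column_stack([is50, vp, np.ones_like(is50)])
(a, b, c), *_ = np.linalg.lstsq(X, ucs, rcond=None)

pred = X @ np.array([a, b, c])
r2 = 1 - np.sum((ucs - pred) ** 2) / np.sum((ucs - ucs.mean()) ** 2)
print(f"UCS = {a:.2f}*Is50 + {b:.2f}*Vp + {c:.2f},  R^2 = {r2:.3f}")
```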
In data transmission, a change in a single bit of the received data may lead to misunderstanding or even a disaster. Every bit of the transmitted information has high priority, especially information such as the address of the receiver. Detecting every single-bit change is therefore a key issue in the data transmission field.
The ordinary single-parity detection method can detect an odd number of errors efficiently but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, show better results but still fail to cope with an increasing number of errors.
Two novel methods were suggested to detect binary bit-change errors when transmitting data over a noisy medium. Those methods were: 2D-Checksum me
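To make the baseline methods concrete, here is a minimal sketch of two-dimensional parity checking; the block shape and data are placeholders, and the suggested 2D-Checksum methods themselves are not reproduced here.

```python
import numpy as np

def parity_2d(block):
    """Row and column parity bits for a 2-D block of bits."""
    block = np.asarray(block)
    return block.sum(axis=1) % 2, block.sum(axis=0) % 2

def check_2d(block, row_par, col_par):
    """Recompute parities; a single flipped bit shows up as exactly one
    mismatching row AND one mismatching column, which locates it."""
    r, c = parity_2d(block)
    bad_rows = np.flatnonzero(r != row_par)
    bad_cols = np.flatnonzero(c != col_par)
    if bad_rows.size == 0 and bad_cols.size == 0:
        return "no error detected"
    if bad_rows.size == 1 and bad_cols.size == 1:
        return f"single-bit error at ({bad_rows[0]}, {bad_cols[0]})"
    return "multiple-bit error detected"

data = np.array([[1, 0, 1, 1],
                 [0, 1, 1, 0],
                 [1, 1, 0, 0]])
rp, cp = parity_2d(data)
data[1, 2] ^= 1                    # corrupt one bit "in transit"
print(check_2d(data, rp, cp))      # -> single-bit error at (1, 2)
```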
Every phenomenon involves several variables. By studying these variables, we find a mathematical formula for the joint distribution; the copula is a useful and effective tool for measuring the amount of correlation. The survival function was used to measure the relationship between age and the level of creatinine in the blood of the person. The SPSS program was used to extract the influential variables from a group of variables using factor analysis, and then the Clayton copula function, which constructs shared bivariate distributions from multivariate distributions, was applied: the bivariate distribution was calculated, and then the survival function value was calculated for a sample of size 50 drawn from Yarmouk Ho
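The Clayton copula has the closed form C(u, v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta) for theta > 0; a minimal sketch of the copula and the corresponding bivariate survival function follows, with placeholder marginal values and dependence parameter.

```python
def clayton(u, v, theta):
    """Clayton copula C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_survival(u, v, theta):
    """Bivariate survival via the identity
    S(x, y) = 1 - u - v + C(u, v), with u = F_X(x), v = F_Y(y)."""
    return 1.0 - u - v + clayton(u, v, theta)

# Placeholder example: marginal CDF values for age and blood creatinine level.
u, v = 0.6, 0.4     # F_age(x), F_creatinine(y); illustrative values only
theta = 2.0         # dependence parameter (an assumption)
print("C(u,v) =", clayton(u, v, theta))
print("S(x,y) =", joint_survival(u, v, theta))
```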
The two parameters of the Exponential-Rayleigh distribution were estimated using the maximum likelihood estimation (MLE) method for progressively censored data. Estimated values for the two scale parameters were obtained using real COVID-19 data taken from the Iraqi Ministry of Health and Environment, AL-Karkh General Hospital. The Chi-square test was then utilized to determine whether the sample (data) fit the Exponential-Rayleigh (ER) distribution. The nonlinear membership function (s-function) was employed to find fuzzy numbers for these parameter estimators, and the ranking function was then used to transform the fuzzy numbers into crisp numbers. Finally, the mean square error (MSE) was used to compare the outcomes of the survival
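As a hedged sketch of the likelihood step only: the code below numerically maximizes a complete-sample log-likelihood, assuming the linear-failure-rate form with survival function exp(-(lambda*x + theta*x^2/2)), which mixes an exponential and a Rayleigh hazard. The paper's exact Exponential-Rayleigh parameterization and the progressive-censoring terms are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood under the assumed form:
    hazard lam + theta*x, cumulative hazard lam*x + theta*x**2/2."""
    lam, theta = params
    if lam <= 0 or theta <= 0:
        return np.inf
    hazard = lam + theta * x
    cum_hazard = lam * x + 0.5 * theta * x ** 2
    return -(np.log(hazard).sum() - cum_hazard.sum())

# Synthetic lifetimes by inverse transform (placeholder data, not the COVID-19 sample).
rng = np.random.default_rng(2)
lam_true, theta_true = 0.5, 0.2
q = -np.log1p(-rng.uniform(size=500))
# Solve lam*x + theta*x**2/2 = q for the positive root x.
x = (-lam_true + np.sqrt(lam_true ** 2 + 2 * theta_true * q)) / theta_true

fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("MLE (lam, theta):", fit.x)   # should be near (0.5, 0.2)
```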
The present paper concerns the problem of estimating the reliability of a system in the stress-strength model, under the assumption that stress and strength are independent but not identically distributed and both follow the Lomax distribution. Various shrinkage estimation methods were employed in this context, based on maximum likelihood, the method of moments, and shrinkage weight factors, using Monte Carlo simulation. Comparisons among the suggested estimation methods were made using the mean absolute percentage error criterion, implemented in a MATLAB program.
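To illustrate the quantity being estimated (not the paper's shrinkage estimators), here is a minimal Monte Carlo sketch of the stress-strength reliability R = P(Y < X) for independent, non-identical Lomax variables; all parameter values are placeholders.

```python
import numpy as np

def lomax_sample(alpha, lam, size, rng):
    """Lomax(alpha, lam) via inverse transform: F(x) = 1 - (1 + x/lam)**(-alpha)."""
    u = rng.uniform(size=size)
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

rng = np.random.default_rng(3)
n = 200_000
strength = lomax_sample(alpha=2.0, lam=1.0, size=n, rng=rng)   # X
stress   = lomax_sample(alpha=3.0, lam=1.0, size=n, rng=rng)   # Y, non-identical
R_hat = np.mean(stress < strength)                             # R = P(Y < X)
# With equal scales the closed form is alpha_Y / (alpha_X + alpha_Y) = 0.6.
print("Monte Carlo estimate of R:", R_hat)
```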
Interval methods for the verified integration of initial value problems (IVPs) for ODEs have been used for more than 40 years. For many classes of IVPs, these methods can compute guaranteed error bounds for the flow of an ODE, where traditional methods provide only approximations to a solution. Overestimation, however, is a potential drawback of verified methods: for some problems, the computed error bounds become overly pessimistic, or the integration even breaks down. The dependency problem and the wrapping effect are particular sources of overestimation in interval computations. Berz (see [1]) and his co-workers have developed Taylor model methods, which extend interval arithmetic with symbolic computations. The latter is an ef
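A minimal sketch of the dependency problem: with naive interval arithmetic, evaluating x - x over an interval does not return zero, because the two operands are treated as independent.

```python
class Interval:
    """Minimal interval arithmetic, enough to show the dependency problem."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        # Naive rule: [a, b] - [c, d] = [a - d, b - c]; it forgets that the
        # two operands may be the same variable (the dependency problem).
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
print(x - x)   # prints [-1.0, 1.0], though x - x is exactly 0 for any x in [1, 2]
```

Taylor model methods mitigate exactly this effect: carrying a symbolic polynomial part lets expressions such as x - x cancel exactly, leaving only a small interval remainder.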