The undetected error probability is an important measure for assessing the communication reliability provided by any error coding scheme. Two error coding schemes, namely Joint Crosstalk Avoidance and Triple Error Correction (JTEC) and JTEC with Simultaneous Quadruple Error Detection (JTEC-SQED), provide both crosstalk reduction and multi-bit error correction/detection. The available undetected error probability model yields an upper-bound value that does not give an accurate estimate of the reliability provided. This paper presents an improved mathematical model to estimate the undetected error probability of these two joint coding schemes. According to the decoding algorithm, the errors are classified into patterns and their decoding results are checked for failures. The probabilities of the failing patterns are used to build the new models. The improved models have less than 1% error
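The pattern-based modelling idea can be sketched generically: on a binary symmetric channel, the failure probability is a weighted sum over error-pattern weights of the per-weight failure counts. This is a minimal sketch of that bookkeeping, assuming a binary symmetric channel; the per-weight counts are placeholder inputs, not JTEC's actual enumeration.

```python
def undetected_error_probability(n, failing_counts, p):
    """Failure probability on a binary symmetric channel with bit-error rate p:
    sum over weights w of (number of weight-w error patterns on which the
    decoder fails) * p^w * (1-p)^(n-w).

    n              : codeword length in bits
    failing_counts : dict mapping pattern weight w -> count of failing patterns
                     (obtained by enumerating the decoder's failing patterns;
                     the values passed in are hypothetical, not JTEC's)
    """
    return sum(a_w * p**w * (1.0 - p)**(n - w)
               for w, a_w in failing_counts.items())
```

Because only patterns known to defeat the decoder contribute, the sum is tighter than an upper bound that charges every pattern above the correction radius as a failure.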
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. The compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared to the original signals. The compression ratio is calculated from the size of th
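The Levinson-Durbin recursion named above is standard and can be sketched directly: it solves the Toeplitz normal equations from the signal's autocorrelation, yielding the LP coefficients, the reflection coefficients, and the final prediction error in one pass.

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion for linear prediction.

    r     : autocorrelation sequence r[0..order]
    order : prediction order
    Returns (a, k, err): LP coefficients a (with a[0] = 1), reflection
    coefficients k, and the final prediction error.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        # numerator of the reflection coefficient at step i
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k_i = -acc / err
        k[i - 1] = k_i
        # order update: a_new[j] = a[j] + k_i * a[i-j], j = 1..i
        a[1:i + 1] = a[1:i + 1] + k_i * a[i - 1::-1]
        # prediction error shrinks by the factor (1 - k_i^2)
        err *= (1.0 - k_i * k_i)
    return a, k, err
```

For an AR(1)-like autocorrelation such as r = [1, 0.5, 0.25], the recursion recovers a single nonzero coefficient a[1] = -0.5 and a zero second coefficient, as expected.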
Ensuring reliable data transmission in a Network on Chip (NoC) is one of the most challenging tasks, especially in noisy environments, as crosstalk, interference, and radiation have increased with manufacturers' growing tendency to reduce area, increase frequencies, and reduce voltages. Many Error Control Codes (ECC) have therefore been proposed, with different error detection and correction capacities and various degrees of complexity. The Code with Crosstalk Avoidance and Error Correction (CCAEC) for network-on-chip interconnects uses simple parity check bits as its main technique to achieve high error correction capacity. In this work, this coding scheme corrects up to 12 random errors, representing a high correction capac
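To make the parity-bit idea concrete, here is a generic even-parity sketch: one parity bit per group of data bits, with the check flagging any group containing an odd number of flips. This illustrates the mechanism only; it is not the actual CCAEC construction, whose grouping and correction logic are not reproduced here.

```python
def add_parity_bits(word_bits, group_size):
    """Append one even-parity bit per group of data bits
    (a generic parity sketch, not the exact CCAEC code)."""
    groups = [word_bits[i:i + group_size]
              for i in range(0, len(word_bits), group_size)]
    parity = [sum(g) % 2 for g in groups]
    return word_bits + parity

def check_parity(coded_bits, n_data, group_size):
    """Return one flag per group: True if that group's parity check fails,
    i.e. the group suffered an odd number of bit flips."""
    data, parity = coded_bits[:n_data], coded_bits[n_data:]
    groups = [data[i:i + group_size]
              for i in range(0, n_data, group_size)]
    return [sum(g) % 2 != p for g, p in zip(groups, parity)]
```

Overlapping several such parity groups is what lets a code localize, and hence correct, multiple random errors.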
In this paper, an algorithm for binary codebook design is used in the vector quantization technique to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. Vector quantization (VQ) is used to compress the bitmap (the output of the first method, AMBTC). The binary codebook is generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress all the bitmaps of these images. The bitmap of an image is selected for compression with this codebook based on the criterion of the average bitmap replacement error (ABPRE). This approach is suitable for reducing bit rates
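The AMBTC stage that produces the bitmap can be sketched in a few lines: each block is reduced to a binary bitmap (pixel above or below the block mean) plus two reconstruction levels. This is a minimal textbook sketch of AMBTC, not the paper's full pipeline with the VQ codebook.

```python
import numpy as np

def ambtc_block(block):
    """Encode one image block with AMBTC: a binary bitmap plus
    a low and a high reconstruction level."""
    m = block.mean()
    bitmap = block >= m
    q = bitmap.sum()
    if q == 0 or q == block.size:      # flat block: one level suffices
        return bitmap, m, m
    high = block[bitmap].mean()        # mean of pixels at/above the mean
    low = block[~bitmap].mean()        # mean of pixels below the mean
    return bitmap, low, high

def ambtc_decode(bitmap, low, high):
    """Reconstruct the block from its bitmap and two levels."""
    return np.where(bitmap, high, low)
```

It is this per-block bitmap that the proposed VQ codebook then replaces with a close code vector, chosen under the average bitmap replacement error criterion.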
The control of prostheses and their complexity is one of the greatest challenges limiting amputees' wide use of upper-limb prostheses. The main challenges include the difficulty of extracting signals for controlling the prostheses, the limited number of degrees of freedom (DoF), and the prohibitive cost of complex control systems. In this study, a real-time hybrid control system based on electromyography (EMG) and voice commands (VC) is designed to render the prosthesis more dexterous, with the ability to accomplish an amputee's daily activities proficiently. The voice and EMG systems were combined in three proposed hybrid strategies, each with a different number of movements depending on the combination protocol between voic
In this paper, some estimators for the unknown shape parameter and reliability function of the Basic Gompertz distribution were obtained, such as the maximum likelihood estimator and some Bayesian estimators under the squared-log-error loss function using Gamma and Jeffreys priors. A Monte Carlo simulation was conducted to compare the performance of all estimates of the shape parameter and reliability function, based on mean squared error (MSE) and integrated mean squared error (IMSE), respectively. Finally, a discussion is provided to illustrate the results, which are summarized in tables.
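For the maximum likelihood estimator, a closed form exists under a common one-parameter "basic" Gompertz form; this sketch assumes the parameterization f(t) = θ e^t exp(-θ(e^t - 1)), t > 0, which may differ from the paper's exact definition. Maximizing the log-likelihood n log θ + Σtᵢ - θ Σ(e^{tᵢ} - 1) in θ gives θ̂ = n / Σ(e^{tᵢ} - 1).

```python
import numpy as np

def gompertz_mle_shape(t):
    """MLE of the shape parameter theta of the Basic Gompertz distribution,
    assuming f(t) = theta * exp(t) * exp(-theta*(e^t - 1)), t > 0
    (an assumed parameterization): theta_hat = n / sum(exp(t_i) - 1)."""
    t = np.asarray(t, dtype=float)
    return t.size / np.sum(np.expm1(t))

def gompertz_reliability(t, theta):
    """Reliability (survival) function R(t) = exp(-theta*(e^t - 1))
    under the same assumed parameterization."""
    return np.exp(-theta * np.expm1(t))
```

The Bayesian estimators under the squared-log-error loss with Gamma and Jeffreys priors have their own closed forms, which depend on the paper's prior hyperparameters and are not reproduced here.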
In this paper, the effect of changes in bank deposits on the money supply in Iraq was studied by estimating an error correction model (ECM) for monthly time-series data over the period 2010-2015. The Phillips-Perron test was used to test stationarity, and the Engle-Granger procedure was used to test for cointegration. Cubic spline and local polynomial estimators were used to estimate the regression function. The results show that the local polynomial estimator was better than the cubic spline at the first level of cointegration.
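The two-step Engle-Granger procedure can be sketched with plain numpy: regress one series on the other, then run a Dickey-Fuller regression on the residuals. This is a simplified sketch (no lag augmentation, no deterministic trend); the statistic must be compared against Engle-Granger critical values, not standard Dickey-Fuller tables.

```python
import numpy as np

def engle_granger_stat(y, x):
    """Two-step Engle-Granger test statistic (simplified sketch).

    Step 1: OLS of y on x gives the candidate cointegrating residual e.
    Step 2: Dickey-Fuller regression of diff(e) on lagged e (no constant)
    gives the t-statistic on the lag coefficient; strongly negative values
    indicate cointegration.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                      # step-1 residuals
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)        # DF slope estimate
    resid = de - rho * lag
    s2 = resid @ resid / (len(de) - 1)
    se = np.sqrt(s2 / (lag @ lag))
    return rho / se                       # DF t-statistic on e[t-1]
```

Only once cointegration is established does the ECM representation of the deposits-money-supply relationship become valid.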
Breast cancer has received much attention in recent years, as it is one of the complex diseases that can threaten people's lives. It can be detected from the levels of secreted proteins in the blood. In this project, we developed a method for finding a threshold to classify the probability of being affected in a population, based on the levels of the related proteins in relatively small case-control samples. We applied the method to simulated and real data. The results showed that the method was accurate in estimating the probability of being diseased in both the simulated and real data. Moreover, we were able to calculate the sensitivity and specificity under the null hypothesis of our research question of being diseased o
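The threshold-classification step and the resulting sensitivity/specificity are standard quantities and can be sketched generically; this is not the paper's threshold-finding estimator, only the evaluation it feeds into.

```python
import numpy as np

def sensitivity_specificity(scores, labels, threshold):
    """Classify by thresholding a biomarker score and report
    sensitivity (true-positive rate among cases) and
    specificity (true-negative rate among controls).

    scores    : protein-level scores, one per subject
    labels    : 1 for cases, 0 for controls
    threshold : scores >= threshold are classified as diseased
    """
    pred = scores >= threshold
    labels = labels.astype(bool)
    tp = np.sum(pred & labels)
    tn = np.sum(~pred & ~labels)
    sens = tp / labels.sum()
    spec = tn / (~labels).sum()
    return sens, spec
```

Sweeping the threshold over the observed score range and evaluating this pair at each candidate is the usual way such a classification threshold is tuned on case-control data.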
This paper provides a four-stage, fourth-order Trigonometrically Fitted Improved Runge-Kutta (TFIRK4) method for solving oscillatory problems, whose solutions contain an oscillatory character. Compared to the traditional Runge-Kutta method, the Improved Runge-Kutta (IRK) method is a natural two-step method requiring fewer steps. The suggested method extends the fourth-order Improved Runge-Kutta (IRK4) method with trigonometric fitting. The approach is intended to integrate particular initial value problems (IVPs) whose solutions can be represented by the set of functions to which the method is trigonometrically fitted. To improve the method's accuracy, the problem's primary frequency is used. The novel method is more accurate than the conventional Runge-Ku
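For reference, the conventional baseline the paper improves on is the classical fourth-order Runge-Kutta step; its four stages are shown below. The TFIRK4 coefficients themselves (which depend on the fitted frequency) are not reproduced here.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)
    (the conventional method, not the TFIRK4 scheme)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

A trigonometrically fitted variant modifies the weights so that the step integrates solutions built from sin(wt) and cos(wt) exactly at the problem's primary frequency w, which is where the accuracy gain on oscillatory IVPs comes from.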
Decision-making is the central concern of various studies of real-life applications in Operations Research. However, some of these studies are restricted and have not addressed the nature of values in terms of imprecise data (ID). This paper therefore makes two contributions: first, decreasing total costs by classifying subsets of costs; second, improving the optimal solution via the Hungarian assignment approach. The newly proposed method is called the fuzzy sub-Triangular form (FS-TF) under ID. The results obtained are excellent compared with previous methods, including the robust ranking technique, arithmetic operations, the magnitude ranking method, and the centroid ranking method. This
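The assignment objective the Hungarian approach optimizes can be illustrated with a brute-force solver: for small cost matrices, exhaustive search over row-to-column permutations finds the same optimum the Hungarian algorithm finds in O(n³). This sketch shows the objective only, not the fuzzy FS-TF ranking that produces the crisp costs.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Optimal one-to-one assignment of rows to columns by exhaustive
    search (same optimum as the Hungarian algorithm; fine for small n)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):   # perm[i] = column assigned to row i
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost
```

Under imprecise data, the fuzzy costs are first reduced to crisp numbers by a ranking method, and only then is the assignment problem above solved.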