The undetected error probability is an important measure for assessing the communication reliability provided by any error coding scheme. Two error coding schemes, namely Joint crosstalk avoidance and Triple Error Correction (JTEC) and JTEC with Simultaneous Quadruple Error Detection (JTEC-SQED), provide both crosstalk reduction and multi-bit error correction/detection features. The available undetected error probability model yields an upper-bound value which does not give an accurate estimate of the reliability provided. This paper presents an improved mathematical model to estimate the undetected error probability of these two joint coding schemes. According to the decoding algorithm, the errors are classified into patterns and their decoding results are checked for failures. The probabilities of the failing patterns are used to build the new models. The improved models have less than 1% error
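As an illustration of how such a model is assembled, the sketch below (assuming a binary symmetric channel with bit-error probability p, and hypothetical per-weight counts of failing patterns) sums the probabilities of the failing error patterns to obtain the undetected error probability; the actual JTEC/JTEC-SQED pattern counts are not reproduced here.

```python
# Minimal sketch: undetected-error probability from failing-pattern counts,
# assuming a binary symmetric channel (BSC) with bit-error probability p.
# `failing_counts[w]` (hypothetical input) is the number of weight-w error
# patterns that the decoder fails to correct or detect.

def undetected_error_probability(failing_counts, n, p):
    """Sum P(pattern) over all failing patterns, grouped by Hamming weight w."""
    return sum(count * (p ** w) * ((1 - p) ** (n - w))
               for w, count in failing_counts.items())

if __name__ == "__main__":
    failing = {4: 120, 5: 310}   # assumed counts, not taken from the paper
    print(undetected_error_probability(failing, n=38, p=1e-3))
```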
In this paper, our aim is to solve analytically a nonlinear social epidemic model posed as an initial value problem (IVP) of ordinary differential equations. The mathematical social epidemic model under study is applied to an alcohol consumption model in Spain. The economic cost of alcohol consumption in Spain is affected by the amount of alcohol consumed. This paper studies alcohol consumption using some analytical methods. The Adomian decomposition and variational iteration methods have been used to solve the alcohol consumption model. Finally, a comparison between the analytic solutions of the two methods and the previously reported actual values from 1997 to 2007 is carried out using the absolute and
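As a hedged illustration of the Adomian decomposition recursion, the sketch below applies ADM to a toy scalar IVP, y' = y^2 with y(0) = 1 (exact solution 1/(1 - t)), using sympy; the paper's actual alcohol consumption system and its parameters are not reproduced.

```python
# Minimal sketch of the Adomian decomposition method (ADM) for a toy IVP,
# y'(t) = y(t)**2, y(0) = 1; the alcohol consumption model is a system of
# nonlinear ODEs treated with the same recursion.
import sympy as sp

t, lam = sp.symbols("t lambda")

def adomian_polynomial(nonlinearity, components, k):
    """A_k = (1/k!) d^k/dlam^k N(sum lam^i y_i), evaluated at lam = 0."""
    series = sum(lam**i * yi for i, yi in enumerate(components[:k + 1]))
    return sp.diff(nonlinearity(series), lam, k).subs(lam, 0) / sp.factorial(k)

def adm_solve(y0, nonlinearity, terms=6):
    """Recursion: y_0 = y(0), y_{k+1}(t) = integral_0^t A_k(s) ds."""
    components = [sp.Integer(y0)]
    for k in range(terms - 1):
        a_k = adomian_polynomial(nonlinearity, components, k)
        components.append(sp.integrate(a_k, (t, 0, t)))
    return sp.expand(sum(components))

if __name__ == "__main__":
    approx = adm_solve(1, lambda y: y**2, terms=6)
    print(approx)   # 1 + t + t**2 + ... : partial sums of 1/(1 - t)
```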
Monaural source separation is a challenging problem because only a single channel is available while there is an unlimited range of possible solutions. In this paper, a monaural source separation model based on a hybrid deep learning model, which consists of a convolutional neural network (CNN), a dense neural network (DNN) and a recurrent neural network (RNN), is presented. A trial-and-error method is used to optimize the number of layers in the proposed model. Moreover, the effects of the learning rate, optimization algorithms, and the number of epochs on the separation performance are explored. The model was evaluated using the MIR-1K dataset for singing voice separation. Moreover, the proposed approach achi
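A minimal sketch of such a hybrid CNN + RNN + dense architecture is given below in PyTorch; all layer sizes and hyperparameters are assumptions for illustration and do not reproduce the tuned model or its MIR-1K training setup.

```python
# Minimal sketch of a hybrid CNN + RNN + dense separation network: the CNN
# extracts local time-frequency features, the RNN models temporal context,
# and the dense layer predicts a per-frame soft mask for the target source.
import torch
import torch.nn as nn

class HybridSeparator(nn.Module):
    def __init__(self, n_freq=513, hidden=128, channels=16):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU())
        self.rnn = nn.GRU(channels * n_freq, hidden, batch_first=True)
        self.dense = nn.Linear(hidden, n_freq)

    def forward(self, mag):                       # mag: (batch, 1, freq, time)
        b, _, f, t = mag.shape
        feats = self.cnn(mag)                     # (batch, ch, freq, time)
        feats = feats.permute(0, 3, 1, 2).reshape(b, t, -1)
        out, _ = self.rnn(feats)                  # (batch, time, hidden)
        mask = torch.sigmoid(self.dense(out))     # (batch, time, freq)
        return mag.squeeze(1) * mask.transpose(1, 2)   # masked spectrogram

if __name__ == "__main__":
    voice = HybridSeparator()(torch.rand(2, 1, 513, 100))
    print(voice.shape)                            # torch.Size([2, 513, 100])
```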
The purpose of this paper is to develop a hybrid conceptual model for building information modelling (BIM) adoption in facilities management (FM) through the integration of the task-technology fit (TTF) and unified theory of acceptance and use of technology (UTAUT) theories. The study also aims to identify the factors influencing BIM adoption and usage in FM, identify gaps in the existing literature, and provide a holistic picture of recent research on technology acceptance and adoption in the construction industry and the FM sector.
Circular data (circular observations) are periodic data, measured on the unit circle in radians or grades. Because of their cyclical nature they are fundamentally different from the linear data compatible with the mathematical representation of the usual linear regression model. Circular data arise in a wide variety of scientific, medical, economic and social fields. Circular (angular) regression is one of the most important statistical methods for representing such data, and there are several estimation approaches, both parametric and non-parametric. This work therefore employed three angular regression models, two of them parametric and one non-parametric: (DM), maximum likelihood estimation (MLE), and the circular shrinkage mod
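As a generic illustration of parametric circular regression by maximum likelihood, the sketch below fits a Fisher-Lee style circular-linear model with von Mises errors; this is an assumed example and not the (DM) or circular shrinkage estimators studied here.

```python
# Minimal sketch of circular regression by maximum likelihood, assuming von
# Mises errors and the Fisher-Lee mean-direction link mu_i = mu0 + 2*atan(b*x_i).
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

def neg_log_likelihood(params, x, theta):
    mu0, beta, log_kappa = params
    kappa = np.exp(log_kappa)                     # keep concentration positive
    mu = mu0 + 2.0 * np.arctan(beta * x)          # predicted mean direction
    log_i0 = np.log(i0e(kappa)) + kappa           # overflow-safe log I0(kappa)
    return -np.sum(kappa * np.cos(theta - mu) - np.log(2 * np.pi) - log_i0)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
theta = 0.5 + 2 * np.arctan(1.2 * x) + rng.vonmises(0.0, 5.0, size=200)  # synthetic angles

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0], args=(x, theta))
print(fit.x[0], fit.x[1], np.exp(fit.x[2]))       # mu0, beta, kappa estimates
```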
In this study, we compare the LASSO and SCAD methods, two special methods for dealing with models in partial quantile regression. The Nadaraya-Watson kernel was used to estimate the non-parametric part; in addition, the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but the SCAD method was the best according to the mean squared error (MSE) criterion, after estimating the missing data using the mean imputation method.
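A minimal sketch of the non-parametric component is shown below: a Nadaraya-Watson estimate with a normal-reference (rule-of-thumb) bandwidth; the penalized LASSO/SCAD part and the mean-imputation step are not reimplemented here.

```python
# Minimal sketch of Nadaraya-Watson kernel regression with a rule-of-thumb
# bandwidth; the data below are synthetic and purely illustrative.
import numpy as np

def rule_of_thumb_bandwidth(x):
    """h = 1.06 * sigma * n^(-1/5), the classic normal-reference rule."""
    return 1.06 * np.std(x, ddof=1) * len(x) ** (-1 / 5)

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Gaussian-kernel weighted average of y at each evaluation point."""
    u = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * u ** 2)                     # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, 300)
y = np.sin(x) + rng.normal(scale=0.3, size=300)
h = rule_of_thumb_bandwidth(x)
grid = np.linspace(0, 2 * np.pi, 50)
print(h, nadaraya_watson(x, y, grid, h)[:5])
```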
An accurate assessment of the pipes' conditions is required for effective management of trunk sewers. In this paper, a semi-Markov model was developed and tested using the sewer dataset from the Zublin trunk sewer in Baghdad, Iraq, in order to evaluate the future performance of the sewer. For the development of this model, the cumulative waiting-time distribution of the sewers in each condition state was used, derived directly from the sewer condition class and age data. Results showed that the semi-Markov model was inconsistent with the data according to the chi-square (χ²) test, and that the prediction error is due to the lack of data on the sewer waiting times in each condition state, which can be solved by using successive conditi
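As an illustration of the ingredients involved, the sketch below builds an empirical cumulative waiting-time distribution for a single condition state and runs a chi-square goodness-of-fit check against a fitted exponential holding time; both the distribution family and the data are assumptions, not those calibrated in the paper.

```python
# Minimal sketch: empirical waiting-time distribution for one condition state
# and a chi-square goodness-of-fit test against a fitted exponential.
import numpy as np
from scipy import stats

waiting_times = np.array([3, 5, 7, 8, 11, 12, 15, 18, 22, 27, 31, 40])  # years, illustrative

# Empirical CDF of waiting times in this condition state.
t_sorted = np.sort(waiting_times)
ecdf = np.arange(1, len(t_sorted) + 1) / len(t_sorted)

# Bin the data and compare observed vs expected counts under an exponential fit.
rate = 1.0 / waiting_times.mean()
edges = np.array([0, 10, 20, 30, np.inf])
observed, _ = np.histogram(waiting_times, bins=edges)
cdf = 1 - np.exp(-rate * edges)                   # exponential CDF at bin edges
expected = np.diff(cdf) * len(waiting_times)
expected *= observed.sum() / expected.sum()       # match totals for the test
chi2, p_value = stats.chisquare(observed, expected)
print(chi2, p_value)
```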
The aim of this research is to estimate the parameters of the linear regression model with errors following an ARFIMA model, using the wavelet method based on maximum likelihood, approximate generalized least squares, and ordinary least squares. The estimators were applied to real data: the monthly inflation and dollar exchange rate series obtained from the Central Statistical Organization (CSO) for the period 1/2005 to 12/2015. The results showed that (WML) was the most reliable and efficient of the estimators, and also that changing the fractional differencing parameter (d) does not affect the results.
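For intuition about the fractional differencing parameter (d), the sketch below applies the (1 - L)^d filter to a synthetic series; the wavelet-based ML/GLS/OLS estimators themselves are not reimplemented here.

```python
# Minimal sketch of fractional differencing (1 - L)^d, the long-memory
# ingredient of the ARFIMA error model, applied to a synthetic series.
import numpy as np

def frac_diff_weights(d, n):
    """Binomial weights of (1 - L)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - L)^d with truncation at the start of the sample."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])

rng = np.random.default_rng(2)
series = rng.normal(size=120).cumsum()            # stand-in for a monthly series
print(frac_diff(series, d=0.3)[:5])
```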
In this research, some probabilistic characteristic functions (the probability density, characteristic, correlation and spectral density functions) are derived from the smallest variance of the exact solution of a proposed stochastic non-linear Fredholm integral equation of the second kind, found by the Adomian decomposition method (ADM).
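As a numerical illustration of the correlation and spectral density functions mentioned above, the sketch below estimates them from simulated sample paths of a simple stand-in process, since the exact ADM solution of the stochastic Fredholm equation is not reproduced here.

```python
# Minimal sketch: numerical correlation function and spectral density from
# sample paths of a stochastic process (an AR(1)-type stand-in process).
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, dt = 500, 256, 0.05
x = np.zeros((n_paths, n_steps))
for t in range(1, n_steps):                       # simple AR(1)-type recursion
    x[:, t] = 0.9 * x[:, t - 1] + rng.normal(scale=np.sqrt(dt), size=n_paths)

# Correlation function R(tau): average over paths of x(t) * x(t + tau).
max_lag = 40
corr = np.array([np.mean(x[:, :n_steps - k] * x[:, k:]) for k in range(max_lag)])

# Spectral density via the discrete Fourier transform of the correlation function.
spectrum = np.abs(np.fft.rfft(corr))
print(corr[:5], spectrum[:5])
```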
The objective of the current research is to find an optimum design of hybrid laminated moderately thick composite plates under a static constraint. The stacking sequence and ply angles are optimized to achieve minimum deflection for hybrid laminated composite plates consisting of glass and carbon long-fiber reinforcements embedded in an epoxy matrix, with known plate dimensions and loading. The plate analysis adopts first-order shear deformation theory and uses Navier's solution with a genetic algorithm to reach this objective. A program was written in MATLAB to find the best stacking sequence and ply angles giving minimum deflection, and the results were compared with ANSYS.
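A minimal sketch of the genetic-algorithm layup search is given below; the deflection objective is a hypothetical placeholder standing in for the FSDT/Navier deflection solution, and the population settings are assumptions.

```python
# Minimal sketch of a genetic algorithm searching ply angles for minimum
# deflection; `deflection` is a placeholder, not the FSDT/Navier solver.
import random

ANGLES = [0, 45, -45, 90]          # candidate ply angles (degrees)
N_PLIES = 8

def deflection(layup):
    """Placeholder objective standing in for the laminate deflection solver."""
    return sum((a - 45) ** 2 for a in layup)

def genetic_search(pop_size=40, generations=100, mutation_rate=0.1):
    pop = [[random.choice(ANGLES) for _ in range(N_PLIES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=deflection)                  # lower deflection = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randint(1, N_PLIES - 1)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mutation_rate:   # random ply-angle mutation
                child[random.randrange(N_PLIES)] = random.choice(ANGLES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=deflection)

print(genetic_search())
```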