In data transmission, a change in a single bit of the received data may lead to misunderstanding or even disaster. Every bit of the transmitted information matters, especially in fields such as the receiver's address. Detecting every single bit change is therefore a key issue in the data transmission field.
The ordinary single-parity detection method detects an odd number of bit errors efficiently but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, give better results yet still fail to cope with an increasing number of errors.
Two novel methods were suggested to detect bit-change errors when transmitting data over noisy media: the 2D-Checksum method and the Modified 2D-Checksum method. In the 2D-Checksum method, a summing process was applied to 7×7 patterns in the row direction and then in the column direction, producing 8×8 patterns, while in the modified method an additional diagonal parity vector was appended, extending the pattern to 8×9. By combining the benefits of single parity (detecting an odd number of error bits) with the benefits of the checksum (reducing the effect of 4-bit errors) in a 2D shape, the detection process was improved. When any data sample was contaminated with up to 33% noise (flipping 0 to 1 and vice versa), the detection rate of the first method improved by approximately 50% compared with the traditional two-dimensional parity method, and the second novel method gave the best detection results.
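As a rough illustration of the pattern construction, here is a minimal sketch in Python. It assumes modulo-2 (parity-style) row and column sums and a wrap-around diagonal layout for the extra vector; the published methods' exact arithmetic may differ.

```python
import numpy as np

def checksum_2d(block: np.ndarray) -> np.ndarray:
    """Append a check row and check column to a 7x7 bit block -> 8x8.
    Modulo-2 sums are assumed here for illustration."""
    assert block.shape == (7, 7)
    col_check = block.sum(axis=0) % 2              # one check bit per column
    rows8 = np.vstack([block, col_check])          # 8x7
    row_check = rows8.sum(axis=1) % 2              # one check bit per row
    return np.hstack([rows8, row_check[:, None]])  # 8x8

def checksum_2d_modified(block: np.ndarray) -> np.ndarray:
    """Add a diagonal parity vector as an extra column -> 8x9 (assumed layout)."""
    pat = checksum_2d(block)
    diag = np.array([sum(pat[i, (i + k) % 8] for i in range(8)) % 2
                     for k in range(8)])           # wrap-around diagonals
    return np.hstack([pat, diag[:, None]])         # 8x9

def detects_error(received: np.ndarray) -> bool:
    """Flag an error when the received check bits disagree with those
    recomputed from the received 7x7 data block."""
    return not np.array_equal(checksum_2d(received[:7, :7]), received)

rng = np.random.default_rng(0)
pattern = checksum_2d(rng.integers(0, 2, size=(7, 7)))
corrupted = pattern.copy()
corrupted[2, 3] ^= 1                               # flip a single bit
print(detects_error(corrupted))                    # True
```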
Data transmission in an orthogonal frequency division multiplexing (OFDM) system needs source and channel coding, and the transmitted data suffers from the adverse effect of a large peak-to-average power ratio (PAPR). Source and channel codes can be combined using different joint codes; the variable-length error-correcting code (VLEC) is one of them. VLEC is used in a MATLAB simulation of image transmission in an OFDM system; different VLEC code lengths are used and compared, showing that the PAPR decreases as the code length increases. Several techniques for PAPR reduction are applied and compared. The PAPR of the OFDM signal is measured for an image coded with VLEC and compared with an image coded by Huffman source coding and Bose-Chaudhuri-Hocquenghem (BCH) channel coding …
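For context, the PAPR compares the peak instantaneous power of the time-domain OFDM signal with its average power. A small illustrative sketch follows, assuming QPSK subcarriers and a plain IFFT modulator; these are illustrative choices, not the study's simulation setup.

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    """PAPR of a complex baseband signal in dB: peak power / mean power."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Illustrative OFDM symbol: N random QPSK subcarriers -> time domain via IFFT.
rng = np.random.default_rng(1)
N = 256
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(N)   # unitary scaling

print(f"PAPR = {papr_db(ofdm_symbol):.2f} dB")
```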
In this study, the gamma-ray transmission method has been used to determine the total porosity of four samples: pure alumina (Al2O3), Al2O3 + 0.2 wt% MgO, Al2O3 + 0.6 wt% Y2O3, and Al2O3 + 8 wt% ZrO2.
The experimental setup for gamma-ray transmission consists of a 137Cs gamma source (662 keV) and a NaI(Tl) scintillation detector that measures the attenuation of a strongly collimated gamma beam through the alumina samples.
The porosity values obtained by the gamma-ray transmission method were compared …
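Gamma-ray transmission porosimetry of this kind typically rests on the Beer-Lambert attenuation law; the sketch below illustrates that relation. The numerical values are placeholders, not the study's measurements.

```python
import numpy as np

def porosity_from_transmission(I: float, I0: float, mu_m: float,
                               thickness_cm: float,
                               rho_theoretical: float) -> float:
    """Total porosity from gamma-ray transmission, assuming Beer-Lambert:
        I = I0 * exp(-mu_m * rho_bulk * x)
    so rho_bulk = ln(I0 / I) / (mu_m * x), and
        porosity = 1 - rho_bulk / rho_theoretical.
    """
    rho_bulk = np.log(I0 / I) / (mu_m * thickness_cm)
    return 1.0 - rho_bulk / rho_theoretical

# Illustrative numbers only: mass attenuation coefficient of Al2O3 at 662 keV
# is roughly 0.077 cm^2/g, and its theoretical density about 3.98 g/cm^3.
print(porosity_from_transmission(I=7200, I0=10000, mu_m=0.0774,
                                 thickness_cm=1.2, rho_theoretical=3.98))
```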
Software-defined networking (SDN) presents novel security and privacy risks, including distributed denial-of-service (DDoS) attacks. In response to these threats, machine learning (ML) and deep learning (DL) have emerged as effective approaches for quickly identifying and mitigating anomalies. To this end, this research employs various classification methods, including support vector machines (SVMs), k-nearest neighbors (KNNs), decision trees (DTs), a multilayer perceptron (MLP), and convolutional neural networks (CNNs), and compares their performance. The CNN exhibits the highest training accuracy at 97.808% yet the lowest prediction accuracy at 90.08%, while the SVM demonstrates the highest prediction accuracy of 95.5%. As such, an …
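A minimal sketch of this kind of classifier comparison follows, using scikit-learn on a synthetic stand-in for the SDN flow features. The study's dataset, features, and hyperparameters are not reproduced, and the CNN is omitted since it needs a deep learning framework.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for labeled flow records (benign vs. DDoS).
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    train_acc = accuracy_score(y_tr, model.predict(X_tr))
    test_acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: train={train_acc:.3f}  test={test_acc:.3f}")
```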
... Show MoreA total of 243 serum samples were tested for the presence of
Chlamydia antibodies by ind irect immunofluorescent antibody test.Ninety
nine females were suffering from abortions, 64 were infertile and other 80 were none aborted women. The incidence of Ch lamydia were (15%,
9.4%) and (3.8%) in abortion, infertile and non aborted group,
respecti vely. The results also showed a difference in prevalence rate between the age groups. The highest incidence was found in the age group 20-39 &
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflectors. Applying normal moveout to flatten the primaries is the way to eliminate multiples after transforming the data to the frequency-wavenumber (f-k) domain: the flattened primaries align with the zero axis of the f-k domain, while all other reflection types (multiples and random noise) are distributed elsewhere. A dip filter that passes the aligned data and rejects the rest separates primaries from multiples once the data are transformed back from the f-k domain to the time-distance domain. For that reason, a suggested name for this technique is the normal moveout-frequency-wavenumber domain …
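A compact sketch of the f-k dip-filtering step is given below, assuming NMO-flattened primaries map to near-zero dips; the fan-shaped pass region, parameter values, and lack of filter tapering are illustrative simplifications.

```python
import numpy as np

def fk_dip_filter(gather: np.ndarray, dt: float, dx: float,
                  max_dip_s_per_m: float) -> np.ndarray:
    """Pass near-zero-dip energy (flattened primaries) in the f-k domain.

    gather: 2D array of shape (n_time, n_traces). A linear event with dip p
    (s/m) in the t-x domain maps to the line k = p*f in the f-k domain, so
    keeping |k| <= max_dip * |f| passes flat events and mutes steeper ones
    (e.g. residual multiples after NMO).
    """
    nt, nx = gather.shape
    spec = np.fft.fft2(gather)                     # to f-k domain
    f = np.fft.fftfreq(nt, d=dt)[:, None]          # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]          # spatial wavenumber (1/m)
    mask = np.abs(k) <= max_dip_s_per_m * np.abs(f)
    return np.real(np.fft.ifft2(spec * mask))      # back to t-x domain

# Usage: after NMO, primaries are near-flat and survive; multiples retain
# residual moveout (steeper dip) and are rejected.
rng = np.random.default_rng(2)
gather = rng.normal(size=(512, 64))
filtered = fk_dip_filter(gather, dt=0.004, dx=25.0, max_dip_s_per_m=2e-4)
```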
In the present paper, three reliable iterative methods are given and implemented to solve the 1D, 2D and 3D Fisher's equation. The Daftardar-Jafari method (DJM), Temimi-Ansari method (TAM) and Banach contraction method (BCM) are applied to obtain exact and numerical solutions of Fisher's equation. These reliable iterative methods have many advantages: they are derivative-free, they avoid the difficulty of computing the Adomian polynomials needed to handle nonlinear terms in the Adomian decomposition method (ADM), they do not require calculating a Lagrange multiplier as in the variational iteration method (VIM), and there is no need to construct a homotopy as in the homotopy perturbation method (HPM) …
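The flavor of these schemes can be seen in a Picard-type successive approximation for the 1D Fisher equation u_t = u_xx + u(1 - u). The sketch below uses sympy with an illustrative constant initial profile; the precise DJM, TAM, and BCM formulations differ in their details.

```python
import sympy as sp

x, t, tau = sp.symbols('x t tau')

def fisher_picard(u0: sp.Expr, iterations: int) -> sp.Expr:
    """Successive approximations for u_t = u_xx + u(1 - u):
        u_{n+1} = u_0 + integral_0^t [u_xx + u(1 - u)](x, tau) dtau
    Each pass extends the series solution by further powers of t.
    """
    u = u0
    for _ in range(iterations):
        integrand = (sp.diff(u, x, 2) + u * (1 - u)).subs(t, tau)
        u = sp.expand(u0 + sp.integrate(integrand, (tau, 0, t)))
    return u

# Illustrative initial condition (not necessarily the paper's test case):
u0 = sp.Rational(1, 2)            # constant profile u(x, 0) = 1/2
print(fisher_picard(u0, 3))       # 1/2 + t/4 - t**3/48 + ... , matching the
                                  # series of the exact solution 1/(1+exp(-t))
```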
Deep learning has recently received a lot of attention as a feasible solution to a variety of artificial intelligence problems. Convolutional neural networks (CNNs) outperform other deep learning architectures in object identification and recognition when compared with other machine learning methods. Speech recognition, pattern analysis, and image identification all benefit from deep neural networks. When performing operations on noisy images, such as fog removal or low-light enhancement, image processing methods such as filtering or image enhancement are required. The study shows the effect of using a multi-scale deep learning context aggregation network (CAN) on the bilateral filtering approximation (BFA) for d…
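For orientation, the classical bilateral filter that such a network approximates smooths noise while preserving edges by weighting each neighbor by both its spatial distance and its intensity difference. A brute-force grayscale sketch, written with plain loops for clarity rather than speed:

```python
import numpy as np

def bilateral_filter(img: np.ndarray, radius: int,
                     sigma_s: float, sigma_r: float) -> np.ndarray:
    """Brute-force bilateral filter on a grayscale image in [0, 1].

    Each output pixel is a normalized sum of its neighbors, weighted by
    spatial closeness (sigma_s) and intensity similarity (sigma_r).
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng_w = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng_w
            out[i, j] = (weights * window).sum() / weights.sum()
    return out

noisy = np.clip(np.random.default_rng(3).normal(0.5, 0.1, (64, 64)), 0, 1)
smoothed = bilateral_filter(noisy, radius=3, sigma_s=2.0, sigma_r=0.1)
```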
Multiple linear regression is concerned with studying and analyzing the relationship between a dependent variable and a set of explanatory variables, and from this relationship the values of the variables are predicted. In this paper, the multiple linear regression model with three covariates was studied in the presence of autocorrelated errors, where the random errors follow an exponential distribution. Three methods were compared: generalized least squares, the robust M method, and the robust Laplace method. Simulation studies were employed, and the mean squared error criterion was calculated for sample sizes of 15, 30, 60, and 100. Further, the best method was applied to real experimental data representing the varieties of …
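A rough sketch of such a simulation comparison follows, with ordinary least squares standing in for the least-squares fit, statsmodels' Huber M-estimator for the robust M method, and median regression for the Laplace (LAD) method. The AR(1)-plus-centered-exponential error scheme and all parameter values are illustrative stand-ins for the paper's setup.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, beta = 100, np.array([2.0, 1.5, -0.8, 0.5])   # intercept + 3 covariates

# Autocorrelated errors: AR(1) driven by centered exponential innovations.
innov = rng.exponential(scale=1.0, size=n) - 1.0
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + innov[t]

X = sm.add_constant(rng.normal(size=(n, 3)))
y = X @ beta + e

fits = {
    "LS": sm.OLS(y, X).fit(),
    "M (Huber)": sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit(),
    "Laplace (LAD)": sm.QuantReg(y, X).fit(q=0.5),
}
for name, res in fits.items():
    mse = np.mean((res.params - beta) ** 2)      # MSE of coefficient estimates
    print(f"{name}: MSE = {mse:.4f}")
```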
A remarkable correlation between chaotic systems and cryptography has been established, owing to their sensitivity to initial states, unpredictability, and complex behavior. In one line of development, the stages of a chaotic stream cipher are applied to a discrete chaotic dynamic system to generate pseudorandom bits. Some of these generators are based on 1D chaotic maps and others on 2D ones. In the current study, a pseudorandom bit generator (PRBG) based on a new 2D chaotic logistic map is proposed that runs side-by-side and commences from random independent initial states. The structure of the proposed model consists of three components: a mouse input device, the proposed 2D chaotic system, and an initial permutation (IP) table. Statistical …
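To illustrate the general idea, the sketch below draws bits from a generic 2D coupled logistic map known from the literature; it is not the new map proposed in the study, and the mouse-entropy and IP-table stages are omitted.

```python
import numpy as np

def prbg_2d_logistic(x0: float, y0: float, n_bits: int,
                     r: float = 1.19, burn_in: int = 1000) -> np.ndarray:
    """Pseudorandom bits from a generic 2D coupled logistic map:
        x_{n+1} = r (3 y_n + 1) x_n (1 - x_n)
        y_{n+1} = r (3 x_{n+1} + 1) y_n (1 - y_n)
    (reported chaotic near r = 1.19). A bit is emitted each step by
    comparing the two state variables.
    """
    x, y = x0, y0
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(-burn_in, n_bits):      # burn-in discards transients
        x = r * (3 * y + 1) * x * (1 - x)
        y = r * (3 * x + 1) * y * (1 - y)
        if i >= 0:
            bits[i] = 1 if x > y else 0
    return bits

stream = prbg_2d_logistic(x0=0.42, y0=0.71, n_bits=1024)
print(stream[:32], stream.mean())          # mean should hover near 0.5
```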
Bivariate time series modeling and forecasting have become a promising field of applied studies in recent times. For this purpose, the linear autoregressive moving average with exogenous variables (ARMAX) model has been the most widely used technique over the past few years for modeling and forecasting this type of data. The most important assumptions of this model are linearity and homogeneity of the random error variance. In practice, these two assumptions are often violated, so the autoregressive conditional heteroscedasticity (ARCH) and generalized ARCH (GARCH) models with exogenous variables …
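As a rough illustration of the two-stage idea, the sketch below fits an ARMAX mean equation and then a GARCH model to its residuals, on simulated data. The use of the `arch` package and the chosen model orders are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(5)
n = 500
exog = rng.normal(size=n)                      # exogenous input series
noise = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                          # AR(1) + exogenous effect
    y[t] = 0.5 * y[t - 1] + 0.8 * exog[t] + noise[t]

# Stage 1: ARMAX mean equation, here ARMA(1,1) with one exogenous regressor.
armax = ARIMA(y, exog=exog, order=(1, 0, 1)).fit()
resid = armax.resid

# Stage 2: GARCH(1,1) on the ARMAX residuals to model conditional variance.
garch = arch_model(resid, vol='GARCH', p=1, q=1, mean='Zero').fit(disp='off')
print(armax.summary().tables[1])
print(garch.summary())
```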