In this paper, we present a multiple-bit error correction coding scheme based on an extended Hamming product code combined with type-II HARQ using shared resources for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption, respectively, with only a small increase in decoder delay compared to the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and saves up to 58% of total power consumption compared to the other error control schemes. The low complexity and excellent residual flit error rate make the proposed code suitable for on-chip interconnection links.
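As a rough illustration of the kind of encoder this abstract refers to, the Python/NumPy sketch below builds an extended Hamming product code over a 4x4 data block. The (8,4) component code and 16-bit flit size are assumptions for the example; the paper's actual code parameters, shared-resource decoder, and type-II HARQ logic are not reproduced here.

```python
import numpy as np

# Systematic generator for a (7,4) Hamming code; appending an overall
# parity bit extends it to an (8,4) SEC-DED code. Arithmetic is mod 2.
G_74 = np.array([
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8)

def extended_hamming_encode(data4):
    """Encode 4 data bits into an (8,4) extended Hamming codeword."""
    word = np.asarray(data4, dtype=np.uint8) @ G_74 % 2
    return np.append(word, word.sum() % 2)   # overall parity bit

def product_encode(flit16):
    """Encode a 16-bit flit as a 4x4 block: every row and then every
    column gets the extended Hamming code, giving an 8x8 product code."""
    block = np.asarray(flit16, dtype=np.uint8).reshape(4, 4)
    rows = np.array([extended_hamming_encode(r) for r in block])       # 4x8
    return np.array([extended_hamming_encode(c) for c in rows.T]).T    # 8x8

# Example: encode a random flit; a decoder (not shown) would iterate
# row/column syndrome checks and request a HARQ retransmission on failure.
codeword = product_encode(np.random.randint(0, 2, 16))
```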
The worldwide spread of the Internet, together with the huge and growing number of users who exchange important information over it, highlights the need for new methods to protect this information from corruption or modification by intruders. This paper suggests a new method to ensure that the text of a given document cannot be modified by intruders without detection. The method consists of a mixture of three steps. The first step borrows some concepts from the "Quran" security system to detect certain types of change in a given text: a key for each paragraph of the text is extracted from the group of letters in that paragraph that occur at multiples of a given prime number. This step cannot detect the ch
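The excerpt does not spell out the extraction rule, so the sketch below assumes that "letters occurring at multiples of a given prime" means letters whose positions within the paragraph are multiples of a chosen prime; the hashing step and the names paragraph_key and document_keys are illustrative additions, not the paper's algorithm.

```python
import hashlib

def paragraph_key(paragraph: str, prime: int = 7) -> str:
    """Build an integrity key for a paragraph from the letters whose
    (1-based) positions are multiples of a chosen prime number.
    The selected letters are hashed so the key has a fixed length."""
    letters = [ch for ch in paragraph if ch.isalpha()]
    selected = [ch for i, ch in enumerate(letters, start=1) if i % prime == 0]
    return hashlib.sha256("".join(selected).encode("utf-8")).hexdigest()

def document_keys(text: str, prime: int = 7) -> list[str]:
    """One key per paragraph; recomputing and comparing these keys
    later reveals whether any paragraph was modified."""
    return [paragraph_key(p, prime) for p in text.split("\n\n") if p.strip()]
```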
In information security, fingerprint verification is one of the most common recent approaches for verifying human identity through a distinctive pattern. The verification process works by comparing a pair of fingerprint templates and identifying the similarity/matching between them. Several research studies have utilized different techniques for the matching process, such as fuzzy vault and image filtering approaches. Yet, these approaches still suffer from imprecise articulation of the biometrics' interesting patterns. Deep learning architectures such as the Convolutional Neural Network (CNN) have been used extensively for image processing and object detection tasks and have shown outstanding performance compared
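The abstract does not describe the network itself; as a generic illustration of how a CNN can compare a pair of fingerprint templates, the minimal PyTorch sketch below embeds each template and scores the pair with cosine similarity. The layer sizes, embedding dimension, and matching rule are all assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FingerprintEmbedder(nn.Module):
    """Tiny CNN that maps a grayscale fingerprint image to an embedding.
    Layer sizes are illustrative only."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def match_score(model, img_a, img_b):
    """Cosine similarity between the two template embeddings; thresholding
    this score decides whether the pair is declared a match."""
    with torch.no_grad():
        ea, eb = model(img_a), model(img_b)
    return torch.nn.functional.cosine_similarity(ea, eb).item()
```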
As a result of numerous applications and low installation costs, wireless sensor networks (WSNs) have expanded enormously. The main concern in the WSN environment is to lower energy consumption among nodes while preserving an acceptable level of service quality. Using multiple mobile sinks to reduce the nodes' energy consumption has been considered an efficient strategy. In such networks, the dynamic network topology created by the sinks' mobility makes it a challenging task to deliver the data to the sinks. Thus, in order to provide efficient data dissemination, the sensor nodes have to readjust their routes to the current position of the mobile sinks. The route re-adjustment process could result in a significant m
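The route re-adjustment scheme is not detailed in the excerpt; the sketch below shows one common way such re-adjustment can be expressed, greedy geographic forwarding toward the sink's latest announced position, purely as an assumed illustration rather than the paper's protocol.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(node_pos, neighbor_positions, sink_pos):
    """Greedy geographic forwarding: pick the neighbor closest to the
    sink's current position, or None if no neighbor makes progress."""
    best, best_d = None, distance(node_pos, sink_pos)
    for nid, pos in neighbor_positions.items():
        d = distance(pos, sink_pos)
        if d < best_d:
            best, best_d = nid, d
    return best

# When the mobile sink announces a new position, nodes simply re-evaluate
# next_hop() with the updated sink_pos instead of rebuilding full routes.
```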
In this study, an efficient compression system is introduced. It is based on a wavelet transform and two types of 3D surface representations: Cubic Bezier Interpolation (CBI) and 1st-order polynomial approximation. Each is applied to a different scale of the image: CBI is applied to the wide area of the image in order to prune the image components that show large-scale variation, while the 1st-order polynomial is applied to the small area of the residue component (i.e., after subtracting the cubic Bezier surface from the image) in order to prune the locally smooth components and obtain better compression gain. The produced cubic Bezier surface is then subtracted from the image signal to get the residue component. Then, t
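As a minimal illustration of the two surface models named above, the sketch below evaluates a bicubic Bezier patch from an assumed 4x4 control grid and fits a 1st-order (planar) approximation to a residue block; the wavelet stage, block partitioning, and coefficient coding used in the paper are omitted.

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis values at parameter t in [0, 1]."""
    return np.array([(1-t)**3, 3*t*(1-t)**2, 3*t**2*(1-t), t**3])

def bezier_surface(ctrl, h, w):
    """Evaluate a bicubic Bezier patch (4x4 control points) on an h x w grid."""
    Bu = np.stack([bernstein3(u) for u in np.linspace(0, 1, h)])   # h x 4
    Bv = np.stack([bernstein3(v) for v in np.linspace(0, 1, w)])   # w x 4
    return Bu @ ctrl @ Bv.T                                        # h x w

def plane_fit(block):
    """1st-order (planar) least-squares approximation of a residue block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)

# Sketch of the two-stage modelling: subtract the coarse Bezier surface,
# then model what remains block by block with planes and keep the residue.
```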
This study is dedicated to providing shell-model calculations, limited to the fp shell, with accuracy and applicability. The estimations depend on evaluating the Hamiltonian's eigenvalues, which are compatible with the positive-parity energy levels up to 10 MeV for most isotopes of Ca, and on the Hamiltonian's eigenvectors for transition strength probabilities and inelastic electron-nucleus scattering. The Hamiltonian is effective in the regions where we have experimental data. The known experimental data were confirmed, and new nuclear levels were proposed for others.
The calculations are performed with the help of the OXBASH code. The results show good agreement with the experimental energy states.
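Shell-model codes such as OXBASH ultimately diagonalize a many-body Hamiltonian in a configuration basis to obtain level energies; the toy sketch below shows only that final diagonalization step on a small symmetric matrix with arbitrary entries, not an actual fp-shell calculation.

```python
import numpy as np

# Toy stand-in for a shell-model Hamiltonian matrix (values are arbitrary,
# in MeV); real calculations use much larger configuration-space matrices.
H = np.array([
    [0.0, 1.2, 0.3],
    [1.2, 2.5, 0.8],
    [0.3, 0.8, 4.1],
])

energies, states = np.linalg.eigh(H)      # eigenvalues = level energies
excitations = energies - energies[0]      # excitation spectrum above the g.s.
print("Excitation energies (MeV):", np.round(excitations, 3))
```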
This investigation aims to study some properties of lightweight aggregate concrete reinforced with mono or hybrid fibers of different sizes and types. In this research, the considered lightweight aggregate was Light Expanded Clay Aggregate, while the adopted fibers included hooked, straight, polypropylene, and glass fibers. Eleven lightweight concrete mixes were considered: one plain concrete mix (without fibers), two concrete mixes reinforced with a mono fiber (hooked or straight fibers), six concrete mixes reinforced with double hybrid fibers, and two concrete mixes reinforced with triple hybrid fibers. Hardened concrete properties were investigated in this study. G
Storing, transferring, and processing high-dimensional electroencephalogram (EEG) signals is a critical challenge. The goal of EEG compression is to remove redundant data from EEG signals. Medical signals like EEG must be of high quality for medical diagnosis. This paper uses a compression system with near-zero Mean Squared Error (MSE) based on the Discrete Cosine Transform (DCT) and double shift coding for fast and efficient EEG data compression. The paper investigates and compares the use or non-use of delta modulation, which is applied to the transformed and quantized input signal. Double shift coding is applied as a final step, after mapping the output to positive values. The system performance is tested using EEG data files from the C
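The excerpt names the stages but not their parameters; the sketch below arranges one plausible pipeline of DCT, uniform quantization, optional delta (difference) modulation, and a signed-to-non-negative mapping, leaving the double shift entropy coder itself out. The quantization step and the simple differencing used here are assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_stage(signal, q_step=0.5, use_delta=True):
    """DCT -> uniform quantization -> optional delta (difference) coding ->
    mapping to non-negative integers, ready for an entropy coder such as
    the double shift coding mentioned in the abstract."""
    coeffs = dct(signal, norm="ortho")
    q = np.round(coeffs / q_step).astype(np.int64)
    if use_delta:
        q = np.diff(q, prepend=0)                  # difference of successive values
    return np.where(q >= 0, 2 * q, -2 * q - 1)     # signed -> non-negative

def reconstruct(mapped, q_step=0.5, use_delta=True):
    """Invert the mapping, the delta step, and the quantized DCT."""
    q = np.where(mapped % 2 == 0, mapped // 2, -(mapped + 1) // 2)
    if use_delta:
        q = np.cumsum(q)
    return idct(q * q_step, norm="ortho")
```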