In this study, an efficient compression system is introduced. It is based on the wavelet transform and two types of 3-dimensional (3D) surface representation: Cubic Bezier Interpolation (CBI) and 1st-order polynomial approximation. Each is applied at a different scale of the image: CBI is applied over wide areas of the image to prune components that show large-scale variation, while the 1st-order polynomial is applied over small areas of the residue component (i.e., after subtracting the cubic Bezier surface from the image) to prune locally smooth components and gain better compression. First, the produced cubic Bezier surface is subtracted from the image signal to obtain the residue component, and the bi-orthogonal wavelet transform is applied to this Bezier residue. The resulting transform coefficients are quantized using progressive scalar quantization; the 1st-order polynomial is then fitted to the quantized LL subband to produce the polynomial surface, which is subtracted from the LL subband to obtain its residue (high-frequency) component. Finally, the quantized values are represented using quadtree encoding to prune the sparse blocks, followed by a high-order shift-coding algorithm to remove the remaining statistical redundancy and attain efficient compression performance. The conducted tests indicated that the introduced system achieves promising compression gains.
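As a rough illustration of the pipeline's main stages, the sketch below uses a least-squares plane fit as a stand-in for the cubic Bezier fitting step and a uniform scalar quantizer; the wavelet name, quantization step, and all function names are assumptions, not the paper's implementation.

```python
# Minimal sketch of the coding pipeline described above (assumptions noted).
import numpy as np
import pywt  # PyWavelets, for the bi-orthogonal transform

def fit_plane(block):
    """Least-squares 1st-order polynomial (plane) z = a + b*x + c*y."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    coef, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

img = np.random.rand(64, 64)              # stand-in for the input image
bezier = fit_plane(img)                   # placeholder for the cubic Bezier surface fit
residue = img - bezier                    # large-scale variation removed
LL, (LH, HL, HH) = pywt.dwt2(residue, 'bior4.4')  # bi-orthogonal wavelet (assumed filter)
q = 0.1
LLq = np.round(LL / q)                    # assumed uniform progressive scalar quantizer
poly = fit_plane(LLq)                     # 1st-order polynomial on the LL subband
LL_res = LLq - poly                       # locally smooth component removed
# LL_res and the quantized detail subbands would then go to the
# quadtree encoder and high-order shift coder described in the abstract.
```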
The effect of the initial pressure on the laminar flame speed of methane-air mixtures has been investigated experimentally over a wide range of equivalence ratios. In this work, a measurement system is designed to measure the laminar flame speed using the constant-volume method with a thermocouple technique. The laminar burning velocity is measured using the density ratio method. Comparison of the present results with previous ones shows good agreement between them, indicating that the measurements and calculations employed in the present work are successful and precise.
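For context, the density ratio method converts the observed spatial flame speed into a laminar burning velocity via the standard relation below; the symbols are conventional and not taken from the paper.

```latex
% S_u : laminar burning velocity, S_s : observed (spatial) flame speed,
% \rho_b, \rho_u : burned and unburned gas densities.
S_u = S_s \,\frac{\rho_b}{\rho_u}
```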
Methods of speech recognition have been the subject of several studies over the past decade. Speech recognition has been one of the most exciting areas of signal processing. The mixed transform is a useful tool for speech signal processing; it was developed for its ability to improve feature extraction. Speech recognition includes three important stages: preprocessing, feature extraction, and classification. Recognition accuracy is strongly affected by the feature extraction stage; therefore, different mixed-transform models for feature extraction are proposed. The recorded isolated word is a 1-D signal, so each 1-D word is converted into a 2-D form. The second step of the word recognizer requires the…
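One common way to realize the 1-D to 2-D conversion mentioned above is to stack overlapping frames of the word into a matrix; the frame length and hop size below are illustrative assumptions, not the paper's values.

```python
# Hypothetical illustration of the 1-D to 2-D conversion step.
import numpy as np

def word_to_2d(signal, frame_len=256, hop=128):
    """Stack overlapping frames of a 1-D word into a 2-D matrix."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    return frames  # rows = frames, columns = samples: a 2-D form of the word

word = np.random.randn(4000)          # stand-in for a recorded isolated word
print(word_to_2d(word).shape)         # (30, 256)
```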
Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal processing operations. This paper introduces a scheme for hiding a signature image that can be as large as 25% of the host image data and hence can be used both in digital watermarking and in image/data hiding. The proposed algorithm uses an orthogonal discrete wavelet transform with two zero moments and improved time localization, called the discrete slantlet transform, for both the host and signature images. A scaling factor in the frequency domain controls the quality of the watermarked images. Experimental results of signature image…
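A minimal sketch of transform-domain embedding with a quality-controlling scaling factor is shown below; since PyWavelets provides no slantlet transform, a Haar DWT stands in for it, and the additive embedding rule and alpha value are assumptions, not the paper's scheme.

```python
# Sketch only: Haar DWT stands in for the paper's discrete slantlet transform.
import numpy as np
import pywt

def embed(host, signature, alpha=0.1):
    LL, details = pywt.dwt2(host, 'haar')   # host in the transform domain
    LL_w = LL + alpha * signature           # scaling factor alpha controls quality
    return pywt.idwt2((LL_w, details), 'haar')

host = np.random.rand(128, 128)
signature = np.random.rand(64, 64)          # 25% of the host image data
watermarked = embed(host, signature)
```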
In this paper, we will consider the density questions associated with the single hidden layer feedforward model. We prove that a FFNN with one hidden layer can uniformly approximate any continuous function in C(K) (where K is a compact set in R^n) to any required accuracy. However, if the set of basis functions is dense, then the ANN needs at most one hidden layer; but if the set of basis functions is non-dense, then we need more hidden layers. Also, we have shown that there exist localized functions and that there is no t…
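In standard notation (assumed here, not quoted from the paper), the density property proved for the single-hidden-layer model reads:

```latex
% K \subset \mathbb{R}^n compact, \sigma the hidden-layer activation:
\forall f \in C(K),\ \forall \varepsilon > 0,\ \exists\, g(x) = \sum_{i=1}^{N} c_i\, \sigma(w_i^{\top} x + b_i)
\ \text{such that}\ \sup_{x \in K} \lvert f(x) - g(x) \rvert < \varepsilon .
```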
The encoding of long low-density parity-check (LDPC) codes presents a challenge compared to their decoding. Quasi-cyclic (QC) LDPC codes offer the advantage of reduced complexity for both encoding and decoding due to their QC structure. Most QC-LDPC codes have a rank-deficient parity matrix, and this introduces extra complexity over codes with a full-rank parity matrix. In this paper, an encoding scheme for QC-LDPC codes is presented that is suitable for codes with either a full-rank or a rank-deficient parity matrix. The extra effort required by codes with a rank-deficient parity matrix over codes with a full-rank parity matrix is investigated.
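To make the QC structure concrete, the toy sketch below lifts a hypothetical exponent matrix into a parity-check matrix built from circulant permutation blocks; the exponent values and lifting factor are made up for illustration.

```python
# Toy QC-LDPC parity-check construction; exponent matrix E is hypothetical.
import numpy as np

def circulant(shift, Z):
    """Z x Z identity cyclically shifted by `shift`; -1 denotes the zero block."""
    if shift < 0:
        return np.zeros((Z, Z), dtype=int)
    return np.roll(np.eye(Z, dtype=int), shift, axis=1)

Z = 4                                    # lifting (expansion) factor
E = [[0, 1, -1, 2],                      # hypothetical exponent matrix
     [2, -1, 0, 1]]
H = np.block([[circulant(s, Z) for s in row] for row in E])
print(H.shape)                           # (8, 16): each block is a shifted identity
# Encoding exploits this block structure; whether H is full rank over GF(2)
# determines the extra effort discussed in the abstract.
```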
The intuitionistic fuzzy set was introduced by Atanassov in 1986 as a generalization of the fuzzy set. Accordingly, we introduce cubic intuitionistic structures on a KU-semigroup as a generalization of the fuzzy set of a KU-semigroup. A cubic intuitionistic k-ideal and some related properties are introduced. Also, a few characterizations of a cubic intuitionistic k-ideal are discussed, and new cubic intuitionistic fuzzy sets in a KU-semigroup are defined.
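For background, Atanassov's definition (standard, not restated from the paper) is:

```latex
% An intuitionistic fuzzy set A on a universe X:
A = \{\, \langle x, \mu_A(x), \nu_A(x) \rangle : x \in X \,\},\qquad
\mu_A, \nu_A : X \to [0,1],\qquad 0 \le \mu_A(x) + \nu_A(x) \le 1 .
```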
The current research aims to study the extent to which the Independent High Electoral Commission applies information security risk management in accordance with the international standard (ISO/IEC 27005), in terms of policies, administrative and technical procedures, and the techniques used in managing information security risks. It is based on the opinions of experts in the sector who occupy relevant positions (directorate general managers, department heads and their deputies, project managers, heads of divisions, and those authorized to access systems and software). The importance of the research lies in giving a clear picture of information security risk management in the organization in question, because of its significant role in identifying risks and s…
Products' quality inspection is an important stage in every production route, in which the quality of the produced goods is estimated and compared with the desired specifications. With traditional inspection, the process relies on manual methods that generate high costs and consume considerable time. In contrast, today's inspection systems, which use modern techniques such as computer vision, are more accurate and efficient. However, the amount of work needed to build a computer vision system based on classic techniques is relatively large, due to the need to manually select and extract features from digital images, which also incurs labor costs for the system engineers. In this research, we pr…
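The toy example below illustrates the manual feature-engineering burden the abstract refers to; the chosen features and the edge threshold are arbitrary stand-ins, not the paper's method.

```python
# Toy example of hand-crafted feature extraction in classic inspection pipelines.
import numpy as np

def handcrafted_features(img):
    """Features an engineer must select and tune by hand."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy) > 0.2          # hand-tuned edge threshold
    return np.array([img.mean(), img.std(), edges.mean()])

part = np.random.rand(64, 64)               # stand-in for a product image
print(handcrafted_features(part))           # vector fed to a classic classifier
```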