The searching process in the combined Block Truncation Coding (BTC) and Vector Quantization (VQ) method, i.e. a full codebook search for each input image vector to find the best-matched code word, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopt a new method that rotates each binary code word in this codebook from 90° to 270° in steps of 90°. Each code word is then classified according to its angle into one of four types of binary codebooks (Pour, Flat, Vertical, or Zigzag). The proposed scheme decreases the time of the coding procedure, with very small distortion per block, by designing a small binary codebook and then rotating each block in it. Moreover, it can improve the efficiency of the coding process even further by decreasing the bit rate (i.e. increasing the compression ratio).
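A minimal sketch of the rotation idea, assuming 4x4 binary blocks, a Hamming-distance distortion measure, and a toy codebook; none of these specifics are taken from the paper itself:

```python
# Illustrative sketch only: rotating each stored binary code word by 90-degree
# steps lets one small codebook match all four orientations of a block pattern.
# Block size, codebook contents and the distortion measure are assumptions.
import numpy as np

def rotations(codeword):
    """Return the codeword together with its 90, 180 and 270 degree rotations."""
    return [np.rot90(codeword, k) for k in range(4)]

def best_match(block, codebook):
    """Full search over every stored codeword and its rotations (Hamming distance)."""
    best, best_dist = None, np.inf
    for idx, cw in enumerate(codebook):
        for k, rotated in enumerate(rotations(cw)):
            dist = np.count_nonzero(block != rotated)   # per-block distortion
            if dist < best_dist:
                best, best_dist = (idx, 90 * k), dist
    return best, best_dist   # (codeword index, rotation angle), distortion

# Toy 4x4 binary codebook with a single "vertical edge" pattern.
codebook = [np.array([[0, 0, 1, 1]] * 4)]
block = np.rot90(codebook[0], 1)          # a horizontally oriented input block
print(best_match(block, codebook))        # matched via the 90-degree rotation
```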
An oil spill is a leakage from pipelines, vessels, oil rigs, or tankers that releases petroleum products into the marine environment or onto land; it may occur naturally or through human action, and it results in severe damage and financial loss. Satellite imagery is one of the powerful tools currently utilized for capturing vital information about the Earth's surface, but the complexity and vast amount of data make it challenging and time-consuming for humans to process. With the advancement of deep learning techniques, these processes can now be automated to extract vital information from real-time satellite images. This paper applies three deep-learning algorithms for satellite image classification.
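The three algorithms are not named in this excerpt; purely as an illustrative stand-in, the sketch below fine-tunes a pretrained ResNet-18 for a binary oil-spill / clean-sea decision on satellite patches:

```python
# Minimal sketch of deep-learning classification of satellite image patches.
# ResNet-18, the binary label set and the hyperparameters are assumptions,
# not the paper's actual models.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)      # 2 classes: spill / no spill

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of satellite patches (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for real satellite patches.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```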
Upscaling of grayscale images plays a central role in various fields such as medicine, satellite imagery, and photography. This paper presents a technique for improving the upscaling of gray images using a new mixed wavelet generated by the tensor product. The proposed technique employs the multi-resolution analysis provided by a new mixed wavelet transform algorithm to decompose the input image into different frequency components. After processing, the low-resolution input image is effectively transformed into a higher-resolution representation by adding a zero matrix. A discrete wavelet transform (Daubechies, Haar) as a 2D matrix is used, but it is mixed via the tensor product with a wavelet matrix of another size. MATLAB R2021 …
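A minimal sketch of the zero-matrix upscaling step, assuming a plain Haar wavelet as a stand-in for the paper's mixed (tensor-product) wavelet, whose construction is not shown in this excerpt:

```python
# Wavelet-based upscaling sketch: treat the low-resolution image as the
# approximation sub-band and insert zero matrices for the detail sub-bands
# before the inverse transform.
import numpy as np
import pywt

def wavelet_upscale(low_res, wavelet="haar"):
    """Roughly double each dimension of a grayscale image (2D float array)."""
    zeros = np.zeros_like(low_res, dtype=float)
    # (LL, (LH, HL, HH)) -> inverse 2D DWT yields an image of twice the size
    return pywt.idwt2((low_res.astype(float), (zeros, zeros, zeros)), wavelet)

low_res = np.random.rand(64, 64)
high_res = wavelet_upscale(low_res)
print(low_res.shape, "->", high_res.shape)   # (64, 64) -> (128, 128)
```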
One of the most difficult issues in the history of communication technology is the transmission of secure images. On the internet, photos are used and shared by millions of individuals for both private and business purposes. Utilizing encryption methods to change the original image into an unintelligible or scrambled version is one way to achieve safe image transfer over the network. Cryptographic approaches based on chaotic logistic theory provide several new and promising options for developing secure image encryption methods. The main aim of this paper is to build a secure system for encrypting gray and color images. The proposed system consists of two stages; the first stage is the encryption process, in which the keys are generated …
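A minimal sketch of chaotic-logistic-map encryption, assuming a simple XOR keystream; the seed, control parameter, and the paper's actual two-stage key generation are not given in this excerpt:

```python
# Logistic-map image encryption sketch: the map generates a pseudo-random
# keystream that is XORed with the pixel values.  x0 and r are illustrative.
import numpy as np

def logistic_keystream(length, x0=0.37, r=3.99):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and quantize each state to a byte."""
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def encrypt(image):
    """XOR a grayscale image (uint8 array) with the chaotic keystream."""
    key = logistic_keystream(image.size)
    return (image.ravel() ^ key).reshape(image.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)   # XOR encryption is its own inverse
```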
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Signal compression is based on selecting a small number of approximation coefficients produced by the wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared with the original signals. The compression ratio is calculated from the size of the …
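A minimal sketch of the two stages, assuming a decomposition level of 3 and an LP order of 10, and using SciPy's Toeplitz solver in place of an explicit Levinson-Durbin routine:

```python
# Keep only the wavelet approximation coefficients, then fit a linear predictor.
import numpy as np
import pywt
from scipy.linalg import solve_toeplitz

def lpc(signal, order=10):
    """LP coefficients from the autocorrelation (normal equations)."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return a   # predictor: s[n] ~= sum_k a[k] * s[n-1-k]

speech = np.random.randn(4096)                       # stand-in for a speech frame
approx = pywt.wavedec(speech, "db4", level=3)[0]     # keep approximation, drop details
coeffs = lpc(approx)
print(len(speech), "->", len(approx), "samples,", len(coeffs), "LP coefficients")
```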
Digital audio requires the transmission of large amounts of audio information through the most common communication systems, which in turn leads to more challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It depends on a combined transform coding scheme consisting of i) a bi-orthogonal (tap 9/7) wavelet transform to decompose the audio signal into low and multiple high sub-bands, ii) a DCT applied to the produced sub-bands to de-correlate the signal, iii) progressive hierarchical quantization of the combined transform output followed by traditional run-length encoding (RLE), and iv) finally LZW coding to generate the output bitstream.
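A rough sketch of the combined transform stage, assuming 'bior4.4' as a stand-in for the 9/7 filter pair, a uniform quantization step, and a simple run-length encoder; the progressive hierarchical quantizer and the final LZW stage are omitted:

```python
import numpy as np
import pywt
from scipy.fft import dct

def run_length_encode(symbols):
    """(value, count) pairs for consecutive repeats."""
    out, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            out.append((prev, count))
            prev, count = s, 1
    out.append((prev, count))
    return out

audio = np.random.randn(2048)                              # stand-in for an audio frame
subbands = pywt.wavedec(audio, "bior4.4", level=3)         # low + high sub-bands
decorrelated = [dct(b, norm="ortho") for b in subbands]    # de-correlate each sub-band
quantized = np.concatenate([np.round(b / 0.5).astype(int) for b in decorrelated])
rle = run_length_encode(list(quantized))
print(len(quantized), "coefficients ->", len(rle), "run-length pairs")
```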
… mixtures of cyclohexane + n-decane and cyclohexane + 1-pentanol have been measured at 298.15, 308.15, 318.15, and 328.15 K over the whole mole fraction range. From these results, excess molar volumes, VE, have been calculated and fitted to the Flory equations. The VE values are negative and positive over the whole mole fraction range and at all temperatures. The excess refractive indices nE and excess viscosities ηE have been calculated from experimental refractive index and viscosity measurements at different temperatures and fitted to the mixing-rule equations and the Heric-Coursey equation, respectively, to predict theoretical refractive indices; good agreement between them was found for the binary mixtures in this study. The variation of the …
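For reference, the excess molar volume of a binary mixture is conventionally obtained from the measured densities; the standard definition (quoted here for context, not from the paper itself) is:

```latex
% Standard definition of the excess molar volume of a binary mixture.
% Symbols: x_i mole fractions, M_i molar masses, \rho densities of the
% mixture and of the pure components.
V^{E} \;=\; \frac{x_1 M_1 + x_2 M_2}{\rho_{\mathrm{mix}}}
        \;-\; \left( \frac{x_1 M_1}{\rho_1} + \frac{x_2 M_2}{\rho_2} \right)
```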
In the present study, a powder mixture of elemental Ti and Ni was mechanically alloyed in a high-energy ball mill. The microstructure of the nanosized amorphous milled product at different stages of milling was characterized by X-ray diffraction, scanning electron microscopy, and differential thermal analysis. We found that the mechanical alloying time is the most significant factor in converting the entire crystalline structure to the amorphous phase. A nanocrystalline phase was achieved as a result of the mechanical alloying process. The results also indicate that the phase transformation and the grain size in these alloys are controlled by the ball-milling time.
The recent development in statistics has made statistical distributions a focus for researchers, through the process of replacing some distribution parameters with fixed values to obtain a new distribution. In this study, the two-parameter Kumaraswamy distribution was studied. The characteristics of the distribution were discussed through the presentation of the probability density function (p.d.f.), the cumulative distribution function (c.d.f.), the rth moment, the reliability function, and the hazard function. The parameters of the Kumaraswamy distribution were estimated using MLE, ME, and LSE by means of simulation for different sample sizes and using preliminary …
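For reference, the standard two-parameter Kumaraswamy distribution on (0, 1) has the following p.d.f. and c.d.f. (quoted from the standard definition rather than from the paper itself):

```latex
% Standard two-parameter Kumaraswamy distribution on (0,1),
% shape parameters a > 0 and b > 0.
f(x; a, b) = a\,b\,x^{a-1}\,(1 - x^{a})^{b-1}, \qquad 0 < x < 1,
\qquad
F(x; a, b) = 1 - (1 - x^{a})^{b}.
```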