In this paper, a new high-performance lossy compression technique based on the DCT is proposed. The image is partitioned into blocks of size NxN (where N is a multiple of 2), and each block is categorized as high frequency (uncorrelated) or low frequency (correlated) according to its spatial details. This is done by calculating the energy of the block as the absolute sum of the differential pulse code modulation (DPCM) differences between pixels, and determining the level of correlation against a specified threshold value. The image blocks are scanned and converted into 1D vectors using horizontal scan order. Then, the 1D-DCT is applied to each vector to produce transform coefficients. The transformed coefficients are quantized with different quantization values according to the energy of the block. Finally, an enhanced entropy-encoding technique is applied to store the quantized coefficients. To test the level of compression, the quantitative measures of peak signal-to-noise ratio (PSNR) and compression ratio (CR) are used to assess the effectiveness of the suggested system. The PSNR values of the reconstructed images fall in the intermediate range from 28 dB to 40 dB, and the best compression gain attained on the standard Lena image is around 96.60%. The results were also compared with those of the standard JPEG codec in the "ACDSee Ultimate 2020" software to evaluate the performance of the proposed system.
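The block classification and transform steps described above can be sketched in a few lines. This is a minimal pure-Python illustration, not the paper's implementation; the threshold value and all function names are assumptions:

```python
import math

def block_energy(block):
    """DPCM-style energy of a block: the absolute sum of differences
    between consecutive pixels in horizontal scan order."""
    flat = [p for row in block for p in row]
    return sum(abs(flat[i] - flat[i - 1]) for i in range(1, len(flat)))

def classify_block(block, threshold=64):
    """Label a block 'high' (uncorrelated) when its energy exceeds the
    threshold, else 'low' (correlated). The threshold here is illustrative."""
    return 'high' if block_energy(block) > threshold else 'low'

def dct_1d(vec):
    """Orthonormal 1D DCT-II of the scanned block vector."""
    n = len(vec)
    out = []
    for k in range(n):
        s = sum(vec[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out
```

A flat block yields zero energy and is classified as correlated; its DCT concentrates all of the signal in the DC coefficient, which is what makes per-energy quantization effective.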
Steganography involves concealing information by embedding data within cover media, and it can be categorized into two main domains: spatial and frequency. This paper presents two distinct methods. The first operates in the spatial domain and utilizes the least significant bits (LSBs) to conceal a secret message. The second operates in the frequency domain and hides the secret message within the LSBs of the middle-frequency band of the discrete cosine transform (DCT) coefficients. Both methods enhance obfuscation by utilizing two layers of randomness (random pixel embedding and random bit embedding within each pixel), unlike other available methods, which embed data in sequential order with a fixed amount.
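The spatial-domain variant with random pixel embedding can be sketched as follows. This is a minimal illustration in which a seeded generator stands in for the paper's randomness layers; the function names and the seed-as-shared-key convention are assumptions:

```python
import random

def embed_lsb(pixels, message_bits, seed=42):
    """Embed message bits into the LSBs of randomly chosen pixels.
    The seed acts as a shared stego key in this sketch."""
    out = list(pixels)
    rng = random.Random(seed)
    positions = rng.sample(range(len(out)), len(message_bits))
    for pos, bit in zip(positions, message_bits):
        out[pos] = (out[pos] & ~1) | bit  # clear LSB, then set it to the bit
    return out

def extract_lsb(pixels, n_bits, seed=42):
    """Recover the embedded bits by regenerating the same position sequence."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]
```

Because the position sequence is derived from the seed, a receiver holding the same key can extract the bits without any side information.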
Entropy, defined as a measure of uncertainty, has been transformed by using the cumulative distribution function and the reliability function of the Burr Type-XII distribution. For data that suffer from volatility, a probability distribution model is built on every failure of a sample once the conditions of a distribution function are satisfied. A formula for the probability distribution of the new transformed entropy has been derived for the continuous Burr Type-XII distribution; the new function was tested and found to satisfy the conditions of a probability function. The mean and the cumulative probability function were also derived so that they could be used to generate data for the purpose of implementing the simulation.
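For reference, the Burr Type-XII functions that the transformation relies on can be written out directly. This sketch assumes the standard two-parameter form with shape parameters c and k; the function names are illustrative:

```python
def burr12_cdf(x, c, k):
    """Burr Type-XII CDF: F(x) = 1 - (1 + x^c)^(-k), for x >= 0."""
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr12_reliability(x, c, k):
    """Reliability (survival) function: R(x) = 1 - F(x) = (1 + x^c)^(-k)."""
    return (1.0 + x ** c) ** (-k)

def burr12_pdf(x, c, k):
    """Density: f(x) = c*k*x^(c-1) * (1 + x^c)^(-(k+1))."""
    return c * k * x ** (c - 1) * (1.0 + x ** c) ** (-(k + 1))
```

The identity F(x) + R(x) = 1 holds by construction, which is the property the entropy transformation exploits.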
During the last two decades, audio compression has become the topic of much research due to the importance of this field, which bears directly on storage capacity and transmission requirements. The rapid development of the computer industry increases the demand for high-quality audio data, so the development of audio compression technologies is of great importance; lossy and lossless are the two categories of compression. This paper aims to review lossy audio compression techniques and to summarize the importance and uses of each method.
The median filter is adopted to match the noise statistics of the degradation, seeking good-quality smoothed images. Two methods are suggested in this paper (a Pentagonal-Hexagonal mask and a Scan Window mask). The study involves a modified median filter for improving noise suppression, with the modification directed toward more reliable results. The modified median filter (Pentagonal-Hexagonal mask) was found to give better results, qualitatively and quantitatively, than both classical median filters and the other suggested method (Scan Window mask), but at the expense of the time required. However, when the noise is of line type, the cross 3x3 filter is sometimes preferred over the Pentagonal-Hexagonal one, with little variation between them. The Scan Window mask gave bett
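A median filter restricted to an arbitrary neighbourhood mask can be sketched as below. The cross-shaped 3x3 mask mentioned in the comparison is shown as the example; the paper's pentagonal/hexagonal offset sets would be substituted for it, and the offsets and function names here are assumptions:

```python
import statistics

# Cross-shaped 3x3 neighbourhood as (dy, dx) offsets from the centre pixel.
CROSS_3X3 = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def masked_median_filter(img, mask=CROSS_3X3):
    """Replace each pixel with the median over the mask's neighbourhood;
    neighbours falling outside the image are simply skipped."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx]
                    for dy, dx in mask
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = statistics.median(vals)
    return out
```

An isolated impulse (salt noise) is removed because the noisy value is outvoted by its neighbours within the mask.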
In education, exams are used to assess students' acquired knowledge; however, the manual assessment of exams consumes a great deal of teachers' time and effort. In addition, educational institutions have recently leaned toward distance education and e-learning due to the Coronavirus pandemic, so they needed to conduct exams electronically, which requires an automated assessment system. Although it is easy to develop an automated assessment system for objective questions, subjective questions require answers comprised of free text and are harder to assess automatically, since grading them requires semantically comparing the students' answers with the correct ones. In this paper, we present an automatic short answer grading method
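As a toy stand-in for the semantic comparison step (the paper's actual method is more sophisticated than lexical overlap), a bag-of-words cosine similarity scaled to a mark illustrates the idea; the function names and marking scale are assumptions:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(student_answer, reference_answer, max_mark=10):
    """Scale the similarity of the student's answer to the reference
    answer into a mark out of max_mark."""
    return round(cosine_similarity(student_answer, reference_answer) * max_mark, 2)
```

A real grader would replace the bag-of-words vectors with semantic representations so that paraphrases of the reference answer also score highly.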
This research introduces a proposed hybrid Spam Filtering System (SFS) which consists of the Ant Colony System (ACS), information gain (IG), and the Naïve Bayesian (NB) classifier. The aim of the proposed hybrid spam filter is to classify e-mails with high accuracy. It consists of three consecutive stages. In the first stage, the information gain (IG) of each attribute (i.e., the weight of each feature) is computed. Then, the Ant Colony System algorithm selects the best features, namely the attributes most intrinsically correlated with the classification. Finally, the third stage is dedicated to classifying the e-mail using the Naïve Bayesian (NB) algorithm. The experiment is conducted on the Spambase dataset. The result shows that the accuracy of NB
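The IG and NB stages can be sketched as follows. This illustration ranks features by information gain directly rather than running the Ant Colony System, and the function names, smoothing choices, and toy data are assumptions:

```python
import math

def entropy(labels):
    """Binary entropy of a list of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def information_gain(docs, labels, term):
    """Reduction in label entropy from splitting on presence of `term`;
    each doc is a set of tokens."""
    present = [l for d, l in zip(docs, labels) if term in d]
    absent = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    return (entropy(labels)
            - len(present) / n * entropy(present)
            - len(absent) / n * entropy(absent))

def nb_classify(doc, docs, labels, features):
    """Bernoulli Naive Bayes over the selected features, with Laplace
    smoothing; returns the most probable class (0 = ham, 1 = spam)."""
    scores = {}
    for c in (0, 1):
        cls = [d for d, l in zip(docs, labels) if l == c]
        score = math.log((len(cls) + 1) / (len(docs) + 2))
        for f in features:
            p = (sum(f in d for d in cls) + 1) / (len(cls) + 2)
            score += math.log(p if f in doc else 1 - p)
        scores[c] = score
    return max(scores, key=scores.get)
```

In the full system, the IG scores weight the search and ACS picks the feature subset; only that subset is passed to `nb_classify`.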
The expansion of web applications such as e-commerce and other services has yielded an exponential increase in offers and choices on the web. From these needs, recommender system applications have arisen. This research proposes a recommender system that uses users' reviews as implicit feedback, extracting user preferences from the reviews to enhance personalization in addition to the explicit ratings. Diversity is also improved by applying the k-furthest-neighbor algorithm to clusters of users. The system was tested on the Douban movie standard dataset from Kaggle and showed good performance.
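The diversity step can be illustrated by inverting the usual nearest-neighbour search. This is a minimal sketch in which users are plain preference vectors and the Euclidean metric is an assumption:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two preference vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def k_furthest_neighbors(target, others, k):
    """Return the k users most dissimilar to `target` (the opposite of
    k-NN), used to inject diversity into the recommendations."""
    return sorted(others, key=lambda u: euclidean(target, u), reverse=True)[:k]
```

Items liked by these furthest users fall outside the target user's usual taste profile, which is what broadens the recommendation list.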