Image compression is an important problem in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself and may additionally exploit the limits of human vision or perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within an image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed. The first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques. The second stage incorporates a near-lossless compression scheme into the first stage. The results of both stages are promising: they implicitly enhance the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
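To make the modelling concept concrete, here is a minimal sketch of block-based polynomial coding: a first-degree polynomial (plane) is fitted to each block and stored together with a quantized residual. The 8x8 block size and the quantization step `q` are illustrative assumptions, not the paper's exact predictor or multiresolution scheme.

```python
import numpy as np

def polynomial_encode_block(block, q=8):
    """Fit a first-degree polynomial a0 + a1*x + a2*y to one block and
    return the model coefficients plus the quantized (lossy) residual."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    model = (A @ coeffs).reshape(h, w)
    residual = np.round((block - model) / q).astype(np.int16)
    return coeffs, residual

def polynomial_decode_block(coeffs, residual, q=8):
    """Rebuild the block from the polynomial model plus the de-quantized residual."""
    h, w = residual.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    model = (A @ coeffs).reshape(h, w)
    return np.clip(model + residual * q, 0, 255).astype(np.uint8)

block = np.random.randint(0, 256, (8, 8))
coeffs, residual = polynomial_encode_block(block)
restored = polynomial_decode_block(coeffs, residual)   # near-lossless for smooth blocks
```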
A common approach to color image compression starts by transforming the red, green, and blue (RGB) color model into a desired color model, applying compression techniques, and finally transforming the results back into the RGB model. In this paper, a new color image compression method based on multilevel block truncation coding (MBTC) and vector quantization (VQ) is presented. By exploiting the response of the human visual system to color, a bit allocation process is implemented to distribute the bits for encoding in a more effective way. To improve the performance efficiency of VQ, modifications have been implemented to combine the simple computation and edge-preservation properties of MBTC with high c
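For reference, the sketch below implements classic two-level block truncation coding for one grayscale block, the building block that MBTC generalizes; the multilevel thresholds, the VQ codebook stage, and the HVS-driven bit allocation of the proposed method are not reproduced here.

```python
import numpy as np

def btc_encode_block(block):
    """Encode one block as (low, high, bitmap) while preserving its mean and variance."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q, m = int(bitmap.sum()), block.size
    if q in (0, m):                          # flat block: a single level suffices
        return int(round(mean)), int(round(mean)), bitmap
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return int(round(low)), int(round(high)), bitmap

def btc_decode_block(low, high, bitmap):
    """Reconstruct the block from the two quantization levels and the bitmap."""
    return np.where(bitmap, high, low).astype(np.uint8)

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
low, high, bitmap = btc_encode_block(block)
restored = btc_decode_block(low, high, bitmap)   # 2 bytes + 64 bits instead of 64 bytes
```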
In all applications, and especially in real-time applications, image processing and compression play a very important part in modern life, both for storage and for transmission, for example over the internet. However, finding orthogonal matrices of different sizes to serve as filters or transforms is complex, and such matrices are important in applications such as image processing and communication systems. In this work, a new method for finding orthogonal matrices for use as transform filters is developed and then applied to mixed transforms generated using a so-called tensor-product-based technique for data processing. Our aim in this paper is to evaluate and analyze this new mixed technique for image compression using the Discrete Wavelet Transform
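The sketch below illustrates the tensor (Kronecker) product construction on which such mixed transforms rest: combining small orthogonal kernels yields a larger matrix that is still orthogonal and therefore trivially invertible. The 2x2 Haar kernel is an illustrative choice, not the paper's filters.

```python
import numpy as np

H2 = (1 / np.sqrt(2)) * np.array([[1.0,  1.0],
                                  [1.0, -1.0]])      # small orthogonal (Haar) kernel
T4 = np.kron(H2, H2)                                 # 4x4 transform via the tensor product
assert np.allclose(T4 @ T4.T, np.eye(4))             # orthogonality is preserved

def transform_block(block, T):
    """Separable 2-D transform of a square block: T @ block @ T^T."""
    return T @ block @ T.T

block = np.arange(16, dtype=float).reshape(4, 4)
coeffs = transform_block(block, T4)
restored = T4.T @ coeffs @ T4                        # exact inverse because T4 is orthogonal
assert np.allclose(restored, block)
```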
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms may take advantage of visual sensitivity and the statistical properties of image data to deliver superior results compared with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless algorithm known as arithmetic coding. Pixel values that occur more frequently are coded in fewer bits than pixel values of lower occurrence
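As a rough sketch of the encoding pipeline described above, the code below performs the block-to-string conversion and builds the occurrence counts that an order-0 arithmetic coder would use; a Shannon entropy estimate stands in for the coder itself, which is not reproduced here.

```python
import numpy as np

def blocks_to_strings(image, n=16):
    """Yield each n x n block of the image flattened into one symbol string."""
    h, w = image.shape
    for r in range(0, h - h % n, n):
        for c in range(0, w - w % n, n):
            yield image[r:r + n, c:c + n].ravel()

def estimated_coded_bits(symbols):
    """Order-0 entropy bound: frequent pixel values get short codes, rare ones long codes."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(counts * np.log2(p)).sum())

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
total_bits = sum(estimated_coded_bits(s) for s in blocks_to_strings(image))
print(f"estimated {total_bits / 8:.0f} bytes vs {image.size} bytes raw")
```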
Artificial intelligence (AI) is entering many fields of life nowadays. One of these fields is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identifying individuals from palm print images using Orange software, in which a hybrid of AI methods, Deep Learning (DL) and traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep learning
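As a rough illustration of the hybrid DL/ML idea outside the Orange workflow, the sketch below uses a pretrained SqueezeNet (torchvision) as a fixed feature extractor and a linear SVM (scikit-learn) as the classifier; the palm-print data loading, pre-processing, and the exact classifier used in the paper are assumptions here.

```python
import torch
import torch.nn.functional as F
from torchvision.models import squeezenet1_1
from sklearn.svm import SVC

backbone = squeezenet1_1(weights="DEFAULT").eval()      # pretrained, frozen feature extractor

def deep_features(batch):
    """512-d descriptor per image: SqueezeNet feature maps, globally average-pooled."""
    with torch.no_grad():
        fmap = backbone.features(batch)                   # (N, 512, h, w)
        return F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (N, 512)

# Placeholder palm-print batch and identity labels (assumed loaded elsewhere)
x_train = torch.rand(8, 3, 224, 224)
y_train = [0, 0, 1, 1, 2, 2, 3, 3]

clf = SVC(kernel="linear").fit(deep_features(x_train).numpy(), y_train)
print(clf.predict(deep_features(torch.rand(2, 3, 224, 224)).numpy()))
```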
General Background: Deep image matting is a fundamental task in computer vision, enabling precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advancements in deep learning-based methods, preserving fine details such as hair and transparency remains a challenge. Knowledge Gap: Existing approaches struggle with accuracy and efficiency, necessitating novel techniques to enhance matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha matte estimation, proposing a lightweight U-Net model incorporating color-space fusion and preprocessing. Results: Experiments using the Adobe Composition-1k
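One way such a lightweight U-Net with color-space fusion might look is sketched below: a 6-channel fused input (RGB concatenated with a second color space such as HSV) feeds a small encoder-decoder with one skip connection and predicts a single-channel alpha matte. Channel widths and depth are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyMattingUNet(nn.Module):
    def __init__(self, in_ch=6):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1), nn.Sigmoid())   # alpha in [0, 1]

    def forward(self, fused):                    # fused: (N, 6, H, W)
        e1 = self.enc1(fused)
        e2 = self.enc2(e1)
        d = torch.cat([self.up(e2), e1], dim=1)  # skip connection keeps fine detail
        return self.dec(d)

rgb, hsv = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
alpha = TinyMattingUNet()(torch.cat([rgb, hsv], dim=1))   # predicted alpha matte, (1, 1, 64, 64)
```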
In regression testing, test case prioritization (TCP) is a technique to arrange all the available test cases. TCP techniques can improve fault detection performance, which is measured by the Average Percentage of Faults Detected (APFD). History-based TCP is one of the TCP techniques that consider the history of past data to prioritize test cases. The issue of equal priority allocation to test cases is a common problem for most TCP techniques; however, this problem has not been explored in history-based TCP techniques. To solve this problem in regression testing, most researchers resort to random sorting of test cases. This study aims to investigate equal priority in history-based TCP techniques. The first objective is to implement
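For context, the sketch below computes the APFD metric for a given test order from a fault matrix; the test suite and fault data are illustrative.

```python
def apfd(order, faults_detected_by, n_faults):
    """APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n),
    assuming every fault is detected by at least one test in the order."""
    n = len(order)
    first_positions = []
    for fault in range(n_faults):
        for position, test in enumerate(order, start=1):
            if fault in faults_detected_by[test]:
                first_positions.append(position)
                break
    return 1 - sum(first_positions) / (n * n_faults) + 1 / (2 * n)

faults = {"t1": {0}, "t2": {1, 2}, "t3": {0, 2}}    # test case -> faults it reveals
print(apfd(["t2", "t3", "t1"], faults, 3))          # ~0.72; higher is better
```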
A new technique for embedding image data into another BMP image is presented. The image data to be embedded is referred to as the signature image, while the image into which the signature image is embedded is referred to as the host image. The host and signature images are first partitioned into 8x8 blocks and discrete cosine transformed (DCT); only the significant coefficients are retained, and the retained coefficients are then inserted into the transformed block in forward and backward zigzag scan directions. The result is then inversely transformed and presented as a BMP image file. The peak signal-to-noise ratio (PSNR) is used to evaluate the objective visual quality of the host image compared with the original image.
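The sketch below reproduces the generic 8x8 DCT and zigzag machinery this technique relies on, transforming a block and retaining only the leading coefficients; the forward/backward insertion of signature coefficients into host blocks is summarized rather than implemented, and the number of retained coefficients is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def zigzag_indices(n=8):
    """Index pairs of an n x n block in zigzag order (low to high frequency)."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def significant_coefficients(block, keep=10):
    """DCT-transform one 8x8 block and keep only the first `keep` zigzag coefficients."""
    coeffs = dctn(block.astype(float), norm="ortho")
    kept = np.zeros_like(coeffs)
    for r, c in zigzag_indices()[:keep]:
        kept[r, c] = coeffs[r, c]
    return kept

block = np.random.randint(0, 256, (8, 8))
approx = idctn(significant_coefficients(block), norm="ortho")  # rebuilt from retained coefficients
```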
Average per capita GDP income is an important economic indicator. Economists use this term to determine the amount of progress or decline in a country's economy; it is also used to rank countries and compare them with each other. Average per capita GDP income was studied first using time series analysis (the Box-Jenkins method) and second using linear and non-linear regression; these are among the most important and most commonly used statistical methods for forecasting because they are flexible and accurate in practice. A comparison is made using specific statistical criteria to determine the better of the two methods. The research found that the best approach is to build a model for predi
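As a minimal illustration of the comparison, the sketch below fits a Box-Jenkins (ARIMA) model and a simple linear trend regression to a synthetic annual series and compares their out-of-sample errors; the ARIMA order, the synthetic data, and the MSE criterion are assumptions, not the study's data or selection criteria.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

years = np.arange(2000, 2020)
gdp = 1000 + 45 * (years - 2000) + np.random.normal(0, 30, years.size)  # synthetic per-capita series
train, test = gdp[:-4], gdp[-4:]

# Box-Jenkins approach: fit ARIMA(1, 1, 1) and forecast the held-out years
arima_forecast = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=4)

# Regression approach: fit a linear trend on time and extrapolate
t = np.arange(train.size)
slope, intercept = np.polyfit(t, train, 1)
regression_forecast = intercept + slope * np.arange(train.size, train.size + 4)

for name, forecast in [("ARIMA", arima_forecast), ("regression", regression_forecast)]:
    print(name, "MSE:", round(float(np.mean((test - forecast) ** 2)), 1))
```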