Image processing and compression play an important part in modern life, particularly in real-time applications, for both storage and transmission (over the internet, for example). However, finding orthogonal matrices of different sizes to serve as filters or transforms is complex, and such matrices are important for applications such as image processing and communication systems. Recently, a new method has been developed for finding orthogonal transform matrices, which are then combined into mixed transforms generated by a technique called the tensor product for data processing. The aim of this paper is to evaluate and analyze this new mixed technique in image compression, using the Discrete Wavelet Transform and the Slantlet Transform, each as a 2D matrix, mixed by the tensor product. Performance parameters such as Compression Ratio, Peak Signal to Noise Ratio, and Root Mean Squared Error are evaluated for standard colored and gray images. The simulation results show that the quality of the reconstructed images is acceptable, though further research is needed.
FG Mohammed, HM Al-Dabbas, Iraqi journal of science, 2018 - Cited by 6
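The mixing step described above can be sketched with a tensor (Kronecker) product of two small orthogonal matrices. As a hedge: the 2×2 Haar matrix below stands in for the DWT factor, and a rotation matrix stands in for the Slantlet factor, since any pair of orthogonal matrices illustrates the same property; this is not the paper's exact construction.

```python
import numpy as np

# 2x2 Haar matrix (orthogonal); stands in for the DWT factor.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# A 2x2 rotation matrix stands in for the Slantlet factor here;
# any orthogonal matrix behaves the same way under the product.
theta = np.pi / 6
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Mixed transform via the tensor (Kronecker) product.
T = np.kron(H, S)  # 4x4 matrix

# The Kronecker product of orthogonal matrices is itself orthogonal,
# so T can be used directly as a 4x4 transform filter.
print(np.allclose(T @ T.T, np.eye(4)))  # True
```

The key design point is that orthogonality is preserved by the Kronecker product, which is why the mixed matrix can still act as an invertible transform.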
The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopt a new method that rotates each binary code word in this codebook from 90° to 270° in steps of 90°. Each code word is then classified, depending on its angle, into one of four types of binary codebooks (Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
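The rotation step above can be sketched with numpy's quarter-turn rotation. The 4×4 codeword and the idea of filing each rotation into a separate sub-codebook are illustrative assumptions; the paper's exact class names and angle conditions are not reproduced here.

```python
import numpy as np

def rotations(codeword):
    """Return the codeword rotated by 0, 90, 180 and 270 degrees."""
    return {angle: np.rot90(codeword, k)
            for k, angle in enumerate((0, 90, 180, 270))}

# Hypothetical 4x4 binary codeword, e.g. a BTC bit plane.
cw = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [1, 1, 0, 0],
               [1, 1, 0, 0]], dtype=np.uint8)

books = rotations(cw)
# Each rotated version would be filed into one of four sub-codebooks,
# so a search only scans the sub-codebook matching the input block's
# orientation class instead of the full codebook.
```

Restricting the search to one sub-codebook is what cuts the coding time, at the cost of a small per-block distortion.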
In this research, a proposed technique is used to enhance the performance of the frame difference technique for extracting moving objects from a video file. One of the most significant causes of performance degradation is the existence of noise, which may cause incorrect identification of moving objects. Therefore, it was necessary to find a way to diminish this noise effect. Traditional Average and Median spatial filters can handle such situations, but the focus of this work is on utilizing the spectral domain, through Fourier and Wavelet transforms, to decrease the noise effect. Experiments and statistical features (entropy, standard deviation) proved that these transforms can overcome such problems in an elegant way.
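The pipeline described above can be sketched as frame differencing applied after a frequency-domain denoising step. The FFT low-pass filter below is a minimal stand-in for the Fourier/Wavelet filtering discussed in the abstract; the threshold and `keep` fraction are illustrative assumptions.

```python
import numpy as np

def frame_difference(prev, curr, thresh=30):
    """Binary motion mask from two grayscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def fft_lowpass(frame, keep=0.25):
    """Suppress high-frequency noise by zeroing FFT coefficients
    outside a centred square spanning `keep` of each axis."""
    F = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    mask = np.zeros_like(F)
    dh, dw = int(h * keep / 2), int(w * keep / 2)
    mask[h//2 - dh:h//2 + dh, w//2 - dw:w//2 + dw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Toy frames: a bright square moves two pixels right between frames.
prev = np.zeros((32, 32), dtype=np.uint8)
curr = np.zeros((32, 32), dtype=np.uint8)
prev[10:20, 10:20] = 200
curr[10:20, 12:22] = 200

mask = frame_difference(fft_lowpass(prev), fft_lowpass(curr))
```

Filtering before differencing keeps isolated noise pixels from exceeding the threshold and being misread as motion.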
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential process in most pictorial pattern recognition problems. A new edge detection technique for detecting boundaries is introduced in this research.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research, in order to reduce the volume of pictorial data that one may need to store or transmit. The discussion concentrates on the Block Truncation Coding technique and the Discrete Cosine Transform (DCT) coding technique.
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing image content. Data may be compressed by reducing the redundancy in the original data, but this introduces errors into the data. In this paper, image compression is based on a newly created method called the Five Modulus Method (FMM). The new method converts each pixel value in a (4×4, 8×8, or 16×16) block into a multiple of 5 for each of the R, G, and B arrays. The new values can then be divided by 5, giving a 6-bit value for each pixel, which requires less storage space than the original 8-bit value.
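The FMM round-to-multiple-of-5 step can be sketched on a single channel as follows. This is a minimal sketch of the quantization idea only (rounding and dividing by 5 so codes fit in 6 bits, since 255/5 = 51 < 64); block partitioning and the entropy coding that would follow are omitted.

```python
import numpy as np

def fmm_encode(channel):
    """Round each 8-bit value to the nearest multiple of 5,
    then divide by 5 so each code fits in 6 bits (0..51)."""
    rounded = 5 * np.round(channel.astype(np.int32) / 5)
    return np.clip(rounded // 5, 0, 51).astype(np.uint8)

def fmm_decode(codes):
    """Recover the approximate 8-bit values."""
    return (codes.astype(np.int32) * 5).astype(np.uint8)

block = np.array([[250, 137],
                  [ 12,   3]], dtype=np.uint8)
codes = fmm_encode(block)    # values in 0..51, i.e. 6 bits each
approx = fmm_decode(codes)   # each within a few levels of the original
```

The rounding error is at most half the step size, which is why the method is lossy but visually mild.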
This study was conducted in the Computer Science Department, College of Science, University of Baghdad, to compare automatic sorting with manual sorting in terms of efficiency and accuracy, and to apply artificial intelligence to automated sorting, including artificial neural networks, image processing, the study of external characteristics, defects and impurities, and physical characteristics (grading and sorting speed, and fruit weight). The results show the values of impurities and defects. The highest regression value was 0.40, the error-approximation algorithm recorded the value 06-1, and the heaviest fruit recorded weighed 138.20 g. Gradin
Pavement crack and pothole identification are important tasks in transportation maintenance and road safety. This study offers a novel technique for automatic asphalt pavement crack and pothole detection based on image processing. Different types of cracks (transverse, longitudinal, alligator-type, and potholes) can be identified with such techniques. The goal of this research is to evaluate road surface damage by extracting cracks and potholes from images and videos, categorizing them, and comparing the manual and automated methods. The proposed method was tested on 50 images. The results obtained from image processing showed that the proposed method can detect cracks and potholes and identify their severity levels wit
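A minimal image-processing sketch of the detection idea above: cracks appear as dark, thin regions on lighter asphalt, so an intensity threshold gives a candidate crack mask, and the crack-pixel fraction gives a crude severity rating. The threshold values and severity bands here are purely illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def crack_mask(gray, thresh=60):
    """Candidate crack mask: pixels much darker than the asphalt."""
    return (gray < thresh).astype(np.uint8)

def severity(mask):
    """Rate damage by the fraction of crack pixels (bands hypothetical)."""
    ratio = mask.mean()
    if ratio < 0.01:
        return "low"
    if ratio < 0.05:
        return "medium"
    return "high"

# Toy pavement image: light asphalt with a dark vertical crack.
img = np.full((64, 64), 180, dtype=np.uint8)
img[:, 30:32] = 20
m = crack_mask(img)
level = severity(m)  # 128/4096 ≈ 3.1% of pixels → "medium"
```

A real system would add noise filtering and shape analysis to separate transverse, longitudinal, and alligator cracks from shadows and stains.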
In the reverse engineering approach, a massive amount of point data is gathered during data acquisition, which leads to larger file sizes and longer data handling times. In addition, fitting surfaces to these data points is time-consuming and demands particular skills. In the present work, a method for obtaining the control points of any profile is presented. Several image modification steps are explained using the SolidWorks program, and a parametric equation of the proposed profile is derived using the Bezier technique with the adopted control points. Finally, the proposed profile was machined using a 3-axis CNC milling machine, and a comparison of dimensions was carried out betwe
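The Bezier parametric equation mentioned above can be sketched in Bernstein form. The control points below are hypothetical stand-ins for those extracted from the image profile; the paper's actual points and degree are not reproduced here.

```python
from math import comb

import numpy as np

def bezier(control, n=100):
    """Evaluate a Bezier curve of arbitrary degree at n parameter
    values using the Bernstein basis. `control` is (k+1) 2D points."""
    control = np.asarray(control, dtype=float)
    k = len(control) - 1
    t = np.linspace(0.0, 1.0, n)[:, None]
    basis = [comb(k, i) * t**i * (1 - t)**(k - i) for i in range(k + 1)]
    return sum(b * p for b, p in zip(basis, control))

# Hypothetical control points extracted from the image profile.
pts = [(0, 0), (1, 2), (3, 2), (4, 0)]
curve = bezier(pts)
# The curve interpolates the first and last control points and is
# pulled toward, but does not pass through, the interior ones.
```

Because only the few control points need to be stored, the profile is far more compact than the raw acquired point cloud.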
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block, depending on the importance of the information it contains. In regions containing important information, the compression ratio is reduced to prevent loss of that information, while in smooth regions containing no important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
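The block-importance idea above can be sketched with variance as the importance measure: low-variance (smooth) blocks are quantized coarsely, high-variance (detailed) blocks finely. The variance threshold and the two quantization steps are illustrative assumptions, not the paper's actual parameters, and real codecs would quantize transform coefficients rather than raw pixels.

```python
import numpy as np

def adaptive_quantize(img, block=8, var_thresh=100.0):
    """Quantize smooth (low-variance) blocks coarsely and detailed
    blocks finely; the steps (16 vs 4) are illustrative only."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            b = out[r:r + block, c:c + block]
            step = 16 if b.var() < var_thresh else 4
            out[r:r + block, c:c + block] = np.round(b / step) * step
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy image: smooth left half, textured right half.
rng = np.random.default_rng(0)
img = np.full((16, 16), 120, dtype=np.uint8)
img[:, 8:] = rng.integers(0, 256, size=(16, 8))
rec = adaptive_quantize(img)
```

Coarser steps in smooth blocks mean fewer distinct symbols to entropy-code there, which is where the extra compression comes from.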
The principal goal guiding any designed encryption algorithm must be security against unauthorized attackers. Within the last decade, there has been a vast increase in the communication of digital computer data in both the private and public sectors. Much of this information has significant value and therefore requires protection by a strong, well-designed cipher algorithm. Such an algorithm defines the mathematical steps required to transform data into a cryptographic cipher and to transform the cipher back to its original form. Performance and security level are the main characteristics that differentiate one encryption algorithm from another. This paper suggests a new technique to enhance the performance of the Data E