Fractal image compression (FIC) represents an image using affine transformations, exploiting the property that portions of an image resemble other portions of the same image. The main concern for researchers in the field is reducing the encoding time needed to compress image data. Fractals were first handled using Iterated Function Systems (IFS), which are used for encoding images, and many models have since been developed. This paper reviews fractal image compression and its variants alongside other techniques. Contributions are summarized to assess the state of FIC, with a focus on block indexing methods based on moment descriptors. Block indexing classifies the domain and range blocks using moments to generate an invariant descriptor, which reduces the long encoding time. A comparison between block indexing and other fractal image techniques on the Lena image demonstrates the importance of block indexing in saving encoding time and achieving a better compression ratio while maintaining image quality.
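The moment-based block indexing idea can be sketched as follows: each block gets a small descriptor built from the statistical moments of its pixel intensities, and blocks are bucketed by a coarse quantization of that descriptor so the encoder only compares range blocks against domain blocks in the same bucket. This is an illustrative sketch, not the reviewed authors' exact descriptor; the function names and the quantization rule are assumptions.

```python
import numpy as np

def moment_descriptor(block):
    """Second to fourth central moments of a block's intensities.

    Position-independent and, after normalization by the standard
    deviation, robust to contrast scaling -- the kind of invariance
    block-indexing schemes exploit when matching range and domain blocks.
    """
    v = block.astype(np.float64).ravel()
    mu = v.mean()
    sigma = v.std() + 1e-9          # avoid division by zero on flat blocks
    m2 = ((v - mu) ** 2).mean()      # variance
    m3 = ((v - mu) ** 3).mean() / sigma ** 3  # skewness
    m4 = ((v - mu) ** 4).mean() / sigma ** 4  # kurtosis
    return np.array([m2, m3, m4])

def index_blocks(image, size):
    """Split `image` into non-overlapping `size`x`size` blocks and bucket
    them by a crude quantization of their moment descriptor, so the
    encoder searches only domain blocks in the matching bucket."""
    h, w = image.shape
    buckets = {}
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            d = moment_descriptor(image[y:y+size, x:x+size])
            key = tuple(np.sign(d - d.mean()).astype(int))  # crude class label
            buckets.setdefault(key, []).append((y, x))
    return buckets
```

Restricting the domain search to one bucket is what turns the quadratic full search into a much shorter per-bucket search, at the cost of occasionally missing the globally best match.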
Copper and its alloys and composites (with copper as the matrix) are widely used in electronic and bearing materials owing to their excellent thermal and electrical conductivities.
In this study, the powder metallurgy technique was used to produce copper-graphite composites with three volume percentages of graphite. The selected processing parameters were a sintering temperature of 900 °C and a holding time of 90 minutes for samples heated in an inert atmosphere (argon gas). Wear test results showed a pronounced improvement in wear resistance as the graphite content, which acts as a solid lubricant, increased (the wear rate decreased by about 88% compared with pure Cu). Microhardness and …
The aim of this research is to study the surface alteration characteristics and surface morphology of superhydrophobic/hydrophobic nanocomposite coatings prepared by electrospinning to coat various materials such as glass and metal. This is a low-cost fabrication method for polymer solutions of polystyrene (PS), polymethylmethacrylate (PMMA), and silicone rubber (RTV Si), which were prepared at various wt% compositions for each solution. Contact angle, surface tension, viscosity, and roughness were measured for all specimens. SEM revealed the morphology of the surfaces after coating. PS and PMMA showed superhydrophobic properties on the metal substrate, while Si showed hydrophobic …
Groundwater is one of the most important sources of fresh water, on which many regions around the world depend, especially in semi-arid and arid regions. Protecting and maintaining groundwater is difficult, but it is essential to preserve such an important water source. The current study assesses the susceptibility of groundwater to pollution using the DRASTIC model within a GIS environment and its toolboxes. A vulnerability map was created from data collected from 55 wells surveyed by the researchers, as well as archived records from governmental institutions and some international organizations. The results indicate that the region falls into three vulnerability zones, namely …
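The DRASTIC index itself is a weighted sum of ratings for seven hydrogeological parameters: Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of the vadose zone, and hydraulic Conductivity. A minimal sketch using the standard DRASTIC weights; the class breaks below are illustrative placeholders, since the study's actual breaks are not given here.

```python
# Standard DRASTIC weights (Aller et al., 1987). Ratings (typically 1-10)
# come from field data for each well or grid cell.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted sum of the seven DRASTIC parameter ratings for one cell."""
    return sum(WEIGHTS[p] * ratings[p] for p in WEIGHTS)

def vulnerability_class(index):
    """Illustrative class breaks -- the breaks used in any given study
    depend on the observed index range."""
    if index < 120:
        return "low"
    if index < 160:
        return "moderate"
    return "high"
```

In a GIS workflow, each of the seven parameters is a raster layer; the index is computed cell by cell and then classified to produce the vulnerability map.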
In this paper, Zener diodes were fabricated by mixing Ag2O(1-x)ZnO(x) at three mixing ratios, where x is 0.5, 0.3, and 0.1, deposited on p-type porous silicon using the laser-induced plasma technique at room temperature (RT). The results showed a decrease in the knee and Zener voltages as the mixing ratio of the Ag2O(1-x)ZnO(x) structure increased. Nanofilms of 200 nm thickness were prepared from pure ZnO and Ag2O, as well as Ag2O(1-x)ZnO(x) at the three mixing ratios, and deposited on glass slides at RT to analyze the structural and optical properties. The structures of Ag2O and Ag2O …
In this paper, two new simple, fast, and efficient block matching algorithms are introduced. Both methods begin the block matching process at the image's center block and move across the blocks toward the image boundaries. Each block's motion vector is initialized using linear prediction based on the motion vectors of neighboring blocks that have already been scanned and assessed. A hybrid mechanism is also introduced that mixes the two proposed predictive mechanisms with the Exhaustive Search (ES) mechanism in order to attain matching accuracy near or equal to that of ES, but with a search time (ST) less than 80% of that of ES, while offering more control over reducing search errors. The experimental tests …
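The prediction idea can be illustrated as follows: each block's search starts from a vector predicted from its already-processed neighbors, then refines within a small window, so the search is far cheaper than ES. This sketch uses a raster scan and a mean-of-neighbors prediction as a simple stand-in for the paper's center-outward scan; the function names and window radius are assumptions.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def predict_and_search(cur, ref, bs=8, radius=2):
    """Block matching with linearly predicted starting vectors.

    Each block's search is initialized from the mean of the motion
    vectors of its already-scanned left and top neighbors, then refined
    within +/- `radius` pixels around that prediction."""
    h, w = cur.shape
    mvs = np.zeros((h // bs, w // bs, 2), dtype=int)
    for by in range(h // bs):
        for bx in range(w // bs):
            preds = []                       # linear prediction from
            if bx > 0: preds.append(mvs[by, bx - 1])   # scanned neighbors
            if by > 0: preds.append(mvs[by - 1, bx])
            p = np.mean(preds, axis=0).astype(int) if preds else np.zeros(2, int)
            y0, x0 = by * bs, bx * bs
            block = cur[y0:y0+bs, x0:x0+bs]
            best, best_mv = None, (0, 0)
            for dy in range(p[0] - radius, p[0] + radius + 1):
                for dx in range(p[1] - radius, p[1] + radius + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if 0 <= yy <= h - bs and 0 <= xx <= w - bs:
                        cost = sad(block, ref[yy:yy+bs, xx:xx+bs])
                        if best is None or cost < best:
                            best, best_mv = cost, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs
```

Because the refinement window is small and fixed, the per-block cost is constant, whereas ES examines every candidate in a much larger search range.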
Image compression is very important for reducing the cost of data storage and transmission over relatively slow channels. The wavelet transform has received significant attention because its multiresolution decomposition allows efficient image analysis. This paper attempts to give an understanding of the wavelet transform using two popular wavelet families, Haar and Daubechies, and compares their effects on image compression.
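One level of the Haar transform averages and differences adjacent samples; the compression comes from thresholding or discarding the small detail coefficients. A minimal self-contained 2D sketch (one decomposition level; real codecs such as those compared in such studies use several levels and entropy coding):

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar transform: average/difference pairs along
    rows, then along columns, yielding the LL, LH, HL, HH subbands."""
    a = (x[:, ::2] + x[:, 1::2]) / 2.0   # row averages (low-pass)
    d = (x[:, ::2] - x[:, 1::2]) / 2.0   # row differences (high-pass)
    rows = np.hstack([a, d])
    a2 = (rows[::2] + rows[1::2]) / 2.0  # column averages
    d2 = (rows[::2] - rows[1::2]) / 2.0  # column differences
    return np.vstack([a2, d2])

def ihaar2d(c):
    """Inverse of one Haar level: undo the column step, then the row step."""
    h, w = c.shape
    a2, d2 = c[:h//2], c[h//2:]
    rows = np.empty((h, w))
    rows[::2], rows[1::2] = a2 + d2, a2 - d2
    a, d = rows[:, :w//2], rows[:, w//2:]
    x = np.empty((h, w))
    x[:, ::2], x[:, 1::2] = a + d, a - d
    return x
```

Setting coefficients with small magnitude to zero before `ihaar2d` gives lossy compression; most of the image energy concentrates in the LL subband, which is why the reconstruction degrades gracefully.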
This paper presents the application of a framework for fast and efficient compressive sampling based on the concept of random sampling of a sparse audio signal. It provides four important features. (i) It is universal across a variety of sparse signals. (ii) The number of measurements required for exact reconstruction is nearly optimal and much less than the sampling frequency, below the Nyquist rate. (iii) It has very low complexity and fast computation. (iv) It is built on a provable mathematical model from which the trade-offs among streaming capability, computation/memory requirements, and reconstruction quality of the audio signal can be quantified. Compressed sensing (CS) is an attractive compression scheme due to its universality …
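The random-sampling idea can be illustrated with a Gaussian sensing matrix and a greedy reconstruction such as orthogonal matching pursuit (OMP). This does not reproduce the paper's own framework; the dimensions and the choice of OMP are illustrative assumptions showing how far fewer measurements than samples can still recover a sparse signal exactly.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column of Phi most
    correlated with the residual, then re-fit on the chosen support by
    least squares, for k iterations."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x                           # m << n compressive measurements
x_hat = omp(Phi, y, k)
```

Here 64 random measurements suffice to recover a 5-sparse length-256 signal, consistent with the claim that the measurement count scales with the sparsity rather than the Nyquist rate.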
Nowadays, huge numbers of digital images are used and transferred via the Internet, which has become the primary source of information in several domains in recent years. Image blur, caused by object movement or camera shake, is one of the most common and difficult challenges in image processing. De-blurring is the process of restoring the sharp original image, so many techniques have been proposed and a large number of research papers published on removing blur from images. This paper reviews recent de-blurring papers published in 2017-2020, focusing on strategies for enhancing image de-blurring software.
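When the blur kernel (point spread function, PSF) is known, a classical baseline among de-blurring strategies is Wiener deconvolution in the frequency domain. A minimal sketch, assuming circular convolution and a known PSF; in practice the PSF must be estimated (blind deconvolution) and the image padded, and the constant K stands in for the noise-to-signal power ratio.

```python
import numpy as np

def wiener_deblur(blurred, psf, K=0.01):
    """Frequency-domain Wiener deconvolution with a known PSF.

    K approximates the noise-to-signal power ratio; as K -> 0 this
    reduces to naive inverse filtering, which amplifies noise at
    frequencies where the PSF response is small."""
    H = np.fft.fft2(psf, s=blurred.shape)  # PSF padded to image size
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))
```

Note that `fft2(psf, s=...)` anchors the kernel at the top-left corner; for display purposes the PSF is usually rolled so the blur is centered, but the inversion works as long as blurring and deblurring use the same convention.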