The median filter is adopted to match the noise statistics of the degradation and obtain good-quality smoothed images. Two methods are suggested in this paper (a Pentagonal-Hexagonal mask and a Scan Window mask); the study modifies the median filter to improve noise suppression and give more reliable results. The modified median filter with the Pentagonal-Hexagonal mask was found to give better results, both qualitatively and quantitatively, than classical median filters and the other suggested method (the Scan Window mask), although at the cost of extra processing time. However, when the noise is of the line type, the 3x3 cross filter is sometimes preferred to the Pentagonal-Hexagonal mask, with only slight differences between them. The Scan Window mask gave qualitatively better results than the other filters when removing line noise, again at the cost of extra processing time.
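A masked median filter of this kind can be sketched as follows; since the abstract does not give the exact Pentagonal-Hexagonal footprint, the 5x5 pentagon below is a hypothetical stand-in, and scipy's median_filter does the actual filtering.

```python
# Minimal sketch of a median filter restricted to a non-rectangular mask.
# The pentagonal footprint is hypothetical; the paper's exact
# Pentagonal-Hexagonal mask shape is not given in the abstract.
import numpy as np
from scipy.ndimage import median_filter

PENTAGON = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
], dtype=bool)

# The 3x3 cross filter mentioned above, for comparison on line noise.
CROSS = np.array([
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
], dtype=bool)

def masked_median(image: np.ndarray, footprint: np.ndarray = PENTAGON) -> np.ndarray:
    """Replace each pixel by the median of the pixels selected by `footprint`."""
    return median_filter(image, footprint=footprint)
```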
The media play an important role in shaping the mental image their audiences hold of individuals, groups, organizations, states, and peoples. They are the window through which the masses view events and issues, and it is in light of their exposure to these media that people form their opinions and impressions.
Despite the importance of direct experience in shaping opinions, images, and impressions, reliance on these media is inevitable, since individuals cannot engage directly with the thousands of events, issues, and topics that concern their community and other societies.
There is no doubt that the media are of great importance at the present time, because of their significant impact on the management of the course of pol
Internet image retrieval is an interesting task that requires effort from both image processing and relationship-structure analysis. In this paper, a compression method is proposed for the case where more than one photo needs to be sent via the Internet, based on image retrieval. First, face detection is implemented based on local binary patterns. The background is then identified by matching global self-similarities and comparing it with the backgrounds of the remaining images. The proposed algorithm bridges the gap between present image-indexing technology, developed in the pixel domain, and the fact that an increasing number of images stored on computers are already compressed by JPEG at the source. Similar images are found, and only a few images are sent instead.
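As a rough illustration of the local-binary-pattern step named above, the sketch below computes the basic 3x3 LBP code for a single interior pixel; the paper's actual face detector, its training, and its thresholds are not described in the abstract.

```python
import numpy as np

def lbp_code(gray: np.ndarray, r: int, c: int) -> int:
    """Basic 3x3 LBP: compare the 8 neighbors of pixel (r, c) to the center
    and pack the comparison bits into one byte. Assumes (r, c) is not on
    the image border."""
    center = gray[r, c]
    # Clockwise neighbor offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if gray[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code
```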
Image enhancement techniques have recently become one of the most significant topics in the field of digital image processing. The basic problem in enhancement is how to remove noise and improve the details of a digital image. In the current research, a method for digital image de-noising and detail sharpening/highlighting is proposed. The proposed approach uses a fuzzy logic technique to process every pixel in the image and then decides whether the pixel is noisy or needs further processing for highlighting. This is done by examining the pixel's degree of association with its neighboring elements using a fuzzy algorithm. The proposed de-noising approach was evaluated on some standard images after corrupting them with impulse noise.
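A minimal sketch of such a fuzzy noisiness decision is given below; it uses a piecewise-linear membership over the center pixel's mean absolute difference from its eight neighbors, and the breakpoints t_low and t_high are hypothetical values, not the paper's.

```python
import numpy as np

def noisiness(window: np.ndarray, t_low: float = 10.0, t_high: float = 60.0) -> float:
    """Fuzzy degree (0..1) that the center pixel of a 3x3 window is impulse
    noise, judged by its mean absolute difference from the 8 neighbors."""
    center = float(window[1, 1])
    neighbors = np.delete(window.astype(float).flatten(), 4)
    d = np.mean(np.abs(neighbors - center))
    # Piecewise-linear membership: 0 below t_low, 1 above t_high.
    return float(np.clip((d - t_low) / (t_high - t_low), 0.0, 1.0))

# A pixel whose noisiness is near 1 would be replaced (e.g. by the median of
# its neighbors); one near 0 would be left alone or passed to the sharpener.
```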
Background: Determination of sex and estimation of stature from the skeleton are vital to medicolegal investigations. The skull is composed of hard tissue and is the best-preserved part of the skeleton after death; hence, in many cases it is the only part available for forensic examination. The lateral cephalogram is ideal for skull examination, as it shows the various anatomical points in a single radiograph. This study was undertaken to evaluate the accuracy of a digital cephalometric system as a quick, easy, and reproducible supplementary tool for sex determination in Iraqi samples across different age ranges, using certain linear and angular craniofacial measurements to predict sex. Materials and Methods: The sample consisted of 113 true lateral cephalograms
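The abstract does not state which classifier maps the craniofacial measurements to a sex prediction; purely as an illustration, the sketch below fits a linear discriminant on synthetic stand-in measurements (scikit-learn assumed), not the study's data or method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in data: 113 subjects x 4 linear/angular measurements.
X = rng.normal(loc=[80.0, 95.0, 120.0, 32.0], scale=5.0, size=(113, 4))
y = rng.integers(0, 2, size=113)  # 0 = female, 1 = male (made-up labels)

clf = LinearDiscriminantAnalysis().fit(X, y)
print("resubstitution accuracy:", clf.score(X, y))
```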
In this paper, an algorithm is introduced through which we can embed more data than regular spatial-domain methods allow. The secret data are first compressed using Huffman coding, and this compressed data is then embedded using a Laplacian sharpening method.
Laplace filters are used to determine the effective hiding places: based on a threshold value, the locations with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding while, at the same time, increasing the security of the algorithm by hiding the data in the locations with the highest edge values, where changes are less noticeable.
The perform
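A minimal sketch of the edge-selection and embedding idea follows; the Laplacian ranking matches the description above, while the LSB write and the top-n selection (in place of the paper's threshold) are assumptions, since the abstract does not give the exact embedding rule. The bit stream would come from the Huffman-coded secret data.

```python
import numpy as np
from scipy.ndimage import laplace

def strongest_edge_positions(gray: np.ndarray, n_bits: int):
    """(row, col) arrays of the n_bits largest |Laplacian| responses."""
    response = np.abs(laplace(gray.astype(float)))
    flat = np.argsort(response, axis=None)[::-1][:n_bits]
    return np.unravel_index(flat, gray.shape)

def embed_bits(gray: np.ndarray, bits) -> np.ndarray:
    """Write each bit into the LSB of one strong-edge pixel (assumed scheme).
    `gray` is a uint8 image; `bits` is a sequence of 0/1 values, e.g. the
    Huffman-coded secret data."""
    out = gray.copy()
    rows, cols = strongest_edge_positions(gray, len(bits))
    for b, r, c in zip(bits, rows, cols):
        out[r, c] = (out[r, c] & 0xFE) | b
    return out
```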
Currently, with the huge increase in modern communication and network applications, the speed of transmission and the storage of data in compact forms are pressing issues. An enormous number of images are stored and shared among people every moment, especially in the realm of social media; unfortunately, even with these marvelous applications, the limited size of the data that can be sent is still the main restriction, as essentially all of these applications use the well-known Joint Photographic Experts Group (JPEG) standard techniques. Accordingly, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different
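For reference, the lossy core of baseline JPEG, which these applications rely on, is an 8x8 block DCT followed by quantization; the sketch below shows that round-trip for one block, using the example luminance quantization table from the JPEG specification.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Example luminance quantization table from the JPEG specification (Annex K).
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)

def jpeg_block_roundtrip(block: np.ndarray, q: np.ndarray = Q) -> np.ndarray:
    """Lossy round-trip of one 8x8 block: level-shift, DCT, quantize,
    dequantize, inverse DCT. The rounding step is where detail is lost."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    quantized = np.round(coeffs / q)
    return idctn(quantized * q, norm="ortho") + 128.0
```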
Image captioning is the process of adding an explicit, coherent description of the contents of an image. This is done using the latest deep learning techniques, combining computer vision and natural language processing to understand the contents of the image and give it an appropriate caption. Multiple datasets suitable for many applications have been proposed. The biggest challenge for researchers working with natural language processing is that the datasets do not cover all languages. Researchers have therefore translated the most famous English datasets with Google Translate in order to understand the contents of the images in their mother tongue. In this paper, the proposed review aims to enhance the understanding o
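As a generic illustration of the encoder-decoder pattern that captioning models share (not any specific model from the reviewed work), a tiny PyTorch skeleton follows: a stand-in convolutional encoder produces the initial state of a recurrent decoder, which emits next-token logits for the caption.

```python
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    """Toy encoder-decoder captioner; purely illustrative."""
    def __init__(self, vocab_size: int, embed_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in image encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim))
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, vocab_size)

    def forward(self, images, captions):
        h0 = self.encoder(images).unsqueeze(0)  # image feature as initial state
        x = self.embed(captions)                # teacher-forced caption tokens
        y, _ = self.decoder(x, h0)
        return self.out(y)                      # next-token logits
```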
In this paper, a method is proposed to increase the compression ratio of color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of that information, while in smooth regions that contain no important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
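The abstract does not define how a block's importance is measured; per-block variance is a common proxy, and the hedged sketch below uses it to assign each 8x8 block a quality level, giving important blocks gentler compression.

```python
import numpy as np

def block_importance(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """Per-block variance as a stand-in importance score; the paper's
    actual importance measure is not specified in the abstract."""
    h, w = gray.shape
    h, w = h - h % size, w - w % size           # crop to whole blocks
    blocks = gray[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.var(axis=(1, 3))

def quality_map(importance: np.ndarray, q_low: int = 30, q_high: int = 90) -> np.ndarray:
    """Map importance to a per-block quality setting: important blocks get a
    higher quality (lower compression ratio), smooth blocks a lower one."""
    t = importance / (importance.max() + 1e-9)
    return (q_low + t * (q_high - q_low)).astype(int)
```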