Gaps and cracks in an image arise for different reasons and degrade its quality. Various methods address gap replenishment, alongside serious efforts and proposed methodologies to eliminate cracks of diverse kinds. In the current research work, a color-image white-crack inpainting system is introduced. The proposed inpainting system involves two algorithms: Linear Gaps Filling (LGF) and Circular Gaps Filling (CGF). The quality of the output image depends on several factors: the pixel tone, the number of pixels in the cracked area and its neighborhood, and the resolution of the image. The output quality of the two methods is: linear method, average Peak Signal-to-Noise Ratio (PSNR) = 24.899; circular method, average PSNR = 27.783. The correlation coefficients show that the output images are close to the original image (horizontal = 0.894, vertical = 0.521, and diagonal = 0.807). The linear method is less time-consuming than the circular method.
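For reference, the quality scores above are PSNR values. A minimal sketch of the metric, assuming 8-bit images and NumPy (the function name is illustrative, not from the paper):

```python
import numpy as np

def psnr(original: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two same-sized images, in dB."""
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A higher PSNR (as with the circular method's 27.783 average) indicates an output closer to the original image.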
Accurately localizing the basic components of human faces (i.e., eyebrows, eyes, nose, mouth, etc.) in images is an important step in face-processing techniques such as face tracking, facial expression recognition, and face recognition. However, it is a challenging task due to variations in scale, orientation, pose, facial expression, partial occlusion, and lighting conditions. In the current paper, a scheme comprising a three-stage hierarchical method for facial component extraction is presented; it works regardless of illumination variance. Adaptive linear contrast enhancement methods, such as gamma correction and contrast stretching, are used to simulate the variance in lighting conditions among images. As testing material…
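The two enhancement operations named above are standard; a minimal sketch of how gamma correction and contrast stretching are commonly implemented on 8-bit grayscale images (not necessarily the authors' exact procedure):

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to an 8-bit image: out = 255 * (in/255)^gamma."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly stretch intensities to the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return img.copy()  # flat image: nothing to stretch
    return ((img.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```

Varying `gamma` above and below 1.0 darkens or brightens the image, which is how lighting variance can be simulated from a single source image.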
Exposure at low light levels results in images containing only a small number of photons. Only the pixels at which a photon pulse arrives have an intensity value different from zero. This paper presents an easy and fast procedure for simulating low-light-level images, taking a standard, well-illuminated image as a reference. The images so obtained are composed of a few illuminated pixels on a dark background. When the number of illuminated pixels is less than 0.01% of the total number of pixels, it becomes difficult to identify the original object.
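One plausible way to implement such a simulation, assuming the normalized reference intensity acts as the photon-arrival probability (function and parameter names are illustrative, not the paper's):

```python
import numpy as np

def simulate_low_light(reference: np.ndarray, n_photons: int, rng=None) -> np.ndarray:
    """Simulate a photon-limited image from a well-illuminated 8-bit reference.

    Pixel locations are drawn with probability proportional to the reference
    intensity; all other pixels stay at zero (dark background).
    """
    rng = np.random.default_rng() if rng is None else rng
    prob = reference.astype(np.float64).ravel()
    prob /= prob.sum()                       # intensity -> arrival probability
    hits = rng.choice(prob.size, size=n_photons, p=prob)
    out = np.zeros(prob.size, dtype=np.uint8)
    out[hits] = 255                          # illuminated pixels only
    return out.reshape(reference.shape)
```

With `n_photons` below 0.01% of the total pixel count, the sampled image matches the regime the abstract describes, where the original object is hard to identify.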
The dependable and efficient identification of Qin seal script characters is pivotal to the discovery, preservation, and inheritance of the distinctive cultural values embodied by these artifacts. This paper uses histogram of oriented gradients (HOG) image features and an SVM model to build a character recognition model for identifying partial and blurred Qin seal script characters. The model achieves accurate recognition on a small, imbalanced dataset. First, a dataset of Qin seal script image samples is established, and Gaussian filtering is employed to remove image noise. Subsequently, a gamma transformation adjusts image brightness and enhances the contrast between font structures and image backgrounds. After a s…
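A minimal sketch of the HOG-plus-SVM pipeline described above, using scikit-image and scikit-learn; the HOG parameters and the `class_weight` choice for the imbalanced dataset are assumptions, not the paper's reported settings:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(images):
    """Extract HOG descriptors from a list of equal-sized grayscale images."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# Hypothetical training data: lists of preprocessed character images and labels.
# X_train = hog_features(train_images)
# clf = SVC(kernel="rbf", class_weight="balanced")  # weighting helps imbalance
# clf.fit(X_train, train_labels)
# pred = clf.predict(hog_features(test_images))
```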
One of the primary problems on the internet is security, especially as computer use increases in all social and business areas. Secure secret communication over public and private channels is therefore a major goal of researchers. Information hiding is one method of obtaining a secure communication medium and protecting data during transmission.
This research offers a new method that hides data at two levels: the first level hides by embedding and addition, while the second level hides by injection. The first level embeds a secret message bit in the LSB of the FFT and adds one kashida. Subtraction of two random images (STRI) serves as a random number generator (RNG) to find positions for hiding within the text. The second level is the in…
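For orientation, a minimal sketch of generic LSB embedding in Python; this illustrates the LSB idea only, not the paper's specific FFT/kashida scheme or its STRI position generator:

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits) -> np.ndarray:
    """Embed a bit sequence into the least-significant bits of a cover array.

    Assumes len(bits) <= cover.size; positions here are sequential, whereas
    the paper derives them from its STRI random-number generator.
    """
    stego = cover.copy().ravel()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | (b & 1)  # clear the LSB, then set it
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int):
    """Recover the first n_bits least-significant bits."""
    return [int(p & 1) for p in stego.ravel()[:n_bits]]
```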
This paper proposes several approaches for estimating the optical turbulence of the Earth's atmosphere and its effect on the resolution of solar images from ground-based telescopes, based on the von Kármán, Kolmogorov, and modified von Kármán PSD models. The results showed a strong correlation coefficient for the modified von Kármán model of atmospheric representation. When solar adaptive optics are properly designed, they typically reduce aberration considerably and provide greatly improved imagery.
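For reference, the Kolmogorov and modified von Kármán refractive-index spectra take the standard forms below (after Andrews and Phillips); the outer- and inner-scale defaults are illustrative, not the paper's fitted values:

```python
import numpy as np

def kolmogorov_psd(kappa, cn2):
    """Kolmogorov spectrum: Phi_n(kappa) = 0.033 * Cn^2 * kappa^(-11/3)."""
    return 0.033 * cn2 * kappa ** (-11.0 / 3.0)

def modified_von_karman_psd(kappa, cn2, L0=10.0, l0=0.01):
    """Modified von Karman spectrum with outer scale L0 and inner scale l0 (m).

    Phi_n(kappa) = 0.033 * Cn^2 * exp(-(kappa/km)^2) / (kappa^2 + k0^2)^(11/6)
    """
    k0 = 2.0 * np.pi / L0          # outer-scale cutoff frequency
    km = 5.92 / l0                 # inner-scale cutoff frequency
    return 0.033 * cn2 * np.exp(-(kappa / km) ** 2) / (kappa ** 2 + k0 ** 2) ** (11.0 / 6.0)
```

The finite outer scale `L0` removes the Kolmogorov model's unphysical divergence as kappa approaches zero, which is why the modified form often fits atmospheric data better.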
The current study prepares a geometric proposal of the main parameters that must be used in a seismic reflection survey to produce a three-dimensional subsurface image of the Siba oil field, located in Basra, southern Iraq. The results were based on two options for selecting the approved elements to create a three-dimensional image of the Mishrif, Zubair, and Yamama formations, as well as the Jurassic, the Permian Khuff, and the pre-Khuff reservoir interval. The geometry in option 1 comprises 12 lines, 6 shots, and 216 channels. The receiver density is 66.67 receivers/km², and the shot density is the same. The total number of shots is 21,000, which is the same as the number of receiv…
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error introduced by the polynomial approximation. Finally, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method achieves promising performance.
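A minimal sketch of the decomposition idea, assuming a low-order polynomial fit per flattened block; the paper's exact polynomial model, block-splitting rule, and Huffman stage are not reproduced here:

```python
import numpy as np

def run_length_encode(values):
    """Simple (value, count) run-length coding of the residue signal."""
    runs, count = [], 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            count = 1
    runs.append((int(values[-1]), count))
    return runs

def encode_block(block: np.ndarray, order: int = 1):
    """Fit a polynomial over pixel index and keep the integer residue losslessly."""
    y = block.astype(np.float64).ravel()
    x = np.arange(y.size)
    coeffs = np.polyfit(x, y, order)
    residue = (y - np.round(np.polyval(coeffs, x))).astype(np.int32)
    return coeffs, run_length_encode(residue)
```

In a full codec, the coefficients and runs from each block would then be entropy-coded (Huffman, as in the paper) to produce the final bitstream.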
In this paper, we use a circular sliding block for image edge determination. Circular blocks have symmetrical properties in all directions for the mask points around the central mask point. The introduced method is therefore efficient at detecting image edges in all directions, including curved edges and lines. The results exhibit very good performance in detecting image edges compared with the results of other edge detectors.
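One simple realization of a circular sliding block, using the intensity range inside the disk as the edge measure; this is an illustrative stand-in, not the paper's exact mask operation:

```python
import numpy as np

def circular_mask(radius: int) -> np.ndarray:
    """Boolean disk footprint, symmetric in all directions about its centre."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x ** 2 + y ** 2 <= radius ** 2

def edge_map(img: np.ndarray, radius: int = 2, thresh: float = 30.0) -> np.ndarray:
    """Mark a pixel as an edge when the intensity range inside the disk is large.

    Assumes a 2-D grayscale image; thresh is an illustrative tuning parameter.
    """
    mask = circular_mask(radius)
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1][mask]
            out[i, j] = window.max() - window.min() > thresh
    return out
```

Because the disk treats every direction identically, the response does not favour horizontal or vertical structure, which is the symmetry property the abstract relies on.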
Plagiarism is the use of someone else's ideas or work without permission. Using lexical and semantic notions of text similarity, this paper presents a plagiarism detection system for examining suspicious texts against available sources on the Web. The user can upload suspicious files in PDF or DOCX format. The system searches three popular search engines (Google, Bing, and Yahoo) for the source text and identifies the top five results for each search engine on the first retrieved page. The corpus is made up of the downloaded files and the scraped web-page text of the search engines' results. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system will…
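A minimal sketch of the lexical stage, assuming TF-IDF vectors and cosine similarity (the paper says documents are "encoded as vectors" without fixing the encoder, so this choice is an assumption):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexical_scores(suspicious: str, sources: list[str]) -> np.ndarray:
    """Encode texts as TF-IDF vectors and score each source against the
    suspicious document; higher cosine similarity suggests lexical overlap."""
    vec = TfidfVectorizer().fit(sources + [suspicious])
    src = vec.transform(sources)
    sus = vec.transform([suspicious])
    return cosine_similarity(sus, src).ravel()
```

Here `sources` would hold the scraped text of the top-five results from each search engine, and a score above some tuned threshold would flag a candidate source for closer inspection.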
In our research, we address one of the most important issues in linguistic studies of the Holy Qur'an: words that are close in meaning, which some believe to be synonyms but which are not considered synonyms in the Arabic language because of the subtle differences between them. Synonyms in the Arabic language are very few, indeed rare, and in the Holy Qur'an they are entirely absent. We examine how these words, close in meaning, were rendered in Almir Kuliev's translation of the Holy Qur'an into Russian.