Image captioning is the process of adding an explicit, coherent description of the contents of an image. This is done using recent deep learning techniques, combining computer vision and natural language processing, to understand the content of the image and give it an appropriate caption. Multiple datasets suitable for many applications have been proposed. A major challenge for natural language processing researchers is that these datasets are not available in all languages, so researchers have translated the best-known English datasets with Google Translate in order to describe image content in their mother tongues. The proposed review aims to improve the understanding of image captioning strategies and to survey previous research on image captioning, examining the most popular datasets in different languages (mostly English, translated into other languages), the latest models for describing images, and the evaluation measures, which are summarized and compared.
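As a concrete illustration of the evaluation measures surveyed in this review, the following minimal Python sketch scores a generated caption against reference captions with BLEU. The choice of NLTK, the example sentences, and the smoothing setting are illustrative assumptions and not part of the review itself.

```python
# Minimal sketch: BLEU as one caption-evaluation measure (illustrative only).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference captions and a generated candidate for one image.
references = [
    "a dog runs across the green field".split(),
    "a brown dog is running on grass".split(),
]
candidate = "a dog is running on the grass".split()

# Smoothing avoids zero scores when higher-order n-grams do not match.
score = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```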
This paper proposes two recognition methods based on principal component analysis features extracted in the wavelet (multi-wavelet) domain. In the first method, the recognition space is enlarged by computing the eigenstructure of the diagonal detail sub-images at five depths of the wavelet transform; the effective eigen range selected there forms the basis for image recognition. In the second method, a wavelet space that is invariant over all projections is obtained, adopting a new recursive form that represents an invariant space for any image resolution produced by the wavelet transform. In this way, all the major problems that affect the image and …
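A rough sketch of how the first method's feature extraction could look is given below, assuming PyWavelets for the five-level decomposition and scikit-learn's PCA for the eigenstructure. The wavelet, the number of retained components, and the nearest-neighbour matching rule are assumptions made for illustration, not the paper's exact procedure.

```python
# Sketch: eigen-features from the diagonal (HH) wavelet details at five depths.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def diagonal_detail_features(image, wavelet="haar", levels=5):
    """Concatenate the diagonal detail coefficients of all decomposition levels."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    return np.concatenate([d.ravel() for (_, _, d) in coeffs[1:]])

def fit_eigenspace(training_images, n_components=20):
    """Build the recognition space from the eigenstructure of the features.

    All training images are assumed to share the same size.
    """
    features = np.stack([diagonal_detail_features(img) for img in training_images])
    pca = PCA(n_components=n_components)
    projections = pca.fit_transform(features)      # eigen coefficients per image
    return pca, projections

def recognize(query_image, pca, projections):
    """Nearest-neighbour match inside the selected eigen range."""
    q = pca.transform(diagonal_detail_features(query_image)[None, :])
    return int(np.argmin(np.linalg.norm(projections - q, axis=1)))
```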
Fractal image compression has desirable properties such as fast decoding and very good rate-distortion curves, but it suffers from a high encoding time. Fractal image compression requires partitioning the image into range blocks. In this work, we introduce an improved partitioning process based on a merge approach, since some ranges are connected to others. This paper presents a method that reduces the encoding time of the technique by reducing the number of range blocks, based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the visual quality acceptable.
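One way the merge idea could reduce the number of range blocks is sketched below: a block is grouped with its right-hand neighbour when their mean and standard deviation are close. The block size, the tolerances, and the one-directional merge rule are illustrative assumptions rather than the paper's algorithm.

```python
# Sketch: merging statistically similar neighbouring range blocks.
import numpy as np

def block_stats(image, block=8):
    """Mean and standard deviation of every non-overlapping range block."""
    h, w = image.shape
    stats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            stats[(y, x)] = (patch.mean(), patch.std())
    return stats

def merge_similar_ranges(stats, block=8, mean_tol=4.0, std_tol=3.0):
    """Group a block with its right neighbour when both measures are close."""
    merged, groups = set(), []
    for (y, x), (m, s) in stats.items():
        if (y, x) in merged:
            continue
        group = [(y, x)]
        right = (y, x + block)
        if right in stats and right not in merged:
            m2, s2 = stats[right]
            if abs(m - m2) <= mean_tol and abs(s - s2) <= std_tol:
                group.append(right)
                merged.add(right)
        merged.add((y, x))
        groups.append(group)
    return groups   # fewer entries than the original range list means less encoding work
```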
In this paper, we introduce a DCT-based steganographic method for grayscale images. The embedding approach is designed to reach an efficient tradeoff among three conflicting goals: maximizing the amount of hidden message, minimizing the distortion between the cover image and the stego-image, and maximizing the robustness of the embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency region of the DCT domain using a magnitude modulation technique. The magnitude modulation is applied using uniform quantization with magnitude Adder/Subtractor modules. Test results indicate that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the stego-image …
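A minimal sketch of magnitude modulation by uniform quantization on a single mid-frequency DCT coefficient per 8x8 block is shown below. The coefficient position, the quantization step, and the parity rule are assumptions used for illustration; they are not claimed to reproduce the paper's Adder/Subtractor modules exactly.

```python
# Sketch: one payload bit per 8x8 block via quantized coefficient magnitude.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(4, 3), step=12.0):
    """Force the coefficient magnitude onto an even (bit 0) or odd (bit 1) level."""
    c = dctn(block.astype(float), norm="ortho")
    coeff = c[pos]
    sign = 1.0 if coeff >= 0 else -1.0
    q = int(np.round(abs(coeff) / step))
    if q % 2 != bit:
        # Adder/Subtractor step: move to the nearest level of the right parity.
        lower, upper = q - 1, q + 1
        q = upper if (q == 0 or abs(abs(coeff) - upper * step) <
                      abs(abs(coeff) - lower * step)) else lower
    c[pos] = sign * q * step
    return idctn(c, norm="ortho")

def extract_bit(block, pos=(4, 3), step=12.0):
    c = dctn(block.astype(float), norm="ortho")
    return int(np.round(abs(c[pos]) / step)) % 2

# Rounding the stego block back to 8-bit pixels perturbs the coefficient slightly;
# a larger step trades a little extra distortion for more robust extraction.
```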
In this research, a technique is proposed to enhance the performance of the frame-difference method for extracting moving objects from a video file. One of the main causes of performance degradation is the presence of noise, which may lead to incorrect identification of moving objects, so a way to diminish this noise effect is needed. Traditional average and median spatial filters can handle such situations, but this work focuses on the spectral domain, using Fourier and wavelet transforms to reduce the noise effect. Experiments and statistical features (entropy, standard deviation) show that these transforms overcome such problems in an elegant way.
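A compact sketch of the spectral-domain idea is given below: the frame difference is low-pass filtered in the Fourier domain before thresholding, so high-frequency noise is suppressed. The cut-off radius and the threshold value are illustrative assumptions, not the paper's tuned parameters (the wavelet variant is omitted for brevity).

```python
# Sketch: frame differencing with Fourier-domain noise suppression.
import numpy as np

def denoised_motion_mask(prev_frame, frame, cutoff=30, threshold=25):
    """Return a boolean mask of pixels judged to belong to moving objects."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    # Low-pass filter the difference image: keep only a disc of low frequencies.
    spectrum = np.fft.fftshift(np.fft.fft2(diff))
    h, w = diff.shape
    yy, xx = np.ogrid[:h, :w]
    keep = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * keep)))
    return filtered > threshold
```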
In this paper, an algorithm that can embed more data than regular spatial-domain methods is introduced. The secret data are compressed using Huffman coding, and the compressed data are then embedded using the Laplacian sharpening method. Laplace filters are used to determine the effective hiding places; based on a threshold value, the places with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding while also increasing the security of the algorithm by hiding the data in the places with the highest edge values, where changes are less noticeable. The performance …
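A condensed sketch of the edge-guided hiding step is shown below. The payload bits are assumed to come from a separate Huffman coder (not shown here), and the percentile threshold and plain LSB substitution are simplifications used for illustration rather than the paper's exact embedding rule.

```python
# Sketch: hide payload bits in the LSBs of the strongest-edge pixels.
import numpy as np
from scipy.ndimage import laplace

def embed_in_edges(cover, payload_bits, percentile=90):
    """cover: 2-D uint8 array; payload_bits: iterable of 0/1 (e.g. Huffman output)."""
    stego = cover.copy()
    edge_strength = np.abs(laplace(cover.astype(float)))   # Laplacian response
    threshold = np.percentile(edge_strength, percentile)
    ys, xs = np.where(edge_strength >= threshold)           # candidate hiding places
    payload_bits = list(payload_bits)
    if len(payload_bits) > len(ys):
        raise ValueError("payload larger than the available edge capacity")
    for bit, y, x in zip(payload_bits, ys, xs):
        stego[y, x] = (stego[y, x] & 0xFE) | bit             # overwrite the LSB
    return stego

# Note: changing LSBs can slightly alter the Laplacian map, so a practical scheme
# must ensure the same candidate positions are recoverable at extraction time.
```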
This paper deals with a central issue in the field of human communication: monitoring the discourse of incitement, hate speech, and violence in the media, its language, and its methods. The researcher seeks to provide a scientific framework for the nature of the discourse of incitement, hate speech, and violence, and for the role that the media can play in resolving conflicts of different dimensions, building community peace, and preventing the emergence of conflicts among different parties and in different environments. The following themes are discussed:
The root of the discourse of hatred and incitement
The nature and dimensions of the discourse of incitement and hate speech
The n
Background: The treatment of dental tissues preceding adhesive procedures is a crucial step in the bonding protocol and decides the clinical success of restorations. This in vitro study evaluated the nanoleakage at the interface between the adhesive system and dentine treated by five surface modalities, using scanning electron microscopy and energy-dispersive X-ray spectrometry. Materials and methods: Twenty-five extracted premolar teeth were selected for the study. Standardized class V cavities were prepared on the buccal and lingual surfaces, then the teeth were divided into five main groups (5 teeth in each group, n = 10) according to the type of dentine surface treatment used: Group (A): dentine was …