The task of image captioning, which involves automatically generating text that describes an image's visual content, has become feasible with developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with a Long Short-Term Memory (LSTM) network to generate image captions. The process includes two stages. The first stage trains the CNN-LSTM models with baseline hyper-parameters; the second stage retrains the CNN-LSTM models after optimizing and adjusting the hyper-parameters of the previous stage. Improvements include the use of a new activation function, regular parameter tuning, and an improved learning rate in the later stages of training. Experimental results on the Flickr8k dataset showed a noticeable and satisfactory improvement in the second stage, with a clear increase in the BLEU-1 to BLEU-4, METEOR, and ROUGE-L evaluation metrics. This increase confirmed the effectiveness of the alterations and highlighted the importance of hyper-parameter tuning in improving the performance of CNN-LSTM models on image captioning tasks.
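The two-stage regime, with a learning rate adjusted in the later stages of training, can be sketched as a simple step-decay schedule. The base rate, decay factor, and epoch counts below are illustrative assumptions, not the paper's actual values:

```python
def step_decay_lr(epoch, base_lr=1e-3, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return base_lr * (drop ** (epoch // epochs_per_drop))

# Stage 1: baseline hyper-parameters — a constant learning rate
stage1 = [1e-3 for epoch in range(5)]

# Stage 2: tuned — the rate decays as training progresses
stage2 = [step_decay_lr(epoch) for epoch in range(25)]
print(stage2[0], stage2[10], stage2[20])  # 0.001 0.0005 0.00025
```

Frameworks such as Keras or PyTorch accept a callback or scheduler wrapping exactly this kind of function, so the same two-stage experiment differs only in which schedule is passed to the trainer.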
A common approach to color image compression starts by transforming the red, green, and blue (RGB) color model into a desired color model, then applying compression techniques, and finally transforming the results back into the RGB model. In this paper, a new color image compression method based on multilevel block truncation coding (MBTC) and vector quantization is presented. By exploiting the human visual system's response to color, a bit allocation process is implemented to distribute the bits for encoding in a more effective way.
To improve the performance efficiency of vector quantization (VQ), modifications have been implemented. The method combines the computational simplicity and edge-preservation properties of MBTC with high c
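The block truncation step at the heart of MBTC can be sketched for a single grayscale block. This is the simple single-level, absolute-moment variant, not the multilevel scheme or the visually weighted bit allocation described in the paper:

```python
import numpy as np

def btc_block(block):
    """Single-level block truncation coding of a grayscale block.

    Returns a bitmap plus two reconstruction levels: the means of the
    pixels below and above the block mean (absolute-moment variant).
    """
    mean = block.mean()
    bitmap = block >= mean
    q = bitmap.sum()           # pixels at or above the mean
    m = block.size - q         # pixels below the mean
    if q == 0 or m == 0:       # flat block: a single level suffices
        return bitmap, mean, mean
    low = block[~bitmap].mean()
    high = block[bitmap].mean()
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Reconstruct the block from the bitmap and the two levels."""
    return np.where(bitmap, high, low)

block = np.array([[12, 200], [15, 190]], dtype=float)
bitmap, low, high = btc_block(block)
print(low, high)                         # 13.5 195.0
print(btc_decode(bitmap, low, high))
```

Only the bitmap (one bit per pixel) and the two levels need to be stored, which is where the compression comes from; the multilevel version refines this with more quantization levels per block.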
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have previously been achieved with simple metrics, but these require complex optimization, often with many unintuitive parameters that need careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is eff
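The Levenberg-Marquardt refinement mentioned above can be illustrated on a toy least-squares problem. The sketch shows only the damped update step on an assumed line-fitting problem; the groupwise warp model and the stiffness constraint are omitted:

```python
import numpy as np

def lm_step(residual, jacobian, params, lam):
    """One damped Gauss-Newton (Levenberg-Marquardt) parameter update."""
    r = residual(params)
    J = jacobian(params)
    A = J.T @ J + lam * np.eye(len(params))   # damping keeps A well-conditioned
    delta = np.linalg.solve(A, J.T @ r)
    return params - delta

# toy problem: fit y = a*x + b to points lying exactly on a = 2, b = 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
residual = lambda p: p[0] * x + p[1] - y
jacobian = lambda p: np.stack([x, np.ones_like(x)], axis=1)

p = np.zeros(2)
for _ in range(50):
    p = lm_step(residual, jacobian, p, lam=1e-3)
print(np.round(p, 3))  # ≈ [2. 1.]
```

In the alignment setting the parameters would be the point locations of each warp, the residual would measure the difference to the mean image, and the stiffness constraint would enter as an extra penalty term in the residual.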
The novels addressed in this research, including those with an ideological and political orientation, carry a negative image of the Kurds without any attempt at understanding, empathy, or separating politics from the people. These novels deformed the image, speaking with the tongue of the former authority, such as (Freedom Heads Bagged, Happy Sorrows Tuesdays by Jassim Alrassif, and Under the Dogs' Skies by Salah Salah). The rest of the novels (Life Is a Moment by Salam Ibrahim, The Country Night by Jassim Halawi, The Rib by Hameed Aleqabi) contain a scene carrying a negative image among many other social images, some positive, and can be described as neutral novels. We can
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms may take advantage of visual sensitivity and the statistical properties of image data to deliver superior results compared with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy-coding algorithm known as arithmetic coding. Pixel values that occur more often are coded in fewer bits compared with pixel values of less occurre
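The principle that frequently occurring pixel values cost fewer bits under arithmetic coding can be illustrated by computing the ideal per-symbol code length, -log2(p), from a block's value frequencies. This sketches only the costing, not the paper's actual coder:

```python
import math
from collections import Counter

def ideal_code_lengths(symbols):
    """Per-symbol cost in bits, -log2(p), which an arithmetic coder approaches."""
    freq = Counter(symbols)
    total = len(symbols)
    return {s: -math.log2(c / total) for s, c in freq.items()}

# a 4x4 block flattened into a string of pixel values (raster scan)
block = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 200, 200, 30, 30]
lengths = ideal_code_lengths(block)
print(round(lengths[10], 3), round(lengths[200], 3))  # 0.415 3.0
```

The dominant value 10 (probability 12/16) costs well under one bit per occurrence, while the rare values cost three bits each, which is exactly the asymmetry the abstract describes.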
The sensory image in the poetry of Ibn Dunaynir al-Mawsili([i]) expresses, in its structure, the poet's emotional and intellectual experience, his thoughts and feelings; through it he formulates a new conception of material and moral reality, one characterized first by clarity and second by closeness to the mind, linking the human senses to mental meanings, so that the sensory image presents to ((the recipient visual images, reshaped in the context
This paper deals with a central issue in the field of human communication: it surveys and monitors the discourse of incitement, hatred, and violence in the media, its language, and its methods. The researcher seeks to provide a scientific framework for the nature of the discourse of incitement, hatred, and violence, and the role that media can play in resolving conflicts in their different dimensions, building community peace, and preventing the emergence of conflicts among different parties and in different environments. In this paper, the following themes are discussed:
The root of the discourse of hatred and incitement
The nature and dimensions of the discourse of incitement and hatred speech
The n
This research follows other studies of the Iraqi public and its relationship with different state institutions; until recently, such studies were almost non-existent. The main characteristic that distinguishes scientific research is that it addresses a specific problem that needs to be studied and analysed from multiple aspects. Identifying the problem means limiting the topic to what the researcher wants to deal with, rather than other topics the title might suggest that the researcher does not want to deal with. The problem of this research is the absence of thoughtful and planned scientific programs to build a positive mental image of the institutions of the modern state in general and the House of Represe
Protecting information sent through insecure internet channels is a significant challenge facing researchers. In this paper, we present a novel method for image data encryption that combines chaotic maps with linear feedback shift registers in two stages. In the first stage, the image is divided into two parts. Then, the locations of the pixels of each part are redistributed through the random numbers key, which is generated using linear feedback shift registers. The second stage includes segmenting the image into the three primary colors red, green, and blue (RGB); then, the data for each color is encrypted through one of three keys that are generated using three-dimensional chaotic maps. Many statistical tests (entropy, peak signa
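The first-stage pixel redistribution can be sketched with a small linear feedback shift register driving a permutation key. The register length, taps, and seed below are illustrative assumptions, and the second, chaotic-map RGB stage is omitted:

```python
def lfsr_stream(seed, taps=(16, 14, 13, 11), nbits=16):
    """Simple 16-bit Fibonacci LFSR; yields one pseudo-random bit per step."""
    mask = (1 << nbits) - 1
    state = seed & mask
    assert state != 0, "LFSR seed must be non-zero"
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & mask
        yield bit

def lfsr_bytes(seed, n):
    """Pack the bit stream into n key bytes."""
    g = lfsr_stream(seed)
    return [sum(next(g) << i for i in range(8)) for _ in range(n)]

def permute(pixels, seed):
    """Stage-1 sketch: redistribute pixel locations with an LFSR-derived key."""
    keys = lfsr_bytes(seed, len(pixels))
    order = sorted(range(len(pixels)), key=lambda i: (keys[i], i))
    return [pixels[i] for i in order], order

def unpermute(shuffled, order):
    """Invert the permutation (requires the same key-derived order)."""
    out = [0] * len(shuffled)
    for dst, src in enumerate(order):
        out[src] = shuffled[dst]
    return out

pixels = [52, 55, 61, 66, 70, 61, 64, 73]
scrambled, order = permute(pixels, seed=0xACE1)
print(unpermute(scrambled, order) == pixels)  # True
```

Because the receiver can regenerate the same LFSR key from the shared seed, the permutation is exactly invertible, which is what makes this usable as the confusion stage of the cipher.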
The rapid development of telemedicine services and the requirements for exchanging medical information between physicians, consultants, and health institutions have made the protection of patients' information an important priority for any future e-health system. The protection of medical information, including the cover (i.e. the medical image), has a specificity that differs slightly from the requirements for protecting other information. It is necessary to preserve the cover carefully due to its importance on the receiving side, as medical staff use this information to provide a diagnosis and save a patient's life. If the cover is tampered with, this leads to failure in achieving the goal of telemedicine. Therefore, this work provides an in
NeighShrink is an efficient image denoising algorithm based on the discrete wavelet transform (DWT). Its disadvantage is that it uses a suboptimal universal threshold and an identical neighbouring window size in all wavelet subbands. Dengwen and Wengang proposed an improved method, which can determine an optimal threshold and neighbouring window size for every subband by Stein's unbiased risk estimate (SURE). Its denoising performance is considerably superior to NeighShrink and also outperforms SURE-LET, an up-to-date denoising algorithm based on the SURE. In this paper, different wavelet transform families are used with this improved method; the results show that the Haar wavelet has the lowest performance among
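The basic NeighShrink rule that both methods build on can be sketched directly on a single subband of coefficients. The threshold and window size are fixed here for illustration; the improved method would instead select them per subband via SURE, and the subband itself would come from a DWT of the noisy image:

```python
import numpy as np

def neigh_shrink(coeffs, threshold, win=3):
    """NeighShrink on one wavelet subband: each coefficient is scaled by
    max(0, 1 - t^2 / S^2), where S^2 is the sum of squares over its
    win x win neighbourhood, so isolated small (noise) coefficients are
    zeroed while coefficients in high-energy regions are kept."""
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    out = np.zeros_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            s2 = np.sum(padded[i:i + win, j:j + win] ** 2)
            beta = max(0.0, 1.0 - threshold ** 2 / s2) if s2 > 0 else 0.0
            out[i, j] = coeffs[i, j] * beta
    return out

# toy subband: two strong edge coefficients amid small noise values
band = np.full((4, 4), 0.1)
band[0, 0] = 10.0
band[3, 3] = 10.0
shrunk = neigh_shrink(band, threshold=1.0)
print(shrunk[0, 0] > 9, shrunk[1, 2] == 0.0)  # True True
```

The key property is visible in the toy output: the strong coefficient survives almost unchanged because its neighbourhood energy dwarfs the threshold, while the small coefficient far from any edge is suppressed to exactly zero.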