The task of image captioning, which involves automatically generating text that describes an image's visual content, has become feasible with advances in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with Long Short-Term Memory (LSTM) networks to generate image captions. The process comprises two stages: the first trains the CNN-LSTM models with baseline hyper-parameters, and the second trains the CNN-LSTM models after optimizing and adjusting the hyper-parameters of the previous stage. Improvements include a new activation function, regular parameter tuning, and an improved learning rate in the later stages of training. Experimental results on the Flickr8k dataset showed a noticeable and satisfactory improvement in the second stage, with a clear increase in the evaluation metrics BLEU-1 to BLEU-4, METEOR, and ROUGE-L. This increase confirms the effectiveness of the alterations and highlights the importance of hyper-parameter tuning in improving the performance of CNN-LSTM models in image captioning tasks.
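Since the abstract above reports BLEU scores, a minimal sketch of how BLEU-1 (unigram precision with a brevity penalty) is computed for a single generated caption may clarify the metric; the captions below are invented examples, not from the paper:

```python
from collections import Counter
import math

def bleu1(candidate, reference):
    """Unigram precision with brevity penalty (BLEU-1), simplified
    to a single reference caption."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clipped matches: each candidate word counts at most as often
    # as it appears in the reference.
    matches = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = matches / len(cand)
    # Brevity penalty discourages overly short captions.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(round(bleu1("a dog runs on the grass",
                  "a dog is running on the grass"), 3))  # prints 0.705
```

Full BLEU averages clipped n-gram precisions up to n = 4; this sketch shows only the unigram case.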
Mixture experiments are experiments in which the response variable depends on the proportions of the components in the mixture. In this research we compare the Scheffé model with the Kronecker model for mixture experiments, especially when the experimental region is restricted. Because mixture data suffer from high correlation and multicollinearity among the explanatory variables, which affects the computation of the Fisher information matrix of the regression model, we used the generalized inverse and the stepwise regression procedure to estimate the parameters of the mixture model.
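To illustrate the role of the generalized inverse mentioned above, the following sketch fits a Scheffé quadratic mixture model for three components using NumPy's Moore-Penrose pseudo-inverse; the design points and coefficients are invented for illustration, not taken from the study:

```python
import numpy as np

def scheffe_quadratic_design(X):
    """Design matrix for the Scheffe quadratic mixture model in three
    components: y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3.
    There is no intercept because x1 + x2 + x3 = 1."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Simplex-lattice-style design points (proportions sum to 1).
X = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
true_beta = np.array([2.0, 1.0, 3.0, 4.0, 0.0, -2.0])  # hypothetical
F = scheffe_quadratic_design(X)
y = F @ true_beta
# The Moore-Penrose generalized inverse handles the (near-)singular
# information matrix F'F that multicollinearity can produce.
beta_hat = np.linalg.pinv(F) @ y
```

With a full-rank design the pseudo-inverse reproduces the ordinary least-squares fit; when the restricted region makes F'F singular, it still returns the minimum-norm solution.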
Evolutionary algorithms are better than heuristic algorithms at finding protein complexes in protein-protein interaction networks (PPINs). Many of these algorithms depend on their standard frameworks, which are based on topology, and many have been examined exclusively on networks with only reliable interaction data. The main objective of this paper is to extend the design of the canonical, topology-based evolutionary algorithms suggested in the literature to cope with noisy PPINs. The design of the evolutionary algorithm is extended based on the functional domain of the proteins rather than on the topological domain of the PPIN. The gene ontology annotation in each molecular function, biological process...
Practicing physical activity is a very necessary requirement for all segments of society. Many individuals, regardless of environment, age, and gender, are exposed to joint diseases, injuries, and neck pain for many and varied reasons. Treatment methods and techniques vary according to the severity of the injury; therapeutic exercises, including stretching exercises, specifically static ones, are one of the ways to eliminate cases of muscle and joint dysfunction...
Internet image retrieval is an interesting task that requires effort from both image processing and relationship-structure analysis. In this paper, a compression method based on image retrieval is proposed for sending multiple photos over the internet. First, face detection is performed using local binary patterns. The background is identified by matching global self-similarities and comparing it with the backgrounds of the other images. The proposed algorithm bridges the gap between present image indexing technology, developed in the pixel domain, and the fact that an increasing number of images stored on computers are already compressed by JPEG at the source. Similar images are found, and a few images are sent instead...
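As a toy illustration of the local binary pattern operator mentioned above (not the paper's implementation), the following computes the basic 3x3 LBP code of a single pixel neighbourhood:

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and read them as an 8-bit code."""
    center = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner
    # (the ordering is a convention; any fixed order works).
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, v in enumerate(neighbours) if v >= center)

patch = np.array([[10, 20, 30],
                  [40, 25, 15],
                  [ 5, 35, 50]])
print(lbp_code(patch))  # prints 180
```

A face detector built on LBP histograms would slide this operator over the image and compare histograms of the resulting codes per region.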
Data transmission in an orthogonal frequency division multiplexing (OFDM) system needs source and channel coding, and the transmitted data suffer from the adverse effect of a large peak-to-average power ratio (PAPR). Source codes and channel codes can be combined using different joint codes; the variable-length error-correcting code (VLEC) is one of these joint codes. VLEC is used in a MATLAB simulation of image transmission in an OFDM system; different VLEC code lengths are used and compared, showing that the PAPR decreases as the code length increases. Several techniques for PAPR reduction are used and compared. The PAPR of the OFDM signal is measured for an image coded with VLEC and compared with an image coded by Huffman source coding and Bose-...
A digital camera with a built-in light unit is useful under low illumination but not under high illumination. At different intensities the image quality does not stay good: the image becomes dark or of low intensity, so the contrast and intensity cannot be changed without increasing the information loss in the bright and dark regions. In this research we study regular illumination of images using tungsten light while varying the intensities. The results show that tungsten light gives nearly similar intensities for the three color bands (RGB) and the luminance band (L). The results depend on the statistical properties, represented by the voltage, power, and intensities, and on the effect of these parameters on the digital...
Cyber-attacks keep growing, so stronger methods for protecting images are needed. This paper presents DGEN, a Dynamic Generative Encryption Network, which combines Generative Adversarial Networks with a context-adaptive key system. Unlike a fixed scheme such as AES, the method can potentially adjust itself as new threats appear, aiming to resist brute-force, statistical, and quantum attacks. The design adds randomness, uses learning, and generates keys that depend on each image, which should give strong security and flexibility while keeping computational cost low. Tests were run on several public image sets, and the results show DGEN outperforms AES, chaos-based schemes, and other GAN approaches. Entropy reached 7.99 bits per pixel...
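The entropy figure quoted above (bits per pixel) is the Shannon entropy of the image's grey-level histogram; an ideal cipher image approaches the 8-bit maximum. A short sketch of how it is typically computed (not DGEN's code):

```python
import numpy as np

def entropy_bits_per_pixel(img):
    """Shannon entropy of an 8-bit image's grey-level histogram;
    8.0 is the maximum, reached by a uniform value distribution."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A perfectly uniform histogram reaches the 8-bit maximum.
uniform = np.arange(256, dtype=np.uint8).repeat(16)
```

A reported value of 7.99 therefore means the encrypted pixel values are nearly indistinguishable from uniform noise.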
This research used the spatial autoregressive (SAR) model and the spatial error model (SEM) in an attempt to provide practical evidence of the importance of spatial analysis, with a particular focus on spatial regression models that account for spatial dependence, whose presence can be tested with Moran's test. Ignoring this dependence may discard important information about the phenomenon under study, which ultimately weakens the statistical estimation power; these models form the link between ordinary regression models and time-series models. The spatial analysis was applied to the Iraq Household Socio-Economic Survey: IHS...
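Moran's test mentioned above is based on Moran's I statistic. A minimal sketch with an invented 4-region contiguity example (not the survey data):

```python
import numpy as np

def morans_i(y, W):
    """Moran's I spatial autocorrelation statistic:
    I = (n / S0) * (z' W z) / (z' z), where z = y - mean(y)
    and S0 is the sum of all spatial weights in W."""
    z = y - y.mean()
    s0 = W.sum()
    n = len(y)
    return float(n / s0 * (z @ W @ z) / (z @ z))

# Rook-contiguity weights for 4 regions arranged on a line: 0-1-2-3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y_clustered = np.array([1.0, 2.0, 8.0, 9.0])  # similar neighbours -> I > 0
y_alternating = np.array([1.0, 9.0, 1.0, 9.0])  # dissimilar neighbours -> I < 0
```

A significantly positive I indicates spatial dependence, motivating SAR/SEM over ordinary regression.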