The task of image captioning, which involves automatically generating text that describes an image's visual content, has become feasible with advances in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with a Long Short-Term Memory (LSTM) network to generate image captions. The process includes two stages. The first stage trains the CNN-LSTM models with baseline hyper-parameters, and the second stage retrains them after optimizing and adjusting those hyper-parameters. Improvements include a new activation function, regular parameter tuning, and an improved learning rate in the later stages of training. Experimental results on the Flickr8k dataset showed a noticeable improvement in the second stage, with a clear increase in the evaluation metrics BLEU-1 to BLEU-4, METEOR, and ROUGE-L. This increase confirms the effectiveness of the alterations and highlights the importance of hyper-parameter tuning for the performance of CNN-LSTM models in image captioning tasks.
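The encoder-decoder pattern the abstract describes can be sketched minimally: pooled CNN features seed the LSTM state, and tokens are generated greedily. Everything below (the dimensions, the random weights standing in for a trained model, the token indices) is hypothetical and purely illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the input, forget, output, and candidate gates are stacked."""
    z = W @ x + U @ h + b
    H = h.size
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical sizes: 512-d CNN feature, 64-d hidden state, 1000-word vocabulary.
feat_dim, hidden, vocab, embed = 512, 64, 1000, 32
W_init = rng.normal(0, 0.1, (hidden, feat_dim))   # projects CNN features to h0
W = rng.normal(0, 0.1, (4 * hidden, embed))
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
E = rng.normal(0, 0.1, (vocab, embed))            # word-embedding table
W_out = rng.normal(0, 0.1, (vocab, hidden))       # hidden state -> vocab logits

cnn_feature = rng.normal(size=feat_dim)           # stand-in for a pooled CNN output
h = np.tanh(W_init @ cnn_feature)                 # image features seed the state
c = np.zeros(hidden)

caption, token = [], 0                            # token 0 plays the role of <start>
for _ in range(5):                                # greedy decoding for 5 steps
    h, c = lstm_step(E[token], h, c, W, U, b)
    token = int(np.argmax(W_out @ h))
    caption.append(token)
print(caption)
```

In a trained system the weights would come from the two-stage training the abstract outlines, and decoding would stop at an end-of-sentence token rather than after a fixed number of steps.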
Estimating multivariate location and scatter with both affine equivariance and positive breakdown has always been difficult. A well-known estimator that satisfies both properties is the Minimum Volume Ellipsoid estimator (MVE). Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setup, algorithms for positive-breakdown estimators such as Least Median of Squares typically recompute the intercept at each step to improve the result. This approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the Minimum Volume Ball (MVB). In order
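The idea behind a Minimum Volume Ball center can be illustrated in one dimension, where the ball covering half the points reduces to the shortest interval covering half the points (the "shorth"). This is only a univariate sketch of the concept, not the paper's multivariate algorithm:

```python
def shortest_half_center(x, h=None):
    """Center of the shortest interval covering h points: a 1-D analogue of the
    Minimum Volume Ball center used for location adjustment."""
    xs = sorted(x)
    n = len(xs)
    if h is None:
        h = n // 2 + 1                      # cover just over half the data
    # Slide a window of h consecutive order statistics; keep the narrowest one.
    j = min(range(n - h + 1), key=lambda i: xs[i + h - 1] - xs[i])
    return 0.5 * (xs[j] + xs[j + h - 1])

# Fewer than half the points are outliers, so the center stays with the bulk.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 100.0, 101.0, 99.0]
print(shortest_half_center(data))
```

Because the estimate depends only on the tightest half of the sample, a minority of arbitrarily bad points cannot drag it away, which is the positive-breakdown property the abstract refers to.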
Background: One of the recommended methods for reducing aerosol contamination during routine use of the high-speed turbine and ultrasonic scaling is a preprocedural mouth rinse. Several agents have been investigated as preprocedural mouth rinses. Chlorhexidine significantly reduces the viable microbial content of aerosol when used as a preprocedural rinse. Studies have shown that cetylpyridinium chloride (CPC) mouthwash is as effective as chlorhexidine in reducing plaque and gingivitis. This study compared the effect of 0.07% CPC to 0.2% chlorhexidine gluconate (CHX) as preprocedural mouth rinses in reducing the aerosol contamination produced by a high-speed turbine. Materials and Methods: 36 patients were divided into three groups.
Using the notions of pre-g-closedness and pre-g-openness, we generalize a class of separation axioms in topological spaces. In particular, we present in this paper new types of regularities, which we name pg-regularity and Spg-regularity. Many results and properties of both types have been investigated and illustrated by examples.
The present work makes a comparative investigation of three ionospheric models: IRI-2020, ASAPS, and VOACAP. The purpose of the comparison is to assess how well these models predict the Maximum Usable Frequency (MUF) parameter over the mid-latitude region during the severe geomagnetic storm of 17 March 2015. Three mid-latitude stations were selected for the study: Athens (23.50° E, 38.00° N), Jeju (124.53° E, 33.6° N), and Pt. Arguello (239.50° W, 34.80° N). Daily MUF values were calculated with the tested models for the three sites over a five-day span (the day of the event plus the two days preceding and following it). The calculated datasets were compared.
In this paper, we investigate the behavior of the Bayes estimators for the scale parameter of the Gompertz distribution under two different loss functions, the squared error loss function and the (proposed) exponential loss function, based on different double prior distributions: Erlang with inverse Lévy prior, Erlang with non-informative prior, inverse Lévy with non-informative prior, and Erlang with chi-square prior.
A simulation study was carried out to obtain the results, including the estimated values and the mean square error (MSE) for the scale parameter of the Gompertz distribution, for different cases of the scale parameter of the Gompertz distribution.
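The simulation workflow (generate samples, estimate the scale, average the squared error over replications) can be sketched as follows. The double-prior Bayes estimators from the abstract are not reproduced here; as a stand-in, the sketch uses the closed-form maximum likelihood estimator of the scale with the shape known, and the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def rgompertz(n, eta, b):
    """Sample Gompertz variates by inverting F(x) = 1 - exp(-eta*(e^{bx} - 1))."""
    u = rng.uniform(size=n)
    return np.log1p(-np.log1p(-u) / eta) / b

def eta_mle(x, b):
    """Closed-form ML estimate of the scale eta when the shape b is known."""
    return x.size / np.sum(np.expm1(b * x))

eta_true, b, n, reps = 2.0, 1.0, 50, 2000
est = np.array([eta_mle(rgompertz(n, eta_true, b), b) for _ in range(reps)])
mse = float(np.mean((est - eta_true) ** 2))
print(round(float(np.mean(est)), 3), round(mse, 4))
```

A Bayes estimator under squared error loss would replace `eta_mle` with the posterior mean under the chosen double prior; the Monte Carlo MSE comparison itself is unchanged.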
The purpose of this paper is to apply robustness in linear programming (LP) to eliminate the uncertainty problem in constraint parameters and to find the robust optimal solution that maximizes the profits of the General Company for Vegetable Oils for the year 2019. This is done by modifying a linear programming model in which some parameters have uncertain values and processing it with the robust counterpart of linear programming, so as to obtain results that are robust to random changes in the uncertain values of the problem, assuming these values belong to an uncertainty set and selecting the values that cause the worst results.
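The worst-case idea can be sketched on a toy two-variable problem with a single resource constraint and interval uncertainty in its coefficients. The numbers are invented for illustration and this is the simplest (Soyster-style) robust counterpart, not the company's actual model:

```python
def solve_lp_2var(c, a, b):
    """Maximise c1*x1 + c2*x2 subject to a1*x1 + a2*x2 <= b, x >= 0.
    With one constraint the optimum sits at a vertex of the feasible region."""
    candidates = [(b / a[0], 0.0), (0.0, b / a[1]), (0.0, 0.0)]
    return max(candidates, key=lambda x: c[0] * x[0] + c[1] * x[1])

# Hypothetical data: nominal resource-use coefficients with uncertainty +/- dev.
c = (3.0, 5.0)                  # profit per unit of each product
a_nom, dev = (2.0, 4.0), (0.5, 1.0)
b = 40.0                        # resource capacity

nominal = solve_lp_2var(c, a_nom, b)
# Robust counterpart: since x >= 0, the worst case in the uncertainty set takes
# every uncertain coefficient at its upper bound a_j + dev_j.
a_worst = (a_nom[0] + dev[0], a_nom[1] + dev[1])
robust = solve_lp_2var(c, a_worst, b)
print(nominal, robust)
```

The robust plan produces less (and earns less) than the nominal one, but it remains feasible for every realization of the coefficients inside the uncertainty set, which is exactly the trade-off the paper exploits.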
Abstract: The growing use of digital technologies across various sectors and daily activities has made handwriting recognition a popular research topic. Despite the continued relevance of handwriting, people still need to convert handwritten copies into digital versions that can be stored and shared. Handwriting recognition is a computer's ability to identify and understand legible handwritten input from various sources, including documents, photographs, and others. It poses a complex challenge due to the diversity of handwriting styles among individuals, especially in real-time applications. In this paper, an automatic system was designed for handwriting recognition.
In this paper we estimate the coefficients and the scale parameter in a linear regression model whose residuals follow the type 1 extreme value distribution for largest values. This can be regarded as an improvement over studies based on smallest values. We study two estimation methods (OLS and MLE), resorting to the Newton-Raphson (NR) and Fisher scoring methods to obtain the MLE because of the difficulty of the usual approach. The relative efficiency criterion is considered, along with statistical inference procedures for the type 1 extreme value regression model for largest values: confidence intervals and hypothesis tests for both the scale parameter and the regression coefficients.
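Because the type 1 extreme value (Gumbel, largest values) likelihood has no closed-form maximum, the scale must be found iteratively. The sketch below uses the intercept-only case and a simple fixed-point iteration on the scale equation rather than the paper's full regression model with Newton-Raphson or Fisher scoring; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate Gumbel (largest values) data with location mu = 1 and scale sigma = 2,
# using the quantile function x = mu - sigma * log(-log(U)).
mu, sigma, n = 1.0, 2.0, 20000
x = mu - sigma * np.log(-np.log(rng.uniform(size=n)))

# The MLE score equation for the scale rearranges to the fixed point
#   sigma = mean(x) - sum(x * exp(-x/sigma)) / sum(exp(-x/sigma)),
# which is iterated from a moment-based starting value.
s = x.std()
for _ in range(200):
    w = np.exp(-x / s)
    s = x.mean() - np.sum(x * w) / np.sum(w)

# With the scale fixed, the location MLE is available in closed form.
mu_hat = -s * np.log(np.mean(np.exp(-x / s)))
print(round(float(mu_hat), 2), round(float(s), 2))
```

In the regression setting the single location parameter is replaced by a linear predictor, and the score equations for the coefficients no longer decouple, which is why the paper turns to Newton-Raphson and Fisher scoring.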