Image captioning, the automatic generation of text that describes an image's visual content, has become feasible with advances in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with a Long Short-Term Memory (LSTM) network to generate image captions. The process includes two stages: the first trains the CNN-LSTM models with baseline hyper-parameters, and the second trains the CNN-LSTM models after optimizing and adjusting the hyper-parameters of the previous stage. Improvements include the use of a new activation function, regular parameter tuning, and an improved learning rate in the later stages of training. Experimental results on the Flickr8k dataset showed a noticeable and satisfactory improvement in the second stage, with a clear increase in the BLEU-1 to BLEU-4, METEOR, and ROUGE-L evaluation metrics. This increase confirms the effectiveness of the alterations and highlights the importance of hyper-parameter tuning in improving the performance of CNN-LSTM models on image captioning tasks.
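As a minimal illustration of one of the metrics named above, the following sketch computes BLEU-1 (clipped unigram precision with a brevity penalty); the two sentences are made up for the example and are not from the study.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """BLEU-1: clipped unigram precision times the brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Each candidate word is credited at most as often as it occurs in the reference.
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("a dog in the park", "a dog runs in the park")
# All 5 candidate words appear in the reference (precision 1.0), but the
# candidate is shorter, so score = exp(1 - 6/5) ≈ 0.819.
```

Full BLEU-4 as reported in the paper takes a geometric mean of clipped 1- to 4-gram precisions; BLEU-1 is shown here only because it fits in a few lines.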
Several attempts have been made to modify the quasi-Newton condition in order to obtain rapid convergence while preserving the full properties (symmetry and positive definiteness) of the inverse Hessian matrix (the matrix of second derivatives of the objective function). Many unconstrained optimization methods do not guarantee positive definiteness of the inverse Hessian approximation. One such method is the symmetric rank-one update in its H-version (the SR1 update): it satisfies the quasi-Newton condition and keeps the inverse Hessian approximation symmetric, but it does not preserve positive definiteness even when the initial inverse Hessian approximation is positive definite.
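A small NumPy sketch of the behaviour described above: the SR1 (H-version) update H⁺ = H + (s − Hy)(s − Hy)ᵀ / ((s − Hy)ᵀy) satisfies the quasi-Newton condition and symmetry, yet can destroy positive definiteness. The step s and gradient difference y here are made-up vectors chosen to trigger that failure, not data from the paper.

```python
import numpy as np

H = np.eye(2)                 # initial inverse-Hessian approximation (positive definite)
s = np.array([1.0, 0.0])      # step:                s_k = x_{k+1} - x_k (illustrative)
y = np.array([-1.0, 0.0])     # gradient difference: y_k = g_{k+1} - g_k (illustrative)

r = s - H @ y
H_new = H + np.outer(r, r) / (r @ y)   # SR1 (H-version) update

print(np.allclose(H_new @ y, s))              # True: quasi-Newton condition holds
print(np.allclose(H_new, H_new.T))            # True: symmetry preserved
print(np.all(np.linalg.eigvalsh(H_new) > 0))  # False: positive definiteness lost
```

With these vectors the curvature term (s − Hy)ᵀy is negative, so the rank-one correction pushes one eigenvalue of H⁺ below zero even though H was the identity.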
Construction contractors usually undertake multiple construction projects simultaneously. Such a situation involves sharing different types of resources, including money, equipment, and manpower, which may become a major challenge in many cases. In this study, the financial aspects of working on multiple projects at a time are addressed and investigated. To deal with financial shortages, the study proposes a multi-project scheduling optimization model for profit maximization, while minimizing the total project duration. A genetic algorithm and finance-based scheduling are used to produce feasible schedules that balance the financing of activities at any time.
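A toy evolutionary sketch of the idea, not the paper's model: the genes are start delays for three hypothetical projects, and the fitness rewards an early overall finish while penalizing any week whose combined cash outflow exceeds an assumed credit limit. All durations, costs, and limits are invented for illustration.

```python
import random

random.seed(42)

durations = [4, 3, 5]          # weeks per project (made-up)
weekly_cost = [30, 40, 20]     # cash outflow per active week (made-up)
credit_limit = 70              # assumed available financing per week
horizon = 12                   # latest allowed finish week

def fitness(delays):
    """Lower is better: finish week plus a heavy penalty for cash overruns."""
    finish = max(d + dur for d, dur in zip(delays, durations))
    penalty = 0
    for week in range(horizon):
        outflow = sum(c for d, dur, c in zip(delays, durations, weekly_cost)
                      if d <= week < d + dur)
        penalty += max(0, outflow - credit_limit)
    return finish + 10 * penalty

def mutate(delays):
    """Shift one project's start by one week, staying inside the horizon."""
    i = random.randrange(len(delays))
    child = list(delays)
    child[i] = max(0, min(horizon - durations[i], child[i] + random.choice([-1, 1])))
    return child

# (mu + lambda)-style evolution with mutation only, kept short for the sketch.
pop = [[random.randint(0, 5) for _ in durations] for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = min(pop, key=fitness)
```

The penalty term plays the role of finance-based scheduling: schedules that overdraw the credit line are driven out of the population, so the surviving delays stagger the projects' cash demands.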
Optimization procedures using a variety of input parameters have received considerable attention, but using the three non-edible seed oils of Jatropha (Jatropha curcas), Sesame (Sesamum indicum), and Sweet Almond (Prunus amygdalus dulcis) has a few advantages, including availability and the absence of competition with food. A two-stage trans-esterification process with a sodium hydroxide-based catalyst was optimized using response surface methodology (RSM) at a fixed catalyst loading (1.0 wt%) and temperature (60 °C), while varying the molar ratio (1:3, 1:6, 1:12), time (20–60 min), and mixing speed (500–1000 rpm) to produce optimal yield responses. The optimization solution included a molar ratio of 1:3 and a time of 40.9 min
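The core of RSM is fitting a second-order model to the measured responses and locating its stationary point. The sketch below does this in one factor with synthetic yield-vs-time data (the numbers are invented, not the study's measurements), recovering an optimum near the middle of the tested range.

```python
import numpy as np

# Synthetic yield (%) measured at five reaction times (min); data are made up.
time = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
yield_pct = np.array([86.0, 92.5, 95.0, 93.5, 88.0])

# Fit the quadratic response model y = c2*t^2 + c1*t + c0 by least squares.
c2, c1, c0 = np.polyfit(time, yield_pct, 2)

# A negative leading coefficient means the stationary point is a maximum.
t_opt = -c1 / (2 * c2)   # vertex of the fitted parabola
```

A real RSM study fits the same kind of model jointly in all factors (ratio, time, speed) with interaction terms, then optimizes the fitted surface; the one-factor parabola just shows the mechanics.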
A molasses medium containing different concentrations of (NH4)2SO4, (NH4)3PO4, urea, KCl, and P2O5 was compared with the medium used for commercial production of C. utilis in a factory in southern Iraq. An efficient medium, which produced 19.16% dry weight and 5.78% protein, was developed. The effect of adding various concentrations of the micronutrients FeSO4.7H2O, MnSO4.7H2O, and ZnSO4.7H2O was also studied. Results showed that FeSO4.7H2O caused a noticeable increase in both the dry weight and the protein content of the yeast.
Most Internet of Vehicles (IoV) applications are delay-sensitive and require resources for data storage and task processing, which are very difficult for vehicles to afford. Such tasks are often offloaded to more powerful entities, such as cloud and fog servers. Fog computing is a decentralized infrastructure located between the data source and the cloud, and it supplies several benefits that make it a non-trivial extension of the cloud. The high volume of data generated by vehicles' sensors and the limited computation capabilities of vehicles have imposed several challenges on VANET systems. Therefore, VANETs are integrated with fog computing to form a paradigm named Vehicular Fog Computing (VFC), which provides low-latency services.
This research aims at calculating the optimum cutting conditions for various types of computer-assisted machining methods (the computer program in this research is designed to solve linear programs and is written in Visual Basic). The program obtains the results automatically: after the preliminary information about the workpiece and the operating conditions is entered, the program performs the calculation by solving a group of experimental relations, depending on the type of machining method (turning, milling, or drilling). The program was packaged with a group of windows to facilitate its use; it automatically prints the initial input and the optimal solution, and thus reduces the effort and time.
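To make the linear-programming step concrete, here is a toy cutting-condition LP with invented coefficients (none of them are the paper's experimental relations). Machining models such as Taylor's tool-life equation become linear after taking logarithms, so the variables are x1 = ln(speed) and x2 = ln(feed), and x1 + x2 (the log of the removal rate) is maximized. Since a 2-variable LP attains its optimum at a vertex, the sketch simply enumerates constraint intersections.

```python
from itertools import combinations

# Constraints a1*x1 + a2*x2 <= b in log space (all coefficients illustrative).
cons = [
    (1.0, 0.0, 5.5),    # speed cap
    (0.0, 1.0, 1.2),    # feed cap
    (1.0, 2.0, 7.0),    # surface-finish limit
    (2.0, 1.0, 9.0),    # machine-power limit
    (-1.0, 0.0, -3.0),  # minimum speed (x1 >= 3)
    (0.0, -1.0, 0.0),   # minimum feed  (x2 >= 0)
]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= rhs + 1e-9 for a, b, rhs in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: p[0] + p[1])
# Here the feed cap and the power limit bind, giving best = (3.9, 1.2).
```

A production tool would hand the same data to a simplex solver; vertex enumeration is shown only because it keeps the example dependency-free.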
The aim of the present study was to develop a theophylline (TP) inhalable sustained-delivery system by preparing solid lipid microparticles using glyceryl behenate (GB) as the lipid carrier and poloxamer 188 (PX) as the surfactant. The method involves loading TP nanoparticles into the lipid using a high-shear homogenization and ultrasonication technique followed by lyophilization. The compositional variations and interactions were evaluated using response surface methodology with a Box-Behnken design of experiments (DOE). The DOE was constructed using the TP (X1), GB (X2), and PX (X3) levels as independent factors. The responses measured were the entrapment efficiency (% EE) (Y1) and the mass median
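For readers unfamiliar with the design, the following sketch generates the structure of a 3-factor Box-Behnken design in coded units (-1, 0, +1). The factor labels follow the abstract (X1, X2, X3), but the code is a generic illustration of the design, not the study's actual run table.

```python
from itertools import combinations
import numpy as np

def box_behnken(k, center_points=3):
    """Box-Behnken design for k factors in coded (-1, 0, +1) units."""
    runs = []
    for i, j in combinations(range(k), 2):   # every pair of factors...
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b        # ...at their +/-1 levels,
                runs.append(row)             # remaining factors at the centre
    runs += [[0] * k] * center_points        # replicated centre runs
    return np.array(runs)

design = box_behnken(3)   # X1, X2, X3 -> 12 edge runs + 3 centre runs = 15 runs
```

Every non-centre run sits on an edge midpoint of the factor cube (exactly two factors at their extremes), which is what lets the design estimate a full quadratic model without corner runs.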
Identification of complex communities in biological networks is a critical and ongoing challenge, since many network-related problems correspond to the subgraph isomorphism problem, which is known in the literature to be NP-hard. Several optimization algorithms have been dedicated and applied to solving this problem. The main challenge in applying optimization algorithms, specifically to large-scale complex networks, is their relatively long execution time. Thus, this paper proposes a parallel extension of the PSO algorithm to detect communities in complex biological networks. The main contribution of this study is threefold: first, a modified PSO algorithm with a local search operator is proposed.
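As a baseline sketch of the algorithm being parallelized, here is a minimal serial PSO on a stand-in objective (the sphere function, not a community-detection fitness). In a parallel extension like the one proposed, the per-particle fitness evaluations inside each iteration are the part farmed out to workers.

```python
import random

random.seed(1)

def sphere(x):
    """Stand-in objective; a community-quality score would replace this."""
    return sum(v * v for v in x)

dim, n_particles, iters = 4, 15, 100
w, c1, c2 = 0.7, 1.5, 1.5    # inertia and acceleration coefficients

pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                  # each particle's best position
gbest = min(pbest, key=sphere)[:]            # swarm's best position

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pos[i]) < sphere(gbest):
                gbest = pos[i][:]
```

Because the inner loop over particles is independent given the previous gbest, a synchronous parallel variant can evaluate all particles concurrently and merge the best result once per iteration.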
The main aim of this paper is to give a comprehensive presentation of estimation methods, namely maximum likelihood, Bayes, and proposed methods, for the parameter
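To contrast the first two classical estimators named above, here is a sketch that assumes, for illustration only, an exponential model with rate parameter theta (the abstract does not state which distribution the paper studies). The Bayes estimator uses the conjugate gamma prior under squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(7)
theta_true = 2.0
x = rng.exponential(scale=1 / theta_true, size=500)   # simulated sample

# Maximum likelihood for an exponential rate: theta_hat = n / sum(x).
theta_mle = len(x) / x.sum()

# Bayes under a Gamma(a, b) prior and squared-error loss: the posterior is
# Gamma(a + n, b + sum(x)), so the estimator is the posterior mean.
a, b = 2.0, 1.0   # illustrative prior hyper-parameters
theta_bayes = (a + len(x)) / (b + x.sum())
```

With 500 observations both estimates land close to the true rate of 2.0; the Bayes estimate is pulled slightly toward the prior mean, a shrinkage effect that fades as the sample grows.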