This study explores the challenges that Artificial Intelligence (AI) systems face in generating image captions, a task that requires effective integration of computer vision and natural language processing techniques. It presents a comparative analysis between traditional approaches (such as retrieval-based methods and linguistic templates) and modern deep-learning approaches (such as encoder-decoder models, attention mechanisms, and transformers). Theoretical results show that modern models achieve higher accuracy and can generate more complex descriptions, while traditional methods retain advantages in speed and simplicity. The paper proposes a hybrid framework that combines the strengths of both approaches: conventional methods produce an initial description, which is then contextually refined using modern models. Preliminary estimates indicate that this approach could reduce the initial computational cost by up to 20% compared with relying entirely on deep models, while maintaining high accuracy. The study recommends further research to develop effective coordination mechanisms between traditional and modern methods and to proceed to experimental validation of the hybrid model, in preparation for its application in environments that require a balance between speed and accuracy, such as real-time computer vision applications.
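As a hedged illustration of the hybrid idea described above (not the authors' implementation), the following Python sketch lets a cheap retrieval step propose a draft caption and hands it to a deep refinement model only when retrieval confidence is low; the function names, the cosine-similarity retrieval, and the 0.8 confidence threshold are illustrative assumptions.

```python
# Sketch of a hybrid captioning pipeline: cheap retrieval first, deep refinement only when needed.
# All names and thresholds are illustrative, not taken from the paper.
import numpy as np

def retrieve_draft_caption(query_feat, gallery_feats, gallery_captions):
    """Return the caption of the most similar gallery image (cosine similarity)."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q
    best = int(np.argmax(sims))
    return gallery_captions[best], float(sims[best])

def hybrid_caption(query_feat, gallery_feats, gallery_captions,
                   refine_fn=None, confidence_threshold=0.8):
    """Use the retrieval draft directly when similarity is high; otherwise pass the
    draft to a deep refinement model (e.g. an encoder-decoder or transformer captioner)."""
    draft, score = retrieve_draft_caption(query_feat, gallery_feats, gallery_captions)
    if refine_fn is not None and score < confidence_threshold:
        return refine_fn(draft)   # deep model is invoked only for low-confidence drafts
    return draft
```

Because the expensive model runs only on low-confidence cases, this structure is one way the upfront computational cost could be reduced relative to running a deep captioner on every image.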
One of the main causes of concern is the widespread presence of pharmaceuticals in the environment, which may be harmful to living things. They are often referred to as emerging chemical pollutants in water bodies because they are either still unregulated or undergoing regulation. Pharmaceutical pollution of the environment may have detrimental effects on ecosystem viability, human health, and water quality. In this study, the occurrence of residual pharmaceutical compounds in environmental waters was assessed through a straightforward review. Pharmaceutical production and consumption have increased due to medical advancements, leading to concerns about their environmental impact and potential harm to living things due to their increa…
In this research, we examined factorial experiments and the assessment of the significance of the main effects, the interactions of the factors, and their simple effects using the F test (ANOVA) to analyze the data of a factorial experiment. The analysis of variance requires several assumptions to be satisfied; therefore, when one of these conditions is violated, the data are transformed in order to meet the conditions of the analysis of variance. However, it has been noted that such transformations do not always produce accurate results, so non-parametric tests or methods are used as a solution or alternative to the parametric tests; these methods …
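As a small illustration of the workflow sketched in this abstract (not the paper's data or code), the snippet below runs a two-factor ANOVA with interaction using statsmodels, then applies the Kruskal-Wallis test from SciPy as one possible non-parametric alternative when the ANOVA assumptions are violated; the synthetic data and the factor names A and B are assumptions.

```python
# Illustrative only: F tests for main effects and interaction, plus a non-parametric fallback.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy.stats import kruskal

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": np.repeat(["a1", "a2"], 20),
    "B": np.tile(np.repeat(["b1", "b2"], 10), 2),
    "y": rng.normal(size=40),                    # synthetic response values
})

# F tests (ANOVA) for the main effects of A and B and their interaction A:B
model = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(anova_lm(model, typ=2))

# Non-parametric alternative applied to the combined factor-level groups
groups = [g["y"].values for _, g in df.groupby(["A", "B"])]
print(kruskal(*groups))
```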
An oil spill is a leak from pipelines, vessels, oil rigs, or tankers that releases petroleum products into the marine environment or onto land, whether it occurs naturally or through human action, and it can result in severe damage and financial loss. Satellite imagery is one of the powerful tools currently utilized for capturing and extracting vital information from the Earth's surface, but the complexity and the vast amount of data make it challenging and time-consuming for humans to process. With the advancement of deep learning techniques, these processes can now be automated to find vital information in real-time satellite images. This paper applies three deep-learning algorithms to satellite image classification …
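The three deep-learning algorithms compared in the paper are not reproduced here; as a generic illustration of the task, the sketch below defines one possible small CNN classifier in PyTorch for oil-spill versus no-spill image patches. The architecture, patch size, and class count are assumptions.

```python
# A minimal CNN for classifying satellite image patches (illustrative, not the paper's models).
import torch
import torch.nn as nn

class SpillClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):                  # x: (batch, 3, H, W) satellite patches
        return self.head(self.features(x))

# Forward pass on a dummy batch of 64x64 RGB patches
logits = SpillClassifier()(torch.randn(4, 3, 64, 64))
print(logits.shape)                        # (4, 2): one score per class
```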
The current research presents an overall comparative analysis of the estimation of Meixner process parameters via the wavelet packet transform. Of particular relevance, it compares the moment method and the wavelet packet estimator for the four parameters of the Meixner process. The research focuses on finding the best threshold value using the square root log and modified square root log methods with wavelet packets in the presence of noise, in order to enhance the efficiency and effectiveness of the denoising process for the financial asset market signal. In this regard, a simulation study compares the performance of moment estimation and wavelet packets for different sample sizes. The results show that wavelet packets …
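As a hedged sketch of wavelet-packet denoising, the code below uses PyWavelets with the universal threshold λ = σ√(2 ln n), assuming that is what the "square root log" rule refers to; the modified square-root-log variant studied in the paper is not reproduced, and the wavelet choice, decomposition level, and MAD noise estimate are assumptions.

```python
# Wavelet-packet soft-thresholding sketch (not the paper's exact estimator or threshold rules).
import numpy as np
import pywt

def wp_denoise(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(signal, wavelet, mode="symmetric", maxlevel=level)
    # Estimate the noise scale from the finest detail coefficients (common MAD rule)
    detail = pywt.wavedec(signal, wavelet, level=1)[-1]
    sigma = np.median(np.abs(detail)) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(len(signal)))     # square-root-log (universal) threshold
    for node in wp.get_level(level, order="natural"):
        wp[node.path] = pywt.threshold(node.data, lam, mode="soft")
    return wp.reconstruct(update=True)[: len(signal)]

noisy = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.3 * np.random.randn(1024)
denoised = wp_denoise(noisy)
```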
The study was performed to investigate Salmonella in meat and to compare the Vidas UP Salmonella (SPT) assay with the traditional isolation methods for Salmonella. Forty-two meat samples (beef and chicken), both local and imported, were collected from local markets in the city of Baghdad during the period December 2013 to February 2014. The samples were cultured on enrichment and differential media and examined with Vidas, and the isolates were confirmed by cultivation on chromogenic agar, biochemical tests, the API 20E system, and serological tests; the serotypes were determined at the Central Public Health Laboratory / National Institute of Salmonella. The results showed that contamination in imported meat was higher than in local meat, 11.9% and 2…
Image compression is a serious issue in computer storage and transmission; it simply makes efficient use of the redundancy embedded within an image itself and may, in addition, exploit the limitations of human vision or perception to discard imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes the lossy predictor model along with a multiresolution base and thresholding techniques, and the second stage incorporates the near-lossless compression …
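To illustrate the model-plus-residual idea behind polynomial coding (a sketch, not the paper's two-stage method), the code below fits a first-order polynomial surface to each image block and keeps its coefficients plus a uniformly quantized residual, which bounds the per-pixel reconstruction error in the spirit of near-lossless coding; the block-wise linear model and the quantization step q are assumptions.

```python
# Block-wise polynomial model + quantized residual (illustrative names and parameters).
import numpy as np

def polynomial_code_block(block, q=4):
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])        # model a0 + a1*x + a2*y
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    residual = block - (A @ coeffs).reshape(h, w)
    quantized = np.round(residual / q)          # near-lossless: error per pixel bounded by q/2
    return coeffs, quantized

def polynomial_decode_block(coeffs, quantized, shape, q=4):
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    return (A @ coeffs).reshape(h, w) + quantized * q
```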
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are extracted using feature detection/extraction and feature description techniques. Features in computer vision represent informative data. The human eye readily extracts information from a raw image, but a computer cannot recognize image information directly. This is why various feature extraction techniques have been proposed and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
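As one concrete example of the detect-and-describe pipeline surveyed here (not a technique the paper singles out), the snippet below extracts ORB keypoints and binary descriptors with OpenCV; the random stand-in image and feature count are placeholders.

```python
# ORB keypoint detection and description with OpenCV (illustrative example).
import cv2
import numpy as np

img = (np.random.rand(256, 256) * 255).astype("uint8")   # stand-in for a real grayscale image
orb = cv2.ORB_create(nfeatures=500)                       # detector + binary descriptor
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
```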
With the increasing rate of unauthorized access and attacks, the security of confidential data is of utmost importance. Cryptography only encrypts the data; since the communication takes place in the presence of third parties, the encrypted text can still be decrypted or easily destroyed. Steganography, on the other hand, hides the confidential data in some cover source so that the very existence of the data is concealed, which does not arouse suspicion regarding the communication taking place between the two parties. This paper presents a method for transferring secret data embedded into a master file (cover image) to obtain a new image (stego-image) that is practically indistinguishable from the original image, so that, other than the intended us…
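As a hedged sketch of one common embedding scheme, least-significant-bit (LSB) substitution, rather than the paper's exact method, the code below hides secret bytes in the low-order bit of a cover image's pixels and recovers them from the stego-image; the helper names and the dummy cover image are illustrative.

```python
# LSB embedding/extraction sketch (one common scheme, not necessarily the paper's).
import numpy as np

def embed_lsb(cover, secret_bytes):
    """Write the secret bits into the least significant bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(secret_bytes, dtype=np.uint8))
    flat = cover.flatten()                       # flatten() returns a copy of the pixels
    if bits.size > flat.size:
        raise ValueError("secret too large for this cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    """Read back n_bytes of secret data from the stego-image's low-order bits."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # dummy cover image
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```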