The complexity and variety of language in policy and academic documents make the automatic classification of research papers according to the United Nations Sustainable Development Goals (SDGs) challenging. This study presents a complete deep learning pipeline that combines Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures and uses both pre-trained and contextual word embeddings to increase semantic understanding, with the primary aim of improving the accuracy and comprehensibility of SDG text classification, thereby enabling more effective policy monitoring and research evaluation. Our approach comprises exhaustive preprocessing, including stemming, stopword removal, and class-imbalance handling, followed by document representation via Global Vector (GloVe), Bidirectional Encoder Representations from Transformers (BERT), and FastText embeddings. The hybrid BiLSTM-CNN model is trained and evaluated on several benchmark datasets, including SDG-labeled corpora and relevant external datasets such as GoEmotions and Ohsumed, providing a thorough assessment of the model’s generalizability. Moreover, this study applies zero-shot prompt-based classification with GPT-3.5/4 and Flan-T5 and runs comparative tests against leading models such as the Robustly Optimized BERT Pretraining Approach (RoBERTa) and Decoding-enhanced BERT with Disentangled Attention (DeBERTa), providing a comprehensive benchmark against current approaches. Experimental results show that the proposed hybrid model achieves competitive performance, with contextual embeddings greatly improving classification accuracy. The study explains model decision processes and improves transparency using interpretability techniques, including SHapley Additive exPlanations (SHAP) analysis and attention visualization. These results emphasize the value of incorporating prompt engineering techniques alongside deep learning architectures for effective and interpretable SDG text classification. With potential impact on broader applications in policy analysis and scientific literature mining, this work offers a scalable and transparent solution for automating the evaluation of SDG research.
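As a concrete illustration, the following is a minimal sketch, assuming Keras/TensorFlow, of how a hybrid BiLSTM-CNN text classifier of this kind could be assembled; the `build_bilstm_cnn` helper name, layer sizes, and the 17-class SDG output are illustrative assumptions rather than the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_bilstm_cnn(vocab_size, embedding_matrix=None, embed_dim=300,
                     max_len=256, n_classes=17):
    """Hedged sketch of a hybrid BiLSTM-CNN text classifier.

    The optional GloVe/FastText initialization, layer sizes, and the
    17 SDG classes are illustrative assumptions, not the paper's settings.
    """
    initializer = (tf.keras.initializers.Constant(embedding_matrix)
                   if embedding_matrix is not None else "uniform")
    inputs = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim,
                         embeddings_initializer=initializer)(inputs)
    # BiLSTM captures long-range, bidirectional context
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # 1D convolution + global max pooling extract salient local n-gram features
    x = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```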
To maintain a sustained competitive position in the contemporary knowledge economy, organizations, as open social systems, must have the ability to learn and to adapt properly to rapid changes so that organizational objectives are achieved efficiently and effectively. A multilevel approach is adopted, proposing that organizational learning suffers from a lack of attention to the strategic competitive performance of the organization. This remains implicit in almost all models of organizational learning, and there is little focus on how learning organizations achieve sustainable competitive advantage. A dynamic model that captures t
Deep learning convolutional neural networks have been widely used to recognize and classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before training the classification model. However, not all models produce good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such type, and accurate pronunciation is required when learning to read the Qur’an. Thus, processing the pronunciation data and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to
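A minimal sketch of the padding idea, assuming MFCC features extracted with librosa and a fixed frame count so the CNN receives a uniform input shape; the `mfcc_with_padding` name and the `n_mfcc`/`max_frames` values are hypothetical choices, not the paper's method.

```python
import numpy as np
import librosa

def mfcc_with_padding(wav_path, n_mfcc=40, max_frames=174):
    """Extract MFCC features from an utterance and pad/trim them to a
    fixed number of frames. Parameter values are illustrative."""
    signal, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    if mfcc.shape[1] < max_frames:
        # zero-pad shorter recitations on the right (the padding step)
        mfcc = np.pad(mfcc, ((0, 0), (0, max_frames - mfcc.shape[1])))
    else:
        mfcc = mfcc[:, :max_frames]      # trim longer recitations
    return mfcc[..., np.newaxis]         # add a channel axis for the CNN
```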
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that they never did or said. Developing an algorithm for deepfake detection is therefore very important to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue
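A hedged sketch, using OpenCV, of what a 16-orientation Gabor filter bank applied to a grayscale frame might look like before the responses are stacked as CNN input channels; the kernel size and filter parameters are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def gabor_filter_bank(frame_gray, ksize=31, n_orientations=16):
    """Filter a grayscale frame with 16 Gabor kernels at evenly spaced
    orientations and stack the responses as input channels for a binary
    CNN. Kernel size and parameters are illustrative assumptions."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        # arguments: (ksize, ksize), sigma, theta, lambda, gamma, psi
        kernel = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(frame_gray, cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)   # (H, W, 16) texture tensor
```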
Computer-aided diagnosis (CAD) has proved to be an effective and accurate method for diagnostic prediction over the years. This article focuses on the development of an automated CAD system intended to perform diagnosis as accurately as possible. Deep learning methods have produced impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or auto-encoders are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm. Ant colony optimization helps to search for the bes
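A simplified, hedged sketch of ACO-style feature selection over CNN-extracted features; the pheromone update rule, the hyperparameters, and the `score_fn` interface (e.g. cross-validated accuracy of a supervised classifier) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def aco_feature_selection(X, y, score_fn, n_ants=20, n_iters=30,
                          n_select=50, evaporation=0.1, seed=0):
    """Simplified ant-colony-style search over feature subsets.

    score_fn(X_subset, y) returns a validation score to maximize.
    All settings here are illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    pheromone = np.ones(n_features)            # pheromone level per feature
    best_subset, best_score = None, -np.inf

    for _ in range(n_iters):
        iter_best, iter_score = None, -np.inf
        for _ in range(n_ants):
            # each ant samples a subset with probability proportional
            # to the current pheromone levels
            probs = pheromone / pheromone.sum()
            subset = rng.choice(n_features, size=n_select,
                                replace=False, p=probs)
            score = score_fn(X[:, subset], y)
            if score > iter_score:
                iter_best, iter_score = subset, score
        # evaporate everywhere, then reinforce the best ant's features
        pheromone *= (1.0 - evaporation)
        pheromone[iter_best] += max(iter_score, 0.0)
        if iter_score > best_score:
            best_subset, best_score = iter_best, iter_score

    return best_subset, best_score
```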
This research describes a new model inspired by MobileNetV2 that was trained on a very diverse dataset. The goal is to enable fire detection in open areas via deep learning, replacing physical sensor-based fire detectors, reducing false fire alarms, and minimizing losses in open areas. A diverse fire dataset was created that combines images and videos from several sources. In addition, another self-made dataset was collected from the farms of the holy shrine of Al-Hussainiya in the city of Karbala. The model was then trained on the collected dataset, and its test accuracy on the fire dataset reached 98.87%.
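A minimal sketch of a MobileNetV2-based fire/no-fire classifier built via transfer learning; the frozen ImageNet backbone, the classification head, and the `build_fire_detector` name are illustrative assumptions, not the paper's exact architecture.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

def build_fire_detector(img_size=224):
    """Binary fire/no-fire classifier on a MobileNetV2 backbone.
    Frozen-backbone setup and head sizes are illustrative assumptions."""
    base = MobileNetV2(input_shape=(img_size, img_size, 3),
                       include_top=False, weights="imagenet")
    base.trainable = False                      # start with a frozen backbone
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # fire vs. no fire
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```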
Compression is the reduction in size of data in order to save space or transmission time. For data transmission, compression can be performed on just the data content or on the entire transmission unit (including header data), depending on a number of factors. In this study, we consider an audio compression method based on text coding, in which the audio file is converted to a text file to reduce the time needed to transfer the data over a communication channel. Approach: we propose two coding methods that optimize the solution using a context-free grammar (CFG). Results: we tested our application using a 4-bit coding algorithm; the results of this method were not satisfactory, so we proposed a new approach to compress audio fil
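A hedged sketch of one way a 4-bit (nibble-level) audio-to-text coding step could work; the 16-symbol alphabet and the function names are illustrative assumptions and do not represent the paper's grammar-based (CFG) coding.

```python
ALPHABET = "ABCDEFGHIJKLMNOP"            # 16 text symbols, one per nibble
INDEX = {c: i for i, c in enumerate(ALPHABET)}

def audio_to_text_4bit(audio_bytes: bytes) -> str:
    """Map each 4-bit nibble of the raw audio stream to a text symbol so
    the stream can be stored and further compressed as a text file."""
    chars = []
    for byte in audio_bytes:
        chars.append(ALPHABET[byte >> 4])      # high nibble
        chars.append(ALPHABET[byte & 0x0F])    # low nibble
    return "".join(chars)

def text_to_audio_4bit(text: str) -> bytes:
    """Inverse mapping that restores the original audio bytes."""
    out = bytearray()
    for hi, lo in zip(text[0::2], text[1::2]):
        out.append((INDEX[hi] << 4) | INDEX[lo])
    return bytes(out)
```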
The vast advantages of the 3D modelling industry have urged competitors to improve capturing techniques and processing pipelines to minimize labour requirements, save time, and reduce project risk. For digital 3D documentation and conservation projects, laser scanning and photogrammetry are compared to choose between the two. Since both techniques have pros and cons, this paper examines the potential issues of each technique in terms of time, budget, accuracy, density, methodology, and ease of use. Terrestrial laser scanning and close-range photogrammetry are tested to document a unique, invaluable artefact (the Lady of Hatra) located in Iraq for future data fusion sc
Optical burst switching (OBS) is a new-generation optical communication technology. In an OBS network, an edge node first sends a control packet, called a burst header packet (BHP), which reserves the necessary resources for the upcoming data burst (DB). Once the reservation is complete, the DB starts travelling to its destination through the reserved path. A notable attack on OBS networks is the BHP flooding attack, in which an edge node sends BHPs to reserve resources but never actually sends the associated DBs. As a result, the reserved resources are wasted, and when this happens on a sufficiently large scale, a denial of service (DoS) may take place. In this study, we propose a semi-supervised machine learning approach using the k-means algorithm
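A minimal sketch, using scikit-learn, of the semi-supervised idea: cluster per-edge-node traffic statistics with k-means and then let a small labelled sample name each cluster as behaving or misbehaving. The feature set (e.g. BHP rate, DB-to-BHP ratio, wasted reserved bandwidth), the two-cluster setup, and the `cluster_edge_nodes` name are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_edge_nodes(features, n_clusters=2, seed=0):
    """Cluster per-edge-node traffic statistics with k-means; cluster
    labels can later be mapped to behaving/misbehaving using a small
    labelled sample. Feature choices are illustrative assumptions."""
    X = StandardScaler().fit_transform(np.asarray(features, dtype=float))
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = km.fit_predict(X)
    return km, labels
```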
Patients infected with the COVID-19 virus can develop severe pneumonia, which may result in death. Radiological data show that the disease involves interstitial lung involvement, lung opacities, bilateral ground-glass opacities, and patchy opacities. This study aimed to improve COVID-19 diagnosis via radiological chest X-ray (CXR) image analysis, making a substantial contribution to the development of a mobile application that efficiently identifies COVID-19, saving medical professionals time and resources. It also allows for timely preventive interventions, using more than 18,000 CXR lung images and the MobileNetV2 convolutional neural network (CNN) architecture. The MobileNetV2 deep-learning model performances were evaluated
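A hedged sketch of how such a CXR classifier's performance could be summarized on a held-out test set; the `evaluate_cxr_model` helper, the assumption of a softmax output, and the two-class names are illustrative, not the paper's evaluation protocol.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

def evaluate_cxr_model(model, test_ds, class_names=("COVID-19", "Normal")):
    """Summarize a trained CXR classifier on a test set.

    Assumes `test_ds` yields (images, integer_labels) batches and that the
    model ends in a softmax over the classes; both the two-class setup and
    the class names are illustrative assumptions.
    """
    y_true, y_pred = [], []
    for images, labels in test_ds:
        probs = model.predict(images, verbose=0)   # (batch, n_classes)
        y_pred.extend(np.argmax(probs, axis=1))
        y_true.extend(np.asarray(labels).ravel())
    print(confusion_matrix(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=list(class_names)))
```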