The complexity and variety of language used in policy and academic documents make the automatic classification of research papers against the United Nations Sustainable Development Goals (SDGs) challenging. This study presents a complete deep learning pipeline that combines Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures and uses both pre-trained and contextual word embeddings to increase semantic understanding; its primary aim is to improve the accuracy and comprehensibility of SDG text classification, thereby enabling more effective policy monitoring and research evaluation. Our approach comprises exhaustive preprocessing, including stemming, stopword removal, and class-imbalance handling, followed by document representation via Global Vectors (GloVe), Bidirectional Encoder Representations from Transformers (BERT), and FastText embeddings. The hybrid BiLSTM-CNN model is trained and evaluated on several benchmark datasets, including SDG-labeled corpora and relevant external datasets such as GoEmotions and Ohsumed, to provide a complete assessment of the model's generalizability. Moreover, the study employs zero-shot prompt-based classification with GPT-3.5/4 and Flan-T5, and runs comparative tests against leading models such as the Robustly Optimized BERT Pretraining Approach (RoBERTa) and Decoding-enhanced BERT with Disentangled Attention (DeBERTa), providing a comprehensive benchmark against current approaches. Experimental results show that the proposed hybrid model achieves competitive performance, with contextual embeddings in particular greatly improving classification accuracy. The study explains model decision processes and improves transparency using interpretability techniques, including SHapley Additive exPlanations (SHAP) analysis and attention visualization. These results emphasize the value of combining prompt engineering techniques with deep learning architectures for effective and interpretable SDG text classification. With potential applications in policy analysis and scientific literature mining more broadly, this work offers a scalable and transparent solution for automating the evaluation of SDG research.
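As a rough illustration of the kind of architecture this abstract describes, the sketch below builds a hybrid BiLSTM-CNN text classifier in Keras. The vocabulary size, sequence length, layer widths, and the assumption of 17 SDG classes are all hypothetical; the paper's exact hyperparameters and embedding setup are not given in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # hypothetical vocabulary size
MAX_LEN = 300        # hypothetical maximum document length (tokens)
EMBED_DIM = 300      # matches GloVe-300d; BERT/FastText dims would differ
NUM_SDGS = 17        # one class per Sustainable Development Goal

def build_bilstm_cnn(embedding_matrix=None):
    """Hybrid BiLSTM-CNN: recurrent context encoding followed by
    convolutional n-gram feature extraction and max pooling."""
    emb_init = (tf.keras.initializers.Constant(embedding_matrix)
                if embedding_matrix is not None else "uniform")
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                         embeddings_initializer=emb_init,
                         trainable=embedding_matrix is None)(inputs)
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_SDGS, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bilstm_cnn()
model.summary()
```

Freezing a pre-trained embedding matrix (when one is passed in) is a common way to carry GloVe or FastText semantics into the recurrent layers without overfitting on a small labeled corpus.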
Non-uniform channelization is a crucial task in cognitive radio receivers: extracting separate channels from the digitized wideband input signal at different intervals of time. The two main requirements of the channelizer are reconfigurability and low complexity. In this paper, a reconfigurable architecture based on a combination of the Improved Coefficient Decimation Method (ICDM) and the Coefficient Interpolation Method (CIM) is proposed. The proposed Hybrid Coefficient Decimation-Interpolation Method (HCDIM) based filter bank (FB) can realize the same number of channels as ICDM, but with the maximum decimation factor divided by the interpolation factor (L), which leads to less deterioration in stopband at…
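For readers unfamiliar with the underlying operations, the following minimal sketch illustrates generic coefficient decimation (types I and II) and coefficient interpolation applied to a prototype FIR low-pass filter. This is not the paper's HCDIM architecture; the filter order, decimation factor D, and interpolation factor L below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import firwin

# Prototype low-pass FIR filter (hypothetical order and cutoff;
# cutoff is normalized so that 1.0 = Nyquist).
h = firwin(numtaps=129, cutoff=0.1)

def coefficient_decimation_type1(h, D):
    """CDM-I: zero out all but every D-th coefficient, producing a
    multi-band frequency response from the same prototype filter."""
    hd = np.zeros_like(h)
    hd[::D] = h[::D]
    return hd

def coefficient_decimation_type2(h, D):
    """CDM-II: keep every D-th coefficient, stretching the passband
    by a factor of D (shorter filter, wider bandwidth)."""
    return h[::D].copy()

def coefficient_interpolation(h, L):
    """CIM: insert L-1 zeros between coefficients, compressing the
    response and creating L spectral images; a full channelizer
    removes the unwanted images with a masking filter."""
    hi = np.zeros(len(h) * L)
    hi[::L] = h
    return hi

multi_band = coefficient_decimation_type1(h, D=4)
wide_band  = coefficient_decimation_type2(h, D=2)
narrow     = coefficient_interpolation(h, L=2)
```

Because all three variants are derived from one stored prototype, a channelizer built on them can be reconfigured at runtime by changing D and L rather than redesigning filters, which is the low-complexity property the abstract emphasizes.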
Some of the main challenges in developing an effective network-based intrusion detection system (IDS) are analyzing large volumes of network traffic and identifying the decision boundaries between normal and abnormal behavior. Deploying feature selection together with efficient classifiers in the detection system can overcome these problems. Feature selection finds the most relevant features, reducing the dimensionality and the complexity of analyzing the network traffic. Moreover, building the predictive model on only the most relevant features yields a simpler model, shortening the time needed to build the classifier and consequently improving detection performance. In this study, two different sets of selected…
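A minimal sketch of the general feature-selection-plus-classifier pattern the abstract describes, using scikit-learn on synthetic data as a stand-in for network-traffic records; the mutual-information scorer, the random-forest classifier, and k = 12 are illustrative assumptions, not the study's actual choices.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for labeled traffic features (normal vs. attack).
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Keep only the most relevant features, then train a compact classifier;
# the reduced input dimensionality shortens training time.
ids = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=12)),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
ids.fit(X_tr, y_tr)
print(classification_report(y_te, ids.predict(X_te)))
```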
This study proposes a hybrid predictive maintenance framework that integrates the Kolmogorov-Arnold Network (KAN) with the Short-Time Fourier Transform (STFT) for intelligent fault diagnosis in industrial rotating machinery. The method is designed to address the challenges posed by non-linear and non-stationary vibration signals under varying operational conditions. Experimental validation on the FALEX multi-specimen test bench demonstrated a high classification accuracy of 97.5%, outperforming traditional models such as SVM, Random Forest, and XGBoost. The approach maintained robust performance across dynamic load scenarios and noisy environments, with precision and recall exceeding 95%. Key contributions include a hardware-accelerated KAN…
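The STFT front end of such a pipeline can be sketched as below; the sampling rate, window parameters, and synthetic signal are hypothetical, and the KAN classifier itself is omitted here since no standard library implementation is assumed.

```python
import numpy as np
from scipy.signal import stft

FS = 12_000  # hypothetical sampling rate (Hz) of the vibration sensor

def stft_features(signal, nperseg=256, noverlap=128):
    """Convert a raw vibration signal into a log-magnitude
    time-frequency image suitable for a downstream classifier."""
    _, _, Zxx = stft(signal, fs=FS, nperseg=nperseg, noverlap=noverlap)
    return np.log1p(np.abs(Zxx))  # shape: (freq_bins, time_frames)

# Synthetic one-second trace: a 100 Hz shaft tone plus noise, a crude
# stand-in for a bearing-fault vibration signal.
t = np.arange(FS) / FS
x = np.sin(2 * np.pi * 100 * t) + 0.3 * np.random.randn(FS)
features = stft_features(x)
print(features.shape)  # flatten or tile as input to the classifier
```

The appeal of an STFT front end for non-stationary signals is that fault signatures that drift with load or speed remain localized in the time-frequency plane, which a learned classifier can then pick up.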
Sentiment analysis refers to the task of identifying the positive or negative polarity of a text that expresses an opinion. Arabic-language content has expanded dramatically in the last decade, especially with the emergence of social websites (e.g., Twitter, Facebook). Several studies have addressed sentiment analysis for Arabic using various techniques. According to the literature, the most effective techniques have been machine learning approaches, owing to their capability to build a trained model. Yet there are still issues facing Arabic sentiment analysis with machine learning techniques. Such issues relate to employing robust features with the ability to discriminate…
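As a small illustration of the machine-learning baseline the abstract alludes to, the sketch below trains a character n-gram TF-IDF model with a linear SVM on two toy Arabic sentences; the feature choice and the tiny dataset are purely illustrative, not the study's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

# Tiny hypothetical labeled sample (1 = positive, 0 = negative).
texts = ["الخدمة ممتازة والموظفون متعاونون",  # "excellent service, helpful staff"
         "تجربة سيئة ولن أكررها أبداً"]        # "bad experience, never again"
labels = [1, 0]

# Character n-grams sidestep some of the Arabic morphology issues
# that make robust word-level features hard to engineer.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("svm", LinearSVC()),
])
clf.fit(texts, labels)
print(clf.predict(["منتج رائع"]))  # "great product" -> expected label 1
```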