The complexity and variety of language used in policy and academic documents make the automatic classification of research papers against the United Nations Sustainable Development Goals (SDGs) challenging. This study presents a complete deep learning pipeline that combines Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures and uses both pre-trained and contextual word embeddings to increase semantic understanding, with the primary aim of improving the accuracy and comprehensibility of SDG text classification and thereby enabling more effective policy monitoring and research evaluation. Our approach comprises exhaustive preprocessing, including stemming, stopword removal, and measures to address class imbalance, followed by document representation via Global Vectors (GloVe), Bidirectional Encoder Representations from Transformers (BERT), and FastText embeddings. The hybrid BiLSTM-CNN model is trained and evaluated on several benchmark datasets, including SDG-labeled corpora and relevant external datasets such as GoEmotions and Ohsumed, to provide a complete assessment of its generalizability. Moreover, this study applies zero-shot prompt-based categorization with GPT-3.5/4 and Flan-T5 and runs comparative tests against leading models such as the Robustly Optimized BERT Pretraining Approach (RoBERTa) and Decoding-enhanced BERT with Disentangled Attention (DeBERTa), providing a comprehensive benchmark against current approaches. Experimental results show that the proposed hybrid model achieves competitive performance, with contextual embeddings greatly improving classification accuracy. The study explains model decisions and improves transparency using interpretability techniques, including SHapley Additive exPlanations (SHAP) analysis and attention visualization. These results emphasize the value of combining prompt engineering techniques with deep learning architectures for effective and interpretable SDG text categorization. With possible applications in broader policy analysis and scientific literature mining, this work offers a scalable and transparent solution for automating the assessment of SDG research.
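To make the described architecture concrete, the following is a minimal, illustrative sketch of a hybrid BiLSTM-CNN text classifier in TensorFlow/Keras. It is not the authors' exact model: the vocabulary size, sequence length, embedding dimensionality, and the 17 SDG output classes are assumptions for demonstration, and a pre-trained GloVe/FastText embedding matrix would be supplied separately.

```python
# Illustrative sketch of a hybrid BiLSTM-CNN text classifier (not the authors' exact model).
# Hyperparameters (vocab size, lengths, 17 SDG classes) are assumptions for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000    # assumed vocabulary size
MAX_LEN = 256         # assumed maximum token sequence length
EMBED_DIM = 300       # e.g. GloVe/FastText dimensionality
NUM_CLASSES = 17      # one class per SDG

def build_bilstm_cnn(embedding_matrix=None):
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    # Pre-trained embeddings (GloVe/FastText) can be injected as a constant initializer.
    embed_init = (tf.keras.initializers.Constant(embedding_matrix)
                  if embedding_matrix is not None else "uniform")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                         embeddings_initializer=embed_init,
                         trainable=embedding_matrix is None)(inputs)
    # BiLSTM captures long-range, bidirectional context.
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # 1D convolution + global max pooling extract salient local n-gram features.
    x = layers.Conv1D(128, kernel_size=5, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bilstm_cnn()
model.summary()
```

The layout mirrors the pipeline described above: the embedding layer carries the pre-trained semantics, the BiLSTM models bidirectional context, and the convolution-plus-pooling stage distills local n-gram features before the softmax over SDG classes.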
YouTube is not just a platform where individuals share, upload, and comment on videos; teachers and educators can exploit it fully so that students benefit from it. This study aims to investigate how active and influential YouTube can be in the educational process and how language teachers can use it to enhance students' skills. The study reviews different theoretical frameworks that address the use of technology to enhance the learning/teaching process. It relies on the strategies of Berk (2009) for using multimedia, video clips in particular, to develop teachers' ability to use technology in classrooms. To achieve the objective of the study, the researchers developed a questionnaire ...
Six proposed simply supported high-strength steel fiber reinforced concrete (HS-SFRC) beams reinforced with FRP (fiber reinforced polymer) rebars were numerically tested with the finite element method in ABAQUS to investigate their behavior under flexural failure. The beams were divided into two groups according to their cross-sectional shape. Group A consisted of four trapezoidal beams (height 200 mm, top width 250 mm, and bottom width 125 mm), while Group B consisted of two rectangular beams (125 × 200 mm). All specimens had the same total length of 1500 mm and were modeled with the same high-strength concrete material containing a 1% volume fraction of steel fiber.
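As a small side calculation, the two cross-sections can be compared directly from the stated dimensions. The sketch below is plain geometry in Python and is not part of the reported ABAQUS finite element analysis.

```python
# Gross cross-sectional areas of the two beam shapes from the stated dimensions.
# A simple geometric check only; not part of the ABAQUS finite element model.

def trapezoid_area(top_width, bottom_width, height):
    """Area of a trapezoidal section (mm^2)."""
    return 0.5 * (top_width + bottom_width) * height

def rectangle_area(width, height):
    """Area of a rectangular section (mm^2)."""
    return width * height

# Group A: trapezoidal, top 250 mm, bottom 125 mm, height 200 mm
area_a = trapezoid_area(250.0, 125.0, 200.0)
# Group B: rectangular, 125 mm x 200 mm
area_b = rectangle_area(125.0, 200.0)

print(f"Trapezoidal section area: {area_a:.0f} mm^2")   # 37500 mm^2
print(f"Rectangular section area: {area_b:.0f} mm^2")   # 25000 mm^2
print(f"Area ratio (A / B): {area_a / area_b:.2f}")     # 1.50
```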
For sparse system identification, recently suggested algorithms are the l0-norm Least Mean Square (l0-LMS), Zero-Attracting LMS (ZA-LMS), Reweighted Zero-Attracting LMS (RZA-LMS), and p-norm LMS (p-LMS) algorithms, which modify the cost function of the conventional LMS algorithm by adding a coefficient-sparsity constraint. Accordingly, the proposed algorithms are named p-ZA-LMS, ...
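For illustration, here is a minimal NumPy sketch of the zero-attracting idea referenced above: the standard LMS weight update is augmented with a sign-based shrinkage term that pulls small coefficients toward zero. The step size, attractor strength, and filter length below are assumed values for demonstration, not parameters from the cited algorithms.

```python
# Minimal sketch of the Zero-Attracting LMS (ZA-LMS) update for sparse system identification.
# mu (step size), rho (zero-attractor strength), and the filter length are assumed values.
import numpy as np

rng = np.random.default_rng(0)

N = 64                      # adaptive filter length
h_true = np.zeros(N)        # sparse unknown system: only a few active taps
h_true[[5, 20, 47]] = [0.8, -0.5, 0.3]

mu, rho = 0.01, 1e-4        # LMS step size and sparsity penalty weight
w = np.zeros(N)             # adaptive filter coefficients
x_buf = np.zeros(N)         # input regressor (most recent N samples)

for n in range(5000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    d = h_true @ x_buf + 0.01 * rng.standard_normal()   # noisy desired signal
    e = d - w @ x_buf                                    # estimation error
    # ZA-LMS: conventional LMS term plus a zero attractor, -rho * sign(w)
    w = w + mu * e * x_buf - rho * np.sign(w)

print("Estimated active taps:", np.flatnonzero(np.abs(w) > 0.1))
```

The zero-attractor term is what distinguishes ZA-LMS from conventional LMS; RZA-LMS and the p-norm variants replace it with reweighted or p-norm-based shrinkage of the same general form.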
In this article, the high accuracy and effectiveness of forecasting global gold prices are verified using a hybrid machine learning algorithm that combines an Adaptive Neuro-Fuzzy Inference System (ANFIS) model with Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The hybrid approach proved successful enough to serve as a practical forecasting strategy. The ARIMA-ANFIS hybrid methodology was used to forecast global gold prices: the ARIMA model is fitted to the real data, and its nonlinear residuals are then predicted by ANFIS, ANFIS-PSO, and ANFIS-GWO. The results indicate that the hybrid models improve the forecasting accuracy of the single ARIMA and ANFIS models. Finally, a comparison was made between the hybrid forecasting ...
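The hybrid idea described above can be sketched as a two-stage pipeline: an ARIMA model captures the linear structure, and a second learner is fitted to its residuals. The snippet below uses statsmodels for ARIMA and, since ANFIS has no standard Python implementation, substitutes a generic nonlinear regressor as a stand-in for the ANFIS/ANFIS-PSO/ANFIS-GWO stage; the ARIMA order, lag window, and synthetic series are assumptions.

```python
# Sketch of an ARIMA + nonlinear-residual hybrid forecast (two-stage, as described above).
# ANFIS tuned by PSO/GWO is replaced by a generic regressor; the ARIMA order and
# lag window are illustrative, and the series is synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0.1, 1.0, 500)) + 1800.0   # synthetic "gold price" series

# Stage 1: linear component with ARIMA.
arima = ARIMA(prices, order=(1, 1, 1)).fit()
residuals = arima.resid                                   # nonlinear part left over

# Stage 2: model the residuals from their own lags (stand-in for ANFIS).
LAGS = 5
X = np.column_stack([residuals[i:len(residuals) - LAGS + i] for i in range(LAGS)])
y = residuals[LAGS:]
resid_model = GradientBoostingRegressor().fit(X, y)

# Hybrid one-step-ahead forecast = ARIMA forecast + predicted residual correction.
arima_next = arima.forecast(steps=1)[0]
resid_next = resid_model.predict(residuals[-LAGS:].reshape(1, -1))[0]
print("Hybrid forecast:", arima_next + resid_next)
```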
Cloud computing offers a new way of service provision by rearranging various resources over the Internet. The most important and popular cloud service is data storage. In order to preserve the privacy of data holders, data are often stored in the cloud in encrypted form. However, encrypted data introduce new challenges for cloud data deduplication, which is crucial for big data storage and processing in the cloud. Traditional deduplication schemes cannot work on encrypted data. Among such data, digital videos are particularly large in terms of storage cost and size, so techniques that protect the legal rights of video owners, such as copyright protection, while reducing cloud storage cost and size are always desired. This paper focuses on video ...
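One widely used building block for deduplicating encrypted data is convergent (message-locked) encryption: the key is derived from the content itself, so identical chunks encrypt to identical ciphertexts and the cloud can deduplicate them without seeing the plaintext. The sketch below illustrates that general idea only; it is not necessarily the scheme proposed in this paper.

```python
# Sketch of convergent (message-locked) encryption, a common building block for
# deduplication over encrypted data; NOT necessarily the scheme used in this paper.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(chunk: bytes) -> tuple[bytes, bytes]:
    """Encrypt a chunk with a key derived from its own content.

    Identical chunks always produce identical ciphertexts, so the cloud can
    deduplicate them by tag equality without decrypting.
    """
    key = hashlib.sha256(chunk).digest()          # content-derived key
    nonce = hashlib.sha256(key).digest()[:12]     # deterministic nonce (demo only)
    ciphertext = AESGCM(key).encrypt(nonce, chunk, None)
    tag = hashlib.sha256(ciphertext).hexdigest().encode()  # dedup index key
    return ciphertext, tag

# Two owners upload the same video chunk: the cloud stores it only once.
store: dict[bytes, bytes] = {}
for owner_chunk in (b"video-chunk-0001", b"video-chunk-0001", b"video-chunk-0002"):
    ct, tag = convergent_encrypt(owner_chunk)
    store.setdefault(tag, ct)                     # duplicate tags are not re-stored

print("Unique encrypted chunks stored:", len(store))   # 2
```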