The complexity and variety of language in policy and academic documents make the automatic classification of research papers according to the United Nations Sustainable Development Goals (SDGs) challenging. This study presents a complete deep learning pipeline that combines Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures and uses both pre-trained and contextual word embeddings to strengthen semantic understanding, with the primary aim of improving the accuracy and comprehensibility of SDG text classification and thereby enabling more effective policy monitoring and research evaluation. Our approach comprises thorough preprocessing, including stemming, stopword removal, and techniques to address class imbalance, followed by document representation via Global Vectors (GloVe), Bidirectional Encoder Representations from Transformers (BERT), and FastText embeddings. The hybrid BiLSTM-CNN model is trained and evaluated on several benchmark datasets, including SDG-labeled corpora and relevant external datasets such as GoEmotions and Ohsumed, to provide a complete assessment of the model's generalizability. Moreover, this study employs zero-shot prompt-based classification using GPT-3.5/4 and Flan-T5 and conducts comparative tests against leading models such as the Robustly Optimized BERT Pretraining Approach (RoBERTa) and Decoding-enhanced BERT with Disentangled Attention (DeBERTa), providing a comprehensive benchmark against current approaches. Experimental results show that the proposed hybrid model achieves competitive performance, with contextual embeddings greatly improving classification accuracy. The study explains model decision processes and improves transparency using interpretability techniques, including SHapley Additive exPlanations (SHAP) analysis and attention visualization. These results emphasize the value of incorporating prompt engineering techniques alongside deep learning architectures for effective and interpretable SDG text classification. With potential applications in policy analysis and scientific literature mining more broadly, this work offers a scalable and transparent solution for automating the evaluation of SDG research.
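For concreteness, the sketch below shows one way to assemble a hybrid BiLSTM-CNN text classifier of the kind described above, using Keras. The vocabulary size, sequence length, embedding dimension, layer widths, and number of classes are illustrative placeholders rather than the paper's actual configuration, and the optional pre-trained embedding matrix (e.g. GloVe or FastText vectors) is assumed to be prepared separately.

```python
# Minimal sketch of a hybrid BiLSTM-CNN classifier (hyperparameters are
# illustrative placeholders, not the paper's actual settings).
from tensorflow.keras import layers, models, initializers

VOCAB_SIZE, MAX_LEN, EMB_DIM, NUM_CLASSES = 20_000, 256, 300, 17

def build_bilstm_cnn(embedding_matrix=None):
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    # Pre-trained GloVe/FastText vectors can be injected via the initializer;
    # otherwise the embeddings are learned from scratch.
    emb_init = (initializers.Constant(embedding_matrix)
                if embedding_matrix is not None else "uniform")
    x = layers.Embedding(VOCAB_SIZE, EMB_DIM,
                         embeddings_initializer=emb_init,
                         trainable=embedding_matrix is None)(inputs)
    # BiLSTM captures long-range, order-sensitive context in both directions.
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # 1D convolution over the BiLSTM outputs picks up local n-gram patterns.
    x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bilstm_cnn()
model.summary()
```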
The research problem arose from the researchers' recognition of the importance of Digital Intelligence (DI) as a basic requirement for helping students engage in the digital world and use technology and digital tools in a disciplined way, since students' ideas are particularly susceptible to influence at this stage in light of modern technology. The research aims to determine the level of DI among university students using Artificial Intelligence (AI) techniques. To this end, the researchers built a measure of DI; in its final form, the measure consisted of 24 items distributed across 8 main skills, and the validity and reliability of the tool were confirmed. It was applied to a sample of 139 male and female students who were chosen
In the field of construction project management, time and cost are the most important factors to consider when planning any project, and the relationship between them is complex. The total cost of a project is the sum of its direct and indirect costs. Direct costs commonly cover labor, materials, equipment, and the like, while indirect costs cover overheads such as supervision, administration, consultants, and interest. Direct cost grows at an increasing rate as the project duration is reduced below its originally planned duration. Indirect cost, however, continues for the life of the project, so any reduction in project time also reduces indirect cost. There is therefore a trade-off between the time and cost of completing a construction project.
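As a toy illustration of this trade-off, the following Python sketch evaluates total cost over a handful of hypothetical crash durations; the cost figures and daily overhead rate are invented for demonstration and do not come from the study.

```python
# Illustrative time-cost trade-off: direct cost rises as the schedule is
# crashed below the planned duration, while indirect (overhead) cost accrues
# per day of project time. All numbers are hypothetical.

INDIRECT_COST_PER_DAY = 2_000  # assumed daily overhead rate

# Hypothetical crash options: duration (days) -> direct cost
direct_cost = {120: 300_000, 110: 310_000, 100: 325_000, 90: 350_000, 80: 390_000}

def total_cost(duration_days: int) -> int:
    """Total cost = direct cost at this duration + indirect cost over its life."""
    return direct_cost[duration_days] + INDIRECT_COST_PER_DAY * duration_days

for d in sorted(direct_cost, reverse=True):
    print(f"{d:>4} days: direct={direct_cost[d]:>8,} "
          f"indirect={INDIRECT_COST_PER_DAY * d:>8,} total={total_cost(d):>8,}")

best = min(direct_cost, key=total_cost)
print(f"Optimum duration (minimum total cost): {best} days")
```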
The current standard for treating pilonidal sinus (PNS) is surgical intervention with excision of the sinus. Recurrence of PNS can be controlled with good hygiene and regular shaving of the natal cleft, and laser treatment is a useful adjunct to prevent recurrence. The carbon dioxide (CO2) laser is the gold standard among soft-tissue surgical lasers due to its wavelength (10,600 nm), shallow penetration depth (0.03 mm), and narrow collateral thermal zone (150 µm). It effectively seals blood vessels, lymphatics, and nerve endings; moreover, the wound is rendered sterile by the effect of the laser. The aim of this study was to apply and assess the clinical usefulness of the 10,600 nm CO2 laser in pilonidal sinus excision and in reducing the chance of recurrence. Design: 10 patients, between 18 and 39 years
In this paper, we introduce a DCT-based steganographic method for grayscale images. The embedding approach is designed to reach an efficient trade-off among three conflicting goals: maximizing the amount of hidden message, minimizing distortion between the cover image and the stego-image, and maximizing the robustness of embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency region of the DCT domain using a magnitude modulation technique. The magnitude modulation is applied using uniform quantization with magnitude adder/subtractor modules. The conducted tests indicated that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the stego-image
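The paper's exact adder/subtractor scheme is not spelled out in the abstract, so the sketch below illustrates the general principle only: hiding a bit in a mid-frequency DCT coefficient of an 8x8 block by uniform quantization of its magnitude (a quantization-index-modulation style of magnitude modulation). The quantization step and coefficient position are assumed values, not the paper's parameters.

```python
# Generic sketch of DCT-domain magnitude modulation for one bit per 8x8 block.
# Not a reconstruction of the paper's exact adder/subtractor scheme.
import numpy as np
from scipy.fft import dctn, idctn

STEP = 24           # quantization step (assumed value)
COEFF = (4, 3)      # a mid-frequency coefficient position inside the 8x8 block

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Embed one bit into an 8x8 grayscale block by quantizing one DCT magnitude."""
    coeffs = dctn(block.astype(float), norm="ortho")
    c = coeffs[COEFF]
    sign = 1.0 if c >= 0 else -1.0
    # Snap the magnitude to an even multiple of STEP for bit 0, odd for bit 1.
    q = np.round(abs(c) / STEP)
    if int(q) % 2 != bit:
        q += 1
    coeffs[COEFF] = sign * q * STEP
    return idctn(coeffs, norm="ortho")

def extract_bit(block: np.ndarray) -> int:
    coeffs = dctn(block.astype(float), norm="ortho")
    return int(np.round(abs(coeffs[COEFF]) / STEP)) % 2

# Tiny self-check on a random cover block.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8)).astype(float)
for bit in (0, 1):
    stego = embed_bit(cover, bit)
    assert extract_bit(stego) == bit
print("embedded and recovered bits correctly")
```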
Quantitative analysis of the human voice has long been a subject of interest, and the field gained momentum when the human voice was identified as a modality for human authentication and identification. The main organ responsible for sound production is the larynx, and its structure, physical properties, and modes of vibration determine the nature and quality of the sound produced. A great deal of work has addressed the fundamental frequency of the voice and its characteristics. With the introduction of additional applications of the human voice, interest grew in other characteristics of sound and in the possibility of extracting useful features from it. We conducted a study using the Fast Fourier Transform (FFT) technique to analyze human voice
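As a small illustration of FFT-based voice analysis of the kind described above, the sketch below computes the magnitude spectrum of a synthetic voiced signal and picks a crude fundamental-frequency estimate; the synthetic signal, sampling rate, and search band are assumptions, not the authors' actual recordings or pipeline.

```python
# FFT-based spectral analysis of a synthetic "voiced" signal (illustrative).
import numpy as np

FS = 16_000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / FS)

# Synthetic signal: 150 Hz fundamental plus two harmonics and a little noise.
signal = (1.0 * np.sin(2 * np.pi * 150 * t)
          + 0.5 * np.sin(2 * np.pi * 300 * t)
          + 0.25 * np.sin(2 * np.pi * 450 * t)
          + 0.05 * np.random.default_rng(0).standard_normal(t.size))

# Magnitude spectrum of the windowed signal.
windowed = signal * np.hanning(signal.size)
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(signal.size, d=1 / FS)

# Crude fundamental estimate: strongest peak in a plausible voice range.
band = (freqs >= 60) & (freqs <= 400)
f0 = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated fundamental frequency: {f0:.1f} Hz")
```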
Image segmentation can be defined as the process of partitioning a digital image into meaningful segments, where the pixels within a segment share certain attributes that distinguish them from the pixels making up other parts of the image. In this paper, the researcher followed two phases of image processing. In the first phase, the images were pre-processed before segmentation using statistical confidence intervals, which can be used to estimate unknown observations, as suggested by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique. The researcher drew the conclusion that, in the case of utilizing
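For reference, the sketch below is a compact implementation of Bernsen's local thresholding, the segmentation technique named above; the window size and contrast limit are assumed parameters, and the confidence-interval pre-processing step is not reproduced.

```python
# Bernsen's local thresholding (window size and contrast limit are assumed).
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_threshold(image: np.ndarray,
                      window: int = 15,
                      contrast_limit: int = 15) -> np.ndarray:
    """Binarize a grayscale image (uint8) with Bernsen's local method."""
    img = image.astype(float)
    local_max = maximum_filter(img, size=window)
    local_min = minimum_filter(img, size=window)
    contrast = local_max - local_min
    mid_gray = (local_max + local_min) / 2.0

    # High-contrast neighborhoods: threshold at the local mid-gray value.
    binary = img > mid_gray
    # Low-contrast neighborhoods: treat the region as one class, decided by
    # comparing the local mid-gray level with the global mean intensity.
    low_contrast = contrast < contrast_limit
    binary[low_contrast] = mid_gray[low_contrast] > img.mean()
    return binary.astype(np.uint8) * 255

# Tiny usage example on a synthetic image.
rng = np.random.default_rng(1)
test = (rng.random((64, 64)) * 255).astype(np.uint8)
print(bernsen_threshold(test).shape)
```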
A high-sensitivity, low-power, and low-cost sensor has been developed for photoplethysmography (PPG) measurement. The PPG principle was applied to follow the dilatation and contraction of skin blood vessels during the cardiac cycle. Standard light-emitting diodes (LEDs) were used as both light emitter and light detector, and, in order to reduce space, cost, and power, the classical analogue-to-digital converters (ADCs) were replaced by pulse-based signal conversion techniques. A general-purpose microcontroller was used to implement the measurement protocol. The proposed approach leads to better spectral sensitivity, increased resolution, and reductions in cost, dimensions, and power consumption. The basic sensing configuration presented
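One common way to realize an LED-as-detector with pulse-based (time-to-digital) conversion instead of an ADC is to reverse-bias the LED, charging its junction capacitance, and then time its light-dependent discharge on a digital input. The MicroPython-style sketch below illustrates that general principle only; the pin numbers, timing values, and sampling rate are hypothetical and are not taken from the paper's circuit or protocol.

```python
# Hypothetical MicroPython sketch: LED used as a light sensor via discharge
# timing (pulse-based conversion). Pin numbers and timings are assumptions.
from machine import Pin
import time

ANODE_PIN = 14      # hypothetical GPIO wired to the sensing LED's anode
CATHODE_PIN = 15    # hypothetical GPIO wired to the sensing LED's cathode
TIMEOUT_US = 50_000

def read_led_light() -> int:
    """Return discharge time in microseconds (shorter = more light)."""
    anode = Pin(ANODE_PIN, Pin.OUT, value=0)
    cathode = Pin(CATHODE_PIN, Pin.OUT, value=1)   # reverse-bias: charge junction
    time.sleep_us(10)
    cathode = Pin(CATHODE_PIN, Pin.IN)             # float cathode, let it discharge
    start = time.ticks_us()
    while cathode.value() and time.ticks_diff(time.ticks_us(), start) < TIMEOUT_US:
        pass
    return time.ticks_diff(time.ticks_us(), start)

# Sampling loop: the sequence of discharge times forms the PPG waveform.
while True:
    print(read_led_light())
    time.sleep_ms(20)   # ~50 Hz sampling (assumed rate)
```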
Video streaming is the most popular medium used by people on the internet nowadays. Nevertheless, streaming video consumes a large share of internet traffic, accounting for nearly 70% of internet usage. Some constraints of interactive media, such as increased bandwidth usage and latency, still need to be mitigated. The need for real-time transmission of live video streams leads to the employment of fog computing, an intermediary layer between the cloud and the end user. This technology has been introduced to alleviate those problems by providing highly responsive, real-time service and computational resources near to the end user
Copper and its alloys and composites (with copper as the matrix) are broadly used in electronic and bearing materials due to the excellent thermal and electrical conductivities they offer.
In this study, the powder metallurgy technique was used to produce copper-graphite composites with three volume percentages of graphite. The selected processing parameters were a sintering temperature of 900 °C and a holding time of 90 minutes, with the samples heated in an inert atmosphere (argon gas). Wear test results showed a pronounced improvement in wear resistance as the percentage of graphite increased, since graphite acts as a solid lubricant (the wear rate decreased by about 88% compared with pure Cu). Microhardness and