Articles
Document retrieval using term frequency inverse sentence frequency weighting scheme

The need for an efficient method to find the most relevant document for a given search query has become crucial due to the exponential growth in the number of papers readily available on the web. The vector space model (VSM), a standard model in information retrieval, represents terms as vectors in space and assigns them weights through a popular weighting method known as term frequency inverse document frequency (TF-IDF). In this work, a retrieval method is proposed that represents documents and queries as vectors of average term frequency inverse sentence frequency (TF-ISF) weights instead of TF-IDF weights, and two simple and effective similarity measures, Cosine and Jaccard, are used. Using the MS MARCO dataset, this article analyzes and assesses the retrieval effectiveness of the TF-ISF weighting scheme. The results show that the TF-ISF model with the Cosine similarity measure retrieves more relevant documents; evaluated against the conventional TF-IDF technique, it performs significantly better on MS MARCO data (Microsoft-curated data of Bing queries).
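To make the weighting scheme concrete, the following Python sketch (not taken from the paper) builds TF-ISF vectors by averaging each term's frequency over a document's sentences, scaling by an inverse sentence frequency computed over all sentences in the collection, and ranking documents against a query by cosine similarity; the exact TF and ISF normalizations the authors use may differ.

```python
# Minimal TF-ISF sketch, assuming isf(term) = log(total sentences /
# (1 + sentences containing the term)) + 1 and that a document vector holds the
# average per-sentence term frequency scaled by ISF; names are illustrative.
import math

def split_sentences(text):
    return [s.strip() for s in text.split(".") if s.strip()]

def tf_isf_vector(text, vocab, isf):
    sentences = [s.lower().split() for s in split_sentences(text)]
    n = max(len(sentences), 1)
    return {t: (sum(s.count(t) for s in sentences) / n) * isf[t] for t in vocab}

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["Information retrieval ranks documents. Documents are ranked by relevance.",
        "Audio signals can be hidden in images. Images can carry hidden data."]
query = "retrieval of relevant documents"

# ISF statistics are computed over every sentence in the collection.
all_sentences = [s.lower().split() for d in docs for s in split_sentences(d)]
vocab = sorted({t for s in all_sentences for t in s} | set(query.lower().split()))
n_sent = len(all_sentences)
isf = {t: math.log(n_sent / (1 + sum(t in s for s in all_sentences))) + 1.0 for t in vocab}

q_vec = tf_isf_vector(query, vocab, isf)
for i, d in enumerate(docs):
    print(f"doc {i}: cosine = {cosine(q_vec, tf_isf_vector(d, vocab, isf)):.3f}")
```

The same vectors could be compared with the Jaccard measure by treating the non-zero terms of each vector as sets.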

Publication Date: Sat Mar 13 2021
Journal Name: Al-Nahrain Journal of Science
Hiding Multi Short Audio Signals in Color Image by using Fast Fourier Transform

Many purposes require communicating audio files between users through different social media applications. The security level of these applications is limited; at the same time, many audio files are sensitive and must be accessed by authorized persons only, while most existing works attempt to hide only a single audio file in a given cover medium. In this paper, a new approach for hiding three audio signals of unequal sizes in a single color digital image is proposed, using the frequency transform of this image. In the proposed approach, the Fast Fourier Transform is adopted, where each audio signal is embedded in a specific high-frequency region of the frequency spectrum of the cover image to sa…
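As a rough illustration of frequency-domain hiding (not the authors' exact multi-signal scheme), the sketch below adds scaled audio samples to high-frequency coefficients of one image channel's shifted 2D FFT and recovers them non-blindly using the original cover; the positions, the scaling factor alpha, and the single-channel, single-signal setup are simplifying assumptions.

```python
import numpy as np

def embed(cover_channel, audio, alpha=50.0):
    """Add scaled audio samples to high-frequency FFT coefficients of one channel."""
    spec = np.fft.fftshift(np.fft.fft2(cover_channel.astype(float)))
    flat = spec.ravel()                                         # view: edits write back into spec
    positions = np.arange(flat.size - audio.size, flat.size)    # far from the centered low frequencies
    flat[positions] += alpha * audio
    stego = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    return stego, positions

def extract(stego_channel, cover_channel, positions, alpha=50.0):
    """Non-blind extraction: subtract the cover spectrum from the stego spectrum."""
    s = np.fft.fftshift(np.fft.fft2(stego_channel)).ravel()
    c = np.fft.fftshift(np.fft.fft2(cover_channel.astype(float))).ravel()
    # Taking the real part in embed() halves the hidden component, hence the factor 2.
    return 2.0 * np.real(s[positions] - c[positions]) / alpha

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(128, 128))    # stand-in for one color channel
audio = rng.uniform(-1.0, 1.0, size=500)         # stand-in for one short audio signal
stego, pos = embed(cover, audio)
recovered = extract(stego, cover, pos)
print("max recovery error:", np.abs(recovered - audio).max())
```

A practical system would additionally quantize the stego channel back to 8-bit pixels and place the three differently sized signals in separate spectral regions, as the abstract describes.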

Publication Date: Mon Jun 01 2020
Journal Name: Journal of Engineering
Arabic Sentiment Analysis (ASA) Using Deep Learning Approach

Sentiment analysis is one of the major fields in natural language processing; its main task is to extract sentiments, opinions, attitudes, and emotions from subjective text. Because of its importance in decision making and in people's trust in reviews on websites, many academic studies have addressed sentiment analysis problems. Deep Learning (DL) is a powerful Machine Learning (ML) technique that has emerged with its ability to learn feature representations and discriminate data, leading to state-of-the-art prediction results. In recent years, DL has been widely used in sentiment analysis; however, its implementation for the Arabic language remains scarce. Most of the previous studies address other l…
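For readers unfamiliar with the setup, the following generic Keras sketch shows the shape of a deep-learning sentiment classifier (embedding, recurrent encoder, sigmoid output); it is illustrative only, not the architecture evaluated in the paper, and the vocabulary size, sequence length, and random placeholder data are assumptions.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 20000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),                 # learned word representations
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),    # sequence encoder
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),             # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data: integer-encoded reviews and 0/1 sentiment labels.
x = np.random.randint(1, VOCAB_SIZE, size=(256, MAX_LEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(x[:2], verbose=0).ravel())
```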

Publication Date: Wed Feb 01 2023
Journal Name: Baghdad Science Journal
Retrieving Encrypted Images Using Convolution Neural Network and Fully Homomorphic Encryption

Content-based image retrieval (CBIR) is a technique used to retrieve images from an image database. However, the CBIR process suffers from reduced accuracy when retrieving images from an extensive image database and from difficulty in ensuring the privacy of those images. This paper aims to address the accuracy issue using deep learning techniques, namely a CNN, and to provide the necessary privacy for images using the fully homomorphic encryption scheme of Cheon, Kim, Kim, and Song (CKKS). To achieve these aims, a system named RCNN_CKKS is proposed that consists of two parts. The first part (offline processing) extracts automated high-level features from a flatten layer of a convolutional neural network (CNN) and then stores these features in a…
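The feature-extraction step can be pictured with the hedged Keras sketch below: a small CNN is truncated at its flatten layer and used to produce one descriptor per image. The architecture and input size are placeholders rather than the RCNN_CKKS network, and the subsequent CKKS encryption of the stored features is omitted.

```python
import numpy as np
import tensorflow as tf

# Placeholder CNN; only the path up to the flatten layer matters for retrieval features.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D()(x)
flat = tf.keras.layers.Flatten(name="flatten")(x)               # high-level feature vector
outputs = tf.keras.layers.Dense(10, activation="softmax")(flat) # head used only during training
cnn = tf.keras.Model(inputs, outputs)

# Truncate at the flatten layer to obtain a feature extractor.
extractor = tf.keras.Model(inputs, flat)

images = np.random.rand(4, 64, 64, 3).astype("float32")   # placeholder images
features = extractor.predict(images, verbose=0)
print(features.shape)   # one flattened descriptor per image, ready to be encrypted and stored
```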

Publication Date: Mon May 21 2007
Journal Name: Journal of Planner and Development
Using the Input-Output Model in building the economic plan using the computer

The origin of this technique lies in the analysis of François Quesnay (1694-1774), the leader of the Physiocrats (the school of "Naturalists"), presented in his Tableau Économique. The method was developed further by Karl Marx in his analysis of the relationships between the departments of production and the nature of these relations in his models of reproduction. The current form of this type of economic analysis is credited to the Russian economist Wassily Leontief. This analytical model is commonly used in developing economic plans in developing countries (p. 1, p. 86). There are several types of input-output models, such as the static model, the dynamic model, regional models, and so on. However, this research is confined to the open model, which has found wide areas of practical application.
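The open (static) model mentioned above solves (I - A) x = d for gross output x, given the technical-coefficient matrix A and final demand d. The numbers in the short sketch below are made up for illustration and are not figures from the article.

```python
# Worked numeric example of the open (static) Leontief input-output model.
import numpy as np

# Technical coefficients a_ij: input from sector i needed per unit of output of sector j.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])

d = np.array([50.0, 30.0])          # final demand for each sector

# Gross output required to satisfy final demand under the open model.
x = np.linalg.solve(np.eye(2) - A, d)
print("gross output:", np.round(x, 2))
print("intermediate use:", np.round(A @ x, 2))
print("check x - Ax == d:", np.round(x - A @ x, 2))
```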

Publication Date: Fri May 30 2025
Journal Name: Journal of Internet Services and Information Security
Enhancing Lung Cancer Classification using CT Images using Processing Techniques Employing U-Net Architecture

Publication Date: Sat Oct 31 2020
Journal Name: International Journal of Intelligent Engineering and Systems
Speech Emotion Recognition Using MELBP Variants of Spectrogram Image

Publication Date: Sun Jul 20 2025
Journal Name: Ibn Al-Haitham Journal for Pure and Applied Sciences
Using a 3D Chaotic Dynamic System as a Random Key Generator for Image Steganography

In today's digital era, the importance of securing information has reached critical levels. Steganography is one of the methods used for this purpose; it hides sensitive data within other files. This study introduces an approach that utilizes a chaotic dynamic system as a random key generator, governing both the selection of hiding locations within an image and the amount of data concealed in each location. The security of the steganographic approach is considerably improved by this random procedure. A 3D dynamic system with nine parameters influencing its behavior was carefully chosen, and for each parameter suitable interval values were determined to guarantee the system's chaotic behavior. An analysis of the chaotic performance is given using the…
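Since the paper's nine-parameter 3D system is not reproduced in this excerpt, the sketch below uses the classic Lorenz system as a stand-in to show the idea: a chaotic trajectory seeded by a secret initial state is mapped to distinct pixel indices that serve as hiding locations, so the same positions can be regenerated by anyone holding the key.

```python
# Illustrative chaotic key generator for steganographic hiding positions
# (Lorenz system as a stand-in for the paper's nine-parameter 3D system).
import numpy as np

def lorenz_stream(n, x=0.1, y=0.0, z=0.0, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01):
    """Iterate the Lorenz equations with a simple Euler step and return the x values."""
    values = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        values[i] = x
    return values

def hiding_positions(num_pixels, num_bytes, key_state=(0.1, 0.0, 0.0)):
    """Map the chaotic stream to distinct pixel indices used as hiding locations."""
    stream = lorenz_stream(4 * num_bytes, *key_state)       # oversample, then deduplicate
    idx = (np.abs(stream) * 1e6).astype(np.int64) % num_pixels
    unique = list(dict.fromkeys(idx.tolist()))               # keep order, drop repeats
    return unique[:num_bytes]

positions = hiding_positions(num_pixels=256 * 256, num_bytes=64)
print(len(positions), positions[:8])
```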

Publication Date: Mon Dec 31 2012
Journal Name: Al-Khwarizmi Engineering Journal
Speech Compression Using Multecirculerletet Transform

Compressing speech reduces data storage requirements and therefore the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithm introduced here adds desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (one- and two-dimensional) on speech compression. DWT and MCT performances in terms of comp…
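For orientation, the hedged sketch below shows only the ordinary DWT baseline mentioned in the abstract (via PyWavelets), simulating compression by zeroing small coefficients; the paper's MCT, derived from GHM multiwavelet bases, is not reproduced here, and the wavelet, decomposition level, and threshold are arbitrary choices.

```python
import numpy as np
import pywt

fs = 8000
t = np.arange(fs) / fs
# Stand-in for a speech frame: two sinusoids instead of real recorded speech.
speech = 0.6 * np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 900 * t)

coeffs = pywt.wavedec(speech, "db4", level=4)          # multi-level DWT
flat = np.concatenate(coeffs)
threshold = np.percentile(np.abs(flat), 90)            # keep roughly the largest 10% of coefficients

compressed = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]
kept = sum(int(np.count_nonzero(c)) for c in compressed)
reconstructed = pywt.waverec(compressed, "db4")[: len(speech)]

cr = speech.size / kept                                 # crude compression ratio
snr = 10 * np.log10(np.sum(speech**2) / np.sum((speech - reconstructed) ** 2))
print(f"compression ratio ~ {cr:.1f}, SNR ~ {snr:.1f} dB")
```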

Publication Date: Wed Nov 01 2017
Journal Name: Journal of Economics and Administrative Sciences
Robust penalized estimators using simulation

The penalized least squares method is a popular method for dealing with high-dimensional data, where the number of explanatory variables is larger than the sample size. Its strengths are high prediction accuracy and the ability to perform estimation and variable selection at once. The penalized least squares method gives a sparse model, that is, a model with few variables that can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain a robust penalized least squares method, and thus a robust penalized estimator and…
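A robust penalized estimator of this kind can be sketched, for illustration only, by swapping the squared-error loss for the Huber loss while keeping an L1 penalty, as in the scikit-learn example below; the simulated high-dimensional design, contamination level, and tuning constants are assumptions, not the settings studied in the article.

```python
# Ordinary penalized least squares (Lasso) vs. a robust penalized alternative
# (Huber loss with an L1 penalty via SGDRegressor) on contaminated data.
import numpy as np
from sklearn.linear_model import Lasso, SGDRegressor

rng = np.random.default_rng(1)
n, p = 50, 100                                  # more variables than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                                  # sparse true model: 5 active variables
y = X @ beta + rng.standard_normal(n)
y[:5] += 30.0                                   # a few outlying observations

lasso = Lasso(alpha=0.3).fit(X, y)
robust = SGDRegressor(loss="huber", penalty="l1", alpha=0.01,
                      epsilon=1.35, max_iter=5000, random_state=0).fit(X, y)

for name, coef in [("lasso", lasso.coef_), ("robust penalized", robust.coef_)]:
    err = np.linalg.norm(coef - beta)
    print(f"{name}: {np.count_nonzero(np.abs(coef) > 0.1)} selected, coefficient error {err:.2f}")
```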

Publication Date: Tue Dec 27 2022
Journal Name: 2022 3rd Information Technology to Enhance E-Learning and Other Application (IT-ELA)
Diabetes Prediction Using Machine Learning

Diabetes is an increasingly common chronic disease, affecting millions of people around the world. Diagnosis, prediction, proper treatment, and management of diabetes are essential. Machine learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and the prediction of its consequences, such as hypo- or hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, KNN, and Random Forest. We conducted two experiments: the first used all 12 features of the dataset, and Random Forest outperformed the others with 98.8% accuracy. The second experiment used only five att…
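The Random Forest experiment can be pictured with the minimal scikit-learn sketch below; the synthetic 12-feature data stand in for the Iraqi patients' records, which are not reproduced here, and the split ratio and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 12))                       # 1000 patients, 12 clinical features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```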
