A Comparative Study on Association Rule Mining Algorithms on the Hospital Infection Control Dataset

Administrative procedures in various organizations produce numerous crucial records and data. These records and data are also used in other processes such as customer relationship management and accounting operations. It is extremely challenging to extract valuable and meaningful information from these records because they are frequently enormous and continuously growing in size and complexity. Data mining is the process of sorting through large data sets to find patterns and relationships that can aid in resolving business issues through data analysis. Using data mining techniques, enterprises can forecast future trends and make better business decisions. The Apriori algorithm was introduced to compute association rules between objects; its primary goal is to establish association rules among various items, where an association rule describes how two or more objects are related. In this study, we applied the Apriori property and Apriori Mlxtend algorithms to a hospital database using Python code. The results showed that Apriori Mlxtend performed faster, with a reported value of 0.38622 versus 0.090909 for the Apriori property algorithm; on this basis, Apriori Mlxtend outperformed the Apriori property algorithm.
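The frequent-itemset mining and rule derivation the abstract describes can be sketched in pure Python. The transactions below are hypothetical infection-control records, not the paper's hospital data, and the thresholds are illustrative (the mlxtend library provides an equivalent `apriori`/`association_rules` API):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: find all itemsets whose support >= min_support."""
    n = len(transactions)
    transactions = [frozenset(t) for t in transactions]
    # Start with candidate 1-itemsets.
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}
    frequent = {}
    while current:
        # Count the support of each candidate in one pass over the data.
        counts = {c: sum(c <= t for t in transactions) for c in current}
        survivors = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(survivors)
        # Join surviving itemsets to build candidates one item larger.
        keys = list(survivors)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return frequent

def rules(frequent, min_confidence):
    """Derive association rules A -> B with confidence >= min_confidence."""
    out = []
    for itemset, support in frequent.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, k)):
                confidence = support / frequent[antecedent]
                if confidence >= min_confidence:
                    out.append((set(antecedent), set(itemset - antecedent), confidence))
    return out

# Hypothetical infection-control transactions (illustrative only).
data = [{"catheter", "uti"}, {"catheter", "uti", "icu"},
        {"icu", "ventilator"}, {"catheter", "icu"}]
freq = apriori(data, min_support=0.5)
for a, b, conf in rules(freq, min_confidence=0.6):
    print(a, "->", b, round(conf, 2))
```

Because every subset of a frequent itemset is itself frequent, the `frequent[antecedent]` lookup in `rules` is always defined.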

Scopus Crossref
Publication Date
Sat Feb 09 2019
Journal Name
Journal Of The College Of Education For Women
A comparative Study to calculate the Runs Property in the encryption systems

Cryptographic applications demand much more of a pseudo-random-sequence generator than most other applications do. Cryptographic randomness does not mean just statistical randomness, although that is part of it. For a sequence to be cryptographically secure pseudo-random, it must be unpredictable.
Random sequences should satisfy the basic randomness postulates; one of them is the run postulate (runs of the same bit). Such sequences should have about the same number of ones and zeros; about half the runs should be of length one, one quarter of length two, one eighth of length three, and so on. The distribution of run lengths for zeros and ones should be the same. These properties can be measured deterministically…
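The run postulate described above can be checked by counting maximal runs of identical bits and comparing the proportions of each run length; a minimal sketch on an arbitrary example sequence:

```python
from itertools import groupby
from collections import Counter

def run_lengths(bits):
    """Count the maximal runs of identical bits, grouped by run length."""
    return Counter(len(list(group)) for _, group in groupby(bits))

# Per the run postulate, about half the runs should have length 1,
# a quarter length 2, an eighth length 3, and so on.
seq = "0110101110010001"
runs = run_lengths(seq)
total = sum(runs.values())
for length in sorted(runs):
    print(f"length {length}: {runs[length]} runs ({runs[length] / total:.2f} of all runs)")
```

A real test would apply this to much longer generator output and compare the observed proportions against the expected 1/2, 1/4, 1/8, … statistically.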

Publication Date
Tue Aug 01 2023
Journal Name
Baghdad Science Journal
Digital Data Encryption Using a Proposed W-Method Based on AES and DES Algorithms

This paper proposes a new encryption method that combines two cipher algorithms, DES and AES, to generate hybrid keys. This combination strengthens the proposed W-method by generating highly randomized keys. The reliability of any encryption technique rests on two points. The first is key generation: our approach merges 64 bits of DES with 64 bits of AES to produce 128 bits as a root key for the remaining 15 keys. This complexity raises the level of the ciphering process; moreover, the operation shifts only one bit to the right. The second is the nature of the encryption process itself: it uses two keys and mixes one round of DES with one round of AES to reduce the running time. The W-method deals with…
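The key schedule the abstract outlines can be sketched as follows. The merge order of the two 64-bit halves is an assumption, and the one-bit right "shift" is implemented here as a rotation so that no key material is lost; neither detail is specified by the abstract:

```python
def merge_root_key(des_half, aes_half):
    """Concatenate a 64-bit DES-derived half and a 64-bit AES-derived half
    into one 128-bit root key (the merge order here is an assumption)."""
    assert des_half < 2**64 and aes_half < 2**64
    return (des_half << 64) | aes_half

def derive_round_keys(root, rounds=15, width=128):
    """Derive the remaining keys by moving the previous key one bit to the
    right; a rotation (not a plain shift) is assumed, to preserve entropy."""
    keys, k = [], root
    for _ in range(rounds):
        k = ((k >> 1) | ((k & 1) << (width - 1))) & (2**width - 1)  # 1-bit right rotation
        keys.append(k)
    return keys

# Hypothetical 64-bit halves (illustrative only).
root = merge_root_key(0x0123456789ABCDEF, 0xFEDCBA9876543210)
subkeys = derive_round_keys(root)
print(f"root: {root:032x}")
print(f"k1:   {subkeys[0]:032x}")
```

This is only a sketch of the schedule's shape, not of how the DES and AES halves themselves are produced.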

Scopus (6)
Crossref (2)
Scopus Crossref
Publication Date
Mon Jan 02 2017
Journal Name
Al-academy
Style and the jump in industrial product design - a comparative study

This research studies two approaches to developing industrial products and designs: progressive development (the typical approach) and radical development (the design jump). The aim of the research is to determine the effectiveness of the typical pattern and of the jump in developing industrial products and designs. After analyzing a research sample of two models of contemporary household electrical appliances, a set of findings and conclusions was reached, including: 1. Leaping designs changed many of the user's entrenched perceptions of how the product works, how it is used, and the product's size and shape, revealing to the user possibilities for more sophisticated relationships with the product, while keeping the typical design…

Crossref
Publication Date
Wed Dec 30 2015
Journal Name
College Of Islamic Sciences
Acquisition provisions in Islamic jurisprudence: A model - a comparative study

Acquisition provisions in Islamic jurisprudence

Publication Date
Sun Mar 01 2020
Journal Name
Baghdad Science Journal
A Comparative Study on the Double Prior for Reliability Kumaraswamy Distribution with Numerical Solution

This work deals with the Kumaraswamy distribution. Kumaraswamy (1976, 1978) worked with well-known probability distribution functions such as the normal, beta, and log-normal, but in 1980 he developed a more general probability density function for doubly bounded random processes, which is known as Kumaraswamy's distribution. Classical maximum likelihood and Bayes estimators are used to estimate the unknown shape parameter (b). The reliability function is obtained under symmetric loss functions using three types of informative priors: two single priors and one double prior. In addition, the performance of these estimators is compared with respect to the numerical solution, which is found using the expansion method. The…
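The classical maximum-likelihood side of the comparison has a closed form when the first shape parameter a is assumed known: for the Kumaraswamy density f(x) = a b x^(a-1) (1 - x^a)^(b-1) on (0, 1), the MLE is b_hat = -n / sum(log(1 - x_i^a)). The sketch below draws a hypothetical sample by inverting the CDF and recovers b; the paper's Bayes estimators and priors are not reproduced:

```python
import math
import random

def kumaraswamy_sample(n, a, b, rng):
    """Draw from Kumaraswamy(a, b) by inverting the CDF F(x) = 1 - (1 - x**a)**b."""
    return [(1 - (1 - rng.random()) ** (1 / b)) ** (1 / a) for _ in range(n)]

def mle_shape_b(data, a):
    """Closed-form MLE of the shape parameter b when a is known:
    b_hat = -n / sum(log(1 - x**a))."""
    return -len(data) / sum(math.log(1 - x**a) for x in data)

rng = random.Random(1)          # fixed seed, illustrative parameters
a_true, b_true = 2.0, 3.0
data = kumaraswamy_sample(5000, a_true, b_true, rng)
print("MLE of b:", round(mle_shape_b(data, a_true), 3))
```

With 5000 draws the estimate lands close to the true b = 3, illustrating the baseline against which the Bayes and expansion-method estimators are compared.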

Scopus (3)
Crossref (1)
Scopus Clarivate Crossref
Publication Date
Sun Mar 17 2019
Journal Name
Baghdad Science Journal
A Study on the Accuracy of Prediction in Recommendation System Based on Similarity Measures

Recommender systems are tools for understanding the huge amount of data available in the Internet world. Collaborative filtering (CF) is one of the knowledge-discovery methods used most successfully in recommendation systems. Memory-based collaborative filtering emphasizes using facts about present users to predict new items for the target user. Similarity measures are the core operations in collaborative filtering, and prediction accuracy depends largely on the similarity calculations. In this study, a combination of weighted parameters and traditional similarity measures is used to calculate relationships among users over the MovieLens rating matrix. The advantages and disadvantages of each measure are identified. From the study, a n…
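One of the traditional similarity measures the study builds on, cosine similarity, and its use in a similarity-weighted prediction can be sketched as follows. The users and ratings below are hypothetical, not MovieLens data, and the weighting-parameter combination the paper proposes is not reproduced:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity computed over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(target, others, item):
    """Similarity-weighted average of neighbours' ratings for one item."""
    pairs = [(cosine_sim(target, o), o[item]) for o in others if item in o]
    denom = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / denom if denom else None

# Hypothetical user -> {movie: rating} dictionaries (illustrative only).
alice = {"m1": 5, "m2": 3}
others = [{"m1": 4, "m2": 3, "m3": 4},
          {"m1": 1, "m2": 5, "m3": 2}]
print("predicted rating for m3:", round(predict(alice, others, "m3"), 2))
```

The prediction leans toward the rating of the neighbour most similar to the target user, which is exactly why prediction accuracy hinges on the similarity calculation.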

Scopus (17)
Crossref (2)
Scopus Clarivate Crossref
Publication Date
Sat Jan 01 2022
Journal Name
Turkish Journal Of Physiotherapy And Rehabilitation
classification coco dataset using machine learning algorithms

In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), the Logistic Regression algorithm (LR), and the Multi-Layer Perceptron (MLP). We used the MCOCO dataset for classification and detection of objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. After randomly selecting the training and testing images, we converted the color images to gray level, enhanced the gray images using the histogram equalization method, and resized the dataset images to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification metho…
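The histogram-equalization step of the preprocessing pipeline above can be sketched in pure Python; the 4x4 low-contrast image below is hypothetical, standing in for a 20x20 gray MCOCO patch:

```python
from collections import Counter

def equalize(image, levels=256):
    """Histogram equalization: map each gray level through the normalized
    cumulative histogram, spreading a narrow intensity range across [0, levels-1]."""
    flat = [p for row in image for p in row]
    hist = Counter(flat)
    n = len(flat)
    # Cumulative distribution function over all gray levels.
    cdf, acc = {}, 0
    for level in range(levels):
        acc += hist.get(level, 0)
        cdf[level] = acc / n
    return [[round(cdf[p] * (levels - 1)) for p in row] for row in image]

# Hypothetical low-contrast 4x4 gray image (illustrative only).
img = [[50, 50, 52, 52],
       [50, 52, 54, 54],
       [52, 54, 56, 56],
       [54, 56, 56, 56]]
print(equalize(img)[0])
```

The input values span only 50–56, while the output spans nearly the full 0–255 range, which is the contrast enhancement the pipeline relies on before PCA and classification.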

Publication Date
Wed Jun 01 2022
Journal Name
Baghdad Science Journal
Variable Selection Using aModified Gibbs Sampler Algorithm with Application on Rock Strength Dataset

Variable selection is an essential and necessary task in statistical modeling. Several studies have tried to develop and standardize the variable-selection process, but it is difficult to do so. The first question researchers need to ask themselves is which variables are most significant for describing a given dataset's response. In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined, and the posterior distributions for all the parameters are derived. The new variable-selection method is tested using four simulated datasets. The new approach is compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage and Selection Operator (Lasso)…
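The paper's variable-selection sampler is not reproduced here, but the Gibbs technique it relies on can be illustrated generically: each parameter is redrawn in turn from its full conditional given the current values of the others. The sketch below does this for a standard bivariate normal with correlation rho, whose full conditionals are known in closed form (all settings are illustrative):

```python
import random
import statistics

def gibbs_bivariate_normal(rho, n_iter, burn_in, rng):
    """Gibbs sampling for a standard bivariate normal with correlation rho:
    alternately redraw each coordinate from its full conditional."""
    sd = (1 - rho**2) ** 0.5
    x, y, draws = 0.0, 0.0, []
    for i in range(n_iter):
        x = rng.gauss(rho * y, sd)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)  # y | x ~ N(rho*x, 1 - rho^2)
        if i >= burn_in:            # discard warm-up draws
            draws.append((x, y))
    return draws

rng = random.Random(0)              # fixed seed for reproducibility
draws = gibbs_bivariate_normal(rho=0.8, n_iter=20000, burn_in=2000, rng=rng)
xs = [x for x, _ in draws]
print("mean of x:", round(statistics.fmean(xs), 3))
```

In the variable-selection setting the same scheme cycles through regression coefficients and inclusion indicators instead of the two coordinates here, using the posterior conditionals the paper derives.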

Scopus (3)
Crossref (2)
Scopus Clarivate Crossref
Publication Date
Sun Jun 20 2021
Journal Name
Baghdad Science Journal
A Scoping Study on Lightweight Cryptography Reviews in IoT

Efforts in designing and developing lightweight cryptography (LWC) started a decade ago. Many scholarly studies in the literature report enhancements of conventional cryptographic algorithms and the development of new algorithms. This significant number of studies has given rise to many review studies on LWC in IoT. Because of the vast number of such reviews, it is not known what the studies cover or how extensive they are. Therefore, this article aims to bridge that gap by conducting a systematic scoping study. It analyzes the existing review articles on LWC in IoT to discover how extensive the reviews are and which topics they cover. The results of the study suggest that many re…

Scopus (15)
Crossref (4)
Scopus Clarivate Crossref
Publication Date
Sun Feb 25 2024
Journal Name
Baghdad Science Journal
The Effect Of Optimizers On The Generalizability Additive Neural Attention For Customer Support Twitter Dataset In Chatbot Application

When optimizing the performance of neural-network-based chatbots, the choice of optimizer is one of the most important aspects. Optimizers primarily control the adjustment of model parameters, such as weights and biases, to minimize a loss function during training. Adaptive optimizers such as ADAM have become a standard choice and are widely used because their parameter-update magnitudes are invariant to rescaling of the gradient, but they often pose generalization problems. Alternatively, Stochastic Gradient Descent (SGD) with momentum and ADAM's extension, ADAMW, offer several advantages. This study aims to compare and examine the effects of these optimizers on the chatbot CST dataset. The effectiveness of each optimizer is evaluated…
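The update rules being compared can be sketched on a toy one-parameter problem; the loss function and all hyperparameters below are illustrative, not the paper's chatbot experiment, and ADAMW (decoupled weight decay) is omitted for brevity:

```python
def sgd_momentum(grad, x0, lr=0.1, beta=0.9, steps=200):
    """SGD with momentum: a velocity term accumulates past gradients."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x -= lr * v
    return x

def adam(grad, x0, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=200):
    """ADAM: bias-corrected first and second gradient moments give
    per-parameter step sizes that are invariant to gradient rescaling."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1**t)            # bias correction
        v_hat = v / (1 - b2**t)
        x -= lr * m_hat / (v_hat**0.5 + eps)
    return x

# Toy loss f(x) = (x - 3)^2 with gradient 2(x - 3); both should approach x = 3.
grad = lambda x: 2 * (x - 3)
print("SGD+momentum:", round(sgd_momentum(grad, 0.0), 3))
print("ADAM:", round(adam(grad, 0.0), 3))
```

Note how ADAM's early steps have magnitude close to lr regardless of the gradient's scale, which is the invariance property the abstract mentions, while SGD with momentum scales its steps with the raw gradient.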

Scopus (1)
Scopus Crossref