An Embedded Data Using Slantlet Transform

Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal processing operations. This paper introduces a scheme for hiding a signature image that can be as much as 25% of the host image data and hence can be used both in digital watermarking and in image/data hiding. The proposed algorithm uses an orthogonal discrete wavelet transform with two zero moments and improved time localization, called the discrete slantlet transform, for both the host and signature images. A scaling factor α in the frequency domain controls the quality of the watermarked images. Experimental results of signature image recovery after applying JPEG coding to the watermarked image are included.
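As a rough illustration of the embedding idea, the sketch below uses a standard Haar DWT from PyWavelets as a stand-in (PyWavelets does not provide a slantlet transform), a hypothetical scaling factor `alpha`, and host/signature images of equal size, which is a simplification of the 25% figure above.

```python
import numpy as np
import pywt

def embed(host, signature, alpha=0.1):
    """Add alpha-scaled signature detail coefficients to the host's
    detail bands; the approximation band is left untouched so the
    change stays perceptually small."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), 'haar')
    _, (sH, sV, sD) = pywt.dwt2(signature.astype(float), 'haar')
    marked = (cA, (cH + alpha * sH, cV + alpha * sV, cD + alpha * sD))
    return pywt.idwt2(marked, 'haar')

def extract(marked, host, alpha=0.1):
    """Recover the embedded signature detail bands by differencing."""
    _, (mH, mV, mD) = pywt.dwt2(marked.astype(float), 'haar')
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), 'haar')
    details = ((mH - cH) / alpha, (mV - cV) / alpha, (mD - cD) / alpha)
    # The signature's approximation band was never embedded, so a zero
    # block stands in for it; the recovery is therefore partial.
    return pywt.idwt2((np.zeros_like(cA), details), 'haar')
```

A smaller `alpha` makes the watermark harder to see but also more fragile under JPEG compression; the paper's α plays exactly this tuning role.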

Publication Date: Thu Feb 27 2020
Journal Name: Journal of Mechanics of Continua and Mathematical Sciences
SUGGESTING MULTIPHASE REGRESSION MODEL ESTIMATION WITH SOME THRESHOLD POINT

The estimation of the regular regression model requires several assumptions to be satisfied, such as linearity. One problem occurs when the regression curve is partitioned into two (or more) parts that are then joined by threshold point(s). This situation is regarded as a violation of the linearity of regression. Therefore, the multiphase regression model has received increasing attention as an alternative approach that describes the changing behavior of the phenomenon through threshold point estimation. The maximum likelihood estimator (MLE) has been used for both the model and the threshold point estimation. However, the MLE is not resistant to violations such as the existence of outliers or a heavy-tailed error distribution. The main goal of t…
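A minimal sketch of the two-phase case, assuming Gaussian errors so that maximizing the likelihood reduces to minimizing the residual sum of squares; the grid search over candidate thresholds and all names are illustrative, not the paper's estimator.

```python
import numpy as np

def fit_two_phase(x, y, min_pts=3):
    """Grid-search a single threshold tau; fit a line on each side by
    least squares and keep the split with the smallest total SSE."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = None
    for i in range(min_pts, len(x) - min_pts):
        tau = x[i]
        sse, fits = 0.0, []
        for mask in (x <= tau, x > tau):
            A = np.column_stack([x[mask], np.ones(mask.sum())])
            coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            sse += float(np.sum((A @ coef - y[mask]) ** 2))
            fits.append(coef)
        if best is None or sse < best[0]:
            best = (sse, tau, fits)
    return best  # (total SSE, threshold, [(slope, intercept) per phase])
```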

Publication Date: Tue Feb 01 2022
Journal Name: Journal of Engineering
Geomechanical study to predict the onset of sand production formation

One of the costliest problems facing the production of hydrocarbons from unconsolidated sandstone reservoirs is the production of sand once hydrocarbon production starts. A sanding-onset prediction model is very important for future decisions about sand control, including whether or when sand control should be used. This research developed an easy-to-use computer program to determine the beginning of sanding sites in the driven area. The model is based on estimating the critical pressure drop at which sand production begins. The outcomes are drawn as a function of free sand production with the critical flow rates for reservoir pressure decline. The results show that the pressure drawdown required to…
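The paper's model is not given here, so the sketch below shows only a generic, textbook-style sanding-onset check (Kirsch tangential stress at the wellbore wall compared with rock strength); every symbol and number is an illustrative assumption, not the paper's program.

```python
# Generic Kirsch/Mohr-Coulomb style sanding-onset check; NOT the paper's
# model. All symbols and values are illustrative assumptions (units: MPa).
def critical_drawdown(sigma_H, sigma_h, pore_p, ucs, reservoir_p, biot=1.0):
    """Flowing BHP (and drawdown) at which the effective tangential stress
    at the wellbore wall reaches the unconfined compressive strength."""
    # Kirsch: tangential stress at the wall ~ 3*sigma_H - sigma_h - p_wf.
    # Sanding onset when 3*sigma_H - sigma_h - p_wf - biot*pore_p >= ucs,
    # i.e. p_wf <= 3*sigma_H - sigma_h - biot*pore_p - ucs.
    p_wf_crit = 3 * sigma_H - sigma_h - biot * pore_p - ucs
    return p_wf_crit, reservoir_p - p_wf_crit

p_wf, cdp = critical_drawdown(sigma_H=40, sigma_h=30, pore_p=20,
                              ucs=50, reservoir_p=25)
print(f"critical BHP {p_wf} MPa, critical drawdown {cdp} MPa")
```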

Publication Date: Sat Jan 30 2021
Journal Name: Iraqi Journal of Science
Dynamic Fault Tolerance Aware Scheduling for Healthcare System on Fog Computing

The Internet of Things (IoT) contributes to improving quality of life, as it supports many applications, especially healthcare systems. Data generated by IoT devices is sent to Cloud Computing (CC) for processing and storage, despite the latency caused by the distance. Because of the revolution in IoT devices, the amount of data sent to the CC has been increasing, so growing congestion on the cloud network has been added to the latency problem. Fog Computing (FC) is used to solve these problems because of its proximity to IoT devices, filtering the data that is passed on to the CC. FC is a middle layer located between the IoT devices and the CC layer. Due to the massive data generated by IoT devices on the FC, Dynamic Weighted Round Robin (DWRR)…
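A minimal sketch of a dynamic weighted round robin dispatcher, assuming weights are refreshed from each fog node's current spare capacity; the credit mechanism and all names are illustrative, not the paper's scheduler.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    capacity: float      # normalised processing capacity
    load: float = 0.0    # current load, updated as tasks arrive
    credit: float = 0.0  # accumulated scheduling credit

def dwrr_dispatch(nodes, tasks):
    """Yield (task, node) pairs; each round every node earns credit equal
    to its spare capacity (the dynamic weight), and the richest node wins."""
    for cost in tasks:               # a task is represented by its cost
        for n in nodes:
            n.credit += max(n.capacity - n.load, 0.0)
        target = max(nodes, key=lambda n: n.credit)
        target.credit = 0.0          # spend the accumulated credit
        target.load += cost
        yield cost, target.name

nodes = [FogNode("fog-1", capacity=4.0), FogNode("fog-2", capacity=2.0)]
for task, node in dwrr_dispatch(nodes, [1.0, 1.0, 1.0, 1.0]):
    print(f"task ({task}) -> {node}")
```

Because the weight shrinks as a node's load grows, a healthcare burst does not pile up on one fog node, which is the intuition behind weighting the round robin dynamically.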

Publication Date: Sat Aug 01 2015
Journal Name: Journal of Engineering
Choosing Appropriate Distribution by Minitab's 17 Software to Analysis System Reliability

This research aims to choose the appropriate probability distribution for the reliability analysis of an item, using data collected on the operating and stoppage times of the case study. The appropriate probability distribution is the one for which the data lie on or close to the fitted line of the probability plot and pass a goodness-of-fit test. Minitab 17 software was used for this purpose after arranging the collected data and entering it into the program. …
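A minimal sketch of the same selection step using SciPy in place of Minitab: fit several candidate lifetime distributions by maximum likelihood and rank them with a Kolmogorov–Smirnov goodness-of-fit test. The data values below are invented.

```python
import numpy as np
from scipy import stats

# Made-up operating times between stoppages (hours).
data = np.array([105., 180., 95., 300., 240., 150., 410., 88., 260., 199.])

candidates = {'weibull': stats.weibull_min, 'lognormal': stats.lognorm,
              'exponential': stats.expon, 'gamma': stats.gamma}
results = {}
for name, dist in candidates.items():
    params = dist.fit(data)                        # maximum-likelihood fit
    ks = stats.kstest(data, dist.cdf, args=params)
    results[name] = ks.pvalue                      # higher p = better fit

# NB: KS p-values are optimistic when parameters are estimated from the
# same data; Minitab's probability plots face the same caveat.
print("best-fitting distribution:", max(results, key=results.get))
```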

Publication Date: Sun Mar 06 2016
Journal Name: Baghdad Science Journal
A Note on the Perturbation of arithmetic expressions

In this paper we present the theoretical foundation of forward error analysis of numerical algorithms under:
• approximations in "built-in" functions;
• rounding errors in arithmetic floating-point operations;
• perturbations of data.
The error analysis is based on the linearization method. The fundamental tools of forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations and the associated condition numbers, constituting optimal bounds on the possible cumulative round-off errors. The condition numbers enable simple, general, quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination and have proved to be a very effective means of both a priori…
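As a small worked illustration of linearized forward analysis (not taken from the paper), the relative condition number κ(x) = |x·f′(x)/f(x)| bounds, to first order, how much a relative input perturbation is amplified in the output.

```python
import math

def rel_condition(f, df, x):
    """First-order amplification factor for a relative input error |dx/x|."""
    return abs(x * df(x) / f(x))

# log(x) is ill-conditioned near x = 1: a tiny relative change in x
# produces a huge relative change in log(x).
for x in (2.0, 1.1, 1.001):
    print(x, rel_condition(math.log, lambda t: 1.0 / t, x))
```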

Publication Date: Thu Jun 30 2022
Journal Name: Iraqi Journal of Science
A Comparative Study for Supervised Learning Algorithms to Analyze Sentiment Tweets

Twitter's popularity has grown increasingly in the last few years, influencing the social, political, and business aspects of life. People leave tweets on social media about an event and simultaneously inquire about other people's experiences and whether they had a positive or negative opinion about that event. Sentiment analysis can be used to obtain this categorization. Product reviews, events, and other topics from all users, comprising unstructured text comments, are gathered and categorized as positive, negative, or neutral using sentiment analysis. Such problems are called polarity classification. This study aims to use Twitter data about OK cuisine reviews obtained from the Amazon website and compare the effectiveness…
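A minimal sketch of a polarity classifier of the kind compared in such studies, assuming scikit-learn; the toy corpus and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["loved it, great taste", "terrible, arrived stale",
         "ok, nothing special", "best snack ever", "would not buy again"]
labels = ["positive", "negative", "neutral", "positive", "negative"]

# TF-IDF turns unstructured comments into features; Naive Bayes is one of
# the supervised learners such comparative studies typically include.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["really tasty and fresh"]))
```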

Publication Date: Wed Feb 01 2023
Journal Name: Journal of Engineering
Checking the Accuracy of Selected Formulae for both Clear Water and Live Bed Bridge Scour

Many bridges worldwide have failed due to severe scouring. The safety of an existing bridge (after construction) therefore depends mainly on continuous monitoring of local scour at the substructure, while the safety of a bridge before construction depends mainly on proper estimation of local scour at the substructure. Estimating the local scour at bridge piers is usually done using the available formulae, almost all of which were derived from laboratory data. It is essential to test the performance of proposed local scour formulae using field data. In this study, the performance of selected bridge scour estimation formulae was validated and sta…
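For concreteness, one widely used pier-scour formula is the HEC-18 (CSU) equation; the sketch below implements it with the correction factors left at 1.0, and it may or may not be among the formulae the paper tested.

```python
def hec18_scour(y1, a, v1, K1=1.0, K2=1.0, K3=1.0, g=9.81):
    """HEC-18 (CSU) local scour depth ys at a pier, SI units.
    y1: approach flow depth [m], a: pier width [m], v1: approach velocity [m/s].
    K1..K3 are correction factors (pier shape, attack angle, bed condition)."""
    Fr = v1 / (g * y1) ** 0.5                    # approach Froude number
    return 2.0 * K1 * K2 * K3 * y1 * (a / y1) ** 0.65 * Fr ** 0.43

print(hec18_scour(y1=3.0, a=1.2, v1=1.5))        # ~1.9 m for these inputs
```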

Publication Date: Tue Aug 31 2021
Journal Name: Iraqi Journal of Science
Development of a Job Applicants E-government System Based on Web Mining Classification Methods

Governmental establishments maintain historical data on job applicants for future analysis, prediction, improvement of benefits and profits, and the development of organizations and institutions. In e-government, a decision can be made about job seekers by mining their information, which leads to beneficial insights. This paper proposes the development and implementation of a system that predicts the job appropriate to an applicant's skills using web content classification algorithms (LogitBoost, J48, PART, Hoeffding Tree, Naive Bayes). Furthermore, the results of the classification algorithms are compared on data sets called "job classification data" sets. Experimental results indicate…
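The algorithms listed are Weka-style classifiers; the sketch below shows the comparison step with scikit-learn analogues standing in, and assumes a prepared feature matrix `X` and label vector `y` from the job data.

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier          # ~ J48
from sklearn.naive_bayes import GaussianNB               # Naive Bayes
from sklearn.ensemble import GradientBoostingClassifier  # ~ LogitBoost

def compare_classifiers(X, y):
    """Rank candidate classifiers by mean 10-fold cross-validated accuracy."""
    models = {"J48-like tree": DecisionTreeClassifier(),
              "Naive Bayes": GaussianNB(),
              "LogitBoost-like": GradientBoostingClassifier()}
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```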

Publication Date: Tue Jan 04 2022
Journal Name: Iraqi Journal of Science
Identifying of User Behavior from Server Log File

Due to the increase of information on the World Wide Web (WWW), the subject of how to extract new and useful knowledge from log files has gained great interest among researchers in data mining and knowledge discovery.
Web mining, a subset of data mining, is divided into three particular areas: web content mining, web structure mining, and web usage mining. This paper is concerned with the server log file, which belongs to the third category (web usage mining). This file is analyzed according to the suggested algorithm to extract the behavior of the user, where the behavior is identified from the complete path taken by a specific user.
Extracting these types of knowledge requires many KDD…
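A minimal sketch of the path-extraction idea, assuming the server log follows the Apache Common Log Format and that a visitor's IP address identifies the user; both are assumptions about the paper's data, not its algorithm.

```python
import re
from collections import defaultdict

# ip, [timestamp], "METHOD url ...", status  -- Common Log Format fields.
LOG = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)[^"]*" (\d{3})')

def user_paths(lines):
    """Group successful requests by IP; each value is that user's ordered
    navigation path through the site."""
    paths = defaultdict(list)
    for line in lines:
        m = LOG.match(line)
        if m and m.group(4) == "200":
            ip, _ts, url, _code = m.groups()
            paths[ip].append(url)
    return paths

sample = ['10.0.0.1 - - [04/Jan/2022:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512']
print(dict(user_paths(sample)))
```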

Publication Date: Sat Sep 30 2023
Journal Name: Iraqi Journal of Science
Performance Improvement of Generative Adversarial Networks to Generate Digital Color Images of Skin Diseases

The main task in creating new digital images of different skin diseases is to increase the resolution of the specific textures and colors of each skin disease. In this paper, the performance of generative adversarial networks is optimized to generate multicolor and histological-color digital images of a variety of skin diseases (melanoma, birthmarks, and basal cell carcinomas). Two generative adversarial network architectures were built using two models: the first is a model for generating new dermatology images through training, and the second is a discrimination model whose main task is to identify the generated digital images as either real or fake. The gray wolf swarm algorithm and the whale swarm alg…
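A minimal sketch of the generator/discriminator pairing described above (PyTorch), not the paper's architecture; the image size and layer dimensions are illustrative, and the swarm-based optimization is omitted.

```python
import torch
import torch.nn as nn

latent = 100  # size of the noise vector fed to the generator

G = nn.Sequential(                      # generator: noise -> 3x64x64 image
    nn.ConvTranspose2d(latent, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 4, 0), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh())

D = nn.Sequential(                      # discriminator: image -> real/fake score
    nn.Conv2d(3, 64, 4, 4, 0), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 4, 0), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, 1, 0), nn.Sigmoid())

z = torch.randn(8, latent, 1, 1)
fake = G(z)                             # (8, 3, 64, 64) generated images
score = D(fake).view(-1)                # (8,) probability each image is real
```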
