AGENT BASED MONITORING FOR INVESTIGATION PROCESS AND MAINTENANCE IMPROVEMENT

Agent technology has widespread usage in most computerized systems. In this paper, agent technology has been applied to monitor wear tests of an aluminium-silicon alloy used in automotive parts and lightly loaded gears. In addition to wear-test monitoring, the effect of porosity on wear resistance has been investigated. To obtain a controlled amount of porosity, the specimens were made by a powder metallurgy process at various pressures (100, 200 and 600 MPa). The aim of this investigation is a proactive step to avoid failures caused by porosity. Dry wear tests were performed by subjecting the specimens to three reciprocating loads (1000, 1500 and 2000 g) for three periods (10, 45 and 90 min). The weight difference after each test was measured immediately to find the weight loss and wear rate of each specimen. The wear test was monitored online by two sensors: a force sensor to control the applied load and to determine the friction force and coefficient of friction, and an acoustic emission sensor to detect crack initiation on the worn surface by converting the emitted ultrasonic waves into electrical signals. A scanning electron microscope was used to examine the worn surfaces. The overall results indicate that the effect of pores depends on their shape, size and concentration.
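As a rough illustration of the monitored quantities, here is a minimal sketch assuming wear rate is expressed as weight loss per unit test time and the coefficient of friction as the ratio of friction force to normal force; the function and variable names are illustrative, not from the paper.

```python
# Illustrative sketch of the wear-test quantities described above.
# Assumed formulas: wear rate = weight loss / test duration,
# coefficient of friction = friction force / normal force.

def wear_rate(weight_before_g: float, weight_after_g: float, duration_min: float) -> float:
    """Weight loss per minute (g/min) for one wear-test specimen."""
    return (weight_before_g - weight_after_g) / duration_min

def coefficient_of_friction(friction_force_n: float, normal_force_n: float) -> float:
    """Ratio of the measured friction force to the applied normal force."""
    return friction_force_n / normal_force_n

# Example: specimen loaded with 1000 g (about 9.81 N) for 10 min
print(wear_rate(25.400, 25.362, 10))        # -> 0.0038 g/min
print(coefficient_of_friction(3.2, 9.81))   # -> about 0.33
```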

Publication Date
Sun Apr 23 2017
Journal Name
International Conference Of Reliable Information And Communication Technology
Classification of Arabic Writer Based on Clustering Techniques

Arabic text categorization for pattern recognition is challenging. We propose, for the first time, a novel holistic method based on clustering for classifying Arabic writers. The categorization is accomplished stage-wise. Firstly, the document images are sectioned into lines, words, and characters. Secondly, structural and statistical features are obtained from the sectioned portions. Thirdly, the F-measure is used to evaluate the performance of the extracted features and their combinations under different linkage methods, distance measures, and numbers of groups. Finally, experiments are conducted on the standard KHATT dataset of Arabic handwritten text, comprising varying samples from 1000 writers. The results in the generatio…
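A minimal sketch of the clustering stage only, assuming the structural and statistical features have already been extracted into a feature matrix; the toy data, writer counts, and the pairwise F-measure formulation are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative sketch: hierarchical clustering of writer feature vectors under
# different linkage methods and distance measures, scored with a pairwise F-measure.
# Feature extraction (line/word/character segmentation) is not shown.
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def pairwise_f_measure(true_labels, cluster_labels):
    """F-measure over sample pairs: a pair is positive when both samples
    share the same writer (truth) or the same cluster (prediction)."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(true_labels)), 2):
        same_true = true_labels[i] == true_labels[j]
        same_pred = cluster_labels[i] == cluster_labels[j]
        tp += same_true and same_pred
        fp += (not same_true) and same_pred
        fn += same_true and (not same_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical feature matrix: one row of structural/statistical features per sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
writers = np.repeat(np.arange(6), 10)   # 6 writers, 10 samples each (toy data)

for metric in ("euclidean", "cityblock", "cosine"):
    for method in ("single", "complete", "average"):
        Z = linkage(pdist(X, metric=metric), method=method)
        clusters = fcluster(Z, t=6, criterion="maxclust")
        print(metric, method, round(pairwise_f_measure(writers, clusters), 3))
```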

Publication Date
Wed Jun 01 2011
Journal Name
Journal Of Al-nahrain University Science
Breaking Knapsack Cipher Using Population Based Incremental Learning

Publication Date
Wed Mar 08 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Genetic-Based Face Retrieval Using Statistical Features

Content-based image retrieval has been actively developed in numerous fields. It provides more effective management and retrieval of images than the keyword-based method, and has therefore become one of the liveliest research areas of the past few years. Given a set of objects, information retrieval seeks those that match a particular description; the objects may be documents, images, videos, or sounds. This paper proposes a method to retrieve a multi-view face from a large face database according to color and texture attributes. Some of the features used for retrieval are color attributes such as the mean, the variance, and the color image's bitmap. In add…
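A minimal sketch of the color-statistics part only, assuming per-channel mean and variance form the feature vector and that ranking uses Euclidean distance; the bitmap feature and the genetic search are not reproduced, and the file names are hypothetical.

```python
# Illustrative sketch: per-channel colour mean/variance as a retrieval feature
# vector, ranked by Euclidean distance to the query. Not the paper's full method.
import numpy as np
from PIL import Image

def color_features(path: str) -> np.ndarray:
    """Mean and variance of each RGB channel -> 6-element feature vector."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    return np.concatenate([pixels.mean(axis=(0, 1)), pixels.var(axis=(0, 1))])

def rank_database(query_path: str, database_paths: list) -> list:
    """Return database images ordered by feature-space distance to the query."""
    q = color_features(query_path)
    dists = {p: np.linalg.norm(color_features(p) - q) for p in database_paths}
    return sorted(dists, key=dists.get)

# print(rank_database("query_face.jpg", ["face_001.jpg", "face_002.jpg"]))
```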

Publication Date
Sun Nov 01 2020
Journal Name
Journal Of Physics: Conference Series
Improve topic modeling algorithms based on Twitter hashtags
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and messy, which makes it difficult to extract topics from them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short text content like Twitter. Luckily, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve the topics learned…
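The abstract is truncated, so the exact scheme is not given here; a common way to exploit hashtags, shown below as an assumed illustration, is hashtag pooling: tweets sharing a hashtag are concatenated into one pseudo-document before fitting LDA, so the model sees longer documents than individual tweets. The example tweets are invented.

```python
# Illustrative sketch of hashtag pooling before LDA (not necessarily the
# paper's exact method). Example tweets are invented.
import re
from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "new phone battery lasts two days #tech",
    "great goal in the last minute #football",
    "folding screens are the future #tech",
    "transfer window rumours again #football",
]

# Pool tweets by hashtag into pseudo-documents.
pools = defaultdict(list)
for tweet in tweets:
    for tag in re.findall(r"#\w+", tweet):
        pools[tag].append(tweet)
pseudo_docs = [" ".join(group) for group in pools.values()]

counts = CountVectorizer(stop_words="english").fit_transform(pseudo_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))   # topic distribution per hashtag pool
```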
Publication Date
Thu Jan 01 2015
Journal Name
Journal Of Engineering
GNSS Baseline Configuration Based on First Order Design

The quality of Global Navigation Satellite System (GNSS) networks is considerably influenced by the configuration of the observed baselines. This study aims to find an optimal configuration for GNSS baselines, in terms of their number and distribution, that improves the quality criteria of GNSS networks. The first-order design (FOD) problem was applied in this research to optimize the GNSS network baseline configuration, using the sequential adjustment method to solve its objective functions.

FOD for optimum precision (FOD-p) is the proposed model, based on the A-optimality and E-optimality design criteria. These design criteria were selected as objective functions of precision, whic…
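For reference, A-optimality minimizes the trace of the parameter covariance matrix and E-optimality minimizes its largest eigenvalue. A minimal numpy sketch evaluating both criteria for a candidate design matrix follows; the matrix and weights are toy values, not taken from the paper.

```python
# Illustrative sketch: A- and E-optimality of a candidate baseline configuration.
# A is the design matrix of the observed baselines; the numbers are toy values.
import numpy as np

def optimality_criteria(A, weights=None):
    """Return (A-optimality, E-optimality) of Q = (A^T P A)^-1,
    where P is the observation weight matrix (identity if omitted)."""
    P = np.diag(weights) if weights is not None else np.eye(A.shape[0])
    Q = np.linalg.inv(A.T @ P @ A)
    a_opt = np.trace(Q)                     # A-optimality: minimize trace(Q)
    e_opt = np.linalg.eigvalsh(Q).max()     # E-optimality: minimize max eigenvalue
    return a_opt, e_opt

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])   # 3 baselines, 2 unknowns
print(optimality_criteria(A))   # -> (1.333..., 1.0)
```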

Publication Date
Tue Feb 01 2022
Journal Name
Int. J. Nonlinear Anal. Appl.
Computer-based plagiarism detection techniques: A comparative study

Plagiarism is becoming more of a problem in academia. It is made worse by the ease with which a wide range of resources can be found on the internet, and by the ease with which they can be copied and pasted. It is academic theft, since the perpetrator has "taken" and presented the work of others as his or her own. Manual detection of plagiarism by a human being is difficult, imprecise, and time-consuming, because it is hard for anyone to compare a piece of work against all existing data. Plagiarism is a big problem in higher education, and it can happen on any topic. Plagiarism detection has been studied in many scientific articles, and methods for recognition have been created utilizing plagiarism analysis, authorship identification, and…
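As a minimal illustration of the automated comparison that such tools build on, the sketch below measures document similarity with TF-IDF vectors and cosine similarity; this is a generic baseline, not a specific technique from the review, and the example sentences are invented.

```python
# Minimal sketch of automated text comparison via TF-IDF cosine similarity.
# A generic baseline for detecting near-duplicate text, not a method from the review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Plagiarism is presenting the work of others as one's own.",
    "Presenting someone else's work as your own is plagiarism.",
    "Satellite networks are influenced by baseline configuration.",
]
tfidf = TfidfVectorizer().fit_transform(documents)
print(cosine_similarity(tfidf[0], tfidf[1:]))   # similarity of doc 0 to the others
```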

Publication Date
Mon Jan 01 2018
Journal Name
International Journal Of Electronic Security And Digital Forensics
LSB based audio steganography preserving minimum sample SNR

Publication Date
Fri Sep 23 2022
Journal Name
Specialusis Ugdymas
Text Cryptography based on Arabic Words Characters Number

Cryptography is a method used to mask text with an encryption method so that only the authorized user can decrypt and read the message. An intruder may try to attack the communication channel in many ways, such as impersonation, repudiation, denial of service, modification of data, threatening confidentiality, and breaking the availability of services. The high volume of electronic communication between people requires that transactions remain confidential, and cryptographic methods give the best solution to this problem. This paper proposes a new cryptography method based on Arabic words; the method consists of two steps, where the first step is binary encoding generation used t…
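The abstract is truncated, so the paper's actual two-step scheme is not reproduced here. The sketch below is only a hypothetical illustration of the general idea suggested by the title (character counts of Arabic words carrying the hidden bits); the word lists and the even/odd mapping are invented.

```python
# Hypothetical illustration only: carrying a bit stream in a sequence of Arabic
# words via each word's character count (even length -> 0, odd length -> 1).
# Not the paper's actual method; carrier words are invented.
import itertools

even_words = ["كتاب", "سماء", "طالب"]   # 4-character words -> encode bit 0
odd_words  = ["قلم", "شمس", "نهر"]       # 3-character words -> encode bit 1

def encode_bits(bits: str) -> str:
    """Map each bit to a carrier word whose character count has that parity."""
    even_cycle, odd_cycle = itertools.cycle(even_words), itertools.cycle(odd_words)
    return " ".join(next(odd_cycle) if b == "1" else next(even_cycle) for b in bits)

def decode_bits(text: str) -> str:
    """Recover the bit stream from the character count of each word."""
    return "".join(str(len(word) % 2) for word in text.split())

cipher = encode_bits("1011")
print(cipher, "->", decode_bits(cipher))   # -> 1011
```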

Publication Date
Sat Oct 01 2022
Journal Name
Therapeutic Delivery
Particles-based Medicated Wound Dressings: A Comprehensive Review

Publication Date
Mon Jan 01 2024
Journal Name
Journal Of Engineering
Face-based Gender Classification Using Deep Learning Model

Gender classification is a critical task in computer vision, with substantial importance in various domains, including surveillance, marketing, and human-computer interaction. The face gender classification model proposed in this work consists of three main phases. The first phase applies the Viola-Jones algorithm to detect face images, which involves four steps: 1) Haar-like features, 2) the integral image, 3) AdaBoost learning, and 4) a cascade classifier. In the second phase, four pre-processing operations are employed: cropping, resizing, converting the image from the RGB color space to the LAB color space, and enhancing the images using histogram equalization (HE) and CLAHE. The final phase utilizes transfer lea…
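A minimal sketch of the detection and pre-processing phases only, using OpenCV's bundled Haar cascade for Viola-Jones detection; the image file name, target size, and CLAHE parameters are assumptions, and the transfer-learning classifier itself is not shown.

```python
# Illustrative sketch of the detection and pre-processing stages described above.
# Assumptions: file name, 224x224 target size, CLAHE parameters.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(image_path: str, size=(224, 224)):
    """Detect the first face, crop, resize, convert to LAB, apply CLAHE to L."""
    bgr = cv2.imread(image_path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(bgr[y:y + h, x:x + w], size)    # crop and resize
    lab = cv2.cvtColor(face, cv2.COLOR_BGR2LAB)        # colour space -> LAB
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.merge((l, a, b))                        # enhanced LAB face

# face = preprocess_face("person.jpg")   # feed this to the gender classifier
```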
