Human Face Recognition Based on Local Ternary Pattern and Singular Value Decomposition

There are various human biometrics in use nowadays, and the face is one of the most important of them. Many techniques have been suggested for face recognition, but they still face a variety of challenges when recognizing faces in images captured in uncontrolled environments and in real-life applications. Some of these challenges are pose variation, occlusion, facial expression, illumination, bad lighting, and image quality. New techniques are being developed continuously. In this paper, singular value decomposition (SVD) is used to extract the feature matrix for face recognition and classification. The input color image is converted into a grayscale image and then transformed into a local ternary pattern (LTP) before splitting the image into sixteen main blocks. Each of these sixteen blocks is further divided into thirty sub-blocks. For each sub-block, the SVD transformation is applied and the norm of the diagonal matrix is calculated, which is used to build the 16x30 feature matrix. The sub-block features of two images (thirty elements per main block) are compared using the Euclidean distance. The minimum value for each main block is selected as one feature input to the neural network. Classification is implemented by a backpropagation neural network, where the 16-element feature vector is used as input. The performance of the current proposal reached up to 97% on the FEI (Brazilian) database. Moreover, the performance of this study is promising when compared with recent state-of-the-art approaches, and it addresses some of the challenges such as illumination and facial expression.
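The block-wise feature extraction above can be summarised in a short sketch. This is a minimal illustration only, assuming a 4x4 grid of main blocks, a 5x6 grid of sub-blocks, a single upper-LTP code with a fixed threshold, and that the "norm of the diagonal matrix" means the Euclidean norm of the singular values; names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def local_ternary_pattern(gray, t=5):
    """Simplified upper-LTP code: set a bit where a 3x3 neighbour exceeds
    the centre pixel by at least the threshold t."""
    h, w = gray.shape
    padded = np.pad(gray.astype(int), 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h, w), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code += (neigh >= padded[1:1 + h, 1:1 + w] + t).astype(int) << bit
    return code.astype(np.uint8)

def svd_norm_features(ltp_img):
    """Split the LTP image into 4x4 = 16 main blocks and each main block
    into 5x6 = 30 sub-blocks; return a 16x30 matrix whose entries are the
    Euclidean norms of each sub-block's singular values."""
    feats = np.zeros((16, 30))
    main_blocks = [b for row in np.array_split(ltp_img, 4, axis=0)
                   for b in np.array_split(row, 4, axis=1)]
    for i, block in enumerate(main_blocks):
        sub_blocks = [s for row in np.array_split(block, 5, axis=0)
                      for s in np.array_split(row, 6, axis=1)]
        for j, sub in enumerate(sub_blocks):
            singular_values = np.linalg.svd(sub.astype(float), compute_uv=False)
            feats[i, j] = np.linalg.norm(singular_values)
    return feats

def block_distance_features(f1, f2):
    """One reading of the matching step: per main block, take the minimum
    Euclidean distance between corresponding 30-element sub-block features,
    giving the 16 inputs for the backpropagation network."""
    return np.abs(f1 - f2).min(axis=1)
```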

Publication Date
Sun Apr 28 2024
Journal Name
Journal Of Advances In Information Technology
Enhancement of Recommendation Engine Technique for Bug System Fixes

This study aims to develop a recommendation engine methodology that enhances the model's effectiveness and efficiency. Such a model is commonly used to assign, or propose, a limited number of developers with the required skills and expertise to address and resolve a bug report. Software engineers are responsible for managing the collections within bug repositories when addressing specific defects. Identifying the optimal allocation of personnel to activities is challenging when dealing with software defects, which requires a substantial workforce of developers. The purpose of this analysis is to examine new scientific methodologies in order to enhance comprehension of the results. Additionally, developer priorities were discussed, especially…
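As one possible reading of the recommendation step described above, the sketch below ranks developers by TF-IDF cosine similarity between a new bug report and the reports each developer previously resolved. The data layout, function name, and similarity choice are assumptions for illustration, not the authors' method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_developers(new_report, history, top_k=3):
    """history: list of (developer, resolved_report_text) pairs.
    Returns the top_k developers whose past reports are most similar
    to the new bug report under TF-IDF cosine similarity."""
    devs, texts = zip(*history)
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(list(texts) + [new_report])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    # Keep each developer's best similarity score, then rank.
    best = {}
    for dev, score in zip(devs, sims):
        best[dev] = max(best.get(dev, 0.0), score)
    return sorted(best.items(), key=lambda p: p[1], reverse=True)[:top_k]
```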

Publication Date
Sat Jul 01 2023
Journal Name
Journal Of Accounting And Financial Studies ( Jafs )
Reducing activity costs under Resource Consumption Accounting: An Applied Research in the Midland Refineries Company - Al-Daura Refinery

The research aims to identify the importance of applying resource consumption accounting in the Iraqi industrial environment in general, and in the oil sector in particular, and its role in reducing the costs of activities by excluding and isolating idle-capacity costs. The research problem is that the company faces deficiencies and challenges in applying strategic cost tools. The research was based on the hypothesis that applying resource consumption accounting will provide appropriate information for the company through the proper allocation of costs according to resource consumption accounting, and thereby reduce the costs of activities. To prove the hypothesis of the research, the Light Derivatives Authority - Al-Daura Refinery…

Publication Date
Fri Feb 01 2019
Journal Name
Journal Of Economics And Administrative Sciences
A Standard Study of the Role of the Tourism Sector in Achieving Economic Growth in Tunisia for the Period (1995-2017)

This study examines the relationship between the number of tourists coming to Tunisia and GDP during the period 1995-2017, using cointegration methodology, causality testing, and an error correction model. The research found that the time series of the logarithm of the number of tourists coming to Tunisia and the logarithm of GDP are non-stationary, but after taking first differences these series become stationary; thus, the time series are integrated of order one. Using the Johansen method, we found evidence of a cointegration relationship between the logarithm of the number of tourists coming to Tunisia and the logarithm of GDP in Tunisia, and there is a causal relationship in one direction…
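A minimal sketch of the testing sequence described above (unit-root tests on the logs, Johansen cointegration test, Granger causality), using statsmodels; the column names, deterministic term, and lag length are illustrative assumptions rather than the study's exact specification.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def cointegration_workflow(df, lags=1):
    """df is assumed to hold annual 'tourists' and 'gdp' columns, 1995-2017."""
    logs = np.log(df[["tourists", "gdp"]])

    # 1. ADF unit-root tests on the log levels and their first differences.
    for col in logs:
        p_level = adfuller(logs[col])[1]
        p_diff = adfuller(logs[col].diff().dropna())[1]
        print(f"{col}: ADF p-value, level = {p_level:.3f}, first diff = {p_diff:.3f}")

    # 2. Johansen test for a cointegrating relationship (constant term).
    joh = coint_johansen(logs, det_order=0, k_ar_diff=lags)
    print("trace statistics:", joh.lr1)
    print("5% critical values:", joh.cvt[:, 1])

    # 3. Granger causality in both directions.
    grangercausalitytests(logs[["gdp", "tourists"]], maxlag=lags)   # tourists -> gdp
    grangercausalitytests(logs[["tourists", "gdp"]], maxlag=lags)   # gdp -> tourists
```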

Publication Date
Thu Feb 28 2019
Journal Name
Journal Of Engineering
Modified W-LEACH Protocol in Wireless Sensor Network

In this paper, a Modified Weighted Low Energy Adaptive Clustering Hierarchy (MW-LEACH) protocol is implemented to improve the Quality of Service (QoS) in a Wireless Sensor Network (WSN) with a mobile sink node. The Quality of Service is measured in terms of Throughput Ratio (TR), Packet Loss Ratio (PLR), and Energy Consumption (EC). The protocol is implemented in a Python-based simulation. Simulation results showed that the proposed protocol provides better Quality of Service than the Weighted Low Energy Adaptive Clustering Hierarchy (W-LEACH) protocol by 63%.
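A minimal sketch of the QoS metrics reported above, together with a weighted cluster-head selection step in the spirit of W-LEACH; the weighting by residual energy and distance to the mobile sink is an assumed illustration, not the exact MW-LEACH rule.

```python
import math

def qos_metrics(sent, received, energy_start, energy_end):
    """Throughput Ratio (TR), Packet Loss Ratio (PLR) and Energy
    Consumption (EC) computed from packet counts and node energy levels."""
    tr = received / sent if sent else 0.0
    plr = (sent - received) / sent if sent else 0.0
    ec = sum(energy_start) - sum(energy_end)
    return tr, plr, ec

def select_cluster_heads(nodes, sink_pos, n_heads):
    """nodes: list of dicts with 'pos' (x, y) and residual 'energy'.
    Illustrative weighting: favour nodes with high residual energy that
    are close to the current position of the mobile sink."""
    def weight(node):
        return node["energy"] / (1.0 + math.dist(node["pos"], sink_pos))
    return sorted(nodes, key=weight, reverse=True)[:n_heads]
```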

  

Publication Date
Tue Jan 18 2022
Journal Name
International Journal Of Interactive Mobile Technologies (iJIM)
Performance Analysis of OLSR Protocol in Mobile Ad Hoc Networks

Optimized Link State Routing Protocol (OLSR) is an efficient routing protocol used for various ad hoc networks. OLSR employs the Multipoint Relay (MPR) technique to reduce network overhead traffic. A mobility model's main goal is to realistically simulate the movement behavior of actual users. However, high mobility and the mobility model are major design issues for an efficient and effective routing protocol for real Mobile Ad hoc Networks (MANETs). Therefore, this paper aims to analyze the performance of the OLSR protocol under various random and group mobility models. Two simulation scenarios were conducted over four mobility models, specifically the Random Waypoint model (RWP), the Random Direction model (RD), the Nomadic Community model…
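A minimal sketch of the Random Waypoint mobility model used in the comparison, generating one node's positions over discrete time steps; the area size, speed range, and pause time are illustrative parameters, not those of the paper's scenarios.

```python
import random

def random_waypoint(steps, area=(1000.0, 1000.0), speed=(1.0, 20.0), pause=2):
    """Yield (x, y) positions for one node: pick a random destination and
    speed, move toward it each step, pause on arrival, then repeat."""
    x, y = random.uniform(0, area[0]), random.uniform(0, area[1])
    while steps > 0:
        dest = (random.uniform(0, area[0]), random.uniform(0, area[1]))
        v = random.uniform(*speed)
        while steps > 0:
            dx, dy = dest[0] - x, dest[1] - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= v:                      # destination reached this step
                x, y = dest
                for _ in range(min(pause, steps)):
                    yield x, y
                    steps -= 1
                break
            x, y = x + v * dx / dist, y + v * dy / dist
            yield x, y
            steps -= 1
```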

Publication Date
Sun Oct 28 2018
Journal Name
Journal Of Planner And Development
The Impact of Landmarks on Urban Space: Landmarks (Buildings) as a Nucleus for Organizing Urban Space

Publication Date
Fri Aug 23 2013
Journal Name
International Journal Of Computer Applications
Lossless Compression of Medical Images using Multiresolution Polynomial Approximation Model

In this paper, a simple, fast lossless image compression method is introduced for compressing medical images. It integrates multiresolution coding with a linear-based polynomial approximation to decompose the image signal, followed by efficient coding. The test results indicate that the suggested method can lead to promising performance due to its flexibility in overcoming the restrictions on model order length and the extra overhead information required, compared to traditional predictive coding techniques.
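A minimal sketch of the linear-predictive part of such a lossless scheme: each pixel is predicted from already-coded neighbours and only the integer residuals are kept for entropy coding. The three-neighbour predictor (lossless-JPEG predictor 4) is an illustrative choice, not the paper's exact polynomial model.

```python
import numpy as np

def predict_residuals(img):
    """Lossless linear prediction: predict each pixel as
    left + top - top_left and return the integer residuals
    that an entropy coder would store."""
    x = img.astype(np.int32)
    pred = np.zeros_like(x)
    pred[1:, 1:] = x[1:, :-1] + x[:-1, 1:] - x[:-1, :-1]
    pred[0, 1:] = x[0, :-1]          # first row: predict from the left
    pred[1:, 0] = x[:-1, 0]          # first column: predict from above
    return x - pred                  # residual[0, 0] is the raw pixel

def reconstruct(residuals):
    """Invert the prediction pixel by pixel in raster order (exact)."""
    r = residuals.astype(np.int32)
    x = np.zeros_like(r)
    h, w = r.shape
    for i in range(h):
        for j in range(w):
            if i == 0 and j == 0:
                p = 0
            elif i == 0:
                p = x[i, j - 1]
            elif j == 0:
                p = x[i - 1, j]
            else:
                p = x[i, j - 1] + x[i - 1, j] - x[i - 1, j - 1]
            x[i, j] = r[i, j] + p
    return x
```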

Publication Date
Wed Sep 30 2020
Journal Name
Iraqi Journal Of Chemical And Petroleum Engineering
Digital Rock Samples Porosity Analysis by OTSU Thresholding Technique Using MATLAB

Porosity plays an essential role in petroleum engineering. It controls fluid storage in aquifers, while the connectivity of the pore structure controls fluid flow through reservoir formations. To quantify the relationships between porosity, storage, transport, and rock properties, however, the pore structure must be measured and quantitatively described. Porosity estimation of a digital image using image processing is essential for reservoir rock analysis, since it concisely describes the sample's 2D porosity. The standard procedure uses binarization, in which a pixel-value threshold converts color and grayscale images to binary images. The idea is to associate the blue regions entirely with pores and transform them to white…
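A minimal sketch of the Otsu-threshold porosity estimate described above, written in Python with scikit-image rather than MATLAB; it assumes pores appear darker than grains in the grayscale image, which may need to be inverted for a given sample.

```python
from skimage import io, color
from skimage.filters import threshold_otsu

def porosity_from_image(path, pores_are_dark=True):
    """Binarize a rock-sample image with Otsu's threshold and return
    porosity as the fraction of pixels classified as pore space."""
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img
    t = threshold_otsu(gray)
    pores = gray < t if pores_are_dark else gray >= t
    return pores.mean()              # pore pixels / total pixels

# Example: print(f"porosity = {porosity_from_image('sample.png'):.3f}")
```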

Publication Date
Sat Apr 15 2023
Journal Name
Journal Of Robotics
A New Proposed Hybrid Learning Approach with Feature Extraction for Image Classification

Image classification is the process of finding common features in images from various classes and applying them to categorize and label the images. The main problems in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data, which present the key obstacles. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training them with machine learning classifiers. This study proposes a new "hybrid learning" approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven classifiers…
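A minimal sketch of the hybrid pipeline outlined above: convolutional features from a pretrained VGG-16 (without its classifier head) are fed to a classical machine-learning classifier. The single SVM here stands in for the paper's set of classifiers, and the input shape and split are illustrative assumptions.

```python
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def vgg16_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in 0-255.
    Returns globally average-pooled convolutional features from VGG-16."""
    base = VGG16(weights="imagenet", include_top=False, pooling="avg")
    return base.predict(preprocess_input(images.copy()), verbose=0)

def hybrid_classify(images, labels):
    """Extract deep features, then train and score a classical classifier."""
    feats = vgg16_features(images)
    x_tr, x_te, y_tr, y_te = train_test_split(
        feats, labels, test_size=0.2, random_state=0, stratify=labels)
    clf = SVC(kernel="rbf").fit(x_tr, y_tr)  # stand-in for the paper's classifiers
    return clf.score(x_te, y_te)
```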

Publication Date
Wed Apr 02 2014
Journal Name
Journal Of Theoretical And Applied Information Technology
BRAIN TUMOR DETECTION THROUGH MR IMAGES: A REVIEW OF LITERATURE

Today's medical imaging research faces the challenge of detecting brain tumors through Magnetic Resonance Imaging (MRI). MRI is normally used by experts to produce images of the soft tissue of the human body, and it is used for the analysis of human organs as an alternative to surgery. Brain tumor detection requires image segmentation: for this purpose, the brain is partitioned into two distinct regions. This is considered one of the most important but difficult parts of the process of detecting a brain tumor. Hence, it is highly necessary that segmentation of the MRI images be done accurately before the computer is asked to make the exact diagnosis. Earlier, a variety of algorithms were developed for segmentation of MRI images using…
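As a simple illustration of the two-region partition the review refers to, the sketch below clusters a 2D MRI slice into two intensity groups with k-means; it is a generic example, not any specific method surveyed in the paper.

```python
from sklearn.cluster import KMeans

def two_region_segmentation(mri_slice):
    """Partition a 2D MRI slice into two intensity clusters with k-means
    and return a binary mask where 1 marks the brighter region."""
    pixels = mri_slice.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    # Ensure label 1 corresponds to the brighter cluster.
    if pixels[labels == 1].mean() < pixels[labels == 0].mean():
        labels = 1 - labels
    return labels.reshape(mri_slice.shape)
```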
