A new modified differential evolution algorithm scheme-based linear frequency modulation radar signal de-noising

This study investigates the development of a new optimization technique, based on the differential evolution (DE) algorithm, for linear frequency modulation (LFM) radar signal de-noising. Because the standard DE algorithm is a fixed-length optimizer, it is not suitable for signal de-noising problems that call for variable-length solutions. A modified crossover scheme, called rand-length crossover, was designed to fit the proposed variable-length DE, and the resulting algorithm is referred to as the random variable-length crossover differential evolution (rvlx-DE) algorithm. The measurement results demonstrate a highly efficient target-detection capability in terms of frequency response and peak forming isolated from noise distortion, and the modified method showed significant performance improvements over traditional de-noising techniques.
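For orientation, the fixed-length DE/rand/1/bin baseline that rvlx-DE modifies can be sketched as follows (a minimal illustration on a toy sphere objective; the variable-length rand-length crossover itself is not reproduced here, and all parameter values are illustrative):

```python
import random

def de_rand_1_bin(fitness, dim, bounds, pop_size=20, f=0.5, cr=0.9, gens=100):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    binomially cross over, and keep the trial only if it is no worse."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + f * (pop[b][k] - pop[c][k]) for k in range(dim)]
            j_rand = random.randrange(dim)  # guarantees one mutant gene survives
            trial = [mutant[k] if (random.random() < cr or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v in trial]  # clamp to bounds
            t_cost = fitness(trial)
            if t_cost <= cost[i]:
                pop[i], cost[i] = trial, t_cost
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

random.seed(0)
sol, val = de_rand_1_bin(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))
```

A variable-length variant would let `dim` differ per individual, which is what the proposed rand-length crossover is designed to handle.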

Scopus Clarivate Crossref
Publication Date
Tue Aug 27 2024
Journal Name
TEM Journal
Preparing the Electrical Signal Data of the Heart by Performing Segmentation Based on the Neural Network U-Net

Automated extraction of essential data from electrocardiography (ECG) recordings has long been a significant research topic. Digital processing focuses on measuring the fiducial points that mark the beginning and end of the P, QRS, and T waves, based on their waveform properties. Unavoidable noise during ECG data collection and inherent physiological differences among individuals make it challenging to identify these reference points accurately, resulting in suboptimal performance. This is done through several primary stages that rely on preliminary processing of the ECG electrical signal through a set of steps (preparing the raw data and converting them into files tha…
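As a rough illustration of the kind of preparation step described above (the function name, window sizes, and normalisation choice are assumptions, not the paper's pipeline), a raw 1-D ECG trace can be sliced into per-window normalised segments ready for a segmentation network such as U-Net:

```python
def prepare_ecg_windows(signal, window=512, step=256):
    """Slice a raw ECG trace into overlapping windows and z-normalise each,
    so every segment has zero mean and unit variance before segmentation."""
    windows = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        mean = sum(seg) / window
        var = sum((v - mean) ** 2 for v in seg) / window
        std = var ** 0.5 or 1.0          # guard against a flat segment
        windows.append([(v - mean) / std for v in seg])
    return windows
```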
Scopus Clarivate Crossref
Publication Date
Fri Jul 14 2023
Journal Name
International Journal of Information Technology & Decision Making
A Decision Modeling Approach for Data Acquisition Systems of the Vehicle Industry Based on Interval-Valued Linear Diophantine Fuzzy Set

Modeling data acquisition systems (DASs) can support the vehicle industry in the development and design of sophisticated driver-assistance systems. Modeling DASs on the basis of multiple criteria is considered a multicriteria decision-making (MCDM) problem. Although literature reviews have provided models for DASs, the issue of imprecise, unclear, and ambiguous information remains unresolved. Compared with existing MCDM methods, the fuzzy decision by opinion score method II (FDOSM II) and fuzzy weighted with zero inconsistency II (FWZIC II) have demonstrated robustness for modeling DASs. However, these methods are implemented in an intuitionistic fuzzy set environment that restricts the ability of experts to provide mem…
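For context, the linear Diophantine fuzzy constraint that such environments relax can be sketched as a validity check (based on the common LDFS definition in the literature, extended with interval bounds; this is an assumption, not necessarily the paper's exact formulation):

```python
def is_valid_ivldfn(mu, nu, alpha, beta):
    """Check an interval-valued linear Diophantine fuzzy number:
    membership `mu`, non-membership `nu`, and reference parameters
    `alpha`, `beta` are intervals (lo, hi) in [0, 1]; the upper bounds
    must satisfy alpha_u + beta_u <= 1 and alpha_u*mu_u + beta_u*nu_u <= 1."""
    ok_unit = all(0.0 <= lo <= hi <= 1.0 for lo, hi in (mu, nu, alpha, beta))
    ok_ref = alpha[1] + beta[1] <= 1.0
    ok_pair = alpha[1] * mu[1] + beta[1] * nu[1] <= 1.0
    return ok_unit and ok_ref and ok_pair
```

The reference parameters are what distinguish LDFS from intuitionistic fuzzy sets: they let experts grade membership and non-membership independently of the usual mu + nu <= 1 restriction.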
Scopus (3), Crossref (9), Clarivate
Publication Date
Tue Jan 01 2013
Journal Name
Brain Research Bulletin
A note on the probability distribution function of the surface electromyogram signal

Scopus (86), Crossref (89), Clarivate
Publication Date
Tue Apr 02 2019
Journal Name
Artificial Intelligence Research
A three-stage learning algorithm for deep multilayer perceptron with effective weight initialisation based on sparse auto-encoder

A three-stage learning algorithm for a deep multilayer perceptron (DMLP), with effective weight initialisation based on a sparse auto-encoder, is proposed in this paper; it aims to overcome the difficulty of training deep neural networks with limited training data in a high-dimensional feature space. At the first stage, unsupervised learning with a sparse auto-encoder obtains the initial weights of the DMLP's feature-extraction layers. At the second stage, error back-propagation trains the DMLP while the weights obtained at the first stage are kept fixed for the feature-extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Network structures an…
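The three-stage schedule can be sketched on a toy regression task (a minimal NumPy illustration with one hidden layer; the data, layer sizes, learning rate, and L1 sparsity penalty are all illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 4-D inputs, scalar regression target (illustrative only).
X = rng.normal(size=(200, 4))
y = (X[:, :1] + X[:, 1:2]) * 0.5

d, h = X.shape[1], 3
W1 = rng.normal(scale=0.1, size=(d, h))  # encoder / feature-extraction layer
Wd = rng.normal(scale=0.1, size=(h, d))  # decoder, used only in stage 1
W2 = rng.normal(scale=0.1, size=(h, 1))  # output layer
lr, lam = 0.1, 1e-3                      # lam: simple L1 sparsity weight

# Stage 1: unsupervised pre-training of W1 with a sparse auto-encoder.
for _ in range(300):
    A = sig(X @ W1)                      # hidden activations
    E = A @ Wd - X                       # reconstruction error
    dA = E @ Wd.T + lam * np.sign(A)     # reconstruction + sparsity gradient
    W1 -= lr * (X.T @ (dA * A * (1 - A))) / len(X)
    Wd -= lr * (A.T @ E) / len(X)

# Stage 2: supervised training of the output layer only (W1 frozen).
A = sig(X @ W1)
for _ in range(300):
    E = A @ W2 - y
    W2 -= lr * (A.T @ E) / len(X)

# Stage 3: fine-tune all weights jointly by back-propagation.
for _ in range(300):
    A = sig(X @ W1)
    E = A @ W2 - y
    gW2 = (A.T @ E) / len(X)
    W1 -= lr * (X.T @ ((E @ W2.T) * A * (1 - A))) / len(X)
    W2 -= lr * gW2

mse = float(np.mean((sig(X @ W1) @ W2 - y) ** 2))
```

Freezing the pre-trained features in stage 2 keeps the limited labelled data from destroying the unsupervised initialisation before the final joint refinement.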
Crossref (1)
Publication Date
Wed Dec 13 2017
Journal Name
Al-Khwarizmi Engineering Journal
Design of a Kinematic Neural Controller for Mobile Robots based on Enhanced Hybrid Firefly-Artificial Bee Colony Algorithm

The paper presents the design of a control structure that integrates a kinematic neural controller for trajectory tracking of a nonholonomic differential-drive two-wheeled mobile robot, and then proposes a kinematic neural controller to direct a National Instruments mobile robot (NI Mobile Robot). The controller makes the actual velocity of the wheeled mobile robot track the required velocity by guaranteeing that the trajectory-tracking mean square error converges to a minimum. The proposed tracking control system consists of two layers: the first layer is a multi-layer perceptron neural network that controls the mobile robot to track the required path; the second layer is an optimization layer, which is impleme…
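The firefly component of the optimization layer can be sketched in its canonical form (a generic firefly update on a toy objective; the paper's enhanced hybrid with the artificial bee colony is not reproduced, and the parameter values are illustrative):

```python
import math
import random

def firefly_step(pop, fitness, beta0=1.0, gamma=1.0, alpha=0.01):
    """One canonical firefly iteration: each firefly moves toward every
    brighter (lower-cost) firefly with distance-attenuated attraction,
    plus a small random perturbation."""
    cost = [fitness(x) for x in pop]
    new = [list(x) for x in pop]
    for i in range(len(pop)):
        for j in range(len(pop)):
            if cost[j] < cost[i]:  # j is brighter: attract i toward it
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)
                new[i] = [xi + beta * (xj - xi) + alpha * (random.random() - 0.5)
                          for xi, xj in zip(new[i], pop[j])]
    return new

random.seed(1)
pop = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(15)]
sphere = lambda x: sum(v * v for v in x)
best_before = min(sphere(x) for x in pop)
for _ in range(50):
    pop = firefly_step(pop, sphere)
best_after = min(sphere(x) for x in pop)
```

In the controller described above, `fitness` would score candidate controller parameters by the trajectory-tracking error rather than a toy sphere function.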
Publication Date
Fri Apr 30 2010
Journal Name
Journal of Applied Computer Science & Mathematics
Image Hiding Using Magnitude Modulation on the DCT Coefficients

In this paper, we introduce a DCT-based steganographic method for grayscale images. The embedding approach is designed to reach an efficient trade-off among three conflicting goals: maximizing the amount of hidden message, minimizing distortion between the cover image and stego-image, and maximizing the robustness of embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency region of the DCT domain using a magnitude modulation technique. The magnitude modulation is applied using uniform quantization with magnitude Adder/Subtractor modules. The conducted test results indicated that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the steg…
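The magnitude-modulation idea can be illustrated with a simplified parity-quantisation embed on a single DCT coefficient (the step size `q` is a hypothetical parameter, and this stand-in does not reproduce the paper's Adder/Subtractor modules exactly):

```python
def embed_bit(coeff, bit, q=8.0):
    """Embed one bit in a DCT coefficient by quantising its magnitude to a
    multiple of q whose parity encodes the bit; the sign is preserved."""
    sign = -1.0 if coeff < 0 else 1.0
    m = round(abs(coeff) / q)
    if m % 2 != bit:
        m += 1                      # shift to the nearest bin of correct parity
    return sign * m * q

def extract_bit(coeff, q=8.0):
    """Recover the bit from the parity of the quantised magnitude."""
    return int(round(abs(coeff) / q)) % 2
```

In a full pipeline, the image is split into 8x8 blocks, each block is DCT-transformed, bits are embedded into middle/high-frequency coefficients, and the inverse DCT produces the stego-image.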
Publication Date
Thu Oct 01 2020
Journal Name
Bulletin of Electrical Engineering and Informatics
Lightweight hamming product code based multiple bit error correction coding scheme using shared resources for on chip interconnects

In this paper, we present a multiple-bit error correction coding scheme based on an extended Hamming product code combined with type-II HARQ, using shared resources, for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared with the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption, respectively, with only a small increase in decoder delay compared with the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and up to 58% of total power consumption compared to the other err…
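The building block of a Hamming product code can be sketched with the classic Hamming(7,4) code (a product scheme applies such a code along both the rows and the columns of a data block; the extended variant adds an overall parity bit, omitted here for brevity):

```python
def hamming74_encode(d):
    """Hamming(7,4): encode data bits [d1, d2, d3, d4] into a 7-bit
    codeword [p1, p2, d1, p3, d2, d3, d4] with even-parity checks."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Compute the syndrome and flip the single erroneous bit (if any);
    the syndrome is the 1-based position of the error."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return c
```

A product code arranges data in a matrix, encodes every row and every column this way, and can thereby correct multiple scattered bit errors that a single code word could not.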
Scopus (5), Crossref (2)
Publication Date
Sun Nov 19 2017
Journal Name
Journal of Al-Qadisiyah for Computer Science and Mathematics
Image Compression based on Fixed Predictor Multiresolution Thresholding of Linear Polynomial Nearlossless Techniques

Image compression is a serious issue in computer storage and transmission; it makes efficient use of the redundancy embedded within an image itself, and it may also exploit the limits of human vision or perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques; the second stage incorporates the near-lossless com…
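The near-lossless idea can be sketched on a single pixel row with an order-1 (previous-pixel) predictor (a deliberate simplification: the paper's polynomial model and multiresolution thresholding are not reproduced, and `delta` is an assumed per-pixel error bound):

```python
def nearlossless_row(pixels, delta=2):
    """Predict each pixel from the previously reconstructed one and
    quantise the residual so reconstruction error never exceeds delta.
    Returns (coded residual indices, reconstructed row)."""
    step = 2 * delta + 1
    recon = [pixels[0]]              # first pixel sent as-is
    coded = []
    for p in pixels[1:]:
        pred = recon[-1]             # order-1 (linear) predictor
        r = p - pred
        q = (abs(r) + delta) // step # uniform residual quantisation
        if r < 0:
            q = -q
        coded.append(q)
        recon.append(pred + q * step)
    return coded, recon
```

Setting `delta=0` degenerates to lossless DPCM; larger values trade bounded distortion for smaller residual indices that compress better.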
Crossref (3)
Publication Date
Wed Feb 01 2023
Journal Name
Baghdad Science Journal
3-D Packing in Container using Teaching Learning Based Optimization Algorithm

The paper proposes the Teaching-Learning-Based Optimization (TLBO) algorithm to solve the 3-D packing problem in containers. The objective, which can be presented in a mathematical model, is to optimize the space usage in a container. Besides the interaction between teacher and students, the algorithm also models the learning process among students in the classroom, and it needs no control parameters. Thus, TLBO uses a teacher phase and a student phase as its main updating processes to find the best solution. To validate the algorithm's effectiveness, it was implemented on three sample cases: small data with 5 size-types of items with 12 units, medium data with 10 size-types of items w…
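TLBO's two phases can be sketched for a continuous objective (a generic textbook-style implementation on a toy sphere function; encoding a 3-D packing solution into such a vector is a separate design step not shown here):

```python
import random

def tlbo_step(pop, fitness):
    """One TLBO iteration: the teacher phase moves learners toward the best
    solution relative to the class mean; the learner phase moves each
    learner relative to a random peer. Greedy selection keeps a move only
    if it improves the learner, and no tunable control parameters exist."""
    n, dim = len(pop), len(pop[0])
    cost = [fitness(x) for x in pop]
    teacher = pop[min(range(n), key=lambda i: cost[i])]
    mean = [sum(x[k] for x in pop) / n for k in range(dim)]
    for i in range(n):                      # teacher phase
        tf = random.choice((1, 2))          # teaching factor
        trial = [pop[i][k] + random.random() * (teacher[k] - tf * mean[k])
                 for k in range(dim)]
        t_cost = fitness(trial)
        if t_cost < cost[i]:
            pop[i], cost[i] = trial, t_cost
    for i in range(n):                      # learner (student) phase
        j = random.choice([k for k in range(n) if k != i])
        sign = 1.0 if cost[i] < cost[j] else -1.0
        trial = [pop[i][k] + sign * random.random() * (pop[i][k] - pop[j][k])
                 for k in range(dim)]
        t_cost = fitness(trial)
        if t_cost < cost[i]:
            pop[i], cost[i] = trial, t_cost
    return pop

random.seed(2)
pop = [[random.uniform(-10.0, 10.0) for _ in range(3)] for _ in range(10)]
sphere = lambda x: sum(v * v for v in x)
best_before = min(sphere(x) for x in pop)
for _ in range(30):
    pop = tlbo_step(pop, sphere)
best_after = min(sphere(x) for x in pop)
```

For the packing problem, `fitness` would score a decoded item placement by wasted container volume instead of the sphere function used here.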
Scopus Clarivate Crossref
Publication Date
Tue Jan 01 2019
Journal Name
IEEE Access
Speech Enhancement Algorithm Based on Super-Gaussian Modeling and Orthogonal Polynomials

Scopus (44), Crossref (43), Clarivate