A Comparative Study Using DCT, Delta Modulation, and Double Shift Coding for Compressing Electroencephalogram Data

Storing, transferring, and processing high-dimensional electroencephalogram (EEG) signals is a critical challenge. The goal of EEG compression is to remove redundant data in EEG signals. Medical signals like EEG must be of high quality for medical diagnosis. This paper uses a compression system with near-zero Mean Squared Error (MSE) based on the Discrete Cosine Transform (DCT) and double shift coding for fast and efficient EEG data compression. It investigates and compares the system's performance with and without delta modulation, which is applied to the transformed and quantized input signal. Double shift coding is applied as a final step, after mapping the output to positive values. The system's performance is tested using EEG data files from the CHB-MIT Scalp EEG Database, with the Compression Ratio (CR) as the evaluation metric. The results are encouraging when compared with previous work on the same data samples.
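The stage chain the abstract describes (transform, quantize, optional delta modulation, map to non-negative values ahead of double shift coding) can be sketched as below. The DCT implementation, quantization step `q`, and the zigzag sign mapping are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def dct_ii(x):
    # Unnormalized DCT-II computed directly from the cosine definition
    # (fine for illustration; a production coder would use a fast DCT).
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N))
                     for k in range(N)])

def prepare_for_shift_coding(signal, q=8, use_delta=True):
    """Transform -> quantize -> (optional) delta modulation -> map to
    non-negative integers, ready for an entropy (shift) coder."""
    coeffs = dct_ii(np.asarray(signal, dtype=float))
    quant = np.round(coeffs / q).astype(int)
    if use_delta:
        # Delta modulation: keep the first value, then store successive
        # differences, which tend to be small for smooth spectra.
        quant = np.concatenate(([quant[0]], np.diff(quant)))
    # Map signed values to non-negative ones (0,-1,1,-2,... -> 0,1,2,3,...)
    # so short shift codes can be assigned to small magnitudes.
    return np.where(quant >= 0, 2 * quant, -2 * quant - 1)
```

The sign mapping is invertible (`n // 2` for even codes, `-(n + 1) // 2` for odd), so the near-zero MSE the paper reports would come only from the quantization step.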

Publication Date
Thu Jun 01 2017
Journal Name
IOSR Journal of Computer Engineering
Lossy Image Compression Using Wavelet Transform, Polynomial Prediction And Block Truncation Coding

Publication Date
Sat Jan 30 2021
Journal Name
Iraqi Journal of Science
Image Compression Based on Arithmetic Coding Algorithm

The past years have seen rapid development in image compression techniques, mainly due to the need for fast and efficient ways to store and transmit data. Compression is the process of representing data in a compact form rather than its original, uncompacted form. In this paper, an integer implementation of Arithmetic Coding (AC) and the Discrete Cosine Transform (DCT) were applied to color images. The DCT was applied using the YCbCr color model. The transformed image was then quantized with the standard quantization tables for luminance and chrominance. The quantized coefficients were scanned in zigzag order and the output was encoded using AC. The results showed a decent compression ratio
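The zigzag scan step mentioned above orders an n×n coefficient block along anti-diagonals so that low-frequency coefficients come first; a minimal sketch of that ordering (independent of the paper's AC implementation):

```python
import numpy as np

def zigzag(block):
    """JPEG-style zigzag scan of a square coefficient block: traverse
    anti-diagonals, alternating direction, so low-frequency (top-left)
    coefficients appear before high-frequency ones."""
    n = block.shape[0]
    order = sorted(
        ((i, j) for i in range(n) for j in range(n)),
        key=lambda p: (p[0] + p[1],                          # which diagonal
                       p[0] if (p[0] + p[1]) % 2 else p[1])  # direction
    )
    return np.array([block[i, j] for i, j in order])
```

After this reordering the trailing run of the vector is dominated by near-zero high-frequency coefficients, which is what makes the subsequent entropy coding effective.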

Publication Date
Thu Aug 30 2018
Journal Name
Journal of Engineering
An Optimum Strategy for Producing Precise GPS Satellite Orbits using Double-Differenced Observations

Both the double-differenced and zero-differenced GNSS positioning strategies have been widely used by geodesists for geodetic applications that demand reliable and precise positions. On closer inspection of the requirements of these two techniques, zero-differenced positioning, known as Precise Point Positioning (PPP), has gained special importance for three main reasons. Firstly, the effective application of PPP for geodetic purposes and precise applications depends entirely on the availability of precise satellite products, which consist of precise satellite orbital elements, precise satellite clock corrections, and Earth orientation parameters. Secondly, th
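The double-differenced observable underlying the first strategy can be illustrated with a toy sketch: differencing between two receivers cancels satellite clock error, and differencing those single differences between two satellites cancels receiver clock error. Receiver and satellite labels here are hypothetical.

```python
def double_difference(phi):
    """Form a double-differenced carrier-phase observable from
    phi[receiver][satellite]: the between-receiver difference for
    satellite j minus the between-receiver difference for satellite k,
    which cancels both receiver and satellite clock errors."""
    sd_j = phi['A']['j'] - phi['B']['j']   # single difference, satellite j
    sd_k = phi['A']['k'] - phi['B']['k']   # single difference, satellite k
    return sd_j - sd_k

# illustrative observations (arbitrary units)
phi = {'A': {'j': 10.0, 'k': 7.0}, 'B': {'j': 4.0, 'k': 2.0}}
dd = double_difference(phi)
```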

Publication Date
Sat Jan 10 2015
Journal Name
British Journal of Applied Science & Technology
The Use of Cubic Bezier Interpolation, Biorthogonal Wavelet and Quadtree Coding to Compress Color Images

In this paper, an efficient method for compressing color images is presented. It allows progressive transmission and zooming of the image without the need for extra storage. The proposed method uses a cubic Bezier interpolation (CBI) surface representation over wide areas of the image to prune the image component that shows large-scale variation. The produced cubic Bezier surface is subtracted from the image signal to obtain the residue component, and a bi-orthogonal wavelet transform is applied to decompose that residue. Scalar quantization and quadtree coding steps are applied to the produced wavelet sub-bands. Finally, adaptive shift coding is applied to handle the remaining statistical redundancy and attain e
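The residue step above — fit a smooth Bezier approximation, subtract it, and pass only the residue to the wavelet coder — can be sketched in 1-D (the paper fits surfaces; the control points and signal here are illustrative):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter(s) t in [0, 1];
    a 1-D stand-in for the paper's surface fit."""
    t = np.asarray(t)
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Residue = signal minus the smooth Bezier approximation; for a signal
# the curve captures exactly, the residue is zero and cheap to code.
t = np.linspace(0.0, 1.0, 16)
signal = np.linspace(2.0, 5.0, 16)            # a linear ramp
approx = cubic_bezier(2.0, 3.0, 4.0, 5.0, t)  # collinear control points
residue = signal - approx
```

With collinear, equally spaced control points the Bezier curve degenerates to the same linear ramp, so the residue vanishes; real images leave a small-amplitude residue that the wavelet stage then decomposes.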

Publication Date
Thu Dec 01 2011
Journal Name
Journal of Economics and Administrative Sciences
Dynamic Robust Bootstrap for LTS (DRBLTS) and Weighted Bootstrap with Probability (WBP) to Estimate Robust Regression Parameters Using the Bootstrap Technique (A Comparative Study)

Bootstrap is an important re-sampling technique that has received researchers' attention recently. The presence of outliers in the original data set may cause serious problems for the classical bootstrap when the resampled data contain a higher percentage of outliers than the original set. Many methods have been proposed to overcome this problem, such as the Dynamic Robust Bootstrap for LTS (DRBLTS) and the Weighted Bootstrap with Probability (WBP). This paper tries to show the accuracy of parameter estimation by comparing the results of both methods. The bias, MSE, and RMSE are considered. The accuracy criterion is based on the RMSE value, since the method that provides a smaller RMSE value than the other is con
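The comparison criterion above — score an estimator by the RMSE of its bootstrap estimates against the known coefficient — can be sketched with a plain (non-robust) bootstrap; this is an illustrative stand-in, not DRBLTS or WBP:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_rmse(x, y, true_beta, n_boot=200):
    """Re-estimate the slope on n_boot bootstrap resamples and return
    the RMSE of the estimates against the known coefficient true_beta,
    the accuracy criterion used in the paper's comparison."""
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample with replacement
        beta = np.polyfit(x[idx], y[idx], 1)[0]  # OLS slope on the resample
        estimates.append(beta)
    estimates = np.array(estimates)
    return float(np.sqrt(np.mean((estimates - true_beta) ** 2)))
```

Swapping the OLS slope for a robust fit (e.g. LTS) at the marked line is where methods like DRBLTS differ from this classical sketch.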

Publication Date
Wed Sep 01 2021
Journal Name
Baghdad Science Journal
On Comparison Study between Double Sumudu and Elzaki Linear Transforms Method for Solving Fractional Partial Differential Equations
...Show More Authors

In this paper, the double Sumudu and double Elzaki transform methods are used to compute numerical solutions for some types of fractional-order partial differential equations with constant coefficients. The efficiency of the method is demonstrated through numerical examples computed using Mathcad 15 and graphed in MATLAB R2015a.
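For reference, the transforms the method builds on are commonly defined as follows (a sketch from the standard definitions, not the paper's own notation):

```latex
% Sumudu transform of f(t):
S[f](u) = \frac{1}{u} \int_{0}^{\infty} e^{-t/u}\, f(t)\, dt
% Elzaki transform of f(t):
E[f](v) = v \int_{0}^{\infty} e^{-t/v}\, f(t)\, dt
% Double Sumudu transform of f(x,t), applied in both variables:
S_2[f](u, v) = \frac{1}{uv} \int_{0}^{\infty}\!\!\int_{0}^{\infty}
               e^{-(x/u + t/v)}\, f(x, t)\, dx\, dt
```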

Publication Date
Tue May 30 2023
Journal Name
Iraqi Journal of Science
Application of Data Mining and Imputation Algorithms for Missing Value Handling: A Case Study on the Car Evaluation Dataset

Data mining is a data analysis process that uses software to find certain patterns or rules in a large amount of data, which is expected to provide knowledge to support decisions. However, missing values in data mining often lead to a loss of information. The purpose of this study is to improve the performance of data classification with missing values, precisely and accurately. Testing is carried out using the Car Evaluation dataset from the UCI Machine Learning Repository, with RStudio and RapidMiner as the tools for testing the algorithms. This study produces an analysis of the tested parameters to measure the performance of each algorithm. Using test variations: performance of C5.0, C4.5, and k-NN at 0% missi
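A minimal sketch of the imputation idea being evaluated — replace each missing value with its column's most frequent value, a common baseline for categorical data like Car Evaluation. This is a stand-in in plain Python, not the paper's RStudio/RapidMiner workflow:

```python
from collections import Counter

def impute_mode(rows):
    """Fill missing values (None) in each column of a row-major table
    with that column's most frequent observed value (the mode)."""
    cols = list(zip(*rows))
    filled_cols = []
    for col in cols:
        observed = [v for v in col if v is not None]
        mode = Counter(observed).most_common(1)[0][0]
        filled_cols.append([v if v is not None else mode for v in col])
    return [list(r) for r in zip(*filled_cols)]
```

A classifier such as C5.0 or k-NN would then be trained on the filled table; comparing accuracy across missing-value percentages is the experiment the abstract describes.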

Publication Date
Thu Dec 29 2016
Journal Name
Ibn Al-Haitham Journal for Pure and Applied Sciences
Proposed Steganography Method Based on DCT Coefficients

In this paper, an algorithm for steganography using DCT for the cover image and DWT for the hidden image, with an embedding order key, is proposed. For more security and complexity, the cover image is converted from RGB to YIQ; the Y plane is used, divided into four equal parts, and converted to the DCT domain. The four DWT coefficient bands of the hidden image are embedded into the parts of the cover DCT; the embedding order is based on the order key, which is stored alongside the cover in a database table at both the sender and the receiver. Experimental results show that the proposed algorithm successfully hides information in the cover image. We use a Microsoft Office Access 2003 database as the DBMS; the hiding and extracting algo
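The embedding-order idea can be sketched as below: the key is a permutation of the four cover parts, and each hidden-image coefficient band goes into the part the key selects. The placement inside a part (overwriting the highest-frequency corner) is an illustrative assumption, not the paper's exact embedding rule:

```python
import numpy as np

def embed_parts(cover_dct_parts, hidden_bands, order_key):
    """Embed four hidden-image coefficient bands into four cover DCT
    parts according to a secret permutation (the order key). Without
    the key, a receiver cannot match bands back to parts."""
    stego = [p.copy() for p in cover_dct_parts]
    for band, part_idx in zip(hidden_bands, order_key):
        h, w = band.shape
        # Illustrative placement: hide the band in the part's
        # high-frequency (bottom-right) corner.
        stego[part_idx][-h:, -w:] = band
    return stego
```

Extraction reads the same corners in key order; storing the key in a shared database table, as the paper does, plays the role of key exchange.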

Publication Date
Wed Aug 01 2018
Journal Name
Journal of Economics and Administrative Sciences
A Comparative Study of Some Methods of Estimating the Robust Variance-Covariance Matrix of the Parameters Estimated by OLS in Cross-Sectional Data

 


The classical normal linear regression model is based on several assumptions, one of which is homoscedasticity. As is known, when heteroscedasticity is present, the ordinary least squares (OLS) estimators lose their desirable properties, and in addition the statistical inference becomes unacceptable. Accordingly, we put forward two alternatives: the first is Generalized Least Squares (GLS), and the second is robust estimation of the covariance matrix of the parameters estimated by OLS. The GLS method is appropriate and reliable if the estimators are efficient and the statistical inference is conducted on an acceptable
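The second alternative — a heteroscedasticity-robust covariance matrix for OLS estimates — can be sketched with the White/HC0 sandwich estimator; HC0 is one common variant among several (HC1-HC3), and the abstract does not specify which the paper uses:

```python
import numpy as np

def hc0_cov(X, y):
    """White/HC0 heteroscedasticity-robust covariance of OLS estimates:
    (X'X)^-1 X' diag(e^2) X (X'X)^-1, using squared OLS residuals e^2
    in place of an (unknown) error variance structure."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ np.diag(resid ** 2) @ X      # the sandwich "meat"
    return XtX_inv @ meat @ XtX_inv
```

Under homoscedasticity this collapses toward the usual s²(X'X)⁻¹; under heteroscedasticity it keeps standard errors valid even though OLS itself is no longer efficient, which is the trade-off against GLS the abstract discusses.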

Publication Date
Tue Jul 25 2023
Journal Name
Journal of Optical Communications
Design of a Mode Filtering Interferometer Using Etched Double Clad Fiber
Mode filtering is one of the most desired techniques in optical fiber communication systems, especially for multiple-input multiple-output (MIMO) coherent optical communications that have mode-dependent losses in communication channels. In this work, a special type of optical fiber sensing head was used: DCF13 fiber, made by Thorlabs, which has two numerical apertures (NAs). One is for the core and first cladding region, while the second relates the first cladding to the second cladding. An etching process using 40% hydrofluoric (HF) acid was performed on the DCF13 for varying times, in minutes. Investigation of the correlation between the degree of etching and the re
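For context, each numerical aperture mentioned above follows from the refractive indices of the guiding region and the cladding surrounding it (the standard definition, not a value from the paper):

```latex
\mathrm{NA} = \sqrt{\,n_{\text{core}}^{2} - n_{\text{clad}}^{2}\,}
```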