The Arithmetic Coding and Hybrid Discrete Wavelet and Cosine Transform Approaches in Image Compression

Image compression is a form of data compression applied to digital images to reduce their storage and/or transmission cost. Image compression algorithms can exploit visual sensitivity and the statistical properties of image data to deliver better results than generic data compression schemes designed for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. Each block is first converted into a string and then encoded with arithmetic coding, a lossless entropy-coding algorithm. Pixel values that occur more frequently are coded with fewer bits than values that occur less frequently, through subintervals of the range between 0 and 1. Finally, the stream of compressed tables is reassembled for decompression (image restoration). The results showed a compression gain of 10-12% and lower time consumption when this coding is applied to each block rather than to the entire image. To improve the compression ratio, a second approach based on the YCbCr colour model was used. Images were decomposed into four sub-bands (low-low, high-low, low-high, and high-high) using the discrete wavelet transform. The low-low sub-band was then transformed into low- and high-frequency components via the discrete cosine transform. Next, these components were quantized using scalar quantization and scanned in a zigzag order. The resulting compression ratio is 15.1 to 27.5 for magnetic resonance images with varying peak signal-to-noise ratio and mean square error; 25 to 43 for X-ray images; 32 to 46 for computed tomography scan images; and 19 to 36 for magnetic resonance brain images. The second approach showed an improved compression scheme over the first in terms of compression ratio, peak signal-to-noise ratio, and mean square error.
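
As a rough illustration of the block-wise entropy coding described above, the sketch below encodes the pixel values of one block into a single subinterval of [0, 1), so that frequent values contribute fewer bits. The block size, symbol model, and helper names are illustrative assumptions, not the paper's implementation.

```python
# Minimal arithmetic-coding sketch (floating point, for illustration only):
# frequent pixel values get wider subintervals and hence fewer output bits.
# Block size and helper names are assumptions, not the paper's implementation.
from collections import Counter
import math

def build_intervals(symbols):
    """Map each symbol to a cumulative-probability subinterval of [0, 1)."""
    counts = Counter(symbols)
    total = len(symbols)
    intervals, low = {}, 0.0
    for sym, cnt in sorted(counts.items()):
        p = cnt / total
        intervals[sym] = (low, low + p)
        low += p
    return intervals

def encode_block(pixels):
    """Narrow [0, 1) once per pixel; return a value inside the final interval."""
    intervals = build_intervals(pixels)
    low, high = 0.0, 1.0
    for p in pixels:
        s_low, s_high = intervals[p]
        span = high - low
        high = low + span * s_high
        low = low + span * s_low
    # Any number in [low, high) identifies the whole block; its bit length is
    # roughly -log2(high - low), i.e. the block's information content.
    code = (low + high) / 2
    bits = math.ceil(-math.log2(high - low)) + 1
    return code, bits

# Example: a flattened 4 x 4 block dominated by one pixel value.
block = [200] * 12 + [10, 10, 35, 90]
code, bits = encode_block(block)
print(f"code point = {code:.10f}, ~{bits} bits for {len(block)} pixels")
```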

Publication Date
Thu Apr 30 2020
Journal Name
Journal Of Economics And Administrative Sciences
Estimate the Partial Linear Model Using Wavelet and Kernel Smoothers

This article aims to estimate the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study small-sample behaviour under different functions, sample sizes, and variances. The results show that the wavelet smoother performs best according to the mean average squared error criterion in all cases considered.
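
A minimal sketch of the kind of comparison described above, assuming NumPy and the PyWavelets package; the test function, noise level, and soft-thresholding rule are illustrative choices rather than the article's actual simulation design.

```python
# Sketch: compare a wavelet (soft-threshold) smoother with a Nadaraya-Watson
# kernel smoother on simulated data, scoring each by mean squared error.
# Test function, noise level, and tuning values are illustrative assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * x)                 # assumed smooth component
y = signal + rng.normal(scale=0.3, size=n)

# Wavelet smoother: decompose, soft-threshold detail coefficients, reconstruct.
coeffs = pywt.wavedec(y, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # noise estimate
thresh = sigma * np.sqrt(2 * np.log(n))                 # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
wavelet_fit = pywt.waverec(coeffs, "db4")[:n]

# Kernel smoother: Nadaraya-Watson estimator with a Gaussian kernel.
h = 0.02                                                # assumed bandwidth
weights = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
kernel_fit = (weights @ y) / weights.sum(axis=1)

def mse(fit):
    return np.mean((fit - signal) ** 2)

print(f"wavelet MSE = {mse(wavelet_fit):.5f}, kernel MSE = {mse(kernel_fit):.5f}")
```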

Publication Date
Sat Apr 15 2023
Journal Name
Journal Of Robotics
A New Proposed Hybrid Learning Approach with Features for Extraction of Image Classification

Image classification is the process of finding common features in images from various classes and using them to categorize and label the images. The main obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labelled data. The cornerstone of image classification is extracting convolutional features from deep learning models and training machine learning classifiers on them. This study proposes a new "hybrid learning" approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class…
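
A minimal sketch of the hybrid pipeline described above, assuming TensorFlow/Keras for the pretrained VGG-16 backbone and scikit-learn for the classical classifier; the SVM choice, pooling setting, and placeholder data are illustrative assumptions, not the study's exact configuration.

```python
# Sketch: extract convolutional features with a pretrained VGG-16, then train
# a classical machine-learning classifier on them. The SVM is one illustrative
# choice; the study evaluates several classifiers.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# VGG-16 without its dense head; global average pooling yields a 512-d vector.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), RGB, values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data; in practice these come from the labelled image dataset.
images = np.random.rand(40, 224, 224, 3).astype("float32") * 255.0
labels = np.random.randint(0, 4, size=40)

features = extract_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```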

Publication Date
Sun Jul 09 2023
Journal Name
Journal Of Engineering
PAPR Reduction of OFDM Signals Using Clipping and Coding

The problem of the high peak-to-average power ratio (PAPR) in OFDM signals is investigated, with a brief survey of the various methods used to reduce the PAPR and special attention to the clipping method. An alternative clipping approach is presented in which clipping is performed right after the IFFT stage, unlike conventional clipping, which is performed at the power amplifier stage and causes undesirable out-of-band spectral growth. In the proposed method, samples are clipped rather than the continuous waveform, so the spectral distortion is avoided. Coding is required to correct the errors introduced by the clipping, and the overall system is tested with two types of modulation, QPSK as a constant-amplitude modulation…
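
A small sketch of the sample-level clipping idea described above, assuming NumPy; the subcarrier count, QPSK mapping, and clipping ratio are illustrative values, and the error-correcting coding stage is omitted.

```python
# Sketch: OFDM symbol generation, PAPR measurement, and clipping of the
# discrete samples right after the IFFT (before any pulse shaping or PA).
# Subcarrier count, QPSK mapping, and clipping ratio are assumed values.
import numpy as np

rng = np.random.default_rng(1)
n_sc = 64                                   # number of subcarriers (assumed)

# Random QPSK symbols on the subcarriers.
bits = rng.integers(0, 2, size=(n_sc, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Time-domain OFDM samples via IFFT.
x = np.fft.ifft(qpsk) * np.sqrt(n_sc)

def papr_db(samples):
    power = np.abs(samples) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Clip sample amplitudes that exceed a threshold tied to the RMS level.
clip_ratio = 1.4                            # assumed clipping ratio
rms = np.sqrt(np.mean(np.abs(x) ** 2))
limit = clip_ratio * rms
mag = np.abs(x)
scale = np.minimum(1.0, limit / np.maximum(mag, 1e-12))
x_clipped = x * scale

print(f"PAPR before clipping: {papr_db(x):.2f} dB")
print(f"PAPR after  clipping: {papr_db(x_clipped):.2f} dB")
```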

Publication Date
Wed Sep 12 2018
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
New Transform Fundamental Properties and Its Applications

In this paper, a new transform and its fundamental properties are presented. The new transform has many interesting properties and applications that make it a rival to other transforms.

Furthermore, we generalize the differentiation, integration, and convolution theorems found in the existing literature. New results and new shifting theorems are introduced. Finally, a comprehensive list of transforms of common functions is provided.

Publication Date
Sun Mar 20 2016
Journal Name
Al-academy
Indicative coding of the actor’s performance in the Iraqi theater show

Publication Date
Wed Mar 08 2023
Journal Name
Sensors
A Critical Review of Remote Sensing Approaches and Deep Learning Techniques in Archaeology

To date, comprehensive reviews and discussions of the strengths and limitations of standalone and combined Remote Sensing (RS) approaches, and of Deep Learning (DL)-based RS datasets, in archaeology have been limited. The objective of this paper is therefore to review and critically discuss existing studies that have applied these advanced approaches in archaeology, with a specific focus on digital preservation and object detection. Standalone RS approaches, including range-based and image-based modelling (e.g., laser scanning and SfM photogrammetry), have several disadvantages in terms of spatial resolution, penetration, texture, colour, and accuracy. These limitations have led some archaeological studies to fuse/integrate multiple…

Publication Date
Sun Jan 01 2023
Journal Name
8th Engineering And 2nd International Conference For College Of Engineering – University Of Baghdad: Coec8-2021 Proceedings
An analytical study of the spread patterns of the informal settlements in Baghdad and sustainable urban improvement approaches

Publication Date
Tue Dec 05 2023
Journal Name
Baghdad Science Journal
AlexNet Convolutional Neural Network Architecture with Cosine and Hamming Similarity/Distance Measures for Fingerprint Biometric Matching

In information security, fingerprint verification is one of the most common recent approaches for verifying human identity through a distinctive pattern. The verification process works by comparing a pair of fingerprint templates and identifying the similarity/matching between them. Several research studies have used different techniques for the matching process, such as fuzzy vault and image filtering approaches. Yet, these approaches still suffer from imprecise articulation of the biometric's distinctive patterns. Deep learning architectures such as the Convolutional Neural Network (CNN) have been used extensively for image processing and object detection tasks and have shown outstanding performance compared…
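
A brief sketch of the matching step described above, assuming the two fingerprint templates have already been turned into feature vectors by a CNN such as AlexNet; the feature dimension, binarization rule, and decision thresholds are illustrative assumptions.

```python
# Sketch: score a pair of CNN feature vectors with cosine similarity and with
# a Hamming distance over sign-binarized features. The vectors here are random
# placeholders standing in for AlexNet embeddings of two fingerprint images.
import numpy as np

rng = np.random.default_rng(42)
feat_a = rng.normal(size=4096)                        # assumed embedding dimension
feat_b = feat_a + rng.normal(scale=0.3, size=4096)    # a "similar" template

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def hamming_distance(u, v):
    """Binarize by sign, then count differing bits (normalized to [0, 1])."""
    bu, bv = u > 0, v > 0
    return float(np.mean(bu != bv))

cos = cosine_similarity(feat_a, feat_b)
ham = hamming_distance(feat_a, feat_b)
print(f"cosine similarity = {cos:.3f}, normalized Hamming distance = {ham:.3f}")

# Illustrative decision rule: accept the pair as a match if either score
# passes an assumed threshold.
is_match = cos > 0.9 or ham < 0.1
print("match" if is_match else "non-match")
```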

Publication Date
Tue Sep 01 2015
Journal Name
2015 7th Computer Science And Electronic Engineering Conference (ceec)
An experimental investigation on PCA based on cosine similarity and correlation for text feature dimensionality reduction
