Lossy and Lossless Video Frame Compression: A Novel Approach for High-Temporal Video Data Analytics

The smart city concept has attracted considerable research attention in recent years across diverse application domains, such as crime suspect identification, border security, transportation, and aerospace. A specific focus has been on increased automation using data-driven approaches, while leveraging remote sensing and real-time streaming of heterogeneous data from various sources, including unmanned aerial vehicles, surveillance cameras, and low-earth-orbit satellites. One of the core challenges in exploiting such high-temporal data streams, specifically videos, is the trade-off between the quality of video streaming and limited transmission bandwidth: an optimal compromise is needed between video quality and, subsequently, recognition, understanding, and efficient processing of large amounts of video data. This research proposes a novel unified approach to lossy and lossless video frame compression, which benefits the autonomous processing and enhanced representation of high-resolution video data in various domains. The proposed fast block-matching motion estimation technique, namely mean predictive block matching, is based on the principle that general motion in any video frame is usually coherent. This coherent nature of video frames implies a high probability that a macroblock has the same direction of motion as the macroblocks surrounding it. The technique employs the partial distortion elimination algorithm to reduce the search time: the partial sum of the matching distortion between the current macroblock and a candidate is abandoned as soon as it surpasses the current lowest error. Experimental results demonstrate the superiority of the proposed approach over state-of-the-art techniques, including the four-step search, three-step search, diamond search, and new three-step search.
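The partial-distortion-elimination idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names, the search-window radius, and the seeding of the search with a neighbour-predicted motion vector are all assumptions.

```python
import numpy as np

def sad_with_pde(block, candidate, best_so_far):
    """Sum of absolute differences with partial distortion elimination:
    accumulate the distortion row by row and abort as soon as the partial
    sum reaches the lowest error found so far."""
    partial = 0
    for r in range(block.shape[0]):
        partial += int(np.abs(block[r].astype(int) - candidate[r].astype(int)).sum())
        if partial >= best_so_far:   # early termination: candidate cannot win
            return None
    return partial

def mean_predictive_search(cur_frame, ref_frame, top_left, size=16, radius=2,
                           predicted_mv=(0, 0)):
    """Toy block-matching search seeded with a motion vector predicted from
    neighbouring macroblocks (passed in as predicted_mv), reflecting the
    motion-coherence assumption in the abstract."""
    y0, x0 = top_left
    block = cur_frame[y0:y0 + size, x0:x0 + size]
    best_mv, best_cost = (0, 0), float("inf")
    py, px = predicted_mv
    # examine candidates in a small window centred on the predicted vector
    for dy in range(py - radius, py + radius + 1):
        for dx in range(px - radius, px + radius + 1):
            yy, xx = y0 + dy, x0 + dx
            if (yy < 0 or xx < 0 or
                    yy + size > ref_frame.shape[0] or
                    xx + size > ref_frame.shape[1]):
                continue
            cost = sad_with_pde(block, ref_frame[yy:yy + size, xx:xx + size],
                                best_cost)
            if cost is not None:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

The early abort is what "condenses the exploration time": most candidate macroblocks are rejected after only a few rows of the distortion sum.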

Publication Date
Mon Nov 02 2020
Journal Name
International Journal of Pharmaceutical Research
Serum Afamin as a Novel Biomarker for Non-Alcoholic Fatty Liver Disease as a Complication of Hypothyroidism in Iraqi Patients

Publication Date
Thu Aug 01 2019
Journal Name
Journal of Economics and Administrative Sciences
Some Estimation Methods for the Two Models SPSEM and SPSAR for Spatially Dependent Data

Abstract

In this paper, some semi-parametric spatial models were estimated: the semi-parametric spatial error model (SPSEM), which suffers from spatial error dependence, and the semi-parametric spatial autoregressive model (SPSAR). The maximum likelihood method was used to estimate the spatial error parameter (λ) in the SPSEM model and the spatial dependence parameter (ρ) in the SPSAR model, while non-parametric methods were used to estimate the smoothing function m(x) for both models. These non-parametric methods include the local linear estimator (LLE), which requires finding the smoo…
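As a hypothetical illustration of the local linear estimator (LLE) mentioned above, the sketch below fits a weighted least-squares line around each evaluation point. The Gaussian kernel, the bandwidth parameter, and the function name are assumptions, not the paper's exact specification.

```python
import numpy as np

def local_linear_estimate(x, y, x0, h):
    """Local linear estimator of a smoothing function m(x) at point x0.
    Fits a weighted least-squares line in a neighbourhood of x0 using a
    Gaussian kernel with bandwidth h; the fitted intercept is m(x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # weighted normal equations
    return beta[0]                                   # intercept = estimate of m(x0)
```

In the semi-parametric setting, an estimator like this would supply the non-parametric part m(x) while λ or ρ is estimated by maximum likelihood.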
Publication Date
Fri Jan 01 2016
Journal Name
Middle-East Journal of Scientific Research
Question Classification Using Different Approach: A Whole Review

Publication Date
Thu Jun 01 2017
Journal Name
International Journal of Engineering Research and Advanced Technology
The Use of First Order Polynomial with Double Scalar Quantization for Image Compression

Publication Date
Sat Oct 01 2011
Journal Name
Journal of Engineering
Improved Image Compression Based Wavelet Transform and Threshold Entropy

In this paper, a method is proposed to increase the compression ratio of color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of information, while in smooth regions carrying little information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
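The abstract does not spell out how block importance is measured, so the sketch below uses local variance as a stand-in criterion; the block size, the threshold, and the function name are assumptions. High-variance (detailed) blocks would then be compressed with a lower ratio, smooth blocks with a higher one.

```python
import numpy as np

def block_importance_map(gray, block=8, var_threshold=100.0):
    """Classify non-overlapping blocks of a grayscale image as 'important'
    (high variance, i.e. containing detail) or 'smooth'. Variance is a
    hypothetical stand-in for the paper's importance measure."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block          # drop any ragged border
    # view as (rows of blocks, block height, cols of blocks, block width)
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    variances = (blocks.transpose(0, 2, 1, 3)    # (bi, bj, by, bx)
                       .reshape(-1, block, block)
                       .var(axis=(1, 2)))
    return variances.reshape(h // block, w // block) > var_threshold
```

A compressor following the paper's idea would consult this map and assign a smaller quantization step (less loss) wherever the map is True.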

Publication Date
Sat Dec 02 2017
Journal Name
Al-khwarizmi Engineering Journal
Speech Signal Compression Using Wavelet and Linear Predictive Coding

A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level, while the detail coefficients are discarded. The approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared to the original signals. The compression ratio is calculated from the size of th…
Publication Date
Wed Dec 15 2021
Journal Name
Nasaq
A Corpus-Based Approach to the Study of Vocabulary in English Textbooks for Iraqi Intermediate Schools

Learning the vocabulary of a language has a great impact on acquiring that language. Many scholars in the field of language learning emphasize the importance of vocabulary as part of the learner's communicative competence, considering it the heart of language. One of the best methods of learning vocabulary is to focus on words of high frequency. The present article takes a corpus-based approach to the study of vocabulary, whereby the research data are analyzed quantitatively using the software program AntWordProfiler. This program analyses new input data against already stored, reliable corpora. The aim of this article is to find out whether the vocabulary used in the English textbooks for intermediate schools in Iraq is con…
Publication Date
Wed Mar 03 2021
Journal Name
Innovative Infrastructure Solutions
Experimental investigation of a new sustainable approach for recycling waste styrofoam food containers in lightweight concrete

Publication Date
Wed Mar 10 2021
Journal Name
Baghdad Science Journal
A new approach for the topical treatment of acne vulgaris by clindamycin HCl supported on kaolin

Both the adsorption and release processes of clindamycin HCl on different amounts of kaolin in a 70% alcohol solution were studied using a spectrophotometric method at the wavelength…

Publication Date
Fri Jan 01 2016
Journal Name
Advances in Computing
A New Abnormality Detection Approach for T1-Weighted Magnetic Resonance Imaging Brain Slices Using Three Planes

Generally, radiologists analyse Magnetic Resonance Imaging (MRI) scans by visual inspection to detect and identify tumours or abnormal tissue in brain MR images. The huge number of such MR images makes this visual interpretation process not only laborious and expensive but often erroneous. Furthermore, the sensitivity of the human eye and brain in elucidating such images decreases as the number of cases grows, especially when only some slices contain information about the affected area. Therefore, an automated system for the analysis and classification of MR images is needed. In this paper, we propose a new method for abnormality detection from T1-weighted MRI of human head scans using three planes, including the axial plane, co…