Hetero-associative Memory Based New Iraqi License Plate Recognition

As a result of recent developments in highway research, together with the increased use of vehicles, there has been great interest in more modern, efficient, and accurate intelligent transportation systems (ITS). In computer vision and digital image processing, identifying particular objects in an image plays an important role in building a complete picture of a scene. Vehicle license plate recognition (VLPR) is challenging because of variations in viewpoint, the multiplicity of plate formats, non-uniform illumination at the time of image acquisition, and differences in shape and color; difficulties such as poor image resolution, blurred images, bad lighting, and low contrast must also be overcome. This paper proposes a model based on a Modified Bidirectional Associative Memory (MBAM), a type of hetero-associative memory. MBAM works in two phases (a learning phase and a convergence phase) to recognize the plate. The proposed model can overcome these difficulties because of MBAM's associative-memory ability to tolerate noise and recognize distorted images, and because of its fast computation owing to the small size of the network. The accuracy of plate-region detection is 99.6%, the accuracy of character segmentation is 98%, and the accuracy achieved for character recognition is 100% under different conditions.
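The recognition stage rests on a hetero-associative memory that stores pattern pairs during learning and settles on the closest stored pair during convergence. Below is a minimal Python sketch of a generic Kosko-style bidirectional associative memory to illustrate those two phases; the paper's MBAM modification is not reproduced here, and the toy glyph and code patterns are illustrative assumptions.

```python
# A minimal sketch of a bidirectional associative memory (BAM) learning and
# recall loop. This is a generic Kosko-style BAM, not the paper's exact MBAM
# modification; the pattern sizes and the demo glyphs are illustrative.
import numpy as np

def to_bipolar(bits):
    """Map a {0,1} pattern to {-1,+1}, the usual BAM encoding."""
    return 2 * np.asarray(bits, dtype=int) - 1

def bam_learn(inputs, targets):
    """Learning phase: accumulate the outer-product correlation matrix."""
    W = np.zeros((inputs[0].size, targets[0].size), dtype=int)
    for x, y in zip(inputs, targets):
        W += np.outer(to_bipolar(x), to_bipolar(y))
    return W

def bam_recall(W, x, max_iters=20):
    """Convergence phase: bounce between the two layers until the pair is stable."""
    x = to_bipolar(x)
    y = np.sign(x @ W)
    y[y == 0] = 1
    for _ in range(max_iters):
        x_new = np.sign(W @ y)
        x_new[x_new == 0] = 1
        y_new = np.sign(x_new @ W)
        y_new[y_new == 0] = 1
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

# Toy demo: associate two 6-pixel "glyphs" with 2-bit class codes,
# then recall from a noisy version of the first glyph.
glyphs = [np.array([1, 0, 1, 0, 1, 0]), np.array([0, 1, 1, 1, 0, 0])]
codes  = [np.array([1, 0]), np.array([0, 1])]
W = bam_learn(glyphs, codes)
noisy = np.array([1, 0, 1, 0, 0, 0])   # one pixel flipped
_, recalled = bam_recall(W, noisy)
print(recalled)                        # bipolar code of the nearest stored pair
```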

Publication Date: Tue Feb 01 2022
Journal Name: Int. J. Nonlinear Anal. Appl.
Computer-based plagiarism detection techniques: A comparative study

Plagiarism is becoming more of a problem in academia. It is made worse by the ease with which a wide range of resources can be found on the internet and then copied and pasted. It is academic theft, since the perpetrator presents the work of others as his or her own. Manual detection of plagiarism is difficult, imprecise, and time-consuming, because no reader can realistically compare a submission against all existing material. Plagiarism is a serious problem in higher education and can occur in any discipline. Plagiarism detection has been studied in many scientific articles, and recognition methods have been developed using plagiarism analysis, authorship identification, and …
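Many of the techniques compared in such studies reduce to measuring textual overlap between a suspect document and candidate sources. A minimal sketch of that idea, assuming word n-gram shingles and Jaccard similarity rather than any specific technique from the paper:

```python
# Shingle each document into word n-grams and score the overlap with Jaccard
# similarity. The shingle size and threshold are illustrative assumptions.
import re

def shingles(text, n=3):
    """Lower-case word n-grams ('shingles') of a document."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets: |A intersect B| / |A union B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

suspect = "The quick brown fox jumps over the lazy dog near the river bank."
source  = "A quick brown fox jumps over the lazy dog near a river bank today."
score = jaccard(shingles(suspect), shingles(source))
print(f"similarity = {score:.2f}")  # flag for manual review above a chosen threshold
```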

Publication Date: Mon Apr 15 2019
Journal Name: Proceedings of the International Conference on Information and Communication Technology
Hybrid LDPC-STBC communications system based on chaos

Publication Date: Wed Jun 24 2020
Journal Name: Neuroimaging - Neurobiology, Multimodal and Network Applications
Electroencephalogram Based Biomarkers for Detection of Alzheimer’s Disease

Alzheimer's disease (AD) is an age-related, progressive neurodegenerative disorder characterized by loss of memory and cognitive decline. It is the main cause of disability among older people. The rapid increase in the number of people living with AD and other forms of dementia, due to the aging population, represents a major challenge to health and social care systems worldwide. Degeneration of brain cells due to AD starts many years before the clinical manifestations become clear. Early diagnosis of AD would contribute to the development of effective treatments that could slow, stop, or prevent significant cognitive decline. Consequently, early diagnosis of AD may also be valuable in detecting patients with dementia who have n…

Publication Date: Sun Sep 24 2023
Journal Name: Journal of Al-Qadisiyah for Computer Science and Mathematics
Iris Data Compression Based on Hexa-Data Coding

Iris research focuses on developing techniques for identifying and locating relevant biometric features, with accurate segmentation and efficient computation that also lend themselves to compression methods. Most iris segmentation methods rely on complex modelling of traits and characteristics, which in turn reduces their effectiveness in real-time systems. This paper introduces a novel parameterized technique for iris segmentation. The method proceeds in a number of steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the origin…
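A minimal sketch of the bit-plane step the abstract describes, assuming an 8-bit grayscale image and an arbitrary choice of three retained planes (the paper's actual parameterization is not reproduced here):

```python
# Decompose an 8-bit grayscale eye image into bit planes and keep the most
# significant ones, which preserve the coarse pupil/iris structure used for
# localisation. The synthetic image and the choice of 3 planes are illustrative.
import numpy as np

def bit_planes(gray):
    """Return the 8 bit planes of an 8-bit image, plane 7 = most significant."""
    gray = np.asarray(gray, dtype=np.uint8)
    return [((gray >> b) & 1).astype(np.uint8) for b in range(8)]

def keep_msb_planes(gray, n_planes=3):
    """Reconstruct the image from only its n most significant bit planes."""
    gray = np.asarray(gray, dtype=np.uint8)
    planes = bit_planes(gray)
    out = np.zeros(gray.shape, dtype=int)
    for b in range(8 - n_planes, 8):
        out += planes[b].astype(int) << b
    return out.astype(np.uint8)

# Synthetic 'eye': dark pupil disc on a brighter background.
yy, xx = np.mgrid[0:64, 0:64]
eye = np.full((64, 64), 180, dtype=np.uint8)
eye[(yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2] = 30   # pupil region

coarse = keep_msb_planes(eye, n_planes=3)
print(np.unique(coarse))   # only a few gray levels remain, so thresholding is easy
```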

Publication Date: Tue Nov 01 2016
Journal Name: Research Journal of Pharmaceutical, Biological and Chemical Sciences
Treating of oil-based drill cuttings by earthworms

This study assessed the advantage of using earthworms in combination with punch waste and nutrients in remediating drill cuttings contaminated with hydrocarbons. Analyses were performed on days 0, 7, 14, 21, and 28 of the experiment. Two hydrocarbon concentrations (20000 mg/kg and 40000 mg/kg) were tested with three groups of earthworms: five, ten, and twenty earthworms. After 28 days, the initial total petroleum hydrocarbon (TPH) concentration of 20000 mg/kg was reduced to 13200 mg/kg, 9800 mg/kg, and 6300 mg/kg in the treatments with five, ten, and twenty earthworms, respectively. Likewise, the initial TPH concentration of 40000 mg/kg was reduced to 22000 mg/kg, 10100 mg/kg, and 4200 mg/kg with the same numbers of earthworms, respectively. The p…
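For readers who prefer the reductions as percentages, the figures quoted above work out as follows; this is a simple check computed directly from the reported concentrations, not additional data from the paper.

```python
# Percent removal of total petroleum hydrocarbons (TPH) after 28 days for each
# earthworm count, computed from the concentrations quoted in the abstract (mg/kg).
initial_and_final = {
    20000: {5: 13200, 10: 9800, 20: 6300},
    40000: {5: 22000, 10: 10100, 20: 4200},
}
for start, finals in initial_and_final.items():
    for worms, end in finals.items():
        removal = 100 * (start - end) / start
        print(f"{start} mg/kg, {worms} worms: {removal:.1f}% TPH removed")
# 20000 mg/kg: 34.0%, 51.0%, 68.5% removal; 40000 mg/kg: 45.0%, 74.8%, 89.5%.
```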

Publication Date: Fri Oct 02 2015
Journal Name: American Journal of Applied Sciences
Advances in Document Clustering with Evolutionary-Based Algorithms

Document clustering is the process of organizing an electronic corpus of documents into subgroups with similar text features. A number of conventional algorithms have traditionally been applied to document clustering, and current endeavors aim to enhance clustering performance by employing evolutionary algorithms, an emerging topic that has gained increasing attention in recent years. The aim of this paper is to present an up-to-date and self-contained review fully devoted to document clustering via evolutionary algorithms. It first provides a comprehensive inspection of the document clustering model, revealing its various components and related concepts. It then presents and analyzes the principal research wor…
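As a concrete illustration of the evolutionary route the review surveys, the following sketch evolves cluster assignments of toy document vectors with a tiny genetic algorithm; the representation, fitness function, and parameters are illustrative assumptions rather than any specific algorithm from the reviewed literature.

```python
# A tiny genetic algorithm for document clustering: each individual is a vector
# of cluster labels, and fitness rewards within-cluster cohesion. The toy
# term-frequency vectors and GA parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fitness(labels, docs, k):
    """Negative sum of squared distances to each cluster centroid (higher is better)."""
    cost = 0.0
    for c in range(k):
        members = docs[labels == c]
        if len(members):
            cost += ((members - members.mean(axis=0)) ** 2).sum()
    return -cost

def evolve(docs, k=2, pop_size=30, generations=100, mutation_rate=0.1):
    n = len(docs)
    population = rng.integers(0, k, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, docs, k) for ind in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop_size // 2]]           # truncation selection
        children = parents.copy()
        mutate = rng.random(children.shape) < mutation_rate    # random label flips
        children[mutate] = rng.integers(0, k, size=mutate.sum())
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(ind, docs, k) for ind in population])]

# Toy term-frequency vectors: two documents about 'sports', two about 'finance'.
docs = np.array([[3, 0, 1], [4, 1, 0], [0, 3, 4], [1, 4, 3]], dtype=float)
print(evolve(docs, k=2))   # e.g. [0 0 1 1]: the two topical groups separate
```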

Publication Date: Sun Apr 23 2017
Journal Name: International Conference of Reliable Information and Communication Technology
Classification of Arabic Writer Based on Clustering Techniques

Arabic text categorization for pattern recognition is challenging. We propose, for the first time, a novel holistic clustering-based method for classifying Arabic writers. The categorization is accomplished stage-wise. Firstly, the document images are sectioned into lines, words, and characters. Secondly, structural and statistical features are obtained from the sectioned portions. Thirdly, the F-measure is used to evaluate the performance of the extracted features and their combinations under different linkage methods, distance measures, and numbers of groups. Finally, experiments are conducted on the standard KHATT dataset of Arabic handwritten text, comprising varying samples from 1000 writers. The results in the generatio…
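The evaluation loop described above (linkage methods crossed with distance measures, scored by an F-measure against the known writer labels) can be sketched as follows; the random feature vectors stand in for the structural and statistical features extracted from KHATT and are purely illustrative.

```python
# Agglomerative clustering of writer feature vectors under different linkage
# methods and distance measures, scored with a pairwise F-measure.
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

def pairwise_f_measure(pred, truth):
    """F-measure over all sample pairs: same-cluster vs same-writer decisions."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(truth)), 2):
        same_pred, same_true = pred[i] == pred[j], truth[i] == truth[j]
        tp += same_pred and same_true
        fp += same_pred and not same_true
        fn += (not same_pred) and same_true
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

rng = np.random.default_rng(1)
# 3 'writers', 10 samples each, 8 features per sample (toy stand-ins).
truth = np.repeat([0, 1, 2], 10)
features = rng.normal(loc=truth[:, None] * 3.0, scale=1.0, size=(30, 8))

for method in ("single", "complete", "average"):
    for metric in ("euclidean", "cityblock"):
        Z = linkage(features, method=method, metric=metric)
        pred = fcluster(Z, t=3, criterion="maxclust")
        print(f"{method:8s} {metric:9s} F = {pairwise_f_measure(pred, truth):.2f}")
```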

Publication Date: Mon May 15 2017
Journal Name: International Journal of Image and Data Fusion
Image edge detection operators based on orthogonal polynomials

Publication Date: Tue Jan 01 2013
Journal Name: International Journal of Computer Applications
Content-based Image Retrieval (CBIR) using Hybrid Technique

Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature extraction techniques has been achieved. The four techniques are the color histogram features technique, the properties features technique, the gray-level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to compute the similarity measure. Similarity-based matching is very important in CBIR, so three types of similarity measure are used: normalized Mahalanobis distance, Euclidean distance, and Manhattan distance. A comparison between them has been implemented. From the results, it is conclud…
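The matching step can be illustrated with the three distance measures named in the abstract, applied to placeholder feature vectors; the actual histogram/GLCM features from the paper are not reproduced here.

```python
# Compare a query feature vector against database feature vectors with
# Euclidean, Manhattan, and covariance-normalised Mahalanobis distances.
# The 4-D random feature vectors are illustrative stand-ins.
import numpy as np

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

def manhattan(a, b):
    return np.abs(a - b).sum()

def mahalanobis(a, b, inv_cov):
    d = a - b
    return np.sqrt(d @ inv_cov @ d)

rng = np.random.default_rng(2)
database = rng.random((50, 4))    # 50 images, 4 features each
query = rng.random(4)
inv_cov = np.linalg.inv(np.cov(database, rowvar=False))

for name, dist in (("euclidean", euclidean),
                   ("manhattan", manhattan),
                   ("mahalanobis", lambda a, b: mahalanobis(a, b, inv_cov))):
    scores = np.array([dist(query, feat) for feat in database])
    print(f"{name:12s} best match: image #{scores.argmin()}")
```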

Publication Date: Mon Mar 01 2021
Journal Name: IOP Conference Series: Materials Science and Engineering
Speech Enhancement Algorithm Based on a Hybrid Estimator
Speech is the essential way to interact between humans or between human and machine. However, it is always contaminated with different types of environmental noise. Therefore, speech enhancement algorithms (SEA) have emerged as a significant approach in the speech processing field for suppressing background noise and recovering the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The estimation of the clean signal is performed by taking advantage of Laplacian speech and noise modeling based on the distribution of orthogonal transform (Discrete Krawtchouk-Tchebichef transform) coefficients. The Discrete Kra…
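As a rough illustration of transform-domain enhancement, the sketch below applies a Wiener-style gain to DCT coefficients frame by frame; it is a simplified stand-in, not the paper's Laplacian MMSE estimator on Discrete Krawtchouk-Tchebichef coefficients, and the DCT, frame length, and gain rule are illustrative assumptions.

```python
# Frame-wise transform-domain denoising: estimate per-coefficient noise power
# from the first few (noise-only) frames, then attenuate each DCT coefficient
# with a floored Wiener-style gain.
import numpy as np
from scipy.fft import dct, idct

def enhance(noisy, frame_len=256, noise_frames=5, floor=0.05):
    n_frames = len(noisy) // frame_len
    frames = noisy[: n_frames * frame_len].reshape(n_frames, frame_len)
    coeffs = dct(frames, norm="ortho", axis=1)
    noise_power = (coeffs[:noise_frames] ** 2).mean(axis=0)   # per-coefficient estimate
    gain = np.maximum(1.0 - noise_power / (coeffs ** 2 + 1e-12), floor)
    return idct(gain * coeffs, norm="ortho", axis=1).ravel()

# Toy signal: two seconds of 'silence' followed by a 440 Hz tone, plus white noise.
fs = 8000
t = np.arange(fs) / fs
clean = np.concatenate([np.zeros(2 * fs), 0.5 * np.sin(2 * np.pi * 440 * t)])
noisy = clean + 0.05 * np.random.default_rng(3).normal(size=clean.size)
enhanced = enhance(noisy)
print(f"noise power before: {np.var(noisy[:fs]):.5f}, after: {np.var(enhanced[:fs]):.5f}")
```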