Color image compression is an effective way to encode digital images by decreasing the number of bits needed to represent the image. The main objectives are to reduce storage space, lower transmission costs, and maintain good quality. In the current research work, a simple and effective methodology is proposed for compressing color art digital images and obtaining a low bit rate: the matrix resulting from the scalar quantization process (reducing the number of bits from 24 to 8) is compressed using shift coding, and the remainder is then compressed using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and
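The LZW stage mentioned above is a standard dictionary coder. The sketch below is an illustrative textbook implementation of LZW compression, not the authors' own code; the function name and the choice of a byte-oriented dictionary are assumptions for the example.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Classic LZW: emit the dictionary code of the longest known prefix,
    then extend the dictionary with that prefix plus the next byte."""
    # Seed the dictionary with all single-byte strings (codes 0..255).
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc  # keep growing the current match
        else:
            out.append(dictionary[w])       # emit code for longest match
            dictionary[wc] = next_code      # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

Repetitive inputs compress well because repeated substrings collapse to single codes, e.g. `lzw_compress(b"ABABABA")` yields the four codes `[65, 66, 256, 258]` for seven input bytes.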
Artificial intelligence (AI) is entering many fields of life nowadays. One of these fields is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identify individuals from palm print images using Orange software, in which a hybrid of AI methods, Deep Learning (DL) and traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep le
Objective: The study aims to identify factors that may contribute to variations in children's weight status. Methodology: A descriptive cross-sectional study was conducted at Primary Health Care Centers in the city of Al-Samawah to screen the weight status of children aged one to five years. The study ran from December 16th, 2018 to February 14th, 2019. A non-probability purposive sample of (20) primary health centers (10 main and 10 sub) was selected, covering 500 children who visited the primary health care centers during the study period. Data were collected using a questionnaire designed and developed for the purpose of the study. It consists of two main
Steganography is a technique of concealing secret data within other everyday files of the same or a different type. Hiding data has become essential to digital information security. This work aims to design a stego method that can effectively hide a message inside the images of a video file. A video steganography model is proposed in which a model is trained to hide a video (or images) within another video using convolutional neural networks (CNNs). Using a CNN in this approach achieves one of the two main goals of any steganographic method, increased security (difficulty of being observed and broken by steganalysis programs); this is achieved in this work because the weights and architecture are randomized. Thus,
ABSTRACT
Learning vocabulary is a challenging task for female English as a foreign language (EFL) students. Improving students' knowledge of vocabulary is therefore critical if they are to make progress in learning a new language. The current study aimed to explore the vocabulary learning strategies used by EFL students at Northern Border University (NBU), to identify the mechanisms these students apply to learn vocabulary, and to evaluate the approaches adopted by EFL female students at NBU to learn the language. The study adopted the descriptive-analytical method. Two research instruments were developed to collect data, namely a survey qu
... Show MoreTwo EM techniques, terrain conductivity and VLF-Radiohm resistivity (using two
different instruments of Geonics EM 34-3 and EMI6R respectively) have been applied to
evaluate their ability in delineation and measuring the depth of shallow subsurface cavities
near Haditha city.
Thirty one survey traverses were achieved to distinguish the subsurface cavities in the
investigated area. Both EM techniques are found to be successfiul tools in study area.
Phonetics is closely related to musicology; in this study I set out to explain the interlinkages and harmony between phonetics and musicology. Linguists preceded philosophers in attempting to link phonetics with musicology. The first serious attempt to link the two was made by Ibn Jinni (d. 392 AH), but the real attempt is found with al-Farabi in his book Al Musiqa Al Kabeer, where he defined music and linked it with tune and with the relation between melody and tone. The same point was made by the Ikhwan Al Safa, who followed the doctrine of al-Farabi: their attention was on music and its link with the phoneme, as they made music an independent science and created special mathematical rules for it. Melody in music can
This research proposes a new method for estimating the copula density function using wavelet analysis as a nonparametric method, in order to obtain more accurate results that are free of the boundary-effects problem from which nonparametric estimation methods suffer. The wavelet method handles boundary effects automatically, since it does not take into account whether the time series is stationary or non-stationary. To estimate the copula density function, simulation was used to generate the
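The abstract does not specify which wavelet family or estimator the authors use, so the following is only an illustrative sketch of wavelet density estimation in its simplest form: projecting onto the Haar scaling (father wavelet) basis at one resolution level, which reduces to a dyadic histogram. The function name and the uniform test data are assumptions for the example.

```python
import random

def haar_density_estimate(samples, level):
    """Nonparametric density estimate on [0, 1) via the Haar scaling
    basis at resolution `level`; equivalent to a histogram with
    2**level equal-width dyadic bins."""
    n_bins = 2 ** level
    n = len(samples)
    # Empirical scaling coefficients: count samples per dyadic interval.
    counts = [0] * n_bins
    for x in samples:
        k = min(int(x * n_bins), n_bins - 1)
        counts[k] += 1
    # Piecewise-constant density value on each bin (count / (n * width)).
    return [c * n_bins / n for c in counts]

random.seed(0)
data = [random.random() for _ in range(10_000)]
est = haar_density_estimate(data, level=3)  # 8 bins
total = sum(v / 8 for v in est)  # integrates to 1 by construction
```

Real wavelet estimators add detail (mother wavelet) coefficients and thresholding at finer levels; the Haar case is shown because its locality makes the absence of boundary leakage easy to see.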
DeepFake is a concern for celebrities and everyone else because it is simple to create. DeepFake images, especially high-quality ones, are difficult to detect by people, local descriptors, and current approaches. On the other hand, detecting manipulation in video is more tractable than in a single image, and many state-of-the-art systems address it; moreover, video manipulation detection depends entirely on detection through the individual frames. Many have worked on DeepFake detection in images, but their methods involved complex mathematical calculations in the preprocessing steps and many limitations, including that the face must be frontal, the eyes must be open, and the mouth should be open with the teeth visible, etc. Also, the accuracy of their counterfeit detectio