In the field of data security, preserving sensitive information while it travels over public channels is a central challenge. Steganography, which conceals data inside carrier objects such as text, offers one way to address it. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language's linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script, has attracted notable attention as a promising domain for steganography: its complex character shapes, diacritical marks, and ligatures can be harnessed to protect information effectively. In this work, we propose a new text steganography method based on Arabic language characteristics, with two levels of security: Arabic encoding and word shifting. First, we build a new Arabic encoding mapping table to convert English plaintext to Arabic characters; we then apply a word-shifting process that adds an authentication phase for the transmitted message and a further level of security to the resulting ciphertext. The proposed method achieved processing times of 0.15 ms, 1.0033 ms, 2.331 ms, and 5.22 ms for file sizes of 1 KB, 3 KB, 5 KB, and 10 KB, respectively.
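The two-level pipeline the abstract describes can be illustrated with a toy sketch: a hypothetical letter-to-Arabic-character mapping table as the first level, and a cyclic word shift as the second. The table, the shift rule, and all function names below are illustrative assumptions, not the paper's actual scheme.

```python
# Toy two-level text-steganography sketch: (1) map English letters to Arabic
# characters via an assumed encoding table, (2) rotate the word order as a
# simple second security layer. Purely illustrative, not the published method.

ARABIC = "ابتثجحخدذرزسشصضطظعغفقكلمنهوية"  # distinct Arabic letters used as code symbols

def encode(plaintext: str) -> str:
    """Level 1: map each lowercase letter a-z to an Arabic character."""
    table = {chr(ord('a') + i): ARABIC[i] for i in range(26)}
    return ''.join(table.get(c, c) for c in plaintext.lower())

def word_shift(text: str, k: int = 1) -> str:
    """Level 2: cyclically rotate the word order by k positions."""
    words = text.split(' ')
    return ' '.join(words[k:] + words[:k])

def decode(cipher: str, k: int = 1) -> str:
    """Undo the shift, then invert the mapping table."""
    table = {ARABIC[i]: chr(ord('a') + i) for i in range(26)}
    words = cipher.split(' ')
    unshifted = ' '.join(words[-k:] + words[:-k])
    return ''.join(table.get(c, c) for c in unshifted)
```

A receiver who knows both the table and the shift offset k can invert the two levels in reverse order; an observer sees only Arabic-looking text in a permuted word order.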
In this work, a pollution-sensitive Photonic Crystal Fiber (PCF) sensor based on Surface Plasmon Resonance (SPR) is designed and implemented for sensing the refractive indices and concentrations of polluted water. The sensor is constructed by splicing a short length of solid-core PCF (ESM-12) to conventional multimode fiber (MMF) on one side and depositing a 50 nm gold nanofilm on the end of the PCF. The PCF-SPR experiment was carried out with various samples of polluted water (distilled water, draining water, dirty pond water, chemical water, salty water, and oiled water). The resonant wavelength peaks are seen to move to longer wavelengths (red shift)
Facial emotion recognition has many real-world applications in daily life, such as human-robot interaction, eLearning, healthcare, and customer services. The task is not easy because of the difficulty of determining an effective feature set that can accurately recognize the emotion conveyed by a facial expression. This paper exploits graph mining techniques to solve the facial emotion recognition problem. After determining the positions of facial landmarks in the face region, twelve different graphs are constructed from four facial components to serve as the source for a sub-graph mining stage using the gSpan algorithm. In each group, the discriminative set of sub-graphs is selected and fed to a Deep Belief Network (DBN)
Any software application can be divided into four distinct interconnected domains: the problem domain, usage domain, development domain, and system domain. A methodology for assistive-technology software development is presented here that provides a framework for requirements-elicitation studies together with their subsequent mapping, implementing use-case-driven object-oriented analysis for component-based software architectures. Early feedback on the effectiveness of user-interface components is obtained through process usability evaluation. A model is suggested that consists of three environments (problem, conceptual, and representational worlds) and aims to emphasize the relationship between the objects
Earthquakes occur on existing faults and create new ones, and they occur on normal, reverse, and strike-slip faults. The aim of this work is to suggest a new unified classification of shallow-depth earthquakes based on faulting style, and to characterize each class. The characterization criteria include the maximum magnitude, focal depth, b-value, return period, and the relations between magnitude, focal depth, and dip of the fault plane. The Global Centroid Moment Tensor (GCMT) catalog, covering the period from Jan. 1976 to Dec. 2017, is the data source. We selected only shallow (depth less than 70 km) pure normal, strike-slip, and reverse earthquakes (magnitude ≥ 5) and excluded oblique earthquakes.
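The separation into pure normal, reverse, and strike-slip classes (excluding oblique events) is conventionally done from the rake angle of the focal mechanism. A minimal sketch, assuming the standard Aki-Richards rake convention and illustrative angular thresholds (the paper's exact cutoffs are not given in the abstract):

```python
def faulting_style(rake_deg: float) -> str:
    """Classify a faulting style from the rake angle on the fault plane.
    Rake near +90 -> reverse, near -90 -> normal, near 0/180 -> strike-slip.
    Thresholds are illustrative, not the paper's published criteria."""
    r = ((rake_deg + 180) % 360) - 180  # normalize to [-180, 180)
    if 60 <= r <= 120:
        return "reverse"
    if -120 <= r <= -60:
        return "normal"
    if abs(r) <= 30 or abs(r) >= 150:
        return "strike-slip"
    return "oblique (excluded)"
```

Events falling between the pure windows are labeled oblique and dropped, mirroring the selection applied to the GCMT catalog.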
Carbon nanotubes are an ideal material for infrared applications due to their excellent electronic and photoelectronic properties, suitable band gap, and mechanical and chemical stability. Functionalised multi-wall carbon nanotubes (f-MWCNTs) were incorporated into a polythiophene (PTh) matrix by electropolymerization. f-MWCNTs/PTh nanocomposite films were prepared with 5 wt% and 10 wt% loading ratios of f-MWCNTs in the polymer matrix. The films were deposited on porous silicon nanosurfaces to fabricate photoconductive detectors operating in the near-IR region. The detectors were illuminated by a semiconductor laser diode with a peak wavelength of 808 nm and a radiation power of 300 mW. FTIR spectra assignments verify that
Tracking moving objects is one of the most important applications in computer vision. The goal of object tracking is to segment a region of interest from a video scene and keep track of its motion and position. Tracking moving objects is used in many applications such as video surveillance, robot vision, traffic monitoring, and animation. In this paper, a four-wheeled robotic system has been designed and implemented using an Arduino Uno microcontroller, and algorithms have been developed to detect and track objects in real time. The main proposed algorithm is based on a kernel-based color algorithm and some geometric properties to track colored objects. The robotic system comprises two principal parts
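The core step of a color-based tracker of this kind is to mask the frame on a target color range and locate the object from the matching pixels. A stdlib-only sketch of that masking-and-centroid step (a full kernel-based tracker would additionally weight pixels by a spatial kernel and iterate, which is omitted here; the frame representation is an assumption for illustration):

```python
# Sketch of color-range object localization: threshold pixels on an RGB
# bound and take the centroid of matches as the object position. This is the
# masking half of a kernel-based color tracker, without the kernel weighting.

def track_color(frame, lo, hi):
    """frame: 2D list of (r, g, b) tuples; lo/hi: inclusive per-channel bounds.
    Returns the (row, col) centroid of in-range pixels, or None if no match."""
    rows = cols = n = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if all(l <= v <= h for v, l, h in zip((r, g, b), lo, hi)):
                rows += y
                cols += x
                n += 1
    return (rows / n, cols / n) if n else None
```

The centroid, recomputed every frame, gives the position error that the microcontroller can turn into wheel commands.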
In recent years, predicting heart disease has become one of the most demanding tasks in medicine; in modern times, one person dies from heart disease every minute. Within healthcare, data science is critical for analyzing large amounts of data. Because predicting heart disease is such a difficult task, it is necessary to automate the process in order to avoid the dangers connected with it and to assist health professionals in diagnosing heart disease accurately and rapidly. In this article, an efficient machine-learning-based diagnosis system has been developed for the diagnosis of heart disease. The system is designed using machine learning classifiers such as Support Vector Machine (SVM), Naïve Bayes (NB), and K-Nearest Neighbor (KNN)
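Of the three classifiers named, k-nearest neighbors is the simplest to show end to end. A minimal from-scratch sketch on toy two-feature data (the feature values and labels below are invented for illustration and are unrelated to the paper's dataset):

```python
# Minimal k-nearest-neighbors classifier: predict the majority label among
# the k training points closest (Euclidean distance) to the query point.
import math
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(X_train)),
                     key=lambda i: math.dist(X_train[i], x))[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical two-feature samples: label 0 = healthy, 1 = at risk.
X = [(1, 1), (1, 2), (8, 8), (9, 8)]
y = [0, 0, 1, 1]
```

In practice the features would be standardized first, since KNN is sensitive to feature scale, and k would be chosen by cross-validation alongside the SVM and NB models.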
This paper describes a new finishing process in which newly made magnetic abrasives are used to effectively finish brass plate, which is very difficult to polish by conventional machining processes. The Taguchi experimental design method was adopted to evaluate the effect of the process parameters on the improvement of surface roughness and hardness by magnetic abrasive polishing. The process parameters are the applied current to the inductor, the working gap between the workpiece and the inductor, the rotational speed, and the volume of powder. Analysis of variance (ANOVA) was performed using statistical software to identify the optimal conditions for better surface roughness and hardness. Regression models based on statistical
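Taguchi analysis of this kind typically scores each parameter combination by a signal-to-noise (S/N) ratio before the ANOVA step. The two standard formulas that match the responses here are sketched below; whether the paper uses exactly these S/N forms is an assumption, as the abstract does not say.

```python
# Standard Taguchi signal-to-noise ratios for the two response types in the
# study: hardness (larger is better) and surface roughness Ra (smaller is
# better). Higher S/N indicates a more favorable, more robust setting.
import math

def sn_larger_is_better(values):
    """SN = -10 * log10(mean(1 / y^2)); use for responses to maximize."""
    return -10 * math.log10(sum(1 / (y * y) for y in values) / len(values))

def sn_smaller_is_better(values):
    """SN = -10 * log10(mean(y^2)); use for responses to minimize."""
    return -10 * math.log10(sum(y * y for y in values) / len(values))
```

The level of each factor (current, gap, speed, powder volume) with the highest mean S/N is then taken as optimal, and ANOVA apportions the variance contribution of each factor.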
This paper aims to solve the problem of choosing the most appropriate project from several service projects for the Iraqi Martyrs Foundation, or arranging them by preference within the targeted criteria. This is done using a Multi-Criteria Decision-Making (MCDM) method, namely Multi-Objective Optimization by Ratio Analysis (MOORA), to measure the composite performance score each alternative obtains and the maximum benefit accruing to the beneficiary, according to criteria and weights calculated by the Analytic Hierarchy Process (AHP). The most important finding, relying on expert opinion, is to choose the second project as the best alternative and to make an arrangement
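The MOORA composite score is computed by vector-normalizing each criterion column, weighting it (here, with AHP-derived weights), and summing benefit criteria while subtracting cost criteria. A minimal sketch with invented numbers, not the paper's actual decision matrix:

```python
# MOORA ratio-analysis sketch: normalize each criterion column by its vector
# norm, weight it, then add benefit criteria and subtract cost criteria to
# get one composite score per alternative. Higher score = preferred.
import math

def moora(matrix, weights, benefit):
    """matrix: alternatives x criteria; weights: e.g. from AHP;
    benefit[j] is True if criterion j should be maximized."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    scores = []
    for row in matrix:
        s = 0.0
        for j, v in enumerate(row):
            term = weights[j] * v / norms[j]
            s += term if benefit[j] else -term
        scores.append(s)
    return scores
```

Ranking the alternatives by descending score gives the preference order; AHP supplies the `weights` vector from pairwise criterion comparisons.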
Currently, the prominence of the automatic multi-document summarization task stems from the rapid increase of information on the Internet. Automatic document summarization technology is progressing and may offer a solution to the problem of information overload.
An automatic text summarization system faces the challenge of producing a high-quality summary. In this study, the design of a generic text summarization model based on sentence extraction is redirected into a more semantic measure reflecting individually two significant objectives, content coverage and diversity, when generating summaries from multiple documents, expressed as an explicit optimization model. The two proposed models have then been coupled
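The coverage-versus-diversity trade-off can be sketched as a greedy extractive selector: each candidate sentence is scored by its word overlap with the whole document set (coverage) minus its overlap with the summary built so far (redundancy, the inverse of diversity). The bag-of-words scoring and the balance parameter `lam` are simplifying assumptions, not the study's actual optimization model.

```python
# Greedy sketch of coverage-vs-diversity extractive summarization:
# pick sentences that cover the document's vocabulary while penalizing
# overlap with sentences already selected.

def summarize(sentences, budget=2, lam=0.7):
    doc = set(w for s in sentences for w in s.lower().split())
    chosen, covered = [], set()
    while len(chosen) < budget:
        def gain(s):
            words = set(s.lower().split())
            cov = len(words & doc) / len(doc)          # content coverage
            red = len(words & covered) / (len(words) or 1)  # redundancy
            return lam * cov - (1 - lam) * red
        best = max((s for s in sentences if s not in chosen), key=gain)
        chosen.append(best)
        covered |= set(best.lower().split())
    return chosen
```

With `lam` near 1 the selector maximizes coverage alone; lowering it forces the summary toward diverse, non-overlapping sentences, which is the coupling of the two objectives that the study formalizes as an optimization model.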