In data security, protecting sensitive information while it travels over public channels is a central challenge. Steganography, which conceals data inside carrier objects such as text, is one way to address it. Text, owing to its ubiquity and low bandwidth, is a particularly suitable carrier. Although the Arabic language is rich in linguistic features, only a few studies have explored Arabic text steganography. Arabic script, with its complex character shapes, diacritical marks, and ligatures, offers distinctive opportunities for hiding information effectively. In this work, we propose a new text steganography method based on Arabic language characteristics, with two levels of security: Arabic encoding and word shifting. First, a new Arabic encoding mapping table converts an English plaintext into Arabic characters; a word-shifting process then adds an authentication phase for the transmitted message and a second layer of security to the resulting ciphertext. In experiments, the proposed method took 0.15 ms for 1 K, 1.0033 ms for 3 K, 2.331 ms for 5 K, and 5.22 ms for 10 K file sizes.
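The two-level idea can be illustrated with a minimal sketch. The mapping table and the shift amount below are hypothetical stand-ins, not the authors' actual encoding table or shifting scheme:

```python
# Hypothetical English-to-Arabic substitution table (26 Arabic letters);
# the paper's actual mapping table is not reproduced here.
ENG_TO_AR = dict(zip("abcdefghijklmnopqrstuvwxyz", "ابتثجحخدذرزسشصضطظعغفقكلمنه"))
AR_TO_ENG = {v: k for k, v in ENG_TO_AR.items()}

def encode(plaintext, shift=3):
    # Level 1: map each English character to an Arabic character.
    mapped = ["".join(ENG_TO_AR.get(c, c) for c in w)
              for w in plaintext.lower().split()]
    # Level 2: word shifting -- rotate the word order by `shift`.
    return " ".join(mapped[shift:] + mapped[:shift])

def decode(ciphertext, shift=3):
    words = ciphertext.split()
    words = words[-shift:] + words[:-shift]  # undo the rotation
    return " ".join("".join(AR_TO_ENG.get(c, c) for c in w) for w in words)
```

A receiver who knows both the table and the shift can invert the two layers; a mismatch in either step (e.g. a wrong shift) signals tampering, which is the authentication role the word-shifting phase plays.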
This study examines the features of the "possible in itself", particularly in the view of the Ash'arites (Alashaira), and also touches on the opinions of other logicians. These features are set out concisely in the preface so that they are clear to the reader, and the work is organized into two studies and five subjects. I discuss the meaning of existence in the theme of the "possible in itself" and clarify its types according to the logicians and philosophers: that in which existence and nonexistence are equal, so that it can neither exist nor fail to exist without a separate cause, and there is no necessity that compels its existence, nor is its nonexistence impossible. I then discuss the preponderance of the "possible in itself" toward one side of the exi
A novel median filter based on the crow optimization algorithm (OMF) is suggested to reduce random salt-and-pepper noise and improve the quality of RGB-colored and grayscale images. The fundamental idea of the approach is that the crow optimization algorithm first detects noise pixels and then replaces them with an optimal median value chosen by maximizing a fitness function. Finally, the standard measures peak signal-to-noise ratio (PSNR), structural similarity, absolute square error, and mean square error are used to assess the performance of the suggested filters (original and improved median filter) at removing noise from images. The simulation is carried out in MATLAB R2019b, and the resul
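The detect-then-replace structure described above can be sketched as follows. This is a plain baseline: suspected salt (255) and pepper (0) pixels are replaced with the local median, while the crow-search step that the paper uses to optimize detection and replacement is omitted:

```python
def salt_pepper_median(img, size=3):
    """Replace suspected salt (255) / pepper (0) pixels with the median of
    their neighborhood. A simplified sketch of the paper's detect-and-replace
    idea, without the crow optimization stage."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] in (0, 255):  # suspected noise pixel
                win = [img[j][i]
                       for j in range(max(0, y - r), min(h, y + r + 1))
                       for i in range(max(0, x - r), min(w, x + r + 1))]
                win.sort()
                out[y][x] = win[len(win) // 2]  # local median
    return out
```

Processing only the detected pixels (rather than every pixel, as a standard median filter does) is what preserves uncorrupted detail and improves PSNR.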
Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. As dependence upon computers and computer networks grows, the need for user authentication increases. A user's claimed identity can be verified by one of several methods. One of the most popular is based on something the user knows, such as a password or Personal Identification Number (PIN). Biometrics is the science and technology of authentication by identifying an individual's physiological or behavioral attributes. Keystroke authentication is a behavioral access-control technique that identifies legitimate users by their typing behavior. The objective of this paper is to provide user
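Keystroke authentication typically compares a live sample of inter-key timings against an enrolled profile. The toy distance metric and threshold below are illustrative assumptions, not the abstract's actual classifier:

```python
def keystroke_score(enrolled, sample):
    """Mean absolute difference between enrolled and sample inter-key
    timings in milliseconds; lower means more similar. A hypothetical
    toy metric for illustration."""
    return sum(abs(a - b) for a, b in zip(enrolled, sample)) / len(sample)

def authenticate(enrolled, sample, threshold=30.0):
    # Accept the claimed identity only if typing rhythm is close enough
    # to the enrolled template.
    return keystroke_score(enrolled, sample) <= threshold
```

Real systems combine several timing features (key hold times, digraph latencies) and learn per-user thresholds, but the accept/reject structure is the same.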
Speech is the essential way for humans to interact with each other and with machines. However, it is often contaminated by various types of environmental noise. Speech enhancement algorithms (SEA) have therefore become a significant approach in speech processing for suppressing background noise and recovering the original speech signal. In this paper, a new, efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The clean signal is estimated by exploiting Laplacian models of the speech and noise coefficient distributions in an orthogonal transform domain (the discrete Krawtchouk-Tchebichef transform). The Discrete Kra
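The general shape of transform-domain suppression can be sketched with a classic Wiener gain as a stand-in; the paper's own MMSE estimator under Laplacian priors, and the Krawtchouk-Tchebichef transform itself, are not reproduced here:

```python
def wiener_gain(snr_prior):
    # Classic Wiener gain; used here only as a stand-in for the paper's
    # Laplacian-based MMSE estimator.
    return snr_prior / (1.0 + snr_prior)

def enhance_frame(noisy_mag, noise_mag):
    """Per-coefficient suppression in a transform domain. Any orthogonal
    transform's coefficient magnitudes can be passed in; the paper uses
    discrete Krawtchouk-Tchebichef coefficients."""
    out = []
    for y, n in zip(noisy_mag, noise_mag):
        # A priori SNR via the simple maximum-likelihood estimate.
        xi = max((y * y) / (n * n) - 1.0, 0.0)
        out.append(wiener_gain(xi) * y)
    return out
```

Coefficients dominated by noise get a gain near zero, while high-SNR coefficients pass almost unchanged, which is why such estimators suppress noise with little speech distortion.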
Iris research focuses on developing techniques for identifying and locating relevant biometric features, with accurate segmentation and efficient computation that also lend themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces their effectiveness in real-time systems. This paper introduces a novel parameterized technique for iris segmentation. The method proceeds in a number of steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the origin
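The first step, bit-plane decomposition, can be sketched directly; the subsequent plane selection and iris parameterization are specific to the paper and are not reproduced:

```python
def bit_planes(gray):
    """Decompose an 8-bit grayscale image (list of rows of ints 0-255)
    into 8 binary bit planes, index 0 = least significant. The higher
    planes carry the coarse structure that the described method uses
    to localize the iris."""
    return [[[(px >> b) & 1 for px in row] for row in gray]
            for b in range(8)]
```

Because each plane is binary, operating on a few significant planes instead of the full grayscale image is what makes the approach cheap enough for real-time use.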