An approach for hiding information has been proposed to secure information using the Slantlet transform and T-codes. Like the wavelet transform, the Slantlet transform provides better signal compression and better time localization than conventional transforms such as the discrete cosine transform (DCT). The proposed method provides efficient security, because the original secret image is encrypted before embedding in order to build a robust system that no attacker can defeat. Well-known fidelity measures such as PSNR and AR were used to measure the quality of the stego-image and of the extracted secret image. The results show that the stego-image is closely related to the cover image, with a Peak Signal-to-Noise Ratio (PSNR) of about 55 dB. The secret image is recovered completely (100%) if the stego-image is not attacked. These methods can provide good hiding capacity and image quality. Several types of attack, such as compression, added noise, and cropping, have been applied to the proposed methods in order to measure their robustness. The proposed algorithm has been implemented in the computer simulation program MATLAB version 7.9 under the Windows 7 operating system from Microsoft Corporation.
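As a hedged illustration of the fidelity measure cited above, the following minimal Python/NumPy sketch computes PSNR between a cover image and a stego-image; the array names and the 8-bit assumption are illustrative, not taken from the paper.

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-sized 8-bit images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A stego-image with PSNR near 55 dB differs from its cover by only a
# fraction of a grey level on average, so the embedding is visually invisible.
```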
The main aim of image compression is to reduce the image size so that it can be transferred and stored more easily; therefore, many methods have appeared to compress images, one of which is the Multilayer Perceptron (MLP). The MLP method is an artificial neural network based on the back-propagation algorithm for compressing the image. If this algorithm depends only on the number of neurons in the hidden layer, that alone will not be enough to reach the desired results, so we also have to take into consideration the criteria on which the compression process depends to get the best results. In our research we trained a group of TIFF images of size (256*256) and compressed them by using MLP for each
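As a hedged sketch of the general idea, and not the authors' exact network, the block below compresses 8x8 image blocks with a bottleneck multilayer perceptron trained by back-propagation (scikit-learn's MLPRegressor); the block size, hidden-layer width, and placeholder image are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def image_to_blocks(img: np.ndarray, b: int = 8) -> np.ndarray:
    """Split a (H, W) grey image into flattened b x b blocks scaled to [0, 1]."""
    h, w = img.shape
    blocks = (img[:h - h % b, :w - w % b]
              .reshape(h // b, b, w // b, b)
              .swapaxes(1, 2)
              .reshape(-1, b * b))
    return blocks / 255.0

# Bottleneck network: 64 inputs -> 16 hidden neurons -> 64 outputs, trained by
# back-propagation to reproduce its input; the hidden layer is the compressed code.
img = np.random.randint(0, 256, (256, 256)).astype(np.float64)  # placeholder image
X = image_to_blocks(img)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
mlp.fit(X, X)
reconstructed_blocks = mlp.predict(X)  # approximate reconstructions of the blocks
```

Widening or narrowing the hidden layer trades reconstruction quality against compression ratio, which is one way to read the abstract's point that the hidden-layer size alone is not a sufficient design criterion.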
The investigation of signature validation is crucial to the field of personal authentication. Biometrics-based systems have been developed to support some information security features. A person's signature, an essential biometric trait of a human being, can be used to verify their identity. In this study, a mechanism for automatically verifying signatures has been suggested. The offline properties of handwritten signatures are highlighted in this study, which aims to verify whether handwritten signatures are genuine or forged using computer-based machine learning techniques. The main goal of developing such systems is to verify people through the validity of their signatures. In this research, images of a group o
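A hedged sketch of one way such an offline verifier could be built: raw pixel features from preprocessed signature images feed a support vector machine that labels each signature genuine or forged. The image size, placeholder data, and SVM choice are illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Assume `images` is an (N, 64, 64) array of preprocessed signature images
# and `labels` is an (N,) array where 1 = genuine, 0 = forged (placeholders).
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, 200)

X = images.reshape(len(images), -1)  # naive pixel features
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3,
                                                    random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("verification accuracy:", clf.score(X_test, y_test))
```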
Keywords provide the reader with a summary of the contents of a document and play a significant role in information retrieval systems, especially in search engine optimization and bibliographic databases. Furthermore, keywords help to classify the document into its related topic. Keyword extraction has traditionally been manual, depending on the content of the document or article and the judgment of its author. Manual extraction of keywords is costly, consumes effort and time, and is prone to error. In this research, an automatic Arabic keyword extraction model based on deep learning algorithms is proposed. The model consists of three main steps: preprocessing, feature extraction, and classification to classify the document
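To make the three-step pipeline concrete, here is a hedged Python sketch: light Arabic normalisation as preprocessing, TF-IDF weights as features, and a small feed-forward network as a stand-in for the deep classifier. The placeholder corpus, candidate terms, and labels are invented for illustration and do not come from the paper.

```python
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

def preprocess(text: str) -> str:
    """Light normalisation: strip Arabic diacritics and tatweel, lowercase."""
    return re.sub(r"[\u064B-\u0652\u0640]", "", text).lower()

# Placeholder corpus, candidate terms, and keyword labels (1 = keyword).
docs = ["neural networks compress images efficiently",
        "network traffic classification with statistical models",
        "image steganography hides secret data"]
candidates = ["network", "image", "data"]
labels = np.array([1, 1, 0])

# Feature extraction: each candidate term is described by its TF-IDF profile
# across the documents in the corpus.
vec = TfidfVectorizer(preprocessor=preprocess, vocabulary=candidates)
X = vec.fit_transform(docs).T.toarray()  # one row per candidate term

# Classification: a small feed-forward network stands in for the deep model.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
clf.fit(X, labels)
```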
In this research, the Artificial Neural Network (ANN) technique was applied to study the filtration process in water treatment. Eight models have been developed and tested using data from a pilot filtration plant working under different process design criteria: influent turbidity, bed depth, grain size, filtration rate and running time (length of the filtration run), recording effluent turbidity and head losses. The ANN models were constructed for the prediction of different performance criteria in the filtration process: effluent turbidity, head losses and running time. The results indicate that it is quite possible to use artificial neural networks in predicting effluent turbidity, head losses and running time in the filtration process, wi
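As a hedged illustration of how such a predictor could look in code, the sketch below fits a small multilayer perceptron that maps the five process variables listed above to effluent turbidity; the placeholder data, network size, and scaling step are assumptions, not the paper's eight models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: columns stand for influent turbidity, bed depth, grain
# size, filtration rate and running time; target is effluent turbidity.
rng = np.random.default_rng(1)
X = rng.random((100, 5))
y = rng.random(100)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000,
                                   random_state=0))
model.fit(X, y)
predicted_turbidity = model.predict(X[:5])  # predictions for the first 5 runs
```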
Classification of network traffic is an important topic for network management, traffic routing, safe traffic discrimination, and better service delivery. Traffic examination is the entire process of examining traffic data, from intercepting the traffic to discovering patterns, relationships, misconfigurations, and anomalies in a network. Within this field, traffic classification is a sub-domain whose purpose is to classify network traffic into predefined classes such as usual or abnormal traffic and application type. Most Internet applications encrypt data in transit, and classifying encrypted traffic is not possible with traditional methods. Statistical and intelligent methods can find and model traff
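A hedged sketch of the statistical approach mentioned above: a random-forest classifier is trained on per-flow statistics, features that remain observable even when payloads are encrypted. The feature set, class labels, and placeholder data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder flow records: statistical features such as mean packet size,
# packet-size variance, mean inter-arrival time, flow duration, byte count.
rng = np.random.default_rng(2)
flows = rng.random((500, 5))
classes = rng.integers(0, 3, 500)  # e.g. 0 = web, 1 = video, 2 = anomalous

X_tr, X_te, y_tr, y_te = train_test_split(flows, classes, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))
```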
This paper explores VANET topics: architecture, characteristics, security, routing protocols, applications, simulators, and 5G integration. We update, edit, and summarize some of the published data as we analyze each notion. For ease of comprehension and clarity, we present part of the data as tables and figures. This survey also raises potential future research topics, such as how to integrate VANETs with a 5G cellular network and how to use trust mechanisms to enhance security, scalability, effectiveness, and other VANET features and services. In short, this review may aid academics and developers in choosing, from a single document, the key VANET characteristics for their objectives.
Videogames are currently one of the most widespread means of digital communication and entertainment; their releases attract considerable interest, with a growing audience and growing revenues each year. Videogames are examined by a variety of disciplines and fields. Nevertheless, scholarly attention to the discourse of videogames from a linguistic perspective is relatively scarce, especially from a pragma-stylistic standpoint. This book addresses this issue by providing a pragma-stylistic analysis of the digital discourse of two well-known action videogames (First Person Shooter games). It explores the role of the digital discourse of action videogames in maintaining real-like interactivity between the game and the
... Show MoreFR Almoswai, BN Rashid, PEOPLE: International Journal of Social Sciences, 2017 - Cited by 22
Background subtraction is a leading technique adopted for detecting moving objects in video surveillance systems. Various background subtraction models have been applied to tackle different challenges in many surveillance environments. In this paper, we propose a model of pixel-based color histograms and Fuzzy C-means (FCM) to obtain the background model, using cosine similarity (CS) to measure the closeness between the current pixel and the background model and eventually determine whether the pixel belongs to the background or the foreground according to a tuned threshold. The performance of this model is benchmarked on the CDnet2014 dynamic-scenes dataset using statistical metrics. The results show a better performance against the state-of-the-art
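The decision rule described above can be illustrated with a short hedged sketch: the cosine similarity between the current pixel's colour histogram and the background-model histogram is compared with a tuned threshold. The histogram values and the 0.9 threshold are assumptions for demonstration only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two histograms (1 = identical shape)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def classify_pixel(current_hist: np.ndarray,
                   background_hist: np.ndarray,
                   threshold: float = 0.9) -> str:
    """Label a pixel by comparing its colour histogram with the background model."""
    sim = cosine_similarity(current_hist, background_hist)
    return "background" if sim >= threshold else "foreground"

# A pixel whose recent colour histogram drifts away from the background model
# falls below the tuned threshold and is marked as foreground.
bg = np.array([0.7, 0.2, 0.1])
cur = np.array([0.1, 0.3, 0.6])
print(classify_pixel(cur, bg))  # -> "foreground"
```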
Encryption of data means translating data into another shape or symbol so that only people with access to the secret key or a password can read it. Data which are encrypted are generally referred to as cipher text, while data which are unencrypted are known as plain text. Entropy can be used as a measure of the number of bits needed to code the data of an image. As the pixel values within an image are spread over more gray levels, the entropy increases. The aim of this research is to compare the CAST-128 method with a proposed adaptive key against the RSA encryption method for video frames, to determine the more accurate method with the highest entropy. The first method is achieved by applying the "CAST-128" and
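As a hedged illustration of the entropy measure used to compare the two ciphers, the sketch below computes the Shannon entropy of an image's grey-level histogram; the 8-bit range and the random placeholder frame are assumptions.

```python
import numpy as np

def image_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (bits/pixel) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

# A well-encrypted frame spreads pixel values across all grey levels,
# so its entropy approaches the 8-bit maximum of 8 bits/pixel.
rng = np.random.default_rng(3)
cipher_like = rng.integers(0, 256, (256, 256))
print(image_entropy(cipher_like))      # close to 8.0
```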