Face blurring is a complex process and an advanced topic in computer vision. It generally involves two main steps: the first detects the faces that appear in the frames, while the second tracks the detected faces based on information extracted during the detection step. In the proposed method, an image is captured by the camera in real time, and the Viola-Jones algorithm is used to detect multiple faces in it; to reduce the time consumed in handling the entire captured image, the background is removed and only the motion areas are processed. After detection, a color-space algorithm tracks the detected faces based on face color, and a template-matching algorithm checks the differences between faces to reduce processing time. Finally, the detected faces, as well as the faces tracked by their color, are obscured with a Gaussian filter. The achieved accuracies for a single face and for a dynamic background are about 82.8% and 76.3%, respectively.
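A minimal sketch of the detect-and-blur stages just described, using OpenCV's stock Viola-Jones cascade; the cascade file, kernel size, and webcam capture are illustrative assumptions, not the authors' exact pipeline:

```python
import cv2

# Viola-Jones detector shipped with OpenCV (stand-in for the paper's detector)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y+h, x:x+w]
        # 51x51 Gaussian kernel is an illustrative choice, not from the paper
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

cap = cv2.VideoCapture(0)   # real-time capture from the camera
ok, frame = cap.read()
if ok:
    cv2.imwrite("blurred.png", blur_faces(frame))
cap.release()
```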
The huge number of documents on the internet has created a rapidly growing need for text classification (TC), which is used to organize these text documents. In this paper, a new model based on the Extreme Learning Machine (ELM) is presented. The proposed model consists of several phases: preprocessing, feature extraction, Multiple Linear Regression (MLR), and ELM. The basic idea of the model is to compute feature weights using MLR; these weights, together with the extracted features, are fed as input to the ELM, producing a Weighted Extreme Learning Machine (WELM). The results show that the proposed WELM is highly competitive compared with the plain ELM.
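A minimal NumPy sketch of the weighted-ELM idea just described; the sigmoid activation, hidden-layer size, and the use of MLR coefficient magnitudes as feature weights are illustrative assumptions:

```python
import numpy as np

def mlr_feature_weights(X, y):
    # Multiple Linear Regression by least squares; the coefficient
    # magnitudes serve as per-feature weights (an assumed derivation)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(coef)

def train_welm(X, y, n_hidden=200, seed=0):
    w = mlr_feature_weights(X, y)                    # MLR phase
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-((X * w) @ W + b)))     # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y                     # output weights via pseudoinverse
    return w, W, b, beta

def predict_welm(X, w, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-((X * w) @ W + b)))
    return H @ beta                                  # class scores for the documents
```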
In this paper, a fingerprint-based biometric authentication system is proposed that incorporates the personal identity information of name and birthday. The generation of a National Identification Number (NIDN) is proposed by merging fingerprint features with this personal identity information to produce a Quick Response (QR) code image used in the access system. Two approaches are adopted: traditional authentication, and strong identification using the QR code and NIDN information. The system shows an accuracy of 96.153% with a threshold value of 50; the accuracy reaches 100% when the threshold value goes below 50.
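An illustrative sketch of the NIDN-to-QR step using the `qrcode` Python package; the hash-based merge of fingerprint features with the identity fields is a hypothetical encoding, not the paper's exact scheme:

```python
import hashlib
import qrcode

def make_nidn(fingerprint_features: bytes, name: str, birthday: str) -> str:
    # Hypothetical merge: hash the fingerprint features together with
    # the personal identity information into a fixed-length NIDN
    digest = hashlib.sha256(fingerprint_features + name.encode() + birthday.encode())
    return digest.hexdigest()[:16].upper()

features = b"extracted-minutiae-bytes"       # placeholder for real features
nidn = make_nidn(features, "Jane Doe", "1990-01-01")
qrcode.make(f"NIDN:{nidn}|NAME:Jane Doe|DOB:1990-01-01").save("nidn_qr.png")
```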
Arabic text categorization for pattern recognition is challenging. We propose, for the first time, a novel holistic clustering-based method for classifying Arabic writers. The categorization is accomplished stage-wise. First, the document images are segmented into lines, words, and characters. Second, structural and statistical features are extracted from the segmented portions. Third, the F-measure is used to evaluate the performance of the extracted features and their combinations under different linkage methods, distance measures, and numbers of groups. Finally, experiments are conducted on the standard KHATT dataset of Arabic handwritten text, comprising varying samples from 1000 writers. The results in the generation…
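A minimal sketch of the clustering-and-evaluation stage under assumed feature vectors (feature extraction from the KHATT images is not shown); SciPy's hierarchical clustering sweeps the linkage methods, and a pairwise F-measure avoids having to match cluster IDs to writer IDs:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_writers(features, n_groups, method="average", metric="euclidean"):
    # features: (n_samples, n_features) structural/statistical vectors (assumed)
    Z = linkage(pdist(features, metric=metric), method=method)
    return fcluster(Z, t=n_groups, criterion="maxclust")

def pairwise_f_measure(labels_true, labels_pred):
    # A pair of samples counts as positive when both share a cluster
    tp = fp = fn = 0
    n = len(labels_true)
    for i in range(n):
        for j in range(i + 1, n):
            same_true = labels_true[i] == labels_true[j]
            same_pred = labels_pred[i] == labels_pred[j]
            tp += same_true and same_pred
            fp += (not same_true) and same_pred
            fn += same_true and (not same_pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

X = np.random.rand(40, 16)                 # stand-in feature matrix
writers = np.repeat(np.arange(4), 10)      # stand-in ground-truth writer labels
for method in ("single", "complete", "average", "ward"):
    pred = cluster_writers(X, n_groups=4, method=method)
    print(method, round(pairwise_f_measure(writers, pred), 3))
```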
The multimedia revolution has been a driving force behind fast and secure data-transmission techniques, and protecting image information from unauthorized access is imperative. Encryption techniques are used in data transfer, and since each kind of data has its own characteristics, different methods should be used to protect distributed images. This paper presents an image-encryption improvement based on a proposed approach for generating efficient, intelligent session (mask) keys: it combines the robust algebraic features of elliptic-curve cryptography (ECC) with the construction phase of the Greedy Randomized Adaptive Search Procedure (GRASP) to produce durable symmetric session mask keys consisting of ECC points. Symmetric behavior for ECC…
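A toy sketch of how ECC points can serve as symmetric session mask-key material: the small curve below is a textbook example, not the paper's parameters, and the GRASP construction phase is reduced to repeated greedy-random picks from a restricted candidate list:

```python
import random

# Toy curve y^2 = x^3 + 2x + 2 over GF(17) with generator G; group order 19
P, A = 17, 2
G = (5, 1)

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                       # point at infinity
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    out = None
    while k:                                              # double-and-add
        if k & 1:
            out = ec_add(out, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return out

def grasp_like_session_key(length, rcl_size=3, seed=None):
    # GRASP-style construction phase: build the key point by point, each
    # time choosing randomly from a restricted candidate list (RCL)
    rng = random.Random(seed)
    key = []
    while len(key) < length:
        rcl = [ec_mul(rng.randrange(1, 19), G) for _ in range(rcl_size)]
        key.append(rng.choice(rcl))
    return key

print(grasp_like_session_key(8, seed=42))                 # eight ECC mask points
```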
Finding similarities between texts is important in many areas, such as information retrieval, automated essay scoring, and short-answer grading. Evaluating short answers is not an easy task because of the variability of natural language. Methods for calculating the similarity between texts depend on semantic or grammatical aspects. This paper discusses a method for evaluating short answers that uses semantic networks to represent the model (correct) answer and the students' answers; a semantic network of nodes and relationships represents the text of each answer. Moreover, grammatical aspects are captured by measuring the part-of-speech similarity between the answers. In addition, finding hierarchical relationships between nodes in the networks…
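A minimal sketch of the grammatical (part-of-speech) comparison mentioned above, using NLTK's tagger; the overlap score is an illustrative choice, and the semantic-network construction is not reproduced here:

```python
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_profile(text):
    # Frequency distribution of POS tags in the answer text
    return nltk.FreqDist(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text)))

def pos_similarity(model_answer, student_answer):
    # Overlap coefficient between the two POS profiles (assumed measure)
    a, b = pos_profile(model_answer), pos_profile(student_answer)
    shared = sum(min(a[t], b[t]) for t in set(a) | set(b))
    return shared / max(min(sum(a.values()), sum(b.values())), 1)

print(pos_similarity("Water boils at 100 degrees Celsius.",
                     "The boiling point of water is 100 degrees."))
```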
Digital forensics is the branch of forensic science that covers crime involving computers and other digital devices. Academic studies have been interested in digital forensics for a while; researchers aim to establish a discipline built on scientific structures and to define models that reflect their observations. This paper suggests a model that improves the whole investigation process and yields accurate and complete evidence, and it adopts securing the digital evidence with cryptographic algorithms so that reliable evidence can be presented in a court of law. The paper also presents the main and basic concepts of the frameworks and models used in digital forensics investigation.
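As a small illustration of the evidence-securing step, hashing is one standard cryptographic safeguard; the paper does not name a specific algorithm, so SHA-256 and the file name are assumptions:

```python
import hashlib

def evidence_digest(path: str, chunk_size: int = 1 << 20) -> str:
    # Hash the evidence file in chunks so large disk images need not fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest at acquisition time; recomputing and comparing it later
# demonstrates that the evidence was not altered.
print(evidence_digest("disk_image.dd"))      # hypothetical evidence file
```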
The corrosion of metals is of great economic importance: estimates show that a quarter of the iron and steel produced is destroyed in this way. Rubber lining has been used for severe corrosion protection because NR and certain synthetic rubbers have a basic resistance to very corrosive chemicals, particularly acids. The present work includes producing ebonite from both natural and synthetic rubbers; the following materials were therefore chosen to produce ebonite rubber: (a) natural rubber (NR), (b) styrene-butadiene rubber (SBR), (c) nitrile rubber (NBR), and (d) neoprene rubber (CR) [WRT]. The best ebonite vulcanizates are obtained in the presence of 30 pphr sulfur, with carbon black as the reinforcing filler. The relation between…
Merging images is one of the most important technologies in remote-sensing applications and geographic information systems. In this study, camera-captured images are fused after being resized with different interpolation methods (nearest-neighbor, bilinear, and bicubic). Statistical techniques are applied as efficient merging techniques in the image-integration process, employing two models, Local Mean Matching (LMM) and Regression Variable Substitution (RVS), together with a spatial-frequency technique, the high-pass filter additive (HPFA) method. Statistical measures are then used to check the quality of the merged images, carried out by calculating the correlation a…
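A minimal sketch of the HPFA merging step under assumed inputs, a high-resolution panchromatic band and a single lower-resolution multispectral band; the bicubic resize mirrors one of the interpolation methods above, and the Gaussian kernel size is an illustrative choice:

```python
import cv2
import numpy as np

def hpfa_fuse(pan, ms_band, ksize=(5, 5)):
    # High Pass Filter Additive: inject the pan band's spatial detail
    # into the resized multispectral band
    pan = pan.astype(np.float32)
    ms = cv2.resize(ms_band, (pan.shape[1], pan.shape[0]),
                    interpolation=cv2.INTER_CUBIC).astype(np.float32)
    high_pass = pan - cv2.GaussianBlur(pan, ksize, 0)
    return np.clip(ms + high_pass, 0, 255).astype(np.uint8)

def correlation(a, b):
    # Correlation coefficient as a quality check on the merged image
    return np.corrcoef(a.ravel().astype(np.float32),
                       b.ravel().astype(np.float32))[0, 1]
```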
The transition of customers from one telecom operator to another has a direct impact on a company's growth and revenue, and traditional classification algorithms fail to predict churn effectively. This research introduces a deep-learning model for predicting which customers plan to leave for another operator; the model works on a high-dimensional, large-scale data set. The model's performance in predicting churn was measured against other classification algorithms, such as Gaussian Naive Bayes, Random Forest, and Decision Tree. The evaluation was based on accuracy, precision, recall, F-measure, Area Under the Curve (AUC), and the Receiver Operating Characteristic (ROC) curve. The proposed deep-learning model performs better than the othe…
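A minimal Keras sketch of the kind of deep model and evaluation described; the layer sizes, 20-feature input, and synthetic placeholder data are assumptions, not the paper's architecture or data set:

```python
import numpy as np
from tensorflow import keras
from sklearn.metrics import classification_report, roc_auc_score

n_features = 20                                  # assumed input dimensionality
model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"), # churn probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic placeholders standing in for the preprocessed churn data
X_train, y_train = np.random.rand(200, n_features), np.random.randint(0, 2, 200)
X_test, y_test = np.random.rand(50, n_features), np.random.randint(0, 2, 50)

model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=0)
proba = model.predict(X_test).ravel()
print(classification_report(y_test, (proba > 0.5).astype(int)))  # precision/recall/F
print("AUC:", roc_auc_score(y_test, proba))
```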