Predicting the network traffic of web pages is an area of increasing focus in recent years. Modeling traffic helps in finding strategies for distributing network loads, identifying user behavior and malicious traffic, and predicting future trends. Many statistical and intelligent methods have been studied for predicting web traffic from network-traffic time series. In this paper, the use of machine learning algorithms to model Wikipedia traffic using Google's time-series dataset is studied. Two time-series datasets were used for data generalization, building a set of machine learning models (XGBoost, Logistic Regression, Linear Regression, and Random Forest), and comparing the performance of the models using the symmetric mean absolute percentage error (SMAPE) and the mean absolute percentage error (MAPE). The results showed that the network-traffic time series can be modeled, and that the linear regression model performs best compared with the other models on both series.
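The two evaluation metrics named above have standard definitions; a minimal sketch of both (assuming the usual percentage-scaled forms, not code from the paper):

```python
from typing import Sequence

def mape(actual: Sequence[float], forecast: Sequence[float]) -> float:
    """Mean absolute percentage error, in percent."""
    n = len(actual)
    return 100.0 / n * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast))

def smape(actual: Sequence[float], forecast: Sequence[float]) -> float:
    """Symmetric MAPE: each term is bounded, so zero-heavy traffic series
    (common in page-view data) penalize the score less explosively."""
    n = len(actual)
    return 100.0 / n * sum(2.0 * abs(f - a) / (abs(a) + abs(f))
                           for a, f in zip(actual, forecast))
```

SMAPE is the customary choice for page-view forecasting precisely because MAPE is undefined when an actual value is zero.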
In this work we present a technique to extract heart contours from noisy echocardiographic images. Our technique is based on improving the image before applying contour detection, in order to reduce the heavy noise and obtain better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to counter unclear edges and enhance the low contrast of echocardiographic images. After applying these techniques, traditional edge-detection methods yield legible detection of the heart boundaries and valve movement.
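The filter-then-detect idea can be sketched with two of the simplest representatives of each stage, a 3x3 median filter (good against speckle-like impulse noise) followed by a gradient-threshold edge map; this is an illustration of the pipeline's shape, not the authors' exact operator choices:

```python
def median3x3(img):
    """3x3 median filter with border replication; img is a list of rows of ints."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            window.sort()
            out[y][x] = window[4]          # middle of 9 values
    return out

def edge_map(img, thresh=128):
    """Binary edge map from central-difference gradient magnitude (|Gx| + |Gy|)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
            gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
            out[y][x] = 1 if abs(gx) + abs(gy) > thresh else 0
    return out
```

Running the filter first prevents isolated noise spikes from registering as spurious edges in the second stage.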
Data-hiding strategies have recently gained popularity in different fields; digital watermarking technology was developed for hiding copyright information in an image, either visibly or invisibly. Today, 3D model technology has the potential to transform the field, because it allows the production of sophisticated structures and shapes that were previously impossible to achieve. In this paper, a new watermarking method for 3D models is presented. The proposed method is based on the geometric and topological properties of the 3D model surface to increase security. The geometric properties are based on computing the mean curvature of the surface, and the topological properties on the number of edges around each vertex, the vertices
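The topological property mentioned, the number of edges around each vertex, is the vertex degree of the mesh graph. A minimal sketch of computing it from an edge list (illustrative only; the paper's actual embedding procedure is not reproduced here):

```python
from collections import defaultdict

def vertex_degrees(edges):
    """Degree of each mesh vertex = number of incident edges.

    edges: iterable of (u, v) vertex-index pairs, each undirected edge once.
    """
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return dict(deg)
```

A watermarking scheme of this kind would typically use the degree (together with the local mean curvature) to select stable vertices as embedding sites, since both quantities survive rigid transformations of the model.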
Image enhancement techniques represent one of the most significant topics in the field of digital image processing. The basic problem in enhancement is how to remove noise or improve the details of a digital image. In the current research, a method for digital image de-noising and detail sharpening/highlighting is proposed. The proposed approach uses a fuzzy logic technique to process each pixel of the entire image and then decides whether it is noisy or needs further processing for highlighting. This decision is made by examining the pixel's degree of association with its neighboring elements, based on a fuzzy algorithm. The proposed de-noising approach was evaluated on some standard images after corrupting them with impulse
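One common way to realize such a fuzzy per-pixel decision is a membership function measuring how well a pixel agrees with its neighborhood; a minimal sketch under that assumption (the `spread` and `clean_cut` constants are hypothetical tuning values, not taken from the paper):

```python
import statistics

def noise_membership(pixel, neighbors, spread=60.0):
    """Fuzzy degree to which a pixel agrees with its neighborhood.

    Returns 1.0 for perfect agreement, falling linearly to 0.0 as the pixel
    deviates from the neighborhood median by `spread` or more.
    """
    dev = abs(pixel - statistics.median(neighbors))
    return max(0.0, 1.0 - dev / spread)

def process_pixel(pixel, neighbors, clean_cut=0.5):
    """Low membership -> treat as impulse noise and replace with the median;
    high membership -> keep the pixel (a candidate for sharpening instead)."""
    if noise_membership(pixel, neighbors) < clean_cut:
        return statistics.median(neighbors)
    return pixel
```

The graded membership is what distinguishes this from a plain median filter: borderline pixels can be routed to the sharpening branch rather than being smoothed away.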
Producing pseudo-random numbers (PRN) with high performance is one of the important issues that attract many researchers today. This paper suggests pseudo-random number generator models that integrate a Hopfield Neural Network (HNN) with a fuzzy logic system to improve the randomness of the Hopfield pseudo-random generator. The fuzzy logic system is introduced to control the update of the HNN parameters. The proposed model is compared with three state-of-the-art baselines; results analysis using the National Institute of Standards and Technology (NIST) statistical tests and the ENT test shows that the proposed model is statistically significant in comparison to the baselines, which demonstrates the competency of the neuro-fuzzy-based model to produce
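To make the idea concrete, here is a purely structural sketch of extracting bits from asynchronous Hopfield-style sign updates. It omits the paper's fuzzy parameter controller entirely, uses arbitrary random weights, and makes no claim of statistical quality; it only illustrates where the bit stream comes from in such a design:

```python
import random

def hopfield_bits(n_bits, size=8, seed=1):
    """Illustrative Hopfield-style bit generator (NOT the paper's model).

    Units hold states in {-1, +1}; each asynchronous update recomputes one
    unit's sign from the weighted sum of the others, and the updated unit's
    sign is emitted as one output bit.
    """
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(size)] for _ in range(size)]
    state = [rng.choice([-1, 1]) for _ in range(size)]
    bits = []
    while len(bits) < n_bits:
        i = rng.randrange(size)
        act = sum(w[i][j] * state[j] for j in range(size) if j != i)
        state[i] = 1 if act >= 0 else -1
        bits.append(1 if state[i] > 0 else 0)
    return bits
```

A plain network like this settles into an attractor and the output degenerates, which is exactly why the paper introduces a fuzzy controller to keep perturbing the HNN parameters.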
Iris recognition occupies an important rank among biometric approaches as a result of its accuracy and efficiency. The aim of this paper is to propose a developed system for iris identification based on the fusion of the scale-invariant feature transform (SIFT) with local binary patterns (LBP) for feature extraction. Several steps have been applied. Firstly, any image type was converted to grayscale. Secondly, localization of the iris was achieved using the circular Hough transform. Thirdly, normalization mapped the iris region from Cartesian to polar coordinates using Daugman's rubber-sheet model, followed by histogram equalization to enhance the iris region. Finally, the features were extracted by utilizing the scale invariant feature
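Of the two feature extractors named, LBP is compact enough to sketch directly. A minimal 3x3 version (the standard 8-neighbor formulation; the paper's exact variant and neighbor ordering are not specified, so the clockwise order below is an assumption):

```python
def lbp_code(patch):
    """8-bit local binary pattern for the centre of a 3x3 patch.

    Each neighbour >= centre contributes a 1-bit; bits are read clockwise
    starting from the top-left neighbour (an assumed, self-consistent order).
    """
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for y, x in order:
        code = (code << 1) | (1 if patch[y][x] >= c else 0)
    return code
```

A histogram of these codes over the normalized iris strip is what typically serves as the LBP half of the fused feature vector.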
The current study primarily aims to develop a dictionary system for tracing mobile phone numbers for the call centers of mobile communication companies. The system saves the numbers in a digital search tree in order to make searching for and retrieving customers' information easier and faster. Several subtrees representing the digits of the phone numbers are built by following the digits of each phone number added to the dictionary, with the owner's name stored at the last node in the tree. In the searching process, every phone number can thus be tracked digit by digit along the required path inside its tree until the leaf is reached. Then, the value stored in the node, that rep
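The structure described is a trie keyed on digit characters; a minimal sketch of insertion and digit-by-digit lookup (class and method names are illustrative, not from the paper):

```python
class DigitTrie:
    """Digital search tree keyed on phone-number digits; owner stored at the leaf."""

    def __init__(self):
        self.children = {}   # digit character -> DigitTrie child
        self.owner = None    # set only at the node ending a full number

    def insert(self, number, owner):
        node = self
        for d in number:                      # follow/create one branch per digit
            node = node.children.setdefault(d, DigitTrie())
        node.owner = owner

    def lookup(self, number):
        node = self
        for d in number:                      # walk digit by digit
            node = node.children.get(d)
            if node is None:
                return None                   # path absent -> number not stored
        return node.owner
```

Both operations cost O(k) for a k-digit number, independent of how many numbers the dictionary holds, which is the speed advantage the abstract refers to.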
Image pattern classification is considered a significant step for image and video processing. Although various image pattern algorithms proposed so far achieve adequate classification, attaining higher accuracy while reducing the computation time remains challenging to date. A robust image pattern classification method is essential to obtain the desired accuracy. Such a method can accurately classify image blocks into plain, edge, and texture (PET) using an efficient feature extraction mechanism. Moreover, to date, most of the existing studies have focused on evaluating their methods on specific orthogonal moments, which limits the understanding of their potential application to various Discrete Orthogonal Moments (DOM
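A simple heuristic version of a PET labeller shows the intent of such a classifier: flat blocks have low variance, edge blocks have strongly directional gradients, and texture blocks have high activity in both directions. The thresholds here are hypothetical, and this is not the moment-based mechanism the paper evaluates:

```python
def classify_block(block, var_cut=100.0, ratio_cut=4.0):
    """Heuristic plain/edge/texture (PET) label for a rectangular pixel block.

    var_cut and ratio_cut are illustrative tuning constants.
    """
    h, w = len(block), len(block[0])
    n = h * w
    mean = sum(sum(row) for row in block) / n
    var = sum((p - mean) ** 2 for row in block for p in row) / n
    if var < var_cut:
        return "plain"                        # low activity overall
    gx = sum(abs(block[y][x + 1] - block[y][x])
             for y in range(h) for x in range(w - 1))
    gy = sum(abs(block[y + 1][x] - block[y][x])
             for y in range(h - 1) for x in range(w))
    hi, lo = max(gx, gy), min(gx, gy)
    # one dominant gradient direction -> edge; activity everywhere -> texture
    return "edge" if hi > ratio_cut * max(lo, 1) else "texture"
```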
Sending information at the present time requires both speed and protection: compression of the data is used to provide speed, and encryption is used to provide protection. In this paper a method is proposed to provide compression and security for secret information before sending it. The proposed method is based on special keys with the move-to-front (MTF) transform to provide compression, and on RNA coding with MTF encoding to provide security. The proposed method uses multiple secret keys, each designed in a special way. The main reason for designing these keys in a special way is to protect them from prediction by unauthorized users.
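The MTF transform at the core of the scheme is standard and easy to sketch; the key design and RNA-coding stages of the paper are not reproduced here:

```python
def mtf_encode(data, alphabet):
    """Move-to-front transform: emit each symbol's index in a self-organising
    list, then move that symbol to the front. Recently seen symbols get small
    indices, which a later entropy coder can compress well."""
    table = list(alphabet)
    out = []
    for s in data:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(indices, alphabet):
    """Inverse transform: replay the same table moves from the indices."""
    table = list(alphabet)
    out = []
    for i in indices:
        s = table[i]
        out.append(s)
        table.insert(0, table.pop(i))
    return out
```

Note that in an MTF-based scheme, both ends must agree on the initial table order, which is exactly the role a secret key can play: a keyed initial permutation of the alphabet makes the index stream unreadable without the key.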