This paper proposes and tests a computerized approach for constructing a 3D model of blood vessels from angiogram images. The approach is divided into two steps: image feature extraction and solid model formation. In the first step, image morphological operations and post-processing techniques are used to extract geometrical entities from the angiogram image. These entities are the middle curve and outer edges of the blood vessel, which are then passed to a computer-aided graphical system for the second phase of processing. The system has embedded programming capabilities and pre-programmed libraries for automating a sequence of events, which are exploited to create a solid model of the blood vessel. The gradient of the middle curve is adopted to steer the vessel's direction, while the cross-sections of the blood vessel are formed as a sequence of circles lying in planes orthogonal to the gradient of the middle curve. The radius of each circle is estimated from the distance between the points where the blood vessel edges intersect the plane orthogonal to the middle-curve gradient. The system then uses these circles and the middle-curve gradients to produce a solid volume that represents the 3D shape of the blood vessel. The method was tested and evaluated on different cases of angiogram images and showed reasonable agreement between the generated shapes and the tested images.
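The sketch below illustrates the cross-section construction step described above: circles placed in planes orthogonal to the gradient of the middle curve. It assumes the centerline and per-point radii have already been extracted from the angiogram; the example centerline, radii, and plane-basis choice are illustrative, not the paper's exact implementation.

import numpy as np

def cross_section_circles(centerline, radii, n_points=36):
    """Return circles lying in planes orthogonal to the centerline gradient."""
    centerline = np.asarray(centerline, dtype=float)    # shape (N, 3)
    tangents = np.gradient(centerline, axis=0)          # gradient of the middle curve
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    circles = []
    for c, t, r in zip(centerline, tangents, radii):
        # Build two unit vectors spanning the plane orthogonal to the tangent.
        helper = np.array([0.0, 0.0, 1.0])
        if abs(np.dot(helper, t)) > 0.9:                 # avoid a near-parallel helper
            helper = np.array([0.0, 1.0, 0.0])
        u = np.cross(t, helper); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
        circle = c + r * (np.outer(np.cos(theta), u) + np.outer(np.sin(theta), v))
        circles.append(circle)
    return circles

# Example: a gently curving centerline with a tapering radius.
z = np.linspace(0.0, 10.0, 20)
centerline = np.stack([np.sin(z / 3.0), np.zeros_like(z), z], axis=1)
radii = np.linspace(1.5, 1.0, len(z))                    # e.g. from edge intersections
sections = cross_section_circles(centerline, radii)

A solid modeler (or lofting routine) would then sweep through these circles to generate the vessel volume, as the abstract describes.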
Predicting the network traffic of web pages is an area that has received increasing attention in recent years. Modeling traffic helps find strategies for distributing network loads, identifying user behaviors and malicious traffic, and predicting future trends. Many statistical and intelligent methods have been studied to predict web traffic using time series of network traffic. In this paper, the use of machine learning algorithms to model Wikipedia traffic using Google's time series dataset is studied. Two data sets were used for time series data generalization, building a set of machine learning models (XGBoost, Logistic Regression, Linear Regression, and Random Forest), and comparing the performance of the models using the symmetric mean absolute percentage error (SMAPE) and
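Since SMAPE drives the model comparison above, a short sketch of the metric may help. This is a standard formulation of SMAPE, not code taken from the paper.

import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    diff = np.abs(forecast - actual)
    # Treat points where both actual and forecast are zero as zero error.
    ratio = np.where(denom == 0, 0.0, diff / np.where(denom == 0, 1.0, denom))
    return 100.0 * np.mean(ratio)

print(smape([120, 130, 0], [110, 140, 0]))   # e.g. page-view counts per day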
Speech is the primary way for humans to interact with each other and with machines. However, it is always contaminated with different types of environmental noise. Therefore, speech enhancement algorithms (SEA) have appeared as a significant approach in the speech processing field to suppress background noise and recover the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed based on the minimum mean square error sense. The estimation of the clean signal is performed by taking advantage of Laplacian speech and noise modeling based on the distribution of orthogonal transform (Discrete Krawtchouk-Tchebichef transform) coefficients. The Discrete Kra
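A rough sketch of the general two-stage flow (estimate noise, then apply a suppression gain in a transform domain) is given below. The paper's estimator works on Discrete Krawtchouk-Tchebichef transform coefficients with Laplacian priors under an MMSE criterion; this illustration substitutes a plain STFT and a Wiener-like gain, so it only conveys the overall structure, not the proposed estimator.

import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, fs, nperseg=512, noise_frames=10, floor=0.1):
    _, _, Z = stft(noisy, fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    # Stage 1: estimate noise power from the first few (assumed speech-free) frames.
    noise_power = power[:, :noise_frames].mean(axis=1, keepdims=True)
    # Stage 2: apply a Wiener-like gain with a spectral floor to limit distortion.
    gain = np.maximum(1.0 - noise_power / np.maximum(power, 1e-12), floor)
    _, enhanced = istft(gain * Z, fs, nperseg=nperseg)
    return enhanced

fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 220 * t)
noisy = clean + 0.3 * np.random.randn(fs)
out = enhance(noisy, fs)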
The multimedia revolution has been a driving force behind fast and secure data transmission techniques. Protecting image information from unauthorized access is imperative. Encryption techniques are used to transfer data, and each kind of data has its own characteristics; thus, different methods should be used to protect image distribution. This paper improves image encryption by proposing an approach that generates efficient, intelligent session (mask) keys by combining the robust algebraic features of elliptic curve cryptography (ECC) with the construction phase of the Greedy Randomized Adaptive Search Procedure (GRASP), producing durable symmetric session mask keys consisting of ECC points. Symmetric behavior for ECC
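A toy sketch of the ECC side of this key-generation idea follows: point addition and scalar multiplication over a small prime field, with a randomized choice of scalars standing in for GRASP's construction phase. The curve, base point, and selection rule are illustrative assumptions only, not the paper's parameters or its GRASP formulation.

import random

p, a, b = 97, 2, 3                 # toy curve y^2 = x^3 + 2x + 3 over GF(97)
G = (3, 6)                         # base point on the curve

def point_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                # point at infinity
    if P == Q:
        s = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        s = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

# Randomized-greedy stand-in: pick several candidate scalars and keep the
# resulting points as a session "mask key" consisting of ECC points.
session_key = [scalar_mult(random.randrange(2, p), G) for _ in range(8)]
print(session_key)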
Messages are an ancient method of exchanging information between people, and many ways have been devised to send them with some degree of security.
Encryption and steganography are the oldest approaches to message security, but they still face many problems in key generation, key distribution, finding a suitable cover image, and more. In this paper we present a proposed algorithm to exchange secure messages without any encryption or cover image for hiding. Our proposed algorithm depends on two copies of the same collection image set (CIS), one on the sender side and the other on the receiver side, which always exchange messages between them.
To send any message text, the sender converts the message to ASCII c
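The abstract is cut off before the encoding details, so the sketch below is only a hypothetical illustration of the shared-codebook idea: both sides hold identical copies of the CIS, and each ASCII code is transmitted as a reference (image index, pixel position) at which the shared images hold that byte value. The lookup rule, the simulated images, and all names are assumptions, not the paper's scheme.

import numpy as np

rng = np.random.default_rng(42)
# Both sides hold identical copies of the collection image set (CIS);
# here it is simulated with random 8-bit grayscale images.
CIS = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(10)]

def encode(message, cis):
    """Replace each ASCII code with an (image, row, col) reference into the CIS."""
    refs = []
    for ch in message:
        code = ord(ch)
        for idx, img in enumerate(cis):
            hits = np.argwhere(img == code)
            if len(hits) > 0:
                r, c = hits[rng.integers(len(hits))]
                refs.append((idx, int(r), int(c)))
                break
    return refs

def decode(refs, cis):
    return "".join(chr(cis[idx][r, c]) for idx, r, c in refs)

refs = encode("Hello", CIS)
print(decode(refs, CIS))   # -> "Hello"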
The huge number of documents on the internet has led to a rapid need for text classification (TC). TC is used to organize these text documents. In this research paper, a new model based on the Extreme Learning Machine (ELM) is used. The proposed model consists of several phases, including preprocessing, feature extraction, Multiple Linear Regression (MLR), and ELM. The basic idea of the proposed model is built upon the calculation of feature weights using MLR. These feature weights, together with the extracted features, are introduced as input to the ELM, producing a weighted Extreme Learning Machine (WELM). The results showed that the proposed WELM is highly competitive compared to the standard ELM.
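A minimal sketch of the described pipeline follows: MLR supplies feature weights, which scale the inputs fed to an ELM whose output weights are solved analytically. The sigmoid activation, the use of absolute regression coefficients as weights, the pseudo-inverse solution, and the toy data are assumptions for illustration, not the paper's exact configuration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                 # extracted document features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy binary class labels

# MLR: regress labels on features, take |coefficients| as feature weights.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
feature_weights = np.abs(coef)
Xw = X * feature_weights                       # weighted features -> WELM input

# ELM: random hidden layer, output weights solved by least squares.
n_hidden = 100
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(Xw @ W + b)))        # sigmoid hidden activations
T = np.eye(2)[y]                               # one-hot targets
beta = np.linalg.pinv(H) @ T                   # analytic output weights

pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())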
In this paper, an authentication-based fingerprint biometric system is proposed that uses personal identity information (name and birthday). A National Identification Number (NIDN) is generated by merging the fingerprint features with the personal identity information, and this NIDN is used to generate the Quick Response (QR) code image used in the access system. Two approaches are adopted: traditional authentication and strong identification with the QR and NIDN information. The system shows an accuracy of 96.153% with a threshold value of 50. The accuracy reaches 100% when the threshold value goes below 50.
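A small sketch of the QR-generation step is given below: an NIDN string (assumed here to concatenate a hash of fingerprint features with the name and birthday) is encoded into a QR image. The field layout is hypothetical, and the qrcode package is a common choice rather than the library necessarily used in the paper.

import hashlib
import qrcode

name, birthday = "Jane Doe", "1990-05-14"
fingerprint_features = b"...extracted minutiae bytes..."        # placeholder
feature_hash = hashlib.sha256(fingerprint_features).hexdigest()[:16]

nidn = f"{feature_hash}-{name}-{birthday}"
img = qrcode.make(nidn)            # returns a PIL image of the QR code
img.save("nidn_qr.png")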
Digital forensics is the part of forensic science that covers crimes related to computers and other digital devices. Academic studies have been interested in digital forensics for some time. Researchers aim to establish a discipline based on scientific structures and to define a model reflecting their observations. This paper suggests a model that improves the whole investigation process, obtains accurate and complete evidence, and secures the digital evidence with cryptographic algorithms so that reliable evidence can be presented in a court of law. The paper also presents the main and basic concepts of the frameworks and models used in digital forensics investigation.
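As a brief sketch of one way the "secure the evidence with cryptographic algorithms" step can be realized, an evidence file can be hashed at acquisition so that any later change is detectable. The file name and the choice of SHA-256 are illustrative, not the model's mandated algorithm.

import hashlib

def evidence_digest(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of an evidence file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Recorded at acquisition time and re-checked before presentation in court.
# print(evidence_digest("disk_image.dd"))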
Today, with the increased use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and messy texts, which makes it difficult to find topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short-text content like Twitter. Luckily, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtags feature to improve topics learned
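One common way hashtags are used to help topic models on short texts is hashtag pooling: tweets sharing a hashtag are merged into one pseudo-document before LDA, so the model sees longer texts. The sketch below illustrates that general idea; the pooling rule and the toy tweets are assumptions and may differ from the paper's method.

from collections import defaultdict
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "new phone camera is amazing #tech",
    "battery life keeps improving #tech",
    "great match last night #sports",
    "the team played really well #sports",
]

# Pool tweets by hashtag into pseudo-documents.
pools = defaultdict(list)
for t in tweets:
    for tag in re.findall(r"#\w+", t):
        pools[tag].append(t)
docs = [" ".join(ts) for ts in pools.values()]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")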
A novel median filter based on the crow optimization algorithm (OMF) is proposed to reduce random salt-and-pepper noise and improve the quality of RGB color and grayscale images. The fundamental idea of the approach is that the crow optimization algorithm first detects noise pixels and then replaces them with an optimal median value according to a fitness-maximization criterion. Finally, the standard measures of peak signal-to-noise ratio (PSNR), structural similarity, absolute square error, and mean square error were used to test the performance of the suggested filters (the original and improved median filters) for removing noise from images. The simulation was carried out in MATLAB R2019b and the resul
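A plain sketch of the filtering and evaluation steps follows: extreme-valued pixels (a simple stand-in for the crow-optimization detector) are replaced with the local median, and PSNR measures the result. The window size and the detection rule are illustrative assumptions, not the proposed OMF.

import numpy as np
from scipy.ndimage import median_filter

def denoise_salt_pepper(img, window=3):
    noisy_mask = (img == 0) | (img == 255)           # detected noise pixels
    med = median_filter(img, size=window)
    out = img.copy()
    out[noisy_mask] = med[noisy_mask]                # replace only noise pixels
    return out

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.integers(60, 200, size=(128, 128)).astype(np.uint8)
noisy = clean.copy()
flips = rng.random(clean.shape) < 0.05               # 5% salt-and-pepper noise
noisy[flips] = rng.choice([0, 255], size=flips.sum())
print("PSNR noisy:   ", psnr(clean, noisy))
print("PSNR filtered:", psnr(clean, denoise_salt_pepper(noisy)))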