The huge amount of information on the internet creates a pressing need for text summarization. Text summarization is the process of selecting important sentences from documents while preserving the main idea of the original documents. This paper proposes a method based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The first step of our model extracts seven features for each sentence in the document set. Multiple Linear Regression (MLR) is then used to assign a weight to each of the selected features, and the TOPSIS method is applied to rank the sentences. The sentences with the highest scores are selected for inclusion in the generated summary. The proposed model is evaluated on the English-document dataset supplied by the Text Analysis Conference (TAC-2011), and its performance is measured with the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric. The obtained results support the effectiveness of the proposed model.
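As an illustration of the ranking step, the following is a minimal Python sketch of TOPSIS applied to a sentence-feature matrix. The feature values and the MLR-derived weights are placeholders, and all features are assumed to be benefit criteria; this is a sketch of the general technique, not the paper's exact implementation.

```python
import numpy as np

def topsis_rank(feature_matrix, weights):
    """Rank sentences with TOPSIS: higher score = closer to the ideal solution."""
    X = np.asarray(feature_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)

    # Vector-normalize each feature column, then apply the feature weights.
    norms = np.linalg.norm(X, axis=0)
    norms[norms == 0] = 1.0          # avoid division by zero for constant columns
    V = (X / norms) * w

    # Positive and negative ideal solutions (all features treated as benefits).
    ideal_best, ideal_worst = V.max(axis=0), V.min(axis=0)

    # Euclidean separation from each ideal and the closeness coefficient.
    d_best = np.linalg.norm(V - ideal_best, axis=1)
    d_worst = np.linalg.norm(V - ideal_worst, axis=1)
    scores = d_worst / (d_best + d_worst + 1e-12)

    return np.argsort(-scores), scores   # sentence indices, best first

# Toy example: 4 sentences scored on 3 features with hypothetical MLR weights.
ranking, scores = topsis_rank([[0.2, 0.5, 0.1],
                               [0.8, 0.4, 0.6],
                               [0.5, 0.9, 0.3],
                               [0.1, 0.2, 0.9]],
                              weights=[0.5, 0.3, 0.2])
print(ranking, scores)
```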
Grey system theory is a multidisciplinary scientific approach that deals with systems having partially unknown information (small samples and uncertain information). Grey modeling, an important component of the theory, gives successful results with a limited amount of data. Grey models are divided into two types: univariate and multivariate. The univariate grey model with a first-order differential equation, GM(1,1), is the cornerstone of the theory; it is considered the basic time-series prediction model, but it does not take the relevant factors into account. The traditional multivariate grey model GM(1,M) does take those factors into account, but it has a complex structure and some defects in the "modeling mechanism", "parameter estimation" and "m…
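The paper's focus is the multivariate case, but the univariate GM(1,1) it builds on is compact enough to sketch. Below is a minimal Python implementation under the standard accumulated-generating-operation formulation; the sample series is invented purely for demonstration.

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Fit a GM(1,1) grey model to a short series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background values

    # Least-squares estimate of the development coefficient a and grey input b.
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]

    # Time-response function of the whitened equation, then inverse AGO.
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[n:]                        # the forecast values only

print(gm11_forecast([2.87, 3.28, 3.34, 3.72, 3.80], steps=2))
```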
This paper presents an IoT smart-building platform with fog and cloud computing, capable of performing near-real-time predictive analytics in fog nodes. The researchers explain thoroughly the Internet of Things in smart buildings, big data analytics, and the fog and cloud computing technologies. They then present the smart platform, its requirements, and its components, and describe the datasets on which the analytics are run. The linear regression and support vector regression data-mining techniques are presented. These two machine learning models are implemented with the appropriate techniques, starting with cleaning and preparing the data, visualizing it, and uncovering hidden information about the behavior of …
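As a hedged sketch of the two data-mining techniques named above, the snippet below fits linear regression and support vector regression with scikit-learn on a synthetic stand-in dataset (the real smart-building data are not reproduced here) and compares their test mean squared error.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for a cleaned smart-building dataset:
# two sensor features predicting, e.g., an energy-consumption reading.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = 3.0 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("support vector regression", SVR(kernel="rbf", C=10.0))]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.4f}")
```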
In regression testing, test case prioritization (TCP) is a technique for arranging all the available test cases. TCP techniques can improve fault-detection performance, which is measured by the average percentage of fault detection (APFD). History-based TCP is one of the TCP techniques that consider the history of past data to prioritize test cases. The allocation of equal priority to test cases is a common problem for most TCP techniques, yet it has not been explored for history-based TCP techniques. To work around it in regression testing, most researchers resort to random ordering of test cases. This study aims to investigate equal priority in history-based TCP techniques. The first objective is to implement …
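Since APFD is the yardstick used throughout, a small Python sketch of its computation may help; the test names and fault matrix are hypothetical.

```python
def apfd(ordered_tests, fault_matrix):
    """Average Percentage of Faults Detected for a given test-case order.

    ordered_tests : list of test ids in execution order.
    fault_matrix  : dict mapping test id -> set of faults that test detects.
    """
    n = len(ordered_tests)
    faults = set().union(*fault_matrix.values())
    m = len(faults)

    # Position (1-based) of the first test that reveals each fault.
    first_detect = {}
    for pos, test in enumerate(ordered_tests, start=1):
        for fault in fault_matrix.get(test, set()):
            first_detect.setdefault(fault, pos)

    return 1 - sum(first_detect[f] for f in faults) / (n * m) + 1 / (2 * n)

# Hypothetical history: which faults each test exposed in past runs.
fault_matrix = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": {"f3"}, "t4": set()}
print(apfd(["t2", "t3", "t1", "t4"], fault_matrix))   # prioritized order
print(apfd(["t4", "t1", "t2", "t3"], fault_matrix))   # a poorer order
```

The prioritized order detects every fault earlier and therefore scores higher.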
In this article we derive reliability expressions for two kinds of s-out-of-k stress-strength systems. Both stress and strength are assumed to follow an Inverse Lomax distribution with unknown shape parameters and a common known scale parameter. The increase and decrease in the true values of the two reliabilities are studied as the distribution parameters increase and decrease. Two estimation methods, maximum likelihood and regression, are used to estimate the distribution parameters and the reliabilities. A comparison between the estimators, based on a simulation study under the mean squared error criterion, revealed that the maximum likelihood estimator performs best.
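The closed-form reliability expressions are not reproduced here, but the quantity being estimated can be illustrated with a Monte Carlo sketch: draw k Inverse Lomax strengths and one Inverse Lomax stress, and count how often at least s strengths exceed the stress. The shape parameters below are arbitrary placeholders, and the scale is fixed at one.

```python
import numpy as np

rng = np.random.default_rng(1)

def rinvlomax(size, alpha, lam=1.0):
    """Draw Inverse Lomax variates by inverting the CDF F(x) = (1 + lam/x)**(-alpha)."""
    u = rng.uniform(size=size)
    return lam / (u ** (-1.0 / alpha) - 1.0)

def reliability_s_out_of_k(s, k, alpha_strength, alpha_stress, n_sim=200_000):
    """Monte Carlo estimate of R = P(at least s of the k strengths exceed the stress)."""
    strengths = rinvlomax((n_sim, k), alpha_strength)
    stress = rinvlomax((n_sim, 1), alpha_stress)
    return np.mean((strengths > stress).sum(axis=1) >= s)

print(reliability_s_out_of_k(s=2, k=4, alpha_strength=3.0, alpha_stress=1.5))
```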
The security of message information has drawn increasing attention, so cryptography has been used extensively. This research aims to generate secure cipher keys from retina information to increase the level of security. The proposed technique utilizes cryptography based on retina information. The main contribution is the original procedure used to generate three types of keys in one system from the positions of the retina vessel ends, improving on the approach of three separate systems, each with one key. The distances between the center of the diagonals of the retina image and the retina vessel ends (diagonal center-end, DCE) represent the first key. The distances between the center of the radius of the retina and the retina vessel ends (ra…
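A simplified sketch of the first (DCE) key is given below: the distances from a reference center to the detected vessel-end coordinates are collected and condensed into a fixed-length key. The coordinates are hypothetical, and the SHA-256 hashing step is our own assumption for turning the distance list into key material, not necessarily the paper's procedure.

```python
import hashlib
import math

def distance_key(center, vessel_ends, precision=2):
    """Turn the distances from a reference center to the vessel end points
    into a 256-bit key by hashing the (order-independent) distance list."""
    distances = sorted(round(math.dist(center, p), precision) for p in vessel_ends)
    material = ",".join(f"{d:.{precision}f}" for d in distances)
    return hashlib.sha256(material.encode()).hexdigest()

# Hypothetical coordinates: the image-diagonal center and a few detected vessel ends.
diagonal_center = (128.0, 128.0)
vessel_ends = [(40.5, 201.0), (180.2, 33.7), (250.0, 140.8)]
print(distance_key(diagonal_center, vessel_ends))
```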
Foreign Object Debris (FOD) is one of the major problems in the airline maintenance industry, reducing levels of safety: a foreign object can cause serious damage to an airplane, including engine problems and personal safety risks. Therefore, it is critical to detect FOD in place to guarantee the safety of flying airplanes. Past FOD detection systems lacked an effective method for automatic material recognition as well as high speed and accuracy in detecting materials. This paper proposes an FOD model using a variety of feature-extraction approaches, such as the Gray-Level Co-occurrence Matrix (GLCM) and Linear Discriminant Analysis (LDA), to extract features, and Deep Learning (DL) for classifi…
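As an example of the GLCM feature-extraction stage, the following Python sketch uses scikit-image (recent versions expose graycomatrix/graycoprops) on a random patch standing in for a segmented FOD region; the chosen offsets, angles, and texture properties are common defaults, not necessarily those of the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit grayscale patch standing in for a segmented FOD region.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# GLCM at one pixel offset and four orientations, as is common for texture features.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # feature vector to feed LDA / a deep classifier
```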
The approach given in this paper leads to numerical methods for finding the approximate solution of Volterra integro-differential equations (VIDEs) of the first kind. First, we reduce the problem from a first-kind VIDE to a Volterra integral equation (VIE) of the second kind using the reduction theory; then we use two types of non-polynomial spline functions (linear and quadratic). Finally, programs for each method are written in MATLAB, and a comparison between the two types of non-polynomial spline functions is made based on the least-squares errors and running time. Some test examples and the exact solutions are also given.
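The paper's own codes are written in MATLAB with non-polynomial splines; as a language-consistent stand-in, the Python sketch below solves the reduced second-kind Volterra integral equation with the plain trapezoidal rule on a test problem with a known exact solution.

```python
import numpy as np

def solve_vie2(f, K, t_end=1.0, n=100):
    """Solve u(t) = f(t) + int_0^t K(t, s) u(s) ds with the trapezoidal rule."""
    h = t_end / n
    t = np.linspace(0.0, t_end, n + 1)
    u = np.empty(n + 1)
    u[0] = f(t[0])
    for i in range(1, n + 1):
        acc = 0.5 * K(t[i], t[0]) * u[0] + sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        u[i] = (f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return t, u

# Test problem u(t) = 1 + int_0^t u(s) ds, whose exact solution is u(t) = exp(t).
t, u = solve_vie2(f=lambda t: 1.0, K=lambda t, s: 1.0)
print("max abs error:", np.max(np.abs(u - np.exp(t))))
```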
This paper introduces a generalized sequence of positive linear operators of integral type based on two parameters to improve the order of approximation. First, simultaneous approximation is studied and a Voronovskaja-type asymptotic formula is introduced. Next, an error estimate for the simultaneous approximation is found. Finally, a numerical example approximating a test function and its first derivative is given for some values of the parameters.
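The paper's two-parameter integral-type operators are not reproduced here; as a classical stand-in that shows what a simultaneous-approximation experiment looks like numerically, the sketch below evaluates the Bernstein operator and its derivative on a test function and reports both errors as n grows.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Bernstein operator B_n(f)(x) on [0, 1]."""
    k = np.arange(n + 1)
    basis = np.array([comb(n, j) for j in k]) * x**k * (1 - x)**(n - k)
    return np.sum(f(k / n) * basis)

def bernstein_derivative(f, n, x):
    """Derivative of B_n(f) at x, approximating f'(x) (simultaneous approximation)."""
    k = np.arange(n)
    basis = np.array([comb(n - 1, j) for j in k]) * x**k * (1 - x)**(n - 1 - k)
    return n * np.sum((f((k + 1) / n) - f(k / n)) * basis)

f = lambda t: np.exp(t)      # test function; its first derivative is again exp
x = 0.4
for n in (10, 50, 250):
    print(n,
          abs(bernstein(f, n, x) - np.exp(x)),
          abs(bernstein_derivative(f, n, x) - np.exp(x)))
```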
This paper is concerned with the numerical blow-up solutions of semi-linear heat equations, where the nonlinear terms are power-type functions, with zero Dirichlet boundary conditions. We use explicit linear and implicit Euler finite difference schemes with a special time-step formula to compute the blow-up solutions and to estimate the blow-up times for three numerical experiments. Moreover, we calculate the error bounds and the numerical order of convergence arising from the use of these methods. Finally, we carry out numerical simulations on the discrete graphs obtained from these methods to support the numerical results and to confirm some known blow-up properties of the studied problems.
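A minimal sketch of the explicit scheme is given below: explicit Euler in time, central differences in space, zero Dirichlet boundary values, and a time step that shrinks as the solution grows. The particular adaptive-step formula and the initial data are our own illustrative choices, not the paper's.

```python
import numpy as np

def blow_up_explicit_euler(p=2, N=50, amp=20.0, u_stop=1e7):
    """Explicit Euler with central differences for u_t = u_xx + u**p on (0, 1),
    with zero Dirichlet boundary data; the time step shrinks as the solution grows."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    u = amp * np.sin(np.pi * x)          # large initial data so the solution blows up
    u[0] = u[-1] = 0.0
    t = 0.0
    while np.max(u) < u_stop:
        # Adaptive step: diffusion stability limit combined with 1/||u||^(p-1) shrinkage.
        tau = min(0.4 * h * h, 0.1 / np.max(u) ** (p - 1))
        lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
        u[1:-1] += tau * (lap + u[1:-1] ** p)
        t += tau
    return t                              # numerical estimate of the blow-up time

print("estimated blow-up time:", blow_up_explicit_euler())
```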
Some maps of the chaotic firefly algorithm were selected to perform variable selection on blood-disease and blood-vessel data obtained from Nasiriyah General Hospital. The data were tested and found to follow a Gamma distribution, and it was concluded that the Chebyshev map method is more efficient than the Sinusoidal map method according to the mean squared error criterion.
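For reference, the two chaotic maps compared above can be generated as follows; the control parameters and initial value are common choices from the chaotic-optimization literature, not values taken from the study.

```python
import numpy as np

def chebyshev_map(n, x0=0.7, a=4.0):
    """Chebyshev chaotic map: x_{k+1} = cos(a * arccos(x_k)), values in [-1, 1]."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = np.cos(a * np.arccos(x[k - 1]))
    return x

def sinusoidal_map(n, x0=0.7, a=2.3):
    """Sinusoidal chaotic map: x_{k+1} = a * x_k**2 * sin(pi * x_k)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = a * x[k - 1] ** 2 * np.sin(np.pi * x[k - 1])
    return x

# The chaotic sequence replaces the uniform random numbers that drive
# the firefly moves, e.g. when deciding which candidate variables to keep.
print(chebyshev_map(5))
print(sinusoidal_map(5))
```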