Cloud computing is a mass platform that serves high-volume data to multiple devices and numerous technologies. Cloud tenants demand fast, uninterrupted access to their data, so cloud providers must ensure that every individual piece of data is secure and always accessible. An appropriate replication strategy capable of selecting essential data is therefore required in cloud replication environments. This paper proposes a Crucial File Selection Strategy (CFSS) to address poor response time in a cloud replication environment. The CloudSim simulator is used to conduct the experiments, and the results demonstrate the improvement in replication performance. The analytical graphs are discussed thoroughly; the proposed CFSS algorithm outperforms an existing algorithm with a 10.47% improvement in average response time for multiple jobs per round.
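The abstract does not specify how CFSS decides which files are crucial, so the following is only a hypothetical sketch of a file-scoring heuristic for replication; every field, weight, and name here is an illustrative assumption, not the paper's method.

```python
# Hypothetical sketch of a crucial-file ranking heuristic; the real CFSS
# scoring function is not given in the abstract, so the fields and weights
# below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class FileStats:
    name: str
    access_count: int   # how often tenants requested the file this round
    size_mb: float      # replication cost grows with size

def crucial_score(f: FileStats, w_access: float = 0.7, w_size: float = 0.3) -> float:
    """Higher score = more worth replicating (popular and cheap to copy)."""
    return w_access * f.access_count - w_size * f.size_mb

def select_for_replication(files: list[FileStats], k: int) -> list[FileStats]:
    """Pick the k highest-scoring files as replication candidates."""
    return sorted(files, key=crucial_score, reverse=True)[:k]

files = [FileStats("a.dat", 120, 500.0), FileStats("b.dat", 90, 10.0),
         FileStats("c.dat", 5, 1.0)]
print([f.name for f in select_for_replication(files, k=2)])  # ['b.dat', 'c.dat']
```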
Automatic document summarization technology is evolving and may offer a solution to the problem of information overload. Multi-document summarization is an optimization problem that demands optimizing more than one objective function simultaneously. The proposed work considers balancing two significant objectives, content coverage and diversity, while generating a summary from a collection of text documents. Any automatic text summarization system faces the challenge of producing a high-quality summary. Despite the large efforts introduced by several researchers in designing and evaluating the performance of many text summarization techniques, their formulations lack any model that gives an explicit representation of coverage and diversity, the two contradictory semantics of any summary. The design of gener…
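To make the two objectives concrete, here is a generic TF-IDF sketch of how coverage and diversity can be scored for a candidate summary; this is a common textbook formulation, not necessarily the model proposed in the paper.

```python
# Illustrative coverage/diversity scoring for a candidate summary.
# This is a generic TF-IDF formulation, not the paper's exact model.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = ["the cat sat on the mat", "dogs chase cats in the yard",
             "the stock market fell sharply today"]
summary_sentences = ["the cat sat on the mat", "the stock market fell sharply today"]

vec = TfidfVectorizer().fit(documents)
doc_centroid = vec.transform([" ".join(documents)])
sent_vecs = vec.transform(summary_sentences)

# Coverage: how close the summary as a whole is to the collection centroid.
coverage = cosine_similarity(vec.transform([" ".join(summary_sentences)]),
                             doc_centroid)[0, 0]

# Diversity: one minus the average pairwise similarity of summary sentences.
pairs = list(combinations(range(len(summary_sentences)), 2))
redundancy = sum(cosine_similarity(sent_vecs[i], sent_vecs[j])[0, 0]
                 for i, j in pairs) / len(pairs)
diversity = 1.0 - redundancy

print(f"coverage={coverage:.3f}, diversity={diversity:.3f}")
```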
Bilinear interpolation and perceptual color-space (HSL, HSV, LAB, and LUV) fusion techniques are presented to improve the spatial and spectral characteristics of a low-resolution multispectral image so that it matches the high spatial resolution of a panchromatic image, for satellite image data (OrbView-3 and Landsat-7) covering the same region. The signal-to-noise ratio (SNR) fidelity criterion for achromatic information was calculated, along with mean color-shifting parameters that compute the ratio of chromatic information loss of the RGB compound within each pixel, to evaluate the quality of the fused images. The results showed the superiority of the HSL color space for fusing images over the other color spaces.
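As a rough illustration of component-substitution fusion in a perceptual color space (shown here for HSV with scikit-image; the paper's exact chain and its HSL/LAB/LUV variants may differ):

```python
# Sketch of HSV component-substitution pan-sharpening with bilinear upsampling.
# Generic scheme only; the paper's precise pipeline may differ in detail.
import numpy as np
from skimage.transform import resize
from skimage.color import rgb2hsv, hsv2rgb

def fuse_hsv(ms_rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """ms_rgb: low-res multispectral image (h, w, 3) in [0, 1];
    pan: high-res panchromatic image (H, W) in [0, 1]."""
    # Bilinear interpolation (order=1) up to the panchromatic grid.
    ms_up = resize(ms_rgb, pan.shape + (3,), order=1, anti_aliasing=False)
    hsv = rgb2hsv(ms_up)
    hsv[..., 2] = pan          # substitute the value channel with the pan band
    return hsv2rgb(hsv)

ms = np.random.rand(64, 64, 3)
pan = np.random.rand(256, 256)
fused = fuse_hsv(ms, pan)
print(fused.shape)  # (256, 256, 3)
```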
There is great operational risk in the day-to-day management of water treatment plants, so water companies are looking for solutions that predict how the treatment processes may be improved, given the increased pressure to remain competitive. This study focused on the mathematical modeling of water treatment processes, with the primary motivation of providing tools that can predict treatment performance and enable better control of uncertainty and risk. The research included choosing the most important variables affecting quality standards using a correlation test. According to this test, the important raw-water parameters were found to include total hardness…
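The correlation-based selection step could, in principle, look like the following pandas sketch; the file name, column names, and threshold are assumptions for illustration only.

```python
# Minimal sketch of correlation-based variable selection; column names and
# the 0.5 threshold are illustrative assumptions, not the study's values.
import pandas as pd

df = pd.read_csv("raw_water_quality.csv")  # hypothetical file name
target = "treated_turbidity"               # hypothetical quality target

# Pearson correlation of every raw-water parameter with the target.
corr = df.corr(numeric_only=True)[target].drop(target)
selected = corr[corr.abs() > 0.5].sort_values(key=abs, ascending=False)
print(selected)
```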
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MS-COCO dataset for classification and detection of objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, enhanced with the histogram equalization method, and resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods were applied.
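A minimal scikit-learn sketch of the described pipeline follows; hyperparameters (neighbors, PCA components, iteration counts) are illustrative defaults, not the paper's settings.

```python
# Sketch of the described pipeline: grayscale + histogram equalization +
# 20x20 resize + PCA features + four classifiers. Defaults are illustrative.
import numpy as np
from skimage.color import rgb2gray
from skimage.exposure import equalize_hist
from skimage.transform import resize
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neural_network import MLPClassifier

def preprocess(images):
    """Color -> gray -> histogram equalization -> 20x20 -> flat vector."""
    return np.array([resize(equalize_hist(rgb2gray(im)), (20, 20)).ravel()
                     for im in images])

def run(images, labels):
    X = preprocess(images)
    # 7:3 train/test split, as in the paper.
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              random_state=0)
    pca = PCA(n_components=50).fit(X_tr)   # feature extraction
    X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)
    models = {"KNN": KNeighborsClassifier(),
              "SGD": SGDClassifier(),
              "LR": LogisticRegression(max_iter=1000),
              "MLP": MLPClassifier(max_iter=500)}
    for name, m in models.items():
        print(name, m.fit(X_tr, y_tr).score(X_te, y_te))

# Demo with random stand-in data (replace with MS-COCO crops in practice).
rng = np.random.default_rng(0)
run([rng.random((32, 32, 3)) for _ in range(200)], rng.integers(0, 2, 200))
```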
In this paper, three techniques for image compression are implemented: a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform. Daubechies and Haar wavelets are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured at each level. To obtain good compression, image data properties were measured, such as image entropy (He) and percent root-…
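A minimal sketch of the two-level 3-D DWT stage with PyWavelets (the threshold is an assumption for illustration; the multiwavelet and hybrid variants are not covered by this call):

```python
# Two-level 3-D DWT decomposition/reconstruction with PyWavelets.
# Wavelet choice follows the abstract (Haar / Daubechies); the threshold
# used to discard small coefficients is an illustrative assumption.
import numpy as np
import pywt

volume = np.random.rand(32, 32, 32)            # stand-in 3-D image data

coeffs = pywt.wavedecn(volume, wavelet="haar", level=2)   # 2-level 3-D DWT
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = 0.1                                 # assumed, not from the paper
kept = np.abs(arr) > threshold
print(f"kept {kept.sum()} of {arr.size} coefficients "
      f"(CR ~ {arr.size / max(kept.sum(), 1):.1f}:1)")

arr[~kept] = 0.0
recon = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format="wavedecn"),
                      wavelet="haar")
print("max reconstruction error:", np.abs(recon - volume).max())
```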
Approaches involving photons are among the most common bases for parallel optical computation due to their recent development, ease of production, and low cost; as a result, most researchers have concentrated their efforts on them. The Basic Arithmetic Unit (BAU) is built using a three-step approach that uses three-state optical gates to configure the circuitry for addition, subtraction, and multiplication. This is a new optical computing method based on a radix-2 binary signed-digit (BSD) number system that includes the digits -1, 0, and 1. Light with horizontal polarization (LHP) (↔), light with no intensity (LNI) (⥀), and light with vertical polarization (LVP) (↨) represent the three digits…
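To make the signed-digit arithmetic concrete, here is a small software model of the radix-2 BSD number system; it covers only the representation and a simple addition, with the digit-to-polarization mapping assumed, and does not model the paper's optical gates.

```python
# Sketch of the radix-2 binary signed-digit (BSD) system with digits {-1, 0, 1}.
# Models the number representation only; the polarization encoding
# (e.g. LHP=1, LNI=0, LVP=-1) is an assumed mapping for illustration.

def bsd_value(digits):
    """Value of a BSD digit list, most significant digit first."""
    v = 0
    for d in digits:
        assert d in (-1, 0, 1)
        v = 2 * v + d
    return v

def to_bsd(n, width):
    """One BSD encoding of n: plain binary for n >= 0, negated digits for n < 0.
    (BSD is redundant, so many encodings of the same value exist.)"""
    sign = 1 if n >= 0 else -1
    bits = [(abs(n) >> i) & 1 for i in reversed(range(width))]
    return [sign * b for b in bits]

def bsd_add(a, b):
    """Digit-wise sum, then renormalize. Hardware BSD adders bound carry
    propagation; here we simply re-encode the exact integer sum."""
    return to_bsd(bsd_value(a) + bsd_value(b), max(len(a), len(b)) + 1)

a, b = to_bsd(5, 4), to_bsd(-3, 4)     # 5 = [0,1,0,1], -3 = [0,0,-1,-1]
s = bsd_add(a, b)
print(s, "=", bsd_value(s))            # value 2
```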
This research summarizes the construction of a mathematical model for the integer linear fractional programming (ILFP) problem and the search for its best solution, which maximizes the company's revenue by using the largest possible number of production units and maximizing the fractional objective, the proportion of profits to costs, thus maximizing the company's total profit at the lowest cost. The Dinkelbach algorithm and the complementary method are applied to the Light Industries Company data for 2013, and the results are compared with those of goal programming methods.
It is clear that the final results of the resolution and Dinkelbach…
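For reference, Dinkelbach's algorithm solves max N(x)/D(x) by iterating on the parametric problem max N(x) - lambda*D(x); the sketch below uses a toy enumerable feasible set in place of the company's actual integer programming model.

```python
# Dinkelbach's algorithm for max N(x)/D(x): set lambda = N(x*)/D(x*) and
# re-solve max N(x) - lambda*D(x) until the parametric optimum is ~0.
# The feasible set below is a toy stand-in, not the company's model.

def dinkelbach(feasible, N, D, tol=1e-9, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        # Parametric subproblem: solved by enumeration over a small discrete
        # set here; a real ILFP would call an integer LP solver instead.
        x_best = max(feasible, key=lambda x: N(x) - lam * D(x))
        f = N(x_best) - lam * D(x_best)
        if abs(f) < tol:
            return x_best, lam
        lam = N(x_best) / D(x_best)
    return x_best, lam

# Toy example: maximize (3a + 2b) / (a + b + 1) over a small integer grid.
feasible = [(a, b) for a in range(6) for b in range(6)]
x, ratio = dinkelbach(feasible, N=lambda x: 3*x[0] + 2*x[1],
                      D=lambda x: x[0] + x[1] + 1)
print(x, ratio)  # (5, 0) with ratio 2.5
```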
The research explains the developments in the structure of government expenditure over the period 1990-2014, which comprises two sub-periods with very different conditions: the first (1990-2002) was characterized by economic sanctions that denied the Iraqi economy its oil revenues, while the second (2003-2014) was marked by abundant resource rents after the ban on oil exports was lifted. An autoregressive distributed lag (ARDL) model was used to measure the impact of both current and investment government expenditure on oil GDP and non-oil GDP (gross domestic product). The study found no significant relationship between current expenditure and either non-oil or oil GDP in both…
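For readers unfamiliar with the estimation step, a minimal ARDL fit with statsmodels might look like this; the file, series names, and lag orders are placeholders, not the study's specification.

```python
# Minimal ARDL(2, 2) sketch with statsmodels; variable names and lag
# orders are illustrative placeholders, not the study's specification.
import pandas as pd
from statsmodels.tsa.ardl import ARDL

df = pd.read_csv("iraq_macro_1990_2014.csv", index_col="year")  # hypothetical file
# Dependent: non-oil GDP; regressors: current and investment expenditure.
model = ARDL(df["non_oil_gdp"], lags=2,
             exog=df[["current_exp", "investment_exp"]], order=2)
res = model.fit()
print(res.summary())
```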