Cloud computing provides vast storage capacity, but as the number of users and the size of their data grow, cloud storage environments face serious problems: saving storage space, managing this large volume of data, and preserving the security and privacy of the data. One of the most important methods for saving space in cloud storage is data deduplication, a compression technique that keeps only a single copy of each file and eliminates the redundant copies. To offer security and privacy for sensitive data while still supporting deduplication, this work identifies attacks that exploit hybrid cloud deduplication, allowing an attacker to gain access to other users' files using only the very small hash signatures of those files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that he or she owns that file, so the server lets the attacker download the entire file. To counter such attacks, the hash signature is encrypted with the user's password. As a proof of concept, a prototype of the proposed authorized deduplication system was implemented, and testbed experiments were conducted using the prototype. Performance measurements indicate that the proposed deduplication system incurs minimal upload and bandwidth overhead compared with native deduplication.
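The defense can be illustrated with a minimal sketch. The construction below is an illustrative assumption, not the paper's exact scheme: the raw content hash is replaced by a keyed signature derived from the user's password, so an attacker who knows only the hash can no longer pass the ownership check.

```python
# Minimal sketch (assumed construction, not the paper's exact scheme):
# binding a file's hash signature to a user password so that knowing
# the bare hash no longer proves ownership.
import hashlib
import hmac
import os

def file_signature(data: bytes) -> bytes:
    """Plain content hash used by naive deduplication."""
    return hashlib.sha256(data).digest()

def protected_signature(data: bytes, password: str, salt: bytes) -> bytes:
    """Keyed signature: the server checks this instead of the raw hash."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.new(key, file_signature(data), hashlib.sha256).digest()

salt = os.urandom(16)                 # per-user salt, stored server-side
data = b"example file contents"
tag = protected_signature(data, "user-password", salt)
# An attacker who only knows sha256(data) cannot reproduce `tag`
# without the password, so the download request is rejected.
print(tag.hex())
```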
In this study, we compare the LASSO and SCAD methods, two special penalized methods for dealing with partially linear quantile regression models. The Nadaraya-Watson kernel estimator was used to estimate the nonparametric part, and the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but according to the mean squared error (MSE) criterion the SCAD method was the best after the missing data were estimated using the mean imputation method.
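For reference, a minimal sketch of the Nadaraya-Watson estimator with a rule-of-thumb bandwidth is given below; the Gaussian kernel, the Silverman-style constant, and the simulated data are illustrative assumptions, not the study's exact setup.

```python
# Minimal sketch of the Nadaraya-Watson kernel estimator with a
# rule-of-thumb bandwidth; kernel choice and data are illustrative.
import numpy as np

def rule_of_thumb_bandwidth(x: np.ndarray) -> float:
    """Silverman-style rule of thumb: h = 1.06 * sigma * n^(-1/5)."""
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def nadaraya_watson(x: np.ndarray, y: np.ndarray, grid: np.ndarray,
                    h: float) -> np.ndarray:
    """Gaussian-kernel regression estimate of E[y | x] on `grid`."""
    u = (grid[:, None] - x[None, :]) / h      # pairwise scaled distances
    w = np.exp(-0.5 * u ** 2)                 # Gaussian kernel weights
    return (w @ y) / w.sum(axis=1)            # weighted local average

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
m_hat = nadaraya_watson(x, y, np.linspace(0, 1, 50), rule_of_thumb_bandwidth(x))
```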
A two-time-step stochastic multi-variable, multi-site hydrological forecasting model was developed and verified using a case study. The philosophy of this model is to use the cross-variable correlations, cross-site correlations, and the two-step time-lag correlations simultaneously to estimate the parameters of the model, which are then modified using the mutation process of a genetic algorithm optimization model. The objective function to be minimized is the Akaike test value. The case study involves four variables and three sites. The variables are the monthly air temperature, humidity, precipitation, and evaporation; the sites are Sulaimania, Chwarta, and Penjwin, located in northern Iraq. The model performance was …
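As a point of reference, the Akaike criterion that the genetic algorithm minimizes is AIC = 2k - 2 ln(L). A minimal sketch follows; the Gaussian residual likelihood is an assumption for illustration, not taken from the paper.

```python
# Minimal sketch of the Akaike criterion minimized by the model;
# the Gaussian residual assumption is illustrative only.
import numpy as np

def aic(residuals: np.ndarray, n_params: int) -> float:
    """AIC = 2k - 2 ln(L), with L the Gaussian likelihood of the residuals."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)                       # ML variance estimate
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # Gaussian log-likelihood
    return 2 * n_params - 2 * log_lik
```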
Machine learning offers significant advantages for many problems in the oil and gas industry, especially when it comes to resolving complex challenges in reservoir characterization. Permeability is one of the most difficult petrophysical parameters to predict using conventional logging techniques. Clarifications of the workflow methodology are presented alongside comprehensive models in this study. The purpose of this study is to provide a more robust technique for predicting permeability; previous studies on the Bazirgan field have attempted to do so, but their estimates have been vague, and the methods they offer are obsolete and poorly suited to the actual permeability computation. To …
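The abstract is cut off before naming its methods, so the following is a generic illustration only: a supervised regressor mapping hypothetical well-log features to permeability, with stand-in data in place of the Bazirgan field logs.

```python
# Generic illustration only (the abstract does not name its algorithms):
# a supervised regressor predicting permeability from stand-in log features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                  # stand-in log features (e.g. GR, RHOB, NPHI)
y = np.exp(X @ [0.8, -0.5, 0.3] + rng.normal(0, 0.1, 500))  # stand-in permeability

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out samples:", model.score(X_te, y_te))
```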
Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature-extraction techniques has been achieved. The four techniques are the color-histogram features technique, the properties features technique, the gray-level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to find the similarity measure. Similarity-based matching is very important in CBIR, so three types of similarity measure are used: normalized Mahalanobis distance, Euclidean distance, and Manhattan distance, and a comparison between them has been implemented. From the results, it is conclud…
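A minimal sketch of the three similarity measures compared in the paper is given below; the feature vectors are illustrative stand-ins for the extracted CBIR features.

```python
# Minimal sketch of the three similarity measures compared in the paper;
# the feature vectors are illustrative stand-ins.
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def manhattan(a, b):
    return np.sum(np.abs(a - b))

def normalized_mahalanobis(a, b, features):
    """Mahalanobis distance normalized by the feature database's covariance."""
    cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
    d = a - b
    return np.sqrt(d @ cov_inv @ d)

db = np.random.default_rng(0).normal(size=(100, 8))   # database feature vectors
q, t = db[0], db[1]                                    # query vs. candidate
print(euclidean(q, t), manhattan(q, t), normalized_mahalanobis(q, t, db))
```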
This paper presents a hybrid genetic algorithm (hGA) for optimizing the maximum likelihood function ln L(φ₁, θ₁) of the mixed ARMA(1,1) model. The presented hybrid genetic algorithm couples two processes: the canonical genetic algorithm (cGA), composed of three main steps (selection, local recombination, and mutation), and a local search represented by the steepest descent algorithm (sDA), which is defined by three basic parameters: frequency, probability, and number of local-search iterations. The experimental design is based on simulating the cGA, hGA, and sDA algorithms with different values of the model parameters and sample size (n). The study compares these algorithms according to the MSE value. One can conc…
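For orientation, a minimal sketch of the objective being maximized is shown below: the conditional Gaussian log-likelihood of an ARMA(1,1) series. The zero-mean and zero-initial-error conventions are assumptions for illustration, not the paper's exact formulation; the GA would score candidate (φ₁, θ₁) pairs with this function while steepest descent refines the best ones.

```python
# Minimal sketch of the conditional ARMA(1,1) log-likelihood ln L(phi1, theta1);
# zero mean and e_0 = 0 are illustrative conventions, not the paper's exact setup.
import numpy as np

def arma11_loglik(params, y):
    """Conditional Gaussian log-likelihood of y_t = phi1*y_{t-1} + e_t + theta1*e_{t-1}."""
    phi1, theta1, sigma2 = params
    e = np.zeros_like(y)                  # innovations, with e[0] = 0 by convention
    for t in range(1, len(y)):
        e[t] = y[t] - phi1 * y[t - 1] - theta1 * e[t - 1]
    n = len(y) - 1
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum(e[1:] ** 2) / (2 * sigma2)
```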
Abstract:
Clustered data are common in the social, health, and behavioral sciences. In this type of data the observations are linked, and the clusters can be expressed through the relationship between measurements on units within the same group.
In this research, I estimate the reliability function of clustered data by using the seemingly unrelated…
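The abstract is truncated before the method details, so as a generic illustration of the quantity being estimated, the sketch below computes the empirical reliability (survival) function R(t) = P(T > t) per cluster; the clustered lifetimes are stand-ins.

```python
# Generic illustration of the reliability function R(t) = P(T > t),
# computed empirically per cluster; the lifetimes are stand-ins.
import numpy as np

def empirical_reliability(lifetimes: np.ndarray, t: float) -> float:
    """Fraction of units still functioning beyond time t."""
    return float(np.mean(lifetimes > t))

rng = np.random.default_rng(2)
clusters = {c: rng.exponential(scale=10, size=30) for c in ("A", "B", "C")}
for c, times in clusters.items():
    print(c, empirical_reliability(times, t=5.0))
```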
3D models delivered by digital photogrammetric techniques have increased massively and developed to meet the requirements of many applications. The reliability of these models depends fundamentally on the data-processing cycle and the adopted tool solution, in addition to data quality. Agisoft PhotoScan is professional image-based 3D modelling software that seeks to create orderly, precise 3D content from still images. It works with arbitrary images acquired under both controlled and uncontrolled conditions. Following the recommendations of many users all around the globe, Agisoft PhotoScan has become an important source for generating precise 3D data for different applications. How reliable is this data for accurate 3D mo…