Cloud storage provides scalable, low-cost resources that achieve economies of scale through a cross-user architecture, and data storage is its most important service. As the amount of outsourced data grows explosively, data deduplication, a technique that eliminates data redundancy, becomes essential. To protect the privacy of data owners, data are stored in the cloud in encrypted form. However, encrypted data introduce new challenges: traditional deduplication schemes cannot work on encrypted data, and existing solutions for encrypted-data deduplication suffer from security weaknesses. This paper proposes a scheme that combines compressive sensing with video deduplication to maximize the deduplication ratio. Our approach uses data deduplication to remove identical copies of a video. Our experimental results show significant storage savings while providing a strong level of security.
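The abstract leaves the deduplication mechanism unspecified; the sketch below is a minimal, hypothetical illustration of the identical-copy removal step using content hashing (the function names and the SHA-256 choice are assumptions, not the paper's method).

import hashlib

def deduplicate_chunks(chunks):
    """Store each distinct chunk once; return (store, per-chunk references)."""
    store = {}   # content hash -> chunk bytes, kept once
    refs = []    # one hash reference per incoming chunk
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk
        refs.append(digest)
    return store, refs

# Three chunks, two identical: only two copies end up stored.
store, refs = deduplicate_chunks([b"frame-a", b"frame-b", b"frame-a"])
assert len(store) == 2 and len(refs) == 3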
Accurate predictive tools for vapor-liquid equilibrium (VLE) calculation are always needed. A new method is introduced for VLE calculation that is very simple to apply and gives very good results compared with previously used methods. It requires no physical properties; each binary system needs only two constants. The method can be applied to calculate VLE data for any binary system of any polarity or from any group family, provided the binary system does not form an azeotrope. The method is then extended to cover a range of temperatures; this extension requires nothing beyond applying the newly proposed form with the same two constants per system. The method and its extension were applied to 56 binary mixtures with 1120 equilibrium data points.
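The abstract does not name the correlating form, so as context only: a classic example of a two-constant model for a binary system is the two-parameter Margules activity-coefficient equation (shown for reference; it is not necessarily the paper's proposed form), used together with modified Raoult's law:

\[
\ln\gamma_1 = x_2^2\left[A_{12} + 2\,(A_{21}-A_{12})\,x_1\right], \qquad
\ln\gamma_2 = x_1^2\left[A_{21} + 2\,(A_{12}-A_{21})\,x_2\right],
\]
\[
y_i P = x_i\,\gamma_i\,P_i^{\mathrm{sat}},
\]

where the two constants \(A_{12}\) and \(A_{21}\) play the role of the per-system pair of constants described above.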
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This condition places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters; it is a common approach that divides an input space into several homogeneous zones and can be achieved using a variety of algorithms, such as k-means and fuzzy c-means (FCM). This study used three models to cluster a brain tumor dataset. The first model uses FCM.
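A minimal, self-contained sketch of the FCM algorithm named above (the fuzzifier m = 2, the random initialization, and the toy one-dimensional data are assumptions for illustration, not the study's configuration):

import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-10)                 # guard against division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)             # standard FCM membership update
    return centers, U

# Toy usage: two obvious groups of 1-D expression values.
X = np.array([[0.1], [0.2], [0.15], [5.0], [5.2], [4.9]])
centers, U = fuzzy_c_means(X, c=2)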
Traffic classification is the task of categorizing traffic flows into application-aware classes such as chat, streaming, VoIP, etc. Most network traffic identification systems are based on features; these features may be static signatures, port numbers, statistical characteristics, and so on. Although current methods of data-flow classification are effective, they still lack inventive approaches that meet the needs of vital requirements such as real-time traffic classification, low power consumption, low Central Processing Unit (CPU) utilization, etc. Our novel Fast Deep Packet Header Inspection (FDPHI) traffic classification proposal employs a 1-Dimensional Convolutional Neural Network (1D-CNN) to automatically learn more representational characteristics.
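The FDPHI architecture itself is truncated above; the PyTorch sketch below is only a generic illustration of a 1D-CNN applied to raw packet-header bytes (the layer sizes, header length, and six traffic classes are assumptions, not the paper's design).

import torch
import torch.nn as nn

class HeaderCNN(nn.Module):
    """Toy 1D-CNN over raw packet-header bytes scaled to [0, 1]."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # length-agnostic pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, header_len)
        return self.classifier(self.features(x).squeeze(-1))

# One forward pass on a random batch of 8 forty-byte "headers".
model = HeaderCNN()
logits = model(torch.rand(8, 1, 40))   # output shape: (8, 6)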
The seasonal behavior of the light curves of the selected dwarf novae SS UMi and EX Dra during the outburst cycle is studied. This behavior describes the maximum temperature of the outburst in a dwarf nova. The raw data were mathematically modeled by fitting a Gaussian function based on the full width at half maximum (FWHM) and the peak value of the Gaussian. The results of this modeling give the temperature of each dwarf nova system, leading to the identification of the type of elements each dwarf nova consists of.
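A minimal sketch of the Gaussian-fitting step, using synthetic data in place of the real photometry (the fitting routine, initial guesses, and noise level are assumptions); the FWHM follows from the fitted width as FWHM = 2*sqrt(2 ln 2)*sigma.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, t0, sigma):
    """Gaussian outburst profile: peak height amp at time t0, width sigma."""
    return amp * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))

# Synthetic light curve standing in for the observed data.
t = np.linspace(0.0, 20.0, 200)
flux = gaussian(t, 3.0, 10.0, 2.0) + 0.05 * np.random.default_rng(0).normal(size=t.size)

(amp, t0, sigma), _ = curve_fit(gaussian, t, flux, p0=[1.0, t.mean(), 1.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma   # full width at half maximum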
Abstract. Full-waveform airborne laser scanning data have shown their potential to enhance available segmentation and classification approaches through the additional information they can provide. However, this additional information cannot directly provide a valid physical representation of surface features, because many variables affect the backscattered energy during its travel between the sensor and the target. In effect, this produces a mismatch between signals from overlapping flightlines. Therefore, direct use of this information is not recommended without the adoption of a comprehensive radiometric calibration strategy that accounts for all these effects. This paper presents a practical and reliable radiometric calibration routine.
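The calibration details are cut off above; for context only, radiometric calibration of full-waveform data is commonly anchored to the laser radar (range) equation, reproduced here in a standard form that is not necessarily the exact formulation this paper adopts:

\[
P_r = \frac{P_t\,D_r^2}{4\pi R^4 \beta_t^2}\,\eta_{\mathrm{sys}}\,\eta_{\mathrm{atm}}\,\sigma,
\]

where \(P_r\) is the received power, \(P_t\) the transmitted power, \(D_r\) the receiver aperture diameter, \(R\) the sensor-to-target range, \(\beta_t\) the transmitter beamwidth, \(\eta_{\mathrm{sys}}\) and \(\eta_{\mathrm{atm}}\) the system and atmospheric transmission factors, and \(\sigma\) the backscatter cross-section; calibration solves for \(\sigma\) (or a normalized variant) so that overlapping flightlines agree.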
An effective decision-making process is the basis for successfully solving any engineering problem. Many decisions taken in construction projects differ in their nature owing to the complex nature of such projects. One of the most crucial decisions, and one that can cause numerous issues over the course of a construction project, is the selection of the contractor. This study aims to use the ordinal priority approach (OPA) for the contractor-selection process in the construction industry. The proposed model involves two computer programs: the first is used to evaluate the decision-makers/experts in construction projects, while the second is used to formulate the OPA model.
The seismic method depends on the nature of the waves reflected from the interfaces between layers, which in turn depends on the density and velocity of each layer; the product of the two is called the acoustic impedance. The seismic sections of the East Abu-Amoud field, located in Missan Province, south-eastern Iraq, were studied and interpreted to update the structural picture of the Mishrif Formation, the major reservoir in the field. The Mishrif Formation is rich in petroleum in this area, which covers about 820 km2. The horizon was calibrated and defined on the seismic sections with well-log data (well tops, check shots, sonic logs, and density logs) in the interpretation process.
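The acoustic-impedance relation mentioned in the opening sentence can be written explicitly; at normal incidence, the reflection coefficient \(R\) at the interface between layer 1 and layer 2 is the standard expression

\[
Z = \rho\,v, \qquad R = \frac{Z_2 - Z_1}{Z_2 + Z_1} = \frac{\rho_2 v_2 - \rho_1 v_1}{\rho_2 v_2 + \rho_1 v_1},
\]

where \(\rho\) is the layer density and \(v\) its seismic velocity; stronger impedance contrasts between layers produce stronger reflections on the seismic section.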