Cloud computing offers a new way of provisioning services by rearranging various resources over the Internet. The most important and popular cloud service is data storage. To preserve the privacy of data holders, data are often stored in the cloud in encrypted form. However, encrypted data introduce new challenges for cloud data deduplication, which is crucial for big data storage and processing in the cloud, because traditional deduplication schemes cannot work on encrypted data. Among such data, digital videos are particularly large in terms of storage cost and size, so techniques that protect the legal interests of video owners, such as copyright protection, while reducing cloud storage cost and size are always desired. This paper focuses on video copyright protection and deduplication. A video copyright and deduplication scheme for cloud storage environments is proposed, using the H.264 compression algorithm and the SHA-512 hashing technique. The scheme combines copyright protection and deduplication based on video content to authenticate and verify the integrity of the compressed H.264 video. The design consists of two modules. First, the H.264 compression algorithm is applied to the video given by the user. Second, the user generates a unique signature for each time interval of the compressed video, in such a way that the CSP can use it to compare the user's video against other videos without compromising the security of the user's video. To prevent an attacker from gaining access to the hash signature while it is being uploaded to the cloud, the signature is encrypted with the user's password. Experimental results are provided, showing the effectiveness of the proposed copyright protection and deduplication system.
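The signature step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the interval size, the placeholder payload, and the toy XOR cipher keyed by a PBKDF2-derived keystream are all assumptions (a real deployment would use an authenticated cipher such as AES-GCM for the password-encryption step).

```python
import hashlib
import secrets

def interval_signatures(data: bytes, interval_size: int = 1 << 20):
    """SHA-512 signature for each fixed-size interval of the compressed video."""
    return [hashlib.sha512(data[i:i + interval_size]).hexdigest()
            for i in range(0, len(data), interval_size)]

def encrypt_signature(signature: str, password: str, salt: bytes) -> bytes:
    # Toy XOR cipher with a PBKDF2-derived keystream -- illustration only.
    key = hashlib.pbkdf2_hmac("sha512", password.encode(), salt,
                              100_000, dklen=len(signature))
    return bytes(a ^ b for a, b in zip(signature.encode(), key))

video = b"fake H.264 payload " * 200_000       # placeholder for compressed video
sigs = interval_signatures(video)              # one 128-hex-char signature per interval
salt = secrets.token_bytes(16)
token = encrypt_signature(sigs[0], "user password", salt)
```

The CSP can compare the (deterministic) per-interval signatures of two uploads for deduplication without ever seeing the plaintext video.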
This research seeks to identify the impact of fraud detection skills on the settlement of compensatory claims in the fire and accident insurance portfolio, and how these skills help prevent and reduce the payment of undue compensation to those who seek profit and enrichment at the expense of the insurance contract. Compensatory claims in the fire and accident insurance portfolios of the two companies studied show the positive effect and return of fraud detection and settlement skills on the amount of actual compensation paid against claims inflated by some of the insured. The research sample consisted of (70) respondents from a population of (85) individuals, ranging from director to assistan
The development of improved methods for the synthesis of metal oxide nanoparticles is a high priority for the advancement of materials science and technology. Herein, ZnO was biosynthesized using aqueous extracts of the hydra helix of Beta vulgaris and the seed of Abrus precatorius, serving respectively as stabilizer and reducing reagent. The product was characterized by spectroscopic methods (FT-IR, UV-Vis). FT-IR confirmed the presence of the ZnO band, and the UV-Vis spectrum showed an absorption peak corresponding to the ZnO nanostructures. X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDX) were used to investigate the size, structure, and composition of the synthesized ZnO nanocrystals. The XRD pattern mat
Data mining plays a major role in healthcare by discovering hidden relationships in big datasets, especially in breast cancer diagnostics; breast cancer is among the most common causes of death in the world. In this paper, two algorithms, decision tree and K-Nearest Neighbour, are applied to diagnose breast cancer grade in order to reduce its risk to patients. For the decision tree with feature selection, the Gini index gives an accuracy of 87.83%, while entropy-based feature selection gives an accuracy of 86.77%. In both cases, Age appeared as the most effective parameter, particularly when Age < 49.5, with Ki67 appearing as the second most effective parameter. Furthermore, K-Nearest Neighbour is based on the minimu
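The two split criteria mentioned above can be made concrete with a short sketch. The patient records below are purely hypothetical illustrative data, not the paper's dataset; the sketch only shows how a decision tree would score the Age < 49.5 split with Gini impurity versus entropy.

```python
from collections import Counter
from math import log2

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy: -sum(p * log2(p)) over class proportions."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

# Hypothetical (age, grade) records -- for illustration only.
records = [(35, "low"), (42, "low"), (47, "high"), (44, "low"),
           (51, "high"), (58, "high"), (62, "high"), (55, "high")]

def split_impurity(records, threshold, measure):
    """Weighted impurity of splitting on age < threshold."""
    left  = [g for a, g in records if a < threshold]
    right = [g for a, g in records if a >= threshold]
    n = len(records)
    return (len(left) / n) * measure(left) + (len(right) / n) * measure(right)

print(split_impurity(records, 49.5, gini))  # -> 0.1875
```

A lower weighted impurity means a better split, so the tree would rank candidate thresholds by this score.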
In digital images, protecting sensitive visual information against unauthorized access is a critical issue, and robust encryption methods are the best way to preserve such information. This paper introduces a model designed to enhance the performance of the Tiny Encryption Algorithm (TEA) for image encryption. Two approaches are suggested as preprocessing steps before applying TEA; both aim to de-correlate and weaken the relationships among adjacent pixel values in preparation for encryption. The first approach applies an Affine transformation to the image in two layers, utilizing a different key set for each layer. Th
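For reference, the standard TEA round function and a single-layer affine pixel map can be sketched as below. The TEA constants (delta = 0x9E3779B9, 32 rounds, 128-bit key as four 32-bit words) are the published algorithm; the affine parameters a = 5, b = 7 are arbitrary example values (any a coprime with 256 keeps the map invertible), not the paper's keys.

```python
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9  # TEA's magic constant, floor(2**32 / golden ratio)

def tea_encrypt(v0, v1, key):
    """Encrypt one 64-bit block (two 32-bit halves) with a 4-word key."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK
        v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, key):
    """Inverse of tea_encrypt: run the 32 rounds backwards."""
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK
        v0 = (v0 - (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

def affine(pixels, a=5, b=7):
    """Affine pixel map p -> (a*p + b) mod 256; invertible since gcd(a, 256) == 1."""
    return [(a * p + b) % 256 for p in pixels]

def affine_inverse(pixels, a=5, b=7):
    a_inv = pow(a, -1, 256)  # modular inverse of a
    return [(a_inv * (p - b)) % 256 for p in pixels]
```

Applying the affine layer before TEA breaks up runs of identical pixel values, which is the de-correlation effect the preprocessing step is after.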
Numeral recognition is an essential preliminary step for optical character recognition, document understanding, and other tasks. Although several handwritten numeral recognition algorithms have been proposed, achieving adequate recognition accuracy and execution time remains challenging, in particular because recognition accuracy depends on the feature extraction mechanism. A fast and robust numeral recognition method is therefore needed, one that meets the desired accuracy by extracting features efficiently while maintaining fast implementation time. Furthermore, most existing studies evaluate their methods only in clean environments, limiting understanding of their potential a
On the Internet nothing is secure, and since we need means of protecting our data, passwords have become important in the electronic world. To prevent hacking and to protect databases that contain important information such as ID cards and banking details, the proposed system stores the username after hashing it with SHA-256, and strong passwords are stored so as to repel attackers using one of two methods. The first method adds a random salt to the password using a CSPRNG, then hashes it with SHA-256 and stores it on the website. The second method uses the PBKDF2 algorithm, which salts the passwords and stretches them (deriving the password) before being ha
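Both storage methods can be sketched with the Python standard library. This is a generic illustration, not the paper's code: the 16-byte salt length and the PBKDF2 iteration count are assumed example values, and `secrets` stands in for whichever CSPRNG the system uses.

```python
import hashlib
import hmac
import secrets

def store_salted_sha256(password: str):
    """Method 1: random salt from a CSPRNG, then a single SHA-256 pass."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest          # both are stored; the password never is

def store_pbkdf2(password: str, iterations: int = 310_000):
    """Method 2: PBKDF2 salts and stretches (derives) the password."""
    salt = secrets.token_bytes(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, dk.hex()

def verify_pbkdf2(password: str, salt: bytes, stored: str,
                  iterations: int = 310_000) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(dk.hex(), stored)
```

The iteration count is what makes PBKDF2 "stretched": each guess an attacker makes costs hundreds of thousands of hash evaluations instead of one.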
Joining tissue is a growing problem in surgery as technology advances and more precise and difficult surgeries are performed. Tissue welding using lasers is a promising technique that may help advance surgical practice. Objectives: To study the ability of lasers to join tissues and the optimum parameters for good tissue welding. Methods: An in-vitro study, done at the Institute of Laser, Baghdad University, during the period from October 2008 to February 2009. Diode and Nd:YAG lasers were applied, in different sessions, to sheep small intestine, with or without solder, to weld a 2-mm-long full-thickness incision. Different powers and energies were used to obtain the maximum effect. Re
The present work examined oxidative desulfurization in a batch system for model fuels with 2250 ppm sulfur content, using air as the oxidant and a ZnO/AC composite prepared by a thermal co-precipitation method. Several factors were studied: composite loading (1, 1.5, and 2.5 g), temperature (25 °C, 30 °C, and 40 °C), and reaction time (30, 45, and 60 minutes). The optimum conditions were obtained using a Taguchi experimental design for the oxidative desulfurization of the model fuel; the highest sulfur removal was about 33% at the optimum conditions. The kinetics and the effect of internal mass transfer were also studied, and an empirical kinetic model was derived for the model fuels
The denoising of a natural image corrupted by Gaussian noise is a classic problem in signal and image processing. Much work has been done on wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for suppressing noise in images by fusing stationary wavelet denoising with an adaptive Wiener filter. The Wiener filter is applied to the approximation coefficients only, while the thresholding technique is applied to the detail coefficients of the transform, and the final denoised image is obtained by combining the two results. The proposed method was applied by usin
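The split treatment of coefficients can be illustrated in one dimension. This is a simplified sketch, not the paper's method: it uses a single-level ordinary (not stationary) Haar transform, soft thresholding on the details, and a basic Wiener-style shrinkage on the approximation, with an assumed known noise variance.

```python
from math import sqrt
from statistics import mean, pvariance

R2 = sqrt(2.0)

def haar(x):
    """One-level Haar decomposition of an even-length signal."""
    approx = [(x[i] + x[i + 1]) / R2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / R2 for i in range(0, len(x), 2)]
    return approx, detail

def ihaar(approx, detail):
    """Inverse one-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / R2, (a - d) / R2]
    return out

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (applied to details)."""
    return [max(abs(c) - t, 0.0) * (1.0 if c > 0 else -1.0) for c in coeffs]

def wiener_shrink(coeffs, noise_var):
    """Wiener-style shrinkage toward the mean (applied to the approximation)."""
    m, v = mean(coeffs), pvariance(coeffs)
    gain = max(v - noise_var, 0.0) / v if v > 0 else 0.0
    return [m + gain * (c - m) for c in coeffs]

def denoise(x, t=0.5, noise_var=0.25):
    a, d = haar(x)
    return ihaar(wiener_shrink(a, noise_var), soft_threshold(d, t))
```

The same idea carries over to 2-D images and to the stationary (undecimated) wavelet transform the paper actually uses; only the transform step changes.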
Audio Compression Using Transform Coding with LZW and Double Shift Coding. Zainab J. Ahmed & Loay E. George. Conference paper in New Trends in Information and Communications Technology Applications, first online 11 January 2022; part of the Communications in Computer and Information Science book series (CCIS, volume 1511). Abstract: The need for audio compression remains a vital issue because of its significance in reducing the data size of one of the most common digital media exchanged between distant parties. In this paper, the efficiencies of two audio compression modules were investigated; the first module is based on the discrete cosine transform and the second module is based on the discrete wavelet tr
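The LZW stage named in the title is a standard dictionary coder and can be sketched as follows. This is a textbook LZW codec, not the authors' implementation, and the double shift coding stage is not reproduced here.

```python
def lzw_compress(data: bytes):
    """Textbook LZW: grow a dictionary of byte strings, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = len(table)      # register the new string
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    """Rebuild the dictionary on the fly from the code stream."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        # The special case handles a code emitted before it was registered.
        entry = table[code] if code in table else w + w[:1]
        out += entry
        table[len(table)] = w + entry[:1]
        w = entry
    return bytes(out)
```

In a transform-coding pipeline like the one described, LZW would be applied to the quantized DCT or DWT coefficients, where long repeated runs make dictionary coding effective.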