Nowadays, 3D content is becoming an essential part of multimedia applications; when it is not protected, attackers may steal or tamper with it. This paper introduces a scheme that provides strong protection for 3D content by implementing multiple levels of security while preserving the original size using a weight factor (w). The first level of security encrypts the texture map based on a 2D Logistic chaotic map. The second level shuffles the vertices (confusion) based on a 1D Tent chaotic map. The third level modifies the vertex values (diffusion) based on a 3D Lorenz chaotic map. Results illustrate that the proposed scheme completely deforms the entire 3D content, with a Hausdorff Distance (HD) of approximately 100 after encryption. It provides high security against brute-force attack because it has a large key space equal to 10^165, together with strong secret-key sensitivity, with NPCR near 99.6% and UACI near 33.4%. The histogram and HD indicate that the decrypted 3D content is identical to the original, with HD values approximating zero.
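A minimal sketch of how a 1D Tent chaotic map could drive the vertex-shuffling (confusion) level, assuming NumPy; the parameters x0 and mu and the permutation-by-sorting construction are illustrative assumptions, not necessarily the authors' exact design.

```python
import numpy as np

def tent_map_sequence(x0, mu, n):
    """Iterate the 1D Tent map: x_{k+1} = mu*x_k if x_k < 0.5 else mu*(1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        seq[k] = x
    return seq

def shuffle_vertices(vertices, x0=0.37, mu=1.99):
    """Confusion step: permute the vertex order by sorting the chaotic sequence."""
    perm = np.argsort(tent_map_sequence(x0, mu, len(vertices)))
    return vertices[perm], perm

def unshuffle_vertices(shuffled, perm):
    """Decryption side: invert the permutation to restore the original order."""
    restored = np.empty_like(shuffled)
    restored[perm] = shuffled
    return restored

# Example: shuffle and restore a small set of 3D vertices
verts = np.random.rand(8, 3)
shuffled, perm = shuffle_vertices(verts)
assert np.allclose(unshuffle_vertices(shuffled, perm), verts)
```

Because the permutation is derived entirely from the secret initial condition and control parameter, the receiver can regenerate it from the key alone and does not need the permutation to be transmitted.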
With the development of information technology and means of information transfer, it has become necessary to protect sensitive information. The current research presents a method to protect secret colored images that includes three phases. The first phase calculates a hash value using a hash function to ensure that the contents of the secret image have not been tampered with or altered. The second phase encrypts the image and embeds it randomly into an appropriate cover image using the Random Least Significant Bit (RLSB) technique. Random hiding protects the information embedded inside the cover image because the hiding positions cannot be predicted, as well as the difficulty of determining the concealment positions through analysis of the im...
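A minimal sketch of random LSB embedding, assuming NumPy and a shared secret seed that drives the choice of embedding positions; the seed-based position selection and one-bit-per-byte payload are illustrative assumptions rather than the paper's exact RLSB construction.

```python
import numpy as np

def rlsb_embed(cover, payload_bits, seed):
    """Embed payload bits into randomly chosen LSBs of a flattened cover image."""
    stego = cover.copy().ravel()
    rng = np.random.default_rng(seed)                    # shared secret seed
    positions = rng.choice(stego.size, size=len(payload_bits), replace=False)
    stego[positions] = (stego[positions] & 0xFE) | payload_bits
    return stego.reshape(cover.shape)

def rlsb_extract(stego, n_bits, seed):
    """Recreate the same random positions from the seed and read back the LSBs."""
    flat = stego.ravel()
    rng = np.random.default_rng(seed)
    positions = rng.choice(flat.size, size=n_bits, replace=False)
    return flat[positions] & 1

# Example: hide 128 bits in a random cover image and recover them with the same seed
cover = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
bits = np.random.randint(0, 2, size=128, dtype=np.uint8)
stego = rlsb_embed(cover, bits, seed=12345)
assert np.array_equal(rlsb_extract(stego, len(bits), seed=12345), bits)
```

Without the seed, an analyst cannot reproduce the embedding positions, which is the property the abstract relies on for resistance to steganalysis.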
Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnosis process, ophthalmologists must review demographic and clinical ophthalmic examinations. The latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Then, our EDTL method combines the output probabilities of each of the five classifiers to obtain a decision b...
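Probability averaging (soft voting) is one common way to fuse such outputs; the sketch below assumes each of the five classifiers returns class probabilities for normal vs. KCN and combines them with equal weights, which is an assumption rather than necessarily the paper's exact fusion rule.

```python
import numpy as np

def edtl_decision(prob_list, class_names=("normal", "KCN")):
    """Fuse per-classifier class probabilities by simple averaging (soft voting).

    prob_list: one array per classifier (e.g. SqN, AN, SfN, MN, and the PI
    classifier), each shaped (n_classes,) and summing to 1.
    """
    fused = np.mean(np.stack(prob_list), axis=0)   # equal weights assumed
    return class_names[int(np.argmax(fused))], fused

# Example: five classifiers voting on one case
probs = [np.array([0.2, 0.8]), np.array([0.4, 0.6]), np.array([0.1, 0.9]),
         np.array([0.3, 0.7]), np.array([0.45, 0.55])]
label, fused = edtl_decision(probs)
print(label, fused)   # KCN [0.29 0.71]
```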
Automatic document summarization technology is evolving and may offer a solution to the problem of information overload. Multi-document summarization is an optimization problem that demands optimizing more than one objective function concurrently. The proposed work balances two significant objectives, content coverage and diversity, while generating a summary from a collection of text documents. Despite the large efforts of several researchers in designing and evaluating many text summarization techniques, their formulations lack a model that gives an explicit representation of coverage and diversity, the two contradictory semantics of any summary. The design of gener...
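The paper's explicit coverage-and-diversity model is not reproduced here; the sketch below uses an MMR-style greedy trade-off, with assumed sentence embeddings and a weight lam, only to make the tension between the two objectives concrete.

```python
import numpy as np

def greedy_summary(sent_vecs, k, lam=0.7):
    """Greedily pick k sentences, trading coverage against redundancy (diversity).

    sent_vecs: (n_sentences, dim) unit-normalized sentence embeddings.
    lam: weight on coverage; (1 - lam) weights the redundancy penalty.
    """
    centroid = sent_vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    coverage = sent_vecs @ centroid            # similarity to the document centroid
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(sent_vecs)):
            if i in selected:
                continue
            redundancy = max((sent_vecs[i] @ sent_vecs[j] for j in selected), default=0.0)
            score = lam * coverage[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Example with random unit vectors standing in for sentence embeddings
vecs = np.random.randn(20, 50)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
print(greedy_summary(vecs, k=3))
```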
In this research we study the multi-level (partial pooling) model, one of the most important and most widely applied multi-level models in data analysis. This model is characterized by the fact that the treatments take a hierarchical or structural form. Full Maximum Likelihood (FML) was used to estimate the parameters of the partial pooling models (fixed and random), and the models were compared in terms of preference. The application was on suspended dust data in Iraq covering four and a half years; eight stations were selected randomly from among the stations in Iraq. We use Akaike's Informa...
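As an illustration only (the suspended-dust measurements themselves are not reproduced), the sketch below fits a random-intercept partial pooling model by full maximum likelihood with statsmodels and compares it with a completely pooled regression via AIC; the simulated data, variable names, and hard-coded parameter count are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the suspended-dust data: 8 stations, ~4.5 years of monthly readings
rng = np.random.default_rng(0)
stations = np.repeat(np.arange(8), 54)
month = np.tile(np.arange(54), 8)
station_effect = rng.normal(0, 5, 8)[stations]          # random intercept per station
dust = 100 + 0.3 * month + station_effect + rng.normal(0, 10, stations.size)
df = pd.DataFrame({"dust": dust, "month": month, "station": stations})

# Partial pooling: random intercept per station, fitted by full ML (reml=False)
partial = smf.mixedlm("dust ~ month", df, groups=df["station"]).fit(reml=False)

# Complete pooling baseline: ordinary regression ignoring the station structure
pooled = smf.ols("dust ~ month", df).fit()

# Compare by AIC = 2k - 2*log-likelihood (smaller is better)
k = 4   # intercept, month slope, station-intercept variance, residual variance
aic_partial = 2 * k - 2 * partial.llf
print("partial pooling AIC:", round(aic_partial, 1))
print("complete pooling AIC:", round(pooled.aic, 1))
```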
Realizing and understanding semantic segmentation is a demanding task not only in computer vision but also in the earth sciences. Semantic segmentation decomposes compound structures into individual elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object. It is a method for automatically labeling and clustering point clouds. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data representation, and working with 3D data raises many challenges, such as the small number, resolution, and accuracy of available three-dimensional datasets. Deep learning is now the po...
Three-dimensional (3D) image and medical image processing, which are considered forms of big data analysis, have attracted significant attention during the last few years. To this end, efficient 3D object recognition techniques could be beneficial to such image and medical image processing. However, to date, most of the proposed methods for 3D object recognition experience major challenges in terms of high computational complexity. This is attributed to the fact that the computational complexity and execution time increase when the dimensions of the object increase, which is the case in 3D object recognition. Therefore, finding an efficient method for obtaining high recognition accuracy with low computational complexity is essential...
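As a back-of-the-envelope illustration of that complexity claim (not taken from the paper), the sketch below counts multiply-accumulate operations for one same-padded convolution layer over a 2D image versus a 3D voxel grid of the same side length; the layer sizes are arbitrary assumptions.

```python
from math import prod

def conv_macs(in_shape, kernel, in_ch, out_ch):
    """Multiply-accumulate count of a 'same'-padded convolution over a 2D or 3D grid."""
    return prod(in_shape) * prod(kernel) * in_ch * out_ch

# One conv layer on a 64x64 image vs. a 64x64x64 voxel grid
macs_2d = conv_macs((64, 64), (3, 3), in_ch=1, out_ch=16)
macs_3d = conv_macs((64, 64, 64), (3, 3, 3), in_ch=1, out_ch=16)
print(macs_2d, macs_3d, macs_3d / macs_2d)   # the 3D layer costs ~192x more
```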
Wireless sensor network (WSN) security is an important component for protecting data from attackers. Cryptographic technologies for improving security are divided into two kinds: symmetric and asymmetric. Key-generation protocols based on asymmetric methods take a long time relative to a sensor's limited resources and therefore decrease network throughput; asymmetric algorithms are complex and reduce throughput. In this paper, symmetric secret-key encryption in wireless sensor networks (WSN) is proposed. In this work, 24 experiments are conducted, covering encryption with the AES algorithm in the cases of 1 key, 10 keys, 25 keys, and 50 keys...
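A minimal sketch of symmetric AES encryption with a pool of secret keys, assuming the pycryptodome package; the round-robin key assignment, EAX mode, packet size, and timing loop are illustrative assumptions rather than the paper's exact experimental setup.

```python
from os import urandom
from time import perf_counter
from Crypto.Cipher import AES          # pycryptodome, assumed available

def encrypt_with_key_pool(packets, n_keys):
    """Encrypt each sensor packet with AES-128 (EAX mode), cycling through a key pool."""
    keys = [urandom(16) for _ in range(n_keys)]
    out = []
    for i, packet in enumerate(packets):
        key = keys[i % n_keys]                      # round-robin key selection (assumption)
        cipher = AES.new(key, AES.MODE_EAX)
        ciphertext, tag = cipher.encrypt_and_digest(packet)
        out.append((cipher.nonce, ciphertext, tag))
    return out

# Time the four key-pool sizes from the experiments on simulated 32-byte readings
packets = [urandom(32) for _ in range(1000)]
for n_keys in (1, 10, 25, 50):
    t0 = perf_counter()
    encrypt_with_key_pool(packets, n_keys)
    print(f"{n_keys:>2} keys: {perf_counter() - t0:.4f} s")
```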