Big data applications usually run on large-scale, centralized key management systems. However, centralized key management introduces problems such as a single point of failure, secret-key exchange over insecure channels, third-party queries, and key escrow. To avoid these problems, we propose an improved certificate-based encryption scheme that ensures data confidentiality by combining symmetric and asymmetric cryptography. The combination is implemented using the Advanced Encryption Standard (AES) and Elliptic Curve Diffie-Hellman (ECDH). The proposed scheme is an enhanced version of the Certificate-Based Encryption (CBE) scheme and preserves all of its advantages, but its key generation is performed without any intervention from the certificate issuer, avoiding the risk of a compromised CA. The Elliptic Curve Digital Signature Algorithm (ECDSA) is used with ECDH to authenticate the key exchange. The proposed scheme is demonstrated on a big social-network dataset and analyzed against security criteria, and its performance is compared with previous schemes.
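A minimal sketch of the hybrid pattern this abstract describes (ECDH key agreement authenticated with ECDSA, bulk data encrypted under AES), using Python's `cryptography` package. The curve choice, HKDF parameters, and the separate long-term signing key are illustrative assumptions, not the paper's exact construction.

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Asymmetric part: each party generates an ephemeral EC key pair.
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# ECDSA authentication: a long-term key signs the ephemeral public key,
# so the peer can verify who it is agreeing on a key with.
alice_sign_key = ec.generate_private_key(ec.SECP256R1())
alice_pub_bytes = alice_priv.public_key().public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint)
signature = alice_sign_key.sign(alice_pub_bytes, ec.ECDSA(hashes.SHA256()))
alice_sign_key.public_key().verify(
    signature, alice_pub_bytes, ec.ECDSA(hashes.SHA256()))  # raises on forgery

# ECDH: both sides derive the same shared secret without a trusted
# third party handling the key (no escrow, no issuer involvement).
shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())

# Derive a 256-bit AES key from the raw shared secret.
aes_key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"hybrid-demo").derive(shared)

# Symmetric part: encrypt the bulk data with AES-GCM.
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"big data record", None)
```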
This work involves the preparation of the ligand [KL], K[4-(N-(5-methylisoxazol-3-yl)sulfamyl)phenylcarbamodithioate], from the reaction of sulfamethoxazole with carbon disulfide in the presence of potassium hydroxide under reflux (4 hours), using methanol as a solvent. The prepared ligand was characterized using FT-IR, UV-Vis, 1H- and 13C-NMR spectroscopy, molar conductivity, and melting point. Complexes of the above ligand [KL] with some bivalent transition and non-transition metals (Mn+2, Co+2, Ni+2, …
Cloud computing provides a huge amount of space for data storage, but as the number of users and the size of their data grow, cloud storage environments face serious problems such as saving storage space, managing this large volume of data, and preserving its security and privacy. One important method of saving space in cloud storage is data deduplication, a compression technique that keeps only one copy of each piece of data and eliminates the extra copies. To offer security and privacy for sensitive data while still supporting deduplication, this work identifies attacks that exploit hybrid-cloud deduplication, allowing an attacker to gain access to the files of other users based on very small hash signatures of …
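For context, a minimal sketch of the hash-based deduplication mechanism such attacks exploit: the store indexes files by a content hash, and a client who merely presents a known hash is linked to the existing copy without uploading the file. The class and method names here are illustrative assumptions, not the paper's system.

```python
import hashlib

class DedupStore:
    """Content-addressed store: one physical copy per unique hash."""
    def __init__(self):
        self.blobs = {}    # sha256 hex digest -> file bytes
        self.owners = {}   # sha256 hex digest -> set of user ids

    def upload(self, user: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:
            self.blobs[digest] = data  # first copy: actually store it
        self.owners.setdefault(digest, set()).add(user)
        return digest

    def claim_by_hash(self, user: str, digest: str) -> bool:
        # Client-side deduplication: possessing the short hash alone
        # grants access to the stored file -- the weakness the
        # identified attacks abuse to read other users' files.
        if digest in self.blobs:
            self.owners[digest].add(user)
            return True
        return False
```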
Information systems and data exchange between government institutions are growing rapidly around the world, and with them, the threats to information within government departments are growing. In recent years, research into the development and construction of secure information systems in government institutions appears to be very effective. Based on information-system principles, this study proposes a model for providing and evaluating security across all departments of government institutions. The requirements of any information system begin with the organization's surroundings and objectives. Most prior techniques did not take into account the organizational component on which the information system runs, despite the relevance of …
Most models frequently used for modeling and forecasting periodic climatic time series cannot handle the periodic variability that characterizes such series. In this paper, the Fourier Autoregressive (FAR) model, which is able to analyze periodic variability, is implemented. FAR(1), FAR(2), and FAR(2) models were chosen based on the Periodic Autocorrelation Function (PeACF) and the Periodic Partial Autocorrelation Function (PePACF). The coefficients of the tentative models were estimated using a discrete Fourier transform estimation method. The FAR(1) model was chosen as the optimal model based on the smallest values of the Periodic Akaike (PAIC) and Periodic Bayesian (PBIC) information criteria. The residuals of the fitted models were diagnosed …
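As a hedged sketch of this model class (the paper's exact parameterization is not given in the abstract): a periodic autoregression of order p with season length T can be written with periodically varying coefficients expanded in a truncated Fourier basis,

```latex
X_t = \sum_{i=1}^{p} \phi_i(t)\, X_{t-i} + \varepsilon_t ,
\qquad
\phi_i(t) = a_{i,0} + \sum_{k=1}^{K} \left[
    a_{i,k} \cos\frac{2\pi k t}{T} + b_{i,k} \sin\frac{2\pi k t}{T}
\right],
```

where the Fourier coefficients a_{i,k}, b_{i,k} are the quantities a discrete-Fourier-transform estimator would recover; T and the truncation order K are assumed given.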
Missing data is one of the problems that may occur in regression models. It is usually handled by the deletion mechanism available in statistical software, but deletion weakens statistical inference because it reduces the sample size. In this paper, the Expectation-Maximization (EM) algorithm, the Multicycle Expectation-Conditional Maximization (MC-ECM) algorithm, Expectation-Conditional Maximization Either (ECME), and Recurrent Neural Networks (RNN) are used to estimate multiple regression models when the explanatory variables have some missing values. Experimental datasets were generated using the Visual Basic programming language, with values of the explanatory variables missing according to a missing-at-random general pattern and s…
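A minimal EM-style sketch for completing explanatory variables under a joint-Gaussian assumption, after which the regression can be fitted on the completed matrix. The mean-based initialization and the simplifications noted in the comments are illustrative assumptions; the paper's MC-ECM/ECME variants differ in how the M-step is split.

```python
import numpy as np

def em_impute(X: np.ndarray, iters: int = 50) -> np.ndarray:
    """EM-style imputation for a data matrix X (np.nan = missing),
    treating rows as i.i.d. multivariate Gaussian."""
    X = X.copy()
    miss = np.isnan(X)
    # Initialize missing entries with column means.
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(iters):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        # E-step: replace each row's missing block with its conditional
        # expectation given the observed block.
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            coef = cov[np.ix_(m, o)] @ np.linalg.inv(cov[np.ix_(o, o)])
            X[i, m] = mu[m] + coef @ (X[i, o] - mu[o])
        # M-step: mu and cov are re-estimated from the completed data at
        # the top of the next iteration. (Full EM would also fold the
        # conditional covariance of the missing block into cov; omitted
        # here for brevity.)
    return X
```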
The classification of soil in Iraq for industrial purposes is an important topic that requires extensive and specialized study in order to advance the service and industrial sectors of our dear country. While a great deal of scientific research has touched on soil classification in the agricultural, commercial, and other fields, no source or study could be found that addresses the classification of land for industrial purposes directly. In this research, specialized software has been used, namely geographic information system (GIS) software. A geographic information system permits the study of the local distribution of phenomena, activities, and aims that can be determined in the loca…
The study area lies in the northern part of Iraq. This study depends on one scene of Landsat Thematic Mapper (TM5) data, subset using a region-of-interest (ROI) file within the ERDAS 9.2 software. RS and GIS have been used as tools for detecting desertification over the periods 1990-2000 and 2000-2009 using the Normalized Difference Vegetation Index (NDVI), the Water Index (WI), and the Barren Land Index (BLI). The indicators of desertification used in this study for the periods 1990-2000 and 2000-2009 are a decrease in vegetation cover and an increase in water body and barren land.
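For reference, NDVI is computed per pixel from the red and near-infrared reflectance; on Landsat TM5 these are bands 3 and 4. A minimal numpy sketch, with the array names assumed:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
    For Landsat TM5, NIR is band 4 and Red is band 3."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on no-data pixels.
    return np.where(denom == 0.0, 0.0,
                    (nir - red) / np.where(denom == 0.0, 1.0, denom))
```

WI and BLI follow the same band-ratio pattern with their own band combinations (not reproduced here).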
Longitudinal data are becoming increasingly common, especially in the medical and economic fields, and various methods have been developed and refined to analyze this type of data.
In this research, the focus was on clustering and analyzing such data: cluster analysis plays an important role in identifying and grouping co-expressed subprofiles over time, which are then fitted with a nonparametric smoothing cubic B-spline model. The cubic B-spline provides continuous first and second derivatives, resulting in a smoother curve with fewer abrupt changes in slope; it is also more flexible and can pick up more complex patterns and fluctuations in the data.
The balanced longitudinal data profiles were compiled into subgroup…
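A minimal sketch of cubic smoothing-spline fitting for one clustered profile, using scipy's FITPACK-based smoothing spline (which works on a cubic B-spline basis). The time grid, smoothing factor, and toy series are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy longitudinal profile: one subject's repeated measurements.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)                          # observation times
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # noisy response

# k=3 gives a cubic spline with continuous first and second
# derivatives; the smoothing factor s trades fidelity for smoothness.
spline = UnivariateSpline(t, y, k=3, s=0.5)

y_hat = spline(t)                     # smoothed curve
slope = spline.derivative(1)(t)       # continuous first derivative
curvature = spline.derivative(2)(t)   # continuous second derivative
```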