Data encryption translates data into another form or symbol so that only people with access to the secret key or a password can read it. Encrypted data is generally referred to as ciphertext, while unencrypted data is known as plaintext. Entropy can be used as a measure of the number of bits needed to encode the data of an image; as the pixel values within an image are dispersed across more gray levels, the entropy increases. The aim of this research is to compare CAST-128 with a proposed adaptive key against RSA encryption for video frames, to determine the more accurate method with the highest entropy. The first method applies CAST-128 and the second applies RSA. CAST-128 uses a pair of subkeys per round: a 5-bit quantity serving as the rotation key of the round and a 32-bit quantity serving as the masking key of the round. The proposed adaptive 128-bit key is extracted from the main diagonal of each frame before encryption. RSA is a public-key cryptographic technique, also known as asymmetric cryptography; the asymmetry of the keys depends on the difficulty of factoring the product of two large primes. A comparison was carried out on several videos, and the results showed that the CAST-128 method achieved the highest degree of entropy even when the frames contained a lot of distorted data or unclear image pixels. For example, the entropy value of a sample of a girl video is 2581.921 with CAST-128 versus 2271.329 with RSA, and the entropy value of a sample of a scooter video is 2569.814 with CAST-128 versus 2282.844 with RSA.
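The gray-level entropy measure underlying such comparisons can be sketched as follows. This is a minimal illustration of Shannon entropy over a 256-bin histogram, not the authors' code; the function and array names are hypothetical, and the abstract's per-video entropy totals are not reproduced here.

```python
import numpy as np

def frame_entropy(frame: np.ndarray) -> float:
    """Shannon entropy (bits) of a frame's 256-level gray histogram."""
    hist = np.bincount(frame.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                          # skip empty bins (0*log0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
cipher_like = rng.integers(0, 256, size=(64, 64))  # well-spread gray levels
flat = np.full((64, 64), 128)                      # single gray level
print(frame_entropy(cipher_like) > frame_entropy(flat))  # prints True
```

A constant frame has zero entropy, while a frame whose pixels spread uniformly over all gray levels approaches the 8-bit maximum, which is the sense in which "more dispersed gray levels" means "higher entropy" above.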
In this study, we focus on random-coefficient estimation for the general regression and Swamy models of panel data. Panel data offer a better chance of obtaining better methods and better indicators. Entropy methods were used to estimate the random coefficients of the general regression and Swamy panel-data models in two ways: the first is maximum dual entropy and the second is general maximum entropy, and the two were compared by simulation to choose the optimal method.
The results were compared using mean squared error and mean absolute percentage error for different cases in terms of correlation valu…
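The two comparison criteria named above can be written out explicitly; the following is a minimal sketch (function names are ours, and the MAPE form assumes no zero actual values):

```python
import numpy as np

def mse(y, yhat):
    """Mean squared error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean((y - yhat) ** 2))

def mape(y, yhat):
    """Mean absolute percentage error, in percent (assumes y has no zeros)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

print(mse([1, 2, 4], [1, 2, 3]))   # prints 0.333...
print(mape([1, 2, 4], [1, 2, 3]))  # prints 8.333...
```

MSE penalizes large deviations quadratically, while MAPE reports scale-free relative error, which is why studies of this kind typically report both.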
The current study was designed to compare some vital markers in the sera of diabetic and neuropathy patients by estimating adipsin, fasting blood glucose (FBG), glycated hemoglobin (HbA1c), the Homeostasis Model Assessment index (HOMA-IR), cholesterol, high-density lipoprotein (HDL), triglycerides (TG), low-density lipoprotein (LDL), and very-low-density lipoprotein (VLDL) in the sera of Iraqi patients with diabetes and neuropathy. A total of ninety subjects were divided into three groups: group I (30 diabetic males with neuropathy), group II (30 diabetic males without neuropathy), and 30 healthy subjects employed as a control group. The results showed a significant decline in adipsin levels (p>0.05) in the neuropathy and T2DM g…
In order to select the optimal tracking of a fast time-varying multipath Rayleigh fading channel, this paper focuses on the recursive least-squares (RLS) and extended recursive least-squares (E-RLS) algorithms. Based on the simulation results for tracking performance and mean square error over five fast time-varying Rayleigh fading channels, with up to 100 transmit/receive runs to confirm the efficiency of the algorithms, the paper concludes that E-RLS is the more feasible choice.
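The RLS baseline that E-RLS extends can be sketched for a channel-tap tracking task; this is a textbook formulation under our own naming (the extended variant and the paper's Rayleigh channel model are not reproduced):

```python
import numpy as np

def rls_track(x, d, order=4, lam=0.98, delta=100.0):
    """Track FIR channel taps w so that w @ u approximates d[n] (classic RLS)."""
    w = np.zeros(order)
    P = delta * np.eye(order)                 # inverse correlation matrix
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # regressor, newest sample first
        k = P @ u / (lam + u @ P @ u)         # gain vector
        e = d[n] - w @ u                      # a priori error
        w = w + k * e                         # tap update
        P = (P - np.outer(k, u @ P)) / lam    # exponentially weighted inverse
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])           # "true" channel (illustrative)
x = rng.standard_normal(400)
d = np.convolve(x, h)[:len(x)]                # noiseless received signal
w = rls_track(x, d)                           # w converges toward h
```

The forgetting factor `lam` below 1 is what lets RLS follow a channel whose taps drift over time, at the cost of higher steady-state variance.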
This research aims to identify the impact of teaching with analogies on the chemistry achievement of second-grade intermediate students. To verify this, a null hypothesis was formulated and tested. The researcher conducted an experiment lasting a full semester, using an experimental design of two equal groups, one experimental and one control. The research community was a purposively selected intermediate school of the Baghdad Karkh II Education Directorate; the sample consisted of (68) second-grade intermediate students, (34) per group, randomly assigned, with the groups equalized on the following variables (age in months, first-grade average chemistry test scores, informat…
In this research, a comparison was made between the robust (M) estimators for the cubic smoothing splines technique, which avoid the problem of non-normality or contamination of the errors in the data, and the traditional estimation method for cubic smoothing splines, using two differentiation criteria (MADE, WASE) for different sample sizes and contamination levels. The goal is to estimate the time-varying coefficient functions for balanced longitudinal data, which are characterized by observations obtained from (n) independent subjects, each of which is measured repeatedly at a set of specific time points (m); since the repeated measurements within subjects are almost correlated an…
Abstract:
Time series models often suffer from outliers that accompany the data collection process for many reasons; their presence may have a significant impact on the estimation of the parameters of the studied model. Obtaining highly efficient estimators is one of the most important stages of statistical analysis, so it is important to choose appropriate methods to obtain good estimators. The aim of this research is to compare the ordinary estimators and the robust estimators for estimating the parameters of
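The contrast between ordinary and robust estimation under outliers can be illustrated with a Huber M-estimator fitted by iteratively reweighted least squares. This is a generic sketch on a linear model, not the paper's time series estimators; the function name and tuning constant are ours.

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-estimation via iteratively reweighted least squares (sketch)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting values
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        w = np.clip(c * s / (np.abs(r) + 1e-12), None, 1.0)       # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 2 + 3 * x + rng.normal(0, 0.5, n)
y[:20] += 15.0                                         # gross outliers
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]        # pulled toward outliers
beta_rob = huber_irls(X, y)                            # close to (2, 3)
```

Observations with large residuals get weights below one, so the gross outliers barely influence the robust fit, which is exactly the behavior the ordinary estimator lacks.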
Multilevel models are among the most important models, widely used for analyzing data in which the observations take a hierarchical form. In our research we examined the multilevel logistic regression model (random-intercept and random-slope model). The importance of the research lies in the fact that the usual regression models compute only the total variance of the model and cannot read the variation between levels; in the multilevel case that total variance is inaccurate, so multilevel regression models instead compute the variance at each level of the model. The research aims to estimate the parameters of this m…
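The level-wise variance reading described above is often summarized, for a random-intercept logistic model, by the intraclass correlation, where the level-1 residual variance is fixed at the standard-logistic value pi^2/3. A minimal sketch (function and variable names are ours):

```python
import math

def logit_icc(sigma_u2: float) -> float:
    """Share of total variance at the group level in a random-intercept
    logistic model; level-1 variance is the standard-logistic pi^2 / 3."""
    level1 = math.pi ** 2 / 3
    return sigma_u2 / (sigma_u2 + level1)

# e.g. an estimated between-group intercept variance of 1.0
print(round(logit_icc(1.0), 3))   # prints 0.233
```

A larger share at the group level signals that an ordinary single-level logistic regression, which pools this variance into one term, would misstate the model's variance structure.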
This paper presents the grey model GM(1,1), a first-order model in one variable that is the basis of grey system theory. The research addresses the properties of the grey model and a set of methods for estimating the parameters of GM(1,1): the least squares method (LS), the weighted least squares method (WLS), the total least squares method (TLS), and the gradient descent method (DS). These methods were compared using two criteria, mean squared error (MSE) and mean absolute percentage error (MAPE); after comparison by simulation, the best method was applied to real data represented by the consumption rates of two types of fuel oil, heavy fuel oil (HFO) and diesel fuel (D.O), and several tests were applied to
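The LS variant of GM(1,1) estimation can be sketched directly from its standard construction: accumulate the series, form background values, solve for the development coefficient a and grey input b by least squares, then difference the fitted accumulated series back. This is a generic textbook sketch, not the paper's code.

```python
import numpy as np

def gm11(x0):
    """GM(1,1) fit with least-squares parameter estimation (sketch)."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                          # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # x0[k] = -a*z1[k] + b
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitened-equation solution
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # restore by differencing
    return a, b, x0_hat

# near-exponential data, which GM(1,1) is designed for (values illustrative)
x0 = 2.0 * np.exp(0.1 * np.arange(8))
a, b, x0_hat = gm11(x0)
```

The WLS, TLS, and gradient-descent variants compared in the paper change only how a and b are solved from the same matrix equation.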
Diffie-Hellman is a key exchange protocol that provides a way to establish shared secret keys between two parties, even though those parties might never have communicated before. This paper suggests a new way to transfer keys over public or non-secure channels, based on video files sent over the channel from which the keys are then extracted. The proposed key generation method depends on the video file content, using the entropy values of the video frames. The proposed system addresses weaknesses of the Diffie-Hellman key exchange algorithm, namely the MIMA (man-in-the-middle attack) and the DLA (discrete logarithm attack). When the method uses high-definition videos with a vast amount of data, the keys generated are large, up to 5
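The baseline exchange that the paper modifies can be sketched with toy parameters; the modulus and generator below are illustrative only (real deployments use standardized 2048-bit groups), and the paper's video-entropy key derivation is not reproduced.

```python
import secrets

p = 2**127 - 1      # toy prime modulus (a Mersenne prime; illustrative only)
g = 3               # toy generator choice

a = secrets.randbelow(p - 2) + 1    # Alice's private exponent
b = secrets.randbelow(p - 2) + 1    # Bob's private exponent

A = pow(g, a, p)    # Alice sends A over the public channel
B = pow(g, b, p)    # Bob sends B over the public channel

shared_alice = pow(B, a, p)         # Alice computes g^(ab) mod p
shared_bob = pow(A, b, p)           # Bob computes the same value
assert shared_alice == shared_bob
```

Because only A and B cross the channel, an eavesdropper must solve a discrete logarithm to recover the shared value; the attacks the paper targets (MIMA, DLA) arise because plain Diffie-Hellman neither authenticates A and B nor hides the group structure.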
Cyber-attacks keep growing, so stronger methods are needed to protect images. This paper presents DGEN, a Dynamic Generative Encryption Network, which combines generative adversarial networks with a key system that can change with context. Unlike a fixed scheme such as AES, the method can potentially adjust itself when new threats appear, aiming to resist brute-force, statistical, and quantum attacks. The design adds randomness, uses learning, and derives keys that depend on each image, giving strong security and flexibility while keeping computational cost low. Tests were run on several public image sets, and the results show DGEN outperforming AES, chaos-based schemes, and other GAN approaches, with entropy reaching 7.99 bits per pix