The dramatic decrease in the cost of genome sequencing over the last two decades has led to an abundance of genomic data. These data have been used in research on the discovery of genetic diseases and the production of medicines. At the same time, the large storage footprint of a genome (2–3 GB) has made it one of the most important sources of big data, which has prompted research centers concerned with genetic research to take advantage of the cloud and its services for storing and managing these data. The cloud is a shared storage environment, which makes data stored in it vulnerable to unwanted tampering or disclosure. This leads to serious concerns about securing such data from tampering and unauthorized searches. Alongside techniques such as secure query processing, secure computation over the data, differential privacy, and garbled circuits, cryptography is considered one of the important solutions to this problem. This paper introduces the most important challenges related to maintaining privacy and security and matches each problem with appropriate proposed or applied solutions, which will fuel researchers' future interest in developing more effective privacy-preserving methods for genomic data.
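As one small illustration of the differential-privacy approach mentioned above, the sketch below applies the Laplace mechanism to a hypothetical allele-count query; the query, epsilon value, and counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_count: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of a count query.

    Adding Laplace(0, sensitivity/epsilon) noise to a query whose output changes
    by at most `sensitivity` when one individual's genome is added or removed
    gives epsilon-differential privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: number of study participants carrying a given variant.
true_carriers = 132   # illustrative value
private_answer = laplace_mechanism(true_carriers, sensitivity=1.0, epsilon=0.5)
print(f"noisy carrier count released to the analyst: {private_answer:.1f}")
```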
In this study, the adsorption of Zn(NO3)2 is carried out using surfaces of Malva parviflora. The validity of the adsorption is evaluated by atomic absorption spectrophotometry through determination of the amount of adsorbed Zn(NO3)2. Various parameters, such as pH, adsorbent weight, and contact time, are studied in terms of their effect on the reaction progress. Furthermore, Lagergren's equation is used to determine the adsorption kinetics. It is observed that the highest removal of Zn(NO3)2 is obtained at pH = 2. Removal of Zn(NO3)2 is highest at a contact time of 60 min, where equilibrium is reached, and 0.25 g is the best adsorbent weight. Regarding kinetics, the reaction onto Malva parviflora follows the pseudo-first-order Lagergren equation.
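For reference, the linearized pseudo-first-order (Lagergren) form that such a kinetic fit typically relies on is, in standard notation (where $q_e$ and $q_t$ are the amounts adsorbed at equilibrium and at time $t$, and $k_1$ is the rate constant),

\[ \ln\left(q_e - q_t\right) = \ln q_e - k_1 t . \]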
The aim of this research is to obtain numerical solutions of the Volterra linear integral equation of the second kind using numerical methods such as the trapezoidal and Simpson's rules, and to derive some statistical properties: the expected value, the variance, and the correlation coefficient between the numerical and exact solutions.
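A minimal sketch of the trapezoidal approach for a second-kind Volterra equation $u(x) = f(x) + \int_a^x K(x,t)\,u(t)\,dt$ is given below; the test kernel, the right-hand side, and the grid size are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

def volterra2_trapezoid(f, K, a, b, n):
    """Solve u(x) = f(x) + int_a^x K(x, t) u(t) dt with the trapezoidal rule."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    u = np.zeros(n + 1)
    u[0] = f(x[0])
    for i in range(1, n + 1):
        # trapezoidal weights: 1/2 at the endpoints, 1 in between
        s = 0.5 * K(x[i], x[0]) * u[0] + sum(K(x[i], x[j]) * u[j] for j in range(1, i))
        u[i] = (f(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return x, u

# Illustrative test problem: u(x) = 1 + int_0^x u(t) dt, exact solution u(x) = exp(x).
x, u_num = volterra2_trapezoid(lambda x: 1.0, lambda x, t: 1.0, 0.0, 1.0, 40)
u_exact = np.exp(x)
print("mean of numerical solution :", u_num.mean())
print("variance of the error      :", np.var(u_num - u_exact))
print("correlation coefficient    :", np.corrcoef(u_num, u_exact)[0, 1])
```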
A new chelate polymer, (2-5-hydroxy-3-methyl-2-(3-nonyl benzene)imino)methyl)benzyl) 4-6-dimethylphenol] (K4), was prepared using the condensation reaction method and characterized by several techniques, including FT-IR, NMR, and atomic absorption spectroscopy, as well as TG-DTA thermal analysis. The kinetics of the sorption of lead and cadmium ions on the chelate polymer surface was also investigated. The results showed that the sorption of both ions followed the pseudo-first-order and pseudo-second-order kinetic models. The rate constant values of the pseudo-first-order reaction were 0.062 and 0.057 min⁻¹, while the values for the pseudo-second-order were 0.0103 and 0.053 L.m…
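For reference, the linearized pseudo-second-order form commonly used alongside Lagergren's pseudo-first-order equation is, in standard notation (with $q_e$ and $q_t$ the sorbed amounts at equilibrium and at time $t$, and $k_2$ the second-order rate constant),

\[ \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} . \]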
The science of information security has become a concern of many researchers, whose efforts aim to come up with solutions and technologies that ensure the transfer of information through networks, especially the Internet, in a more secure manner without any penetration of that information, given the risk of digital data being sent between two parties over an insecure channel. This paper includes two data protection techniques. The first technique is cryptography using the Menezes-Vanstone elliptic curve encryption system, which depends on public-key technology. Then, the encoded data is randomly embedded in the frame, depending on the seed used. The experimental results, using a PSNR within avera…
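A minimal sketch of Menezes-Vanstone elliptic-curve encryption is shown below over a toy textbook curve; the curve, key sizes, and message encoding are illustrative assumptions only (far too small for real use) and do not reproduce the paper's parameters or the steganographic embedding step.

```python
import random

p = 751                      # toy prime field
a, b = -1 % p, 188           # curve y^2 = x^3 + a*x + b over F_p
G = (0, 376)                 # a point on this toy curve, used as the base point

def inv(x):                  # modular inverse in F_p (p is prime)
    return pow(x, p - 2, p)

def ec_add(P, Q):
    """Add two points; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

d = random.randrange(2, 700)      # private key
Qpub = ec_mul(d, G)               # public key

def encrypt(m1, m2):
    """Mask the two message blocks with the coordinates of k*Qpub."""
    while True:
        k = random.randrange(2, 700)
        c0, mask = ec_mul(k, G), ec_mul(k, Qpub)
        if c0 is None or mask is None:
            continue
        x, y = mask
        if x % p and y % p:                       # both masks must be invertible
            return c0, (m1 * x) % p, (m2 * y) % p

def decrypt(c0, c1, c2):
    x, y = ec_mul(d, c0)
    return (c1 * inv(x)) % p, (c2 * inv(y)) % p

print(decrypt(*encrypt(101, 202)))                # -> (101, 202)
```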
This article discusses estimation methods for the parameters of a generalized inverted exponential distribution using progressive type-I interval-censored data. In addition to conventional maximum likelihood estimation, the mid-point method, the probability plot method, and the method of moments are suggested for parameter estimation. To obtain maximum likelihood estimates, we utilize the Newton-Raphson, expectation-maximization, and stochastic expectation-maximization methods. Furthermore, approximate confidence intervals for the parameters are obtained via the inverse of the observed information matrix. Monte Carlo simulations are used to present numerical comparisons of the proposed estimators. In ad…
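As a hedged illustration only, the sketch below fits the generalized inverted exponential distribution (GIED, with CDF $F(x)=1-(1-e^{-\lambda/x})^{\alpha}$) by maximum likelihood from a complete simulated sample; the true parameters, sample size, and the use of a Nelder-Mead optimizer (as a stand-in for Newton-Raphson) are assumptions, and the paper's progressive type-I interval-censoring scheme and EM/SEM variants are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def gied_sample(alpha, lam, size):
    """Inverse-transform sampling from F(x) = 1 - (1 - exp(-lam/x))**alpha."""
    u = rng.uniform(size=size)
    return -lam / np.log(1.0 - (1.0 - u) ** (1.0 / alpha))

def neg_log_lik(theta, x):
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    z = np.exp(-lam / x)
    # log f(x) = log(alpha*lam) - 2*log(x) - lam/x + (alpha-1)*log(1 - exp(-lam/x))
    return -np.sum(np.log(alpha * lam) - 2 * np.log(x) - lam / x + (alpha - 1) * np.log1p(-z))

x = gied_sample(alpha=2.0, lam=1.5, size=500)
fit = minimize(neg_log_lik, x0=np.array([1.0, 1.0]), args=(x,), method="Nelder-Mead")
print("MLE (alpha, lambda):", fit.x)   # should be close to (2.0, 1.5)
```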
General medical fields and computer science usually come together to produce impressive results in both fields using applications, programs, and algorithms provided by the field of data mining. The title of the present research contains the term hygiene, which may be described as the principle of maintaining cleanliness of the external body. Environmental hygienic hazards can present themselves in various media, e.g. air, water, and soil. The influence they can exert on our health is very complex and may be modulated by our genetic makeup, psychological factors, and our perceptions of the risks that they present. Our main concern in this research is not to improve general health but rather to propose a data mining approach…
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This condition places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Consequently, clustering is a common approach that divides an input space into several homogeneous zones; it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic…
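A minimal sketch of the standard fuzzy c-means update (Bezdek's alternating optimization) is shown below; the toy two-blob data, fuzzifier value, and iteration count are assumptions for illustration and do not reproduce the paper's brain-tumor dataset or its hybrid models.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Cluster X into c fuzzy clusters; returns centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik proportional to d_ik^(-2/(m-1)), normalized over clusters
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# Toy example: two well-separated blobs.
X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (50, 2)),
               np.random.default_rng(2).normal(5, 0.5, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print("cluster centres:\n", centers)
print("hard labels of first 5 points:", U.argmax(axis=1)[:5])
```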
Amplitude variation with offset (AVO) analysis is an efficient tool for hydrocarbon detection and identification of elastic rock properties and fluid types. It has been applied in the present study using reprocessed pre-stack 2D seismic data (1992, Caulerpa) from the north-west of the Bonaparte Basin, Australia. The AVO response along the 2D pre-stack seismic data in the Laminaria High, NW shelf of Australia, was also investigated. Three hypotheses were suggested to investigate the AVO behaviour of the amplitude anomalies, in which three different factors (fluid substitution, porosity, and thickness, i.e. a wedge model) were tested. The AVO models with the synthetic gathers were analysed using log information to find which of these is the…
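For context, a hedged sketch of the two-term (Shuey-type) intercept-gradient approximation often used to quantify AVO responses, R(θ) ≈ R0 + G sin²θ, is given below; the shale/gas-sand interface values are illustrative assumptions and are not the Laminaria High well data.

```python
import numpy as np

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2):
    """Intercept R0 and gradient G for a single interface (two-term Shuey form)."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    r0 = 0.5 * (dvp / vp + drho / rho)                                       # intercept
    g = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)    # gradient
    return r0, g

# Illustrative shale-over-gas-sand interface (velocities in m/s, density in g/cc).
r0, g = shuey_two_term(2900, 1330, 2.29, 2540, 1620, 2.09)
theta = np.radians(np.arange(0, 41, 5))
print(f"intercept A = {r0:.3f}, gradient B = {g:.3f}")
print("R(theta):", np.round(r0 + g * np.sin(theta) ** 2, 3))
```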
In this research, several estimators of the hazard function are introduced. These estimators are based on a nonparametric method, namely the kernel function, for censored data with varying bandwidths and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic, and Triquadratic, and the proposed function is employed with all kernel functions. Two different simulation techniques are also used for two experiments to compare these estimators. In most of the cases, the results have shown that the local bandwidth is the best for all the…
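A minimal sketch of a kernel hazard estimator for right-censored data (smoothing the Nelson-Aalen increments with an Epanechnikov kernel and a single global bandwidth) is shown below; the simulated exponential sample, censoring rate, and bandwidth are assumptions, and the paper's boundary-corrected kernels and local-bandwidth rules are not reproduced.

```python
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def kernel_hazard(times, events, grid, bandwidth):
    """h_hat(t) = (1/b) * sum_i K((t - t_i)/b) * dN_i / Y(t_i) over ordered event times."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    n = len(t)
    at_risk = n - np.arange(n)            # number still at risk just before each time
    increments = d / at_risk              # Nelson-Aalen jumps (0 at censored times)
    u = (grid[:, None] - t[None, :]) / bandwidth
    return (epanechnikov(u) * increments[None, :]).sum(axis=1) / bandwidth

# Illustrative censored exponential sample (true hazard = 1).
rng = np.random.default_rng(3)
lifetimes = rng.exponential(1.0, 300)
censor = rng.exponential(2.0, 300)
obs = np.minimum(lifetimes, censor)
evt = (lifetimes <= censor).astype(float)
grid = np.linspace(0.2, 2.0, 10)
print(np.round(kernel_hazard(obs, evt, grid, bandwidth=0.4), 3))
```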