Cyber security is a term describing the collection of technologies, procedures, and practices that protect the online environment of a user or an organization. Medical images are among the most important and sensitive kinds of data in computer systems, and healthcare organizations are required to encrypt all patient data, including images, before transmitting it over computer networks. This paper presents a new direction in encryption research by encrypting the image with a key generated from features extracted from the image itself. The encryption process starts by applying edge detection. After dividing the bits of the edge image into 3×3 windows, diffusion is applied to the bits to create a key that is used to encrypt the edge image. Four NIST randomness tests are applied to the generated key to verify that it is acceptable. The process is reversible, so decryption retrieves the original image. The resulting encrypted image can be used in any cyber security field, such as healthcare organizations. Comparative experiments show that the proposed algorithm improves encryption efficiency and has good security performance, with a higher information entropy of 7.42 and a lower correlation coefficient of 0.653.
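A minimal sketch of the idea described above, assuming a Sobel edge detector, an illustrative bit-diffusion rule over 3×3 windows, and an XOR stream cipher; none of these choices are the authors' exact algorithm, and the formal NIST test suite is replaced here by a crude monobit frequency check.

```python
# Sketch: derive an encryption key from edge features, then encrypt the edge image with it.
import numpy as np
from scipy import ndimage

def edge_bits(image, threshold=0.2):
    """Binary edge map via Sobel gradient magnitude (stand-in for the paper's edge detector)."""
    gx = ndimage.sobel(image.astype(float), axis=0)
    gy = ndimage.sobel(image.astype(float), axis=1)
    mag = np.hypot(gx, gy)
    return (mag > threshold * mag.max()).astype(np.uint8)

def key_from_windows(edges):
    """Diffuse the bits of each 3x3 window into one key byte (illustrative diffusion rule)."""
    h, w = edges.shape
    h, w = h - h % 3, w - w % 3
    blocks = edges[:h, :w].reshape(h // 3, 3, w // 3, 3).transpose(0, 2, 1, 3)
    key = []
    for win in blocks.reshape(-1, 9):
        byte = 0
        for b in win[:8]:                           # pack 8 of the 9 bits ...
            byte = ((byte << 1) | int(b)) & 0xFF
        key.append(byte ^ int(win[8]) * 0xA5)       # ... and diffuse with the 9th
    return np.array(key, dtype=np.uint8)

def monobit_ok(key, tol=0.05):
    """Crude frequency check; the paper applies the formal NIST randomness tests."""
    bits = np.unpackbits(key)
    return abs(bits.mean() - 0.5) < tol

def xor_stream(data_bytes, key):
    """XOR stream cipher; applying it again with the same key reverses it."""
    stream = np.resize(key, data_bytes.size)
    return np.bitwise_xor(data_bytes, stream)

img = np.random.rand(96, 96)                        # placeholder for a medical image
edges = edge_bits(img)
key = key_from_windows(edges)
packed = np.packbits(edges)                         # edge image as a byte stream
cipher = xor_stream(packed, key)
restored = xor_stream(cipher, key)                  # decryption recovers the edge bits
print(monobit_ok(key), np.array_equal(restored, packed))
```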
Iron oxide nanoparticles (IONPs) were prepared by a simple chemical approach that combines iron chloride salts (FeCl2 + FeCl3) with onion peel extract. The study shows that the biomolecules in onion peel extract can convert the iron salts into IONPs, transforming FeCl2 + FeCl3 into γ-Fe2O3 and controlling the nanoparticles' size, shape, purity, and phase. In water treatment, γ-Fe2O3 NPs are important for removing the dye methylene blue (MB). X-ray diffraction (XRD), scanning electron microscopy (SEM), ultraviolet-visible (UV-Vis) and photoluminescence (PL) spectroscopy were used to characterize the IONPs. The XRD results showed crystals having a
Advances in digital technology and the World Wide Web have led to an increase in digital documents used for various purposes such as publishing and digital libraries. This phenomenon raises awareness of the need for effective techniques to support the search and retrieval of text. One of the most needed tasks is clustering, which categorizes documents automatically into meaningful groups. Clustering is an important task in data mining and machine learning, and its accuracy depends strongly on the choice of text representation method. Traditional methods represent documents as bags of words using term frequency-inverse document frequency (TF-IDF). This method ignores the relationship an
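A minimal sketch of the traditional baseline described above: a TF-IDF bag-of-words representation followed by clustering. The toy documents, scikit-learn's TfidfVectorizer, and the choice of k-means are illustrative assumptions, not the paper's proposed method.

```python
# TF-IDF bag-of-words representation + k-means document clustering (baseline only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "digital libraries store and retrieve electronic documents",
    "machine learning improves document clustering accuracy",
    "web publishing increases the number of digital documents",
    "clustering groups documents into meaningful categories",
]

vectorizer = TfidfVectorizer(stop_words="english")    # bag-of-words weighted by TF-IDF
X = vectorizer.fit_transform(docs)                    # sparse document-term matrix

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # cluster id per document; note TF-IDF alone ignores word relationships
```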
Estimating multivariate location and scatter with both affine equivariance and positive breakdown has always been difficult. A well-known estimator that satisfies both properties is the Minimum Volume Ellipsoid (MVE) estimator. Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setting, algorithms for positive-breakdown estimators such as Least Median of Squares typically recompute the intercept at each step to improve the result; this approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the Minimum Volume Ball (MVB). In order
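A minimal sketch of the standard resampling approximation to the MVE: draw small subsets, fit an ellipsoid to each, inflate it to cover roughly half the data, and keep the ellipsoid with the smallest volume. The paper's MVB-based location adjustment is not reproduced here, and the simulated data are illustrative.

```python
# Resampling approximation to the Minimum Volume Ellipsoid (MVE) estimator.
import numpy as np

def approx_mve(X, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    h = (n + p + 1) // 2                          # points the ellipsoid must cover
    best_vol, best = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(n, size=p + 1, replace=False)
        loc = X[idx].mean(axis=0)
        cov = np.cov(X[idx], rowvar=False)
        if np.linalg.matrix_rank(cov) < p:        # degenerate subset, skip it
            continue
        diff = X - loc
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        m2 = np.sort(d2)[h - 1]                   # inflation factor covering h points
        vol = np.sqrt(np.linalg.det(cov) * m2 ** p)   # proportional to ellipsoid volume
        if vol < best_vol:
            best_vol, best = vol, (loc, cov * m2)
    return best

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(95, 2)), np.full((5, 2), 8.0)])   # data with gross outliers
loc, scatter = approx_mve(X)
print(loc)      # robust location estimate, barely affected by the outliers
```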
The research aims to identify the educational plan of private and public kindergartens. The researchers selected a sample consisting of (59) female teachers from private kindergartens and (150) female teachers from public kindergartens in the city of Baghdad. As the research tool, the two researchers designed a questionnaire to measure the educational plan of private and public kindergartens. The results revealed that private kindergartens have educational plans that contribute considerably to classroom interaction, whereas public kindergartens lack educational plans. In light of the findings of the research, the researchers recommend the following: the need to set up a unified educational plan for the private
In this paper, the computational complexity is reduced using a revised version of the selected mapping (SLM) algorithm. A partial SLM is applied, reducing the mathematical operations by around 50%. Although the peak-to-average power ratio (PAPR) reduction gain is slightly degraded, the dramatic reduction in computational complexity is a notable achievement. MATLAB simulation is used to evaluate the results, where the PAPR results show the capability of the proposed method.
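A minimal sketch of conventional SLM for OFDM PAPR reduction, for context: multiply the frequency-domain symbol by several random phase sequences, take the IFFT of each candidate, and keep the one with the lowest PAPR. The paper's partial SLM (which roughly halves these computations) is not reproduced; the parameter values are illustrative assumptions.

```python
# Conventional selected mapping (SLM) baseline for OFDM PAPR reduction.
import numpy as np

rng = np.random.default_rng(0)
N, U = 64, 8                                   # subcarriers, candidate phase sequences

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# one QPSK OFDM symbol
symbol = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)

# U candidate phase rotations (the first is all-ones, i.e. the original symbol)
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(U, N)))
phases[0] = 1.0

candidates = np.fft.ifft(symbol * phases, axis=1)    # time-domain candidates
paprs = np.array([papr_db(c) for c in candidates])
best = paprs.argmin()
print(f"original PAPR {paprs[0]:.2f} dB -> SLM PAPR {paprs[best]:.2f} dB")
```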
In this paper, the problem of resource allocation at Al-Raji Company for soft drinks and juices was studied. The company carries out several types of tasks to produce juices and soft drinks, and these tasks require machines: it has 6 machines that it wants to allocate to 4 different tasks. The machines assigned to each task are subject to failure, and failed machines are repaired so they can participate again in the production process. From the company's past records, the probability of machine failure at each task was calculated, and the time required for each machine to complete each task was recorded. The aim of this paper is to determine the minimum expected time
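A minimal sketch of the underlying assignment problem: choose which of the 6 machines to assign to each of the 4 tasks so that the total expected completion time is minimal. The numbers below are made up, and dividing the recorded time by the machine's success probability is only one simple way to model the effect of failures; it is not necessarily the model used in the paper.

```python
# Machine-to-task assignment minimizing total expected completion time.
import numpy as np
from scipy.optimize import linear_sum_assignment

time = np.array([                      # recorded time of machine i on task j (hours)
    [4.0, 6.0, 5.0, 7.0],
    [5.0, 4.5, 6.0, 6.5],
    [6.0, 5.0, 4.0, 5.5],
    [4.5, 6.5, 5.5, 4.0],
    [5.5, 4.0, 6.5, 5.0],
    [6.5, 5.5, 4.5, 6.0],
])
p_fail = np.array([                    # probability that machine i fails on task j
    [0.10, 0.20, 0.15, 0.25],
    [0.05, 0.10, 0.20, 0.15],
    [0.20, 0.05, 0.10, 0.10],
    [0.15, 0.25, 0.05, 0.20],
    [0.10, 0.15, 0.25, 0.05],
    [0.25, 0.10, 0.15, 0.10],
])

expected = time / (1.0 - p_fail)                 # illustrative expected-time cost matrix
rows, cols = linear_sum_assignment(expected)     # optimal machine-task pairing
for m, t in zip(rows, cols):
    print(f"machine {m + 1} -> task {t + 1} (expected {expected[m, t]:.2f} h)")
print("minimum total expected time:", expected[rows, cols].sum().round(2))
```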
We present the exponentiated expanded power function (EEPF) distribution with four parameters. This distribution is constructed by the exponentiation method introduced by Gupta, originally used to extend the exponential distribution by adding a new shape parameter to its cumulative distribution function, which results in a new distribution belonging to the exponential family. We also obtained the survival and failure rate functions of this distribution and derived some of its mathematical properties. We then used the maximum likelihood (ML) method and the developed least squares method (LSD)
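A minimal sketch of the exponentiation construction the abstract refers to: raising a baseline CDF G(x) to a new shape parameter a gives F(x) = G(x)^a. Using Gupta's original example, the exponential baseline G(x) = 1 - exp(-λx), this yields the exponentiated exponential distribution; the paper's four-parameter EEPF itself is not reproduced here. Parameters are fitted by maximum likelihood.

```python
# Exponentiating a baseline CDF and fitting the new shape parameter by ML.
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x):
    a, lam = params
    if a <= 0 or lam <= 0:
        return np.inf
    G = 1.0 - np.exp(-lam * x)                # baseline (exponential) CDF
    g = lam * np.exp(-lam * x)                # baseline pdf
    f = a * G ** (a - 1) * g                  # exponentiated pdf: f = a * G^(a-1) * g
    return -np.sum(np.log(f))

rng = np.random.default_rng(0)
a_true, lam_true = 2.5, 0.8
u = rng.uniform(size=5000)
x = -np.log(1.0 - u ** (1.0 / a_true)) / lam_true   # inverse-CDF sampling from F(x) = G(x)^a

fit = minimize(neg_log_like, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print(fit.x)   # ML estimates of (a, lam), close to (2.5, 0.8)
```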
In this paper we estimate the coefficients and the scale parameter in a linear regression model whose residuals follow the type-1 extreme value distribution for largest values. This can be regarded as an improvement over studies based on the smallest values. We study two estimation methods (OLS and MLE), resorting to the Newton-Raphson (NR) and Fisher scoring methods to obtain the MLE estimates because of the difficulty of applying the usual MLE approach. The relative efficiency criterion is considered alongside the statistical inference procedures for the type-1 extreme value regression model for largest values. Confidence intervals and hypothesis tests for both the scale parameter and the regression coefficients
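A minimal sketch of ML estimation for a linear regression whose errors follow the type-1 extreme value distribution for largest values (Gumbel for maxima): y_i = x_i'β + e_i with Gumbel(0, σ) errors. The paper uses Newton-Raphson and Fisher scoring; for brevity this sketch hands the same negative log-likelihood to a generic quasi-Newton optimizer, and the simulated data are illustrative.

```python
# ML fit of a linear regression with type-1 extreme value (largest values) errors.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta_true, sigma_true = np.array([2.0, 1.5]), 0.7
y = X @ beta_true + rng.gumbel(loc=0.0, scale=sigma_true, size=n)

def neg_log_like(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                 # log-parameterised to keep sigma > 0
    z = (y - X @ beta) / sigma
    # Gumbel-for-maxima log-density: -log(sigma) - z - exp(-z)
    return -np.sum(-np.log(sigma) - z - np.exp(-z))

start = np.append(np.linalg.lstsq(X, y, rcond=None)[0], 0.0)   # OLS start values
fit = minimize(neg_log_like, start, method="BFGS")
beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
print(beta_hat, sigma_hat)   # close to (2.0, 1.5) and 0.7
```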