Security concerns in the transfer of medical images have recently drawn considerable attention to medical image encryption. Medical images are continually produced and circulated online, which necessitates safeguards against their inappropriate use. To improve the design of the AES standard for medical image encryption, this research proposes several new criteria intended to meet requirements for both stronger security and higher performance. First, the pixels of the image are diffused to mix them randomly and disperse them across the whole image. Rather than using rounds, the proposed technique employs a cascaded composition of F-functions in a quadrate architecture. The proposed F-function architecture is a three-input, three-output Type-3 AES-Feistel network with additional integer parameters representing the subkeys in use. The proposed system uses the AES block cipher as the round function of a Type-3 Feistel network. Blocks in the proposed system are 896 bits long, whereas keys are 128 bits. Subkey generation is encrypted using a chain of E8 algorithms, and the required subkeys are then generated recursively. The results are reviewed to verify that the new layout improves the security of the AES block cipher when used to encrypt medical images.
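As a rough illustration of the Feistel-style structure described above (not the authors' exact construction), the following Python sketch splits a block into 128-bit sub-blocks and chains AES-128 encryptions as the F-function, in the spirit of a Type-3 Feistel round; the 896-bit block (seven 128-bit sub-blocks), the subkey handling and the XOR chaining are assumptions for illustration only.

```python
# Illustrative sketch only: a Type-3 Feistel-style round that uses AES-128
# as the F-function. This is NOT the paper's exact construction; block
# partitioning, subkey schedule and chaining order are assumptions.
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SUB_BLOCK = 16  # 128-bit sub-blocks; seven of them give the 896-bit block size

def aes_f(data: bytes, subkey: bytes) -> bytes:
    """F-function: one AES-128 encryption of a 16-byte sub-block."""
    enc = Cipher(algorithms.AES(subkey), modes.ECB(), backend=default_backend()).encryptor()
    return enc.update(data) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def type3_feistel_round(block: bytes, subkeys: list) -> bytes:
    """One Type-3-style round: each sub-block masks its right neighbour with
    F(sub-block, subkey), then the sub-blocks are rotated to the left."""
    parts = [block[i:i + SUB_BLOCK] for i in range(0, len(block), SUB_BLOCK)]
    out = list(parts)
    for i in range(len(parts) - 1):
        out[i + 1] = xor(parts[i + 1], aes_f(parts[i], subkeys[i]))
    return b"".join(out[1:] + out[:1])  # left rotation of sub-blocks

# Example: a 896-bit (112-byte) block with six illustrative 128-bit subkeys
block = os.urandom(112)
subkeys = [os.urandom(16) for _ in range(6)]
ciphertext_round = type3_feistel_round(block, subkeys)
```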
Between the duality of sound and image, the completeness of the actor's personality, for the director, announces the birth of the theatrical role appropriate to that character as the basic and inherent element of the artwork. Within the director's working system, this operates through patterns of vocal behaviour as well as motor and signal behaviour, as the actor searches for aesthetic and technical proficiency at the same time.
This takes place through the viewer's relationship with the theatrical event, which the director treats as an arena of active creative activity in relation to the work of the actor, through vocal recitation and the signs it broadcasts, in order to fulfil the requirements of the dramatic situation and what it requires of a visual vision drawn in t
F. G. Mohammed, H. M. Al-Dabbas, Iraqi Journal of Science, 2018.
In lifetime processes, most data in some systems cannot be assumed to come from a single population; rather, they may represent several subpopulations. In such a case, a single known distribution cannot be used to model the data. Instead, a mixture of distributions is used to model the data and classify them into several subgroups. A mixture of Rayleigh distributions is well suited to lifetime processes. This paper aims to infer the model parameters by the expectation-maximization (EM) algorithm through the maximum likelihood function. The technique is applied to simulated data following several scenarios. The accuracy of estimation has been examined by the average mean square error (AMSE) and the average classification success rate (ACSR). T
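As a hedged illustration of the approach described (not the paper's code), a minimal EM routine for a two-component Rayleigh mixture might look like the following; the component count, initialisation and stopping rule are assumptions.

```python
# Minimal sketch of EM for a mixture of Rayleigh distributions (illustrative
# only; component count, initialisation and convergence test are assumptions).
import numpy as np

def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))

def em_rayleigh_mixture(x, k=2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.full(k, 1.0 / k)                      # mixing proportions
    sigmas = rng.uniform(0.5, 2.0, size=k) * x.std()   # initial scale parameters
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = np.stack([w * rayleigh_pdf(x, s) for w, s in zip(weights, sigmas)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: closed-form weighted Rayleigh MLE, sigma^2 = sum(w*x^2) / (2*sum(w))
        weights = resp.mean(axis=1)
        sigmas = np.sqrt((resp * x**2).sum(axis=1) / (2 * resp.sum(axis=1)))
    return weights, sigmas, resp.argmax(axis=0)        # classification by max responsibility

# Simulated two-subpopulation data, in the spirit of the paper's scenarios
rng = np.random.default_rng(1)
data = np.concatenate([rng.rayleigh(1.0, 300), rng.rayleigh(3.0, 700)])
w, s, labels = em_rayleigh_mixture(data)
```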
The aim of this study is to evaluate the antifungal activity of a combination of essential oils against water moulds. HPLC analysis was performed to evaluate the quantity and quality of the active compounds in this combination, which was extracted from three herbs (peppermint Mentha piperita, thyme Thymus vulgaris, and common sage Salvia officinalis L.); the active compounds are camphor, menthol, thujone and thymol at different concentrations. In this study, the MIC and MFC were measured, and the LD50 was determined 48 and 96 h after treatment of common carp fingerlings in aquaria. The MIC results were 0.025 µl/ml for Aphanomyces sp. and 0.015 µl/ml for both Achlya sp. and Fusarium solani, which showed significant differences (p < 0.05) from malachite green
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms may exploit visual sensitivity and the statistical properties of image data to deliver superior results in comparison with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy-coding algorithm known as arithmetic coding. Pixel values that occur more frequently are coded in fewer bits than pixel values that occur less frequently.
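As a rough sketch of the block-then-entropy-code idea described above (not the paper's implementation), the following Python fragment splits a grayscale image into 16 x 16 blocks and runs a toy floating-point arithmetic encoder over one block; a production coder would use integer renormalisation to avoid precision loss, and the block size and probability model here are assumptions.

```python
# Toy illustration of block partitioning followed by arithmetic coding.
# Floating-point intervals lose precision on long inputs; real coders use
# integer renormalisation. Block size and probability model are assumptions.
import numpy as np

def split_into_blocks(img: np.ndarray, bs: int = 16):
    h, w = img.shape
    return [img[r:r + bs, c:c + bs].flatten()
            for r in range(0, h, bs) for c in range(0, w, bs)]

def arithmetic_encode(symbols):
    """Encode a short symbol sequence into a single number in [0, 1)."""
    values, counts = np.unique(symbols, return_counts=True)
    probs = counts / counts.sum()
    cum = np.concatenate(([0.0], np.cumsum(probs)))     # cumulative model
    index = {v: i for i, v in enumerate(values)}
    low, high = 0.0, 1.0
    for s in symbols:                                    # narrow the interval per symbol
        i = index[s]
        span = high - low
        high = low + span * cum[i + 1]
        low = low + span * cum[i]
    return (low + high) / 2, (values, probs)             # code value plus the model

img = np.random.default_rng(0).integers(0, 8, (64, 64), dtype=np.uint8)
blocks = split_into_blocks(img, 16)
code, model = arithmetic_encode(blocks[0][:32])          # short run to stay within float precision
```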
The widespread use of the Internet of Things (IoT) in different aspects of an individual's life, such as banking, wireless intelligent devices and smartphones, has led to new security and performance challenges under restricted resources. The Elliptic Curve Digital Signature Algorithm (ECDSA) is the most suitable choice for such environments due to its smaller encryption key size and adjustable security-related parameters. However, major performance metrics such as area, power, latency and throughput are still customisable and depend on the design requirements of the device.
The present paper puts forward an enhancement of the throughput performance metric by p
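For context on what an ECDSA signing and verification cycle involves (this is generic usage of the Python cryptography library, not the paper's hardware design), a minimal sketch might be:

```python
# Generic ECDSA sign/verify example using the Python "cryptography" library.
# This illustrates the algorithm being optimised; the curve choice
# (NIST P-256) and the message are illustrative assumptions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())   # much smaller key than RSA
public_key = private_key.public_key()

message = b"sensor reading: 23.5C"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```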
In this paper, a visible image watermarking algorithm based on the biorthogonal wavelet transform is proposed. The watermark (logo), a binary image, can be embedded in the host gray-scale image using the coefficient bands of the host image in the biorthogonal transform domain. The logo can be embedded in the top-left corner or spread over the whole host image. A scaling value (α) in the frequency domain is introduced to control the perceptibility of the watermarked image. Experimental results show that this watermarking algorithm gives a visible logo with no losses in the recovery process of the original image, and the calculated PSNR values support that. Good robustness against attempts to remove the watermark was s
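A minimal sketch of this kind of embedding, assuming PyWavelets, a bior1.3 wavelet, embedding in the approximation (LL) band and an illustrative α (none of which are confirmed by the abstract), might look like:

```python
# Sketch of visible watermark embedding in a biorthogonal wavelet domain.
# Wavelet family, target band (LL) and alpha value are illustrative assumptions.
import numpy as np
import pywt

def embed_visible_watermark(host: np.ndarray, logo: np.ndarray, alpha: float = 0.1):
    """host: gray-scale image; logo: binary image placed in the LL band."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), "bior1.3")
    logo_padded = np.zeros_like(cA)
    lh, lw = min(logo.shape[0], cA.shape[0]), min(logo.shape[1], cA.shape[1])
    logo_padded[:lh, :lw] = logo[:lh, :lw]             # top-left corner placement
    cA_marked = cA + alpha * cA.max() * logo_padded    # alpha controls perceptibility
    watermarked = pywt.idwt2((cA_marked, (cH, cV, cD)), "bior1.3")
    return np.clip(watermarked, 0, 255).astype(np.uint8)

host = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.uint8)
logo = (np.random.default_rng(1).random((64, 64)) > 0.5).astype(float)
marked = embed_visible_watermark(host, logo, alpha=0.15)
```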
W. A. Shukur, F. A. Abdullatif, Ibn Al-Haitham Journal for Pure and Applied Sciences, 2011.
With the wide spread of the Internet and the increasing value of information, steganography has become very important for communication. Over many years, different types of digital cover have been used to hide information as a covert channel; images are among the most important digital covers used in steganography because they are used widely on the Internet without raising suspicion.
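As a generic illustration of image-based steganography (not the specific method of the cited paper), a least-significant-bit embedding sketch might look like the following; the LSB scheme, message framing and image shape are assumptions.

```python
# Generic LSB image steganography sketch (illustrative only; the cited
# paper's actual embedding scheme is not specified in this abstract).
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite least significant bits
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```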
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five different enhancement techniques: normalization, histogram equalization, binarization, skeletonization and fusion. The normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. The histogram equalization technique then increased the contrast of the images. Furthermore, the binarization and skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one
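A minimal sketch of such a pipeline, assuming OpenCV and scikit-image with a global Otsu threshold (the paper's exact normalization and fusion steps are not specified in this abstract), might be:

```python
# Sketch of a fingerprint-enhancement pipeline: normalization, histogram
# equalization, binarization and skeletonization. The Otsu threshold and
# per-step parameters are assumptions, and the fusion stage is omitted.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def enhance_fingerprint(gray: np.ndarray) -> np.ndarray:
    # 1) Normalization: rescale intensities to the full 0-255 range
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # 2) Histogram equalization: increase global contrast
    equalized = cv2.equalizeHist(norm)
    # 3) Binarization: separate ridges (dark) from valleys (bright)
    _, binary = cv2.threshold(equalized, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # 4) Skeletonization: thin the ridge structures to one-pixel-wide lines
    skeleton = skeletonize(binary > 0)
    return (skeleton * 255).astype(np.uint8)

img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)  # path is illustrative
if img is not None:
    result = enhance_fingerprint(img)
```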