Security concerns in the transmission of medical images have recently drawn considerable attention to medical image encryption. Moreover, medical images are constantly being produced and circulated online, which calls for safeguards against their misuse. To improve the design of the standard AES algorithm for medical image encryption, this research introduces several new criteria, created to meet demands for both higher security and higher performance. First, the pixels of the image are diffused to mix them randomly and disperse them across the whole image. Rather than using rounds, the proposed technique uses a cascaded composition of F-functions in a quadrate architecture. The proposed F-function architecture is a three-input, three-output Type-3 AES-Feistel network with additional integer parameters representing the subkeys in use. The proposed system uses the AES block cipher as the round function of a Type-3 AES-Feistel network, with a block length of 896 bits and a key length of 128 bits. Subkey generation is based on a chain of E8 algorithms, and the required subkeys are then derived recursively. The results are reviewed to verify that the new design improves the security of the AES block cipher when used to encrypt medical images.
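As a rough illustration of this structure, the sketch below implements one round of a Type-3 generalized Feistel network over a 896-bit block split into seven 128-bit branches, with AES-128 as the round function F. It assumes the pycryptodome library, and the subkeys are simply passed in as a list: the E8-based subkey schedule from the paper is not reproduced here.

```python
# A minimal sketch (not the paper's exact design) of one Type-3 Feistel
# round over a 896-bit block, using AES-128 as the round function F.
# Requires pycryptodome (pip install pycryptodome); the subkey schedule
# is left to the caller and is an assumption, not the E8-based one.
from Crypto.Cipher import AES

BRANCHES = 7        # 7 x 128 bits = 896-bit block
BRANCH_BYTES = 16   # 128 bits per branch

def f(branch: bytes, subkey: bytes) -> bytes:
    """Round function F: AES-128 encryption of one branch under a subkey."""
    return AES.new(subkey, AES.MODE_ECB).encrypt(branch)

def type3_round(block: bytes, subkeys: list[bytes]) -> bytes:
    """One Type-3 round: X[i+1] ^= F(X[i]) for i = 0..5, then rotate branches."""
    x = [block[i * BRANCH_BYTES:(i + 1) * BRANCH_BYTES] for i in range(BRANCHES)]
    y = []
    for i in range(BRANCHES - 1):
        fi = f(x[i], subkeys[i])
        y.append(bytes(a ^ b for a, b in zip(x[i + 1], fi)))
    y.append(x[0])  # the first input branch passes through unchanged
    return b"".join(y)
```

In a Type-3 network every branch except one is masked by an F-output each round, which is what lets a 128-bit primitive drive a much wider 896-bit block.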
In this work, a fragile watermarking scheme is presented. The scheme is applied to digital color images in the spatial domain. The image is divided into blocks, and each block has its own authentication mark embedded in it, so that we can determine which parts of the image are authentic and which parts have been modified. Authentication is carried out without needing the original image. The results show that the quality of the watermarked image remains very good and that the watermark survives some types of unintended modification, such as lossless compression with familiar tools like WinRAR and ZIP.
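A minimal sketch of this kind of block-wise fragile watermarking is shown below, simplified to a grayscale image. The block size (8x8) and the mark (64 bits of a SHA-256 hash over the block with its LSB plane cleared) are illustrative assumptions, not the paper's exact scheme.

```python
# Block-wise fragile watermarking sketch: each 8x8 block carries a
# 64-bit authentication mark in its LSB plane. Any change to a block
# breaks the match between its content and its embedded mark.
import hashlib
import numpy as np

BLOCK = 8  # 8x8 = 64 pixels -> 64 mark bits = 8 hash bytes per block

def mark_bits(block: np.ndarray) -> np.ndarray:
    """Authentication mark: 64 bits of SHA-256 over the LSB-cleared block."""
    digest = hashlib.sha256((block & 0xFE).tobytes()).digest()[:8]
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

def embed(img: np.ndarray) -> np.ndarray:
    """Write each block's mark into that block's least significant bits."""
    out = img.copy()
    for r in range(0, img.shape[0] - BLOCK + 1, BLOCK):
        for c in range(0, img.shape[1] - BLOCK + 1, BLOCK):
            blk = out[r:r + BLOCK, c:c + BLOCK]
            blk[:] = (blk & 0xFE) | mark_bits(blk).reshape(BLOCK, BLOCK)
    return out

def tampered_blocks(img: np.ndarray) -> list[tuple[int, int]]:
    """Top-left corners of blocks whose embedded mark no longer matches."""
    bad = []
    for r in range(0, img.shape[0] - BLOCK + 1, BLOCK):
        for c in range(0, img.shape[1] - BLOCK + 1, BLOCK):
            blk = img[r:r + BLOCK, c:c + BLOCK]
            if not np.array_equal(blk & 1, mark_bits(blk).reshape(BLOCK, BLOCK)):
                bad.append((r, c))
    return bad
```

Because verification only needs the watermarked image itself, this mirrors the blind (no-original) authentication property described above.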
Compressing an image and reconstructing it without degrading its original quality is a challenge that still exists nowadays. A coding system that considers both quality and compression rate is implemented in this work. The implemented system applies a highly synthetic entropy-coding schema to store the compressed image at the smallest possible size without affecting its original quality. This coding schema is applied with two transform-based techniques, one using the Discrete Cosine Transform and the other the Discrete Wavelet Transform. The implemented system was tested on different standard color images, and the results obtained with different evaluation metrics are reported, along with a comparison against some previous related work.
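The general shape of such a transform-coding pipeline is sketched below for one grayscale channel, using the DCT branch only. The uniform quantizer and the use of zlib as a stand-in for the paper's entropy-coding schema are assumptions for illustration.

```python
# Transform-coding sketch: 8x8 block DCT, uniform quantization with
# step q (quality/size trade-off), then a generic entropy coder (zlib
# here, standing in for the paper's schema).
import zlib
import numpy as np
from scipy.fft import dctn, idctn

def encode(img: np.ndarray, q: float = 10.0) -> bytes:
    h, w = img.shape  # assumes dimensions are multiples of 8
    coeffs = np.empty((h, w), dtype=np.int16)
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            block = dctn(img[r:r + 8, c:c + 8].astype(float), norm="ortho")
            coeffs[r:r + 8, c:c + 8] = np.round(block / q)
    return zlib.compress(coeffs.tobytes())

def decode(data: bytes, h: int, w: int, q: float = 10.0) -> np.ndarray:
    coeffs = np.frombuffer(zlib.decompress(data), dtype=np.int16).reshape(h, w)
    img = np.empty((h, w))
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            img[r:r + 8, c:c + 8] = idctn(coeffs[r:r + 8, c:c + 8] * q, norm="ortho")
    return np.clip(np.round(img), 0, 255).astype(np.uint8)
```

Setting q close to 1 approaches near-lossless reconstruction at a lower compression rate; larger q trades quality for size, which is exactly the balance the work above evaluates.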
Mobile services are among the most important telecommunication media, alongside the Internet and telephone networks, because of their high availability and their independence from physical location and time. Hence the need to protect mobile information against alteration and misuse, especially given the rapid and wide growth of mobile networks and their use for different types of information such as messages, images, and videos. The proposed system uses watermarking as a tool to protect images on a mobile device by registering them on a proposed watermarking server. This server allows the owner to protect his images by using invisible watermarks.
Image fusion gathers important data from a set of input images and places it in a single output image, making it more meaningful and usable than any of the inputs alone. Image fusion boosts the quality and applicability of data, and the accuracy of the fused image depends on the application. It is widely used in smart robotics, audio-camera fusion, photonics, system control and output, construction and inspection of electronic circuits, complex computer and software diagnostics, and smart line-assembly robots. This paper provides a literature review of different image fusion techniques in the spatial and frequency domains, such as averaging, min-max, block substitution, and Intensity-Hue-Saturation (IHS).
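Two of the spatial-domain rules named above are simple enough to state directly; the sketch below assumes co-registered input images of the same shape, the standard precondition for pixel-level fusion.

```python
# Pixel-level fusion sketches for two spatial-domain rules:
# averaging and max selection.
import numpy as np

def fuse_average(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pixel-wise mean of the two source images (uint16 avoids overflow)."""
    return ((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)

def fuse_max(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pixel-wise maximum: keeps the brighter, often more salient, pixel."""
    return np.maximum(a, b)
```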
Human skin detection, which is usually performed before image processing, is the method of discovering skin-colored pixels and regions that may belong to human faces or limbs in videos or photos. Many computer vision approaches have been developed for skin detection. A skin detector usually transforms a given pixel into a suitable color space and then uses a skin classifier to label the pixel as skin or non-skin. A skin classifier defines the decision boundary of the skin-color class in the color space based on skin-colored pixels. The purpose of this research is to build a skin detection system that distinguishes between skin and non-skin pixels in colored still pictures. This is performed by introducing a metric that measures how close a pixel's color is to skin color.
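The classifier structure described above (color-space transform followed by a decision boundary) can be sketched as follows. This is not the paper's metric; it uses the YCbCr space with a commonly cited chrominance box (77 <= Cb <= 127, 133 <= Cr <= 173), and those thresholds are assumptions.

```python
# Skin-pixel classifier sketch: RGB -> YCbCr chrominance, then a fixed
# rectangular decision boundary in the (Cb, Cr) plane.
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of skin pixels for an (H, W, 3) uint8 RGB image."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

Working in chrominance alone discards the luma component, which makes the boundary less sensitive to lighting, one reason YCbCr-style spaces are popular for this task.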
Root finding is an old classical problem that is still an important research topic due to its impact on computational algebra and geometry. In communication systems, when the impulse response of the channel is minimum phase, the number of states in the equalization algorithm is reduced and the spectral efficiency is improved. To make the channel impulse response minimum phase, a prefilter called the minimum-phase filter is used; adapting this filter requires a root-finding algorithm. In this paper, a VHDL implementation of the root-finding algorithm introduced by Clark and Hau is presented.
A VHDL program is used in this work to find the roots of two channels and make them minimum phase; the obtained output results are reported.
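The underlying idea can be shown compactly in Python rather than VHDL: find the roots of the channel polynomial and reflect any root outside the unit circle to its conjugate reciprocal, which preserves the magnitude response up to a constant gain. This is a sketch of the general minimum-phase construction, not of Clark and Hau's specific algorithm.

```python
# Minimum-phase conversion sketch via root reflection.
import numpy as np

def minimum_phase(h: np.ndarray) -> np.ndarray:
    """Reflect roots of the channel polynomial h into the unit circle."""
    roots = np.roots(h)
    outside = np.abs(roots) > 1.0
    roots[outside] = 1.0 / np.conj(roots[outside])  # conjugate reciprocal
    return np.poly(roots) * h[0]

# Example: a two-tap channel with a zero at z = 2 (non-minimum phase)
# becomes [1, -0.5], with its zero reflected to z = 0.5.
print(minimum_phase(np.array([1.0, -2.0])))
```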
The huge number of documents on the internet has led to a rapidly growing need for text classification (TC), which is used to organize these text documents. In this research paper, a new model based on the Extreme Learning Machine (ELM) is used. The proposed model consists of several phases: preprocessing, feature extraction, Multiple Linear Regression (MLR), and ELM. The basic idea of the proposed model is built upon calculating feature weights using MLR. These feature weights, together with the extracted features, are introduced as input to the ELM, producing the Weighted Extreme Learning Machine (WELM). The results show a great competence of the proposed WELM compared to the standard ELM.
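A minimal sketch of this idea is given below, with assumed details: per-feature weights are taken from a least-squares (MLR) fit of features to one-hot labels, features are scaled by those weights, and a basic ELM (random hidden layer plus pseudoinverse output layer) is trained on the weighted features. It illustrates the pipeline, not the paper's exact model.

```python
# WELM sketch: MLR-derived feature weights feeding a basic ELM.
import numpy as np

rng = np.random.default_rng(0)

def train_welm(X, y_onehot, hidden=200):
    # MLR step: least-squares fit of features to labels, then a
    # per-feature importance weight from the fitted coefficients.
    w_mlr, *_ = np.linalg.lstsq(X, y_onehot, rcond=None)
    feat_w = np.abs(w_mlr).sum(axis=1)
    Xw = X * feat_w                      # weighted features
    # ELM step: random hidden layer, analytic output weights.
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(Xw @ W + b)
    beta = np.linalg.pinv(H) @ y_onehot
    return feat_w, W, b, beta

def predict_welm(X, feat_w, W, b, beta):
    H = np.tanh((X * feat_w) @ W + b)
    return H @ beta                      # argmax over columns gives the class
```

The appeal of the ELM step is that only the output layer is trained, and in closed form, so adding the MLR weighting leaves training cost essentially unchanged.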