Image compression is a form of data compression applied to digital images to reduce their high storage and/or transmission cost. Image compression algorithms can exploit visual sensitivity and the statistical properties of image data to deliver superior results compared with the generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using arithmetic coding, a lossless entropy-coding algorithm. Pixel values that occur more frequently are coded in fewer bits than values that occur less frequently, via sub-intervals within the range 0 to 1. Finally, the stream of compressed tables is reassembled for decompression (image restoration). The results showed a compression gain of 10-12% and lower time consumption when applying this coding to each block rather than to the entire image. To improve the compression ratio, the second approach was based on the YCbCr colour model. In this regard, images were decomposed into four sub-bands (low-low, high-low, low-high, and high-high) using the discrete wavelet transform. The low-low sub-band was then further decomposed into low- and high-frequency components via the discrete wavelet transform. Next, these components were quantized using scalar quantization and scanned in a zigzag order. The compression ratio ranged from 15.1 to 27.5 for magnetic resonance imaging with varying peak signal-to-noise ratio and mean square error; 25 to 43 for X-ray images; 32 to 46 for computed tomography scan images; and 19 to 36 for magnetic resonance brain images. The second approach outperformed the first in terms of compression ratio, peak signal-to-noise ratio, and mean square error.
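The zigzag scan mentioned above reorders a block of quantized transform coefficients along anti-diagonals, so that low-frequency coefficients come first. A minimal sketch (the block contents below are illustrative, not from the paper):

```python
def zigzag_scan(block):
    """Traverse a square block of quantized coefficients in zigzag order,
    walking each anti-diagonal and alternating direction (JPEG-style)."""
    n = len(block)
    result = []
    for s in range(2 * n - 1):                  # s = i + j indexes a diagonal
        idx = range(s + 1) if s < n else range(s - n + 1, n)
        diag = [(i, s - i) for i in idx]
        if s % 2 == 0:                          # reverse on even diagonals
            diag.reverse()
        result.extend(block[i][j] for i, j in diag)
    return result

block = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
print(zigzag_scan(block))  # → [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

The scan tends to group the many near-zero high-frequency coefficients at the end of the string, which benefits the subsequent entropy coding.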
Image segmentation can be defined as the process of partitioning a digital image into multiple meaningful regions, called segments, each containing image elements that share certain attributes distinguishing them from the pixels that constitute other parts. The researcher in this paper followed two phases of image processing. In the first phase, the images were pre-processed before segmentation using the statistical confidence intervals for estimating unknown observations proposed by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique. The researcher drew a conclusion that in case of utilizing
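Bernsen's technique thresholds each pixel at the midpoint of the minimum and maximum intensities in a local window around it. A minimal pure-Python sketch; the window size, contrast limit, and the mid-grey fallback of 128 for low-contrast windows are common choices, not values taken from the paper:

```python
def bernsen_threshold(img, w=3, contrast_limit=15):
    """Bernsen local thresholding (minimal sketch): binarize each pixel at
    the midpoint of the local min/max; windows with low contrast are treated
    as homogeneous and decided against a fixed mid-grey level (128)."""
    h, wd = len(img), len(img[0])
    r = w // 2
    out = [[0] * wd for _ in range(h)]
    for y in range(h):
        for x in range(wd):
            win = [img[yy][xx]
                   for yy in range(max(0, y - r), min(h, y + r + 1))
                   for xx in range(max(0, x - r), min(wd, x + r + 1))]
            lo, hi = min(win), max(win)
            if hi - lo < contrast_limit:            # homogeneous region
                out[y][x] = 255 if (lo + hi) / 2 >= 128 else 0
            else:                                    # local midpoint threshold
                out[y][x] = 255 if img[y][x] >= (lo + hi) / 2 else 0
    return out

binary = bernsen_threshold([[10, 10, 200, 200],
                            [10, 10, 200, 200]])
```

Because the threshold is computed per window, the method adapts to uneven illumination, which a single global threshold cannot handle.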
Cryptography algorithms play a critical role in information technology by defending against the various attacks witnessed in the digital era. Many studies and algorithms have been developed to address the security issues of information systems. Traditional cryptography algorithms are characterized by the high complexity of their computational operations. Lightweight algorithms, on the other hand, are the way to solve most of the security issues that arise when applying traditional cryptography in constrained devices. Symmetric ciphers, in particular, are widely applied to ensure the security of data communication in constrained devices. In this study, we propose a hybrid algorithm based on two cryptography algorithms, PRESENT and Salsa20. Also, a 2D logistic map of a chaotic system is a
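The Salsa20 half of the hybrid is built from a single mixing primitive, the quarter-round, which combines 32-bit addition, XOR, and rotation. A sketch of that primitive (per Bernstein's Salsa20 specification; how the paper combines it with PRESENT and the chaotic map is not detailed in the abstract):

```python
def rotl32(v, c):
    """32-bit left rotation."""
    return ((v << c) | (v >> (32 - c))) & 0xFFFFFFFF

def quarterround(y0, y1, y2, y3):
    """The Salsa20 quarter-round, the core mixing step of the stream-cipher
    half of the hybrid design; operates on four 32-bit words."""
    z1 = y1 ^ rotl32((y0 + y3) & 0xFFFFFFFF, 7)
    z2 = y2 ^ rotl32((z1 + y0) & 0xFFFFFFFF, 9)
    z3 = y3 ^ rotl32((z2 + z1) & 0xFFFFFFFF, 13)
    z0 = y0 ^ rotl32((z3 + z2) & 0xFFFFFFFF, 18)
    return z0, z1, z2, z3

# Matches the test vector from the Salsa20 specification.
print(tuple(hex(z) for z in quarterround(0x00000001, 0, 0, 0)))
# → ('0x8008145', '0x80', '0x10200', '0x20500000')
```

Salsa20's reliance on only add-rotate-XOR operations is what makes it attractive for constrained devices alongside a lightweight block cipher like PRESENT.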
Upper limb amputation is a condition that severely limits the amputee’s movement. Patients who have lost the use of one or more of their upper extremities have difficulty performing activities of daily living. To help improve the control of upper-limb prostheses with pattern recognition, non-invasive approaches (EEG and EMG signals) are proposed in this paper and integrated with machine learning techniques to recognize the upper-limb motions of subjects. EMG and EEG signals are combined, and five features are utilized to classify seven hand movements: wrist flexion (WF), wrist extension (WE), hand open (HO), hand close (HC), pronation (PRO), supination (SUP), and rest (RST). Experiments demonstrate that usin
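Classification pipelines like this one typically reduce each signal window to a small feature vector before training. The abstract does not list the five features used, so the sketch below picks a representative set of time-domain EMG/EEG features commonly seen in the pattern-recognition literature:

```python
import math

def signal_features(sig):
    """Compute five common time-domain features for one EMG/EEG window
    (a hypothetical feature set; the paper's exact five are not specified):
    mean absolute value, RMS, waveform length, zero crossings, variance."""
    n = len(sig)
    mav = sum(abs(s) for s in sig) / n                        # mean absolute value
    rms = math.sqrt(sum(s * s for s in sig) / n)              # root mean square
    wl = sum(abs(sig[i + 1] - sig[i]) for i in range(n - 1))  # waveform length
    zc = sum(1 for i in range(n - 1) if sig[i] * sig[i + 1] < 0)  # zero crossings
    mean = sum(sig) / n
    var = sum((s - mean) ** 2 for s in sig) / n               # variance
    return [mav, rms, wl, zc, var]

feats = signal_features([0.1, -0.2, 0.3, -0.1, 0.2])
```

Feature vectors computed per window for each channel would then be concatenated and fed to a classifier over the seven movement classes.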
It is well known that petroleum refineries are the largest generators of oily sludge, which may pose serious threats to the environment if disposed of without treatment. In the present research, a hybrid process combining ultrasonic treatment with froth flotation is shown to be a green and efficient treatment of oily sludge waste from the bottom of crude oil tanks in the Al-Daura refinery, able to achieve a high base oil recovery yield of 65% at the optimum operating conditions (treatment time = 30 min, ultrasonic wave amplitude = 60 micron, and solvent:oily sludge ratio = 4). Experimental results showed that 83% of the solvent used was recovered, meanwhile the main water
Feature selection (FS) comprises a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude for predictive modeling. It is a crucial task that aids machine learning classifiers in reducing error rates, computation time, and overfitting, and in improving classification accuracy. It has demonstrated its efficacy in myriad domains, ranging from text classification (TC) and text mining to image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC. Therefore, a comprehensive overview was systematicall
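Metaheuristic FS methods of the kind surveyed here search the space of binary feature masks, scoring each mask with a wrapper fitness function (for example, classifier accuracy penalized by subset size). A minimal stochastic hill-climbing sketch of that idea, standing in for the GA/PSO-style searches; the fitness function below is a toy, not from any surveyed paper:

```python
import random

def select_features(fitness, n_features, iters=200, seed=0):
    """Minimal metaheuristic wrapper FS sketch: stochastic hill-climbing over
    binary feature masks. `fitness` scores a mask; higher is better."""
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_features)]
    best_score = fitness(best)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n_features)] ^= True   # flip one feature in/out
        score = fitness(cand)
        if score > best_score:                    # keep strictly better masks
            best, best_score = cand, score
    return best, best_score

# Toy fitness: features 0 and 2 are 'relevant'; every kept feature costs 0.1.
toy = lambda m: (m[0] + m[2]) - 0.1 * sum(m)
mask, score = select_features(toy, n_features=5)
```

Population-based metaheuristics (GA, PSO, ant colony) replace the single mask with a population and add crossover or swarm updates, but the mask encoding and wrapper fitness are the same.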
The research dealt with the image of Iraq in the British press based on a sample of two newspapers (The Guardian and The Daily Telegraph), which are among the most important and largest newspapers in the United Kingdom and the world, owing to their active role in guiding local and international public opinion on important issues and events. Since these two newspapers attend carefully to the accuracy of sensitive political topics, the study aimed to identify the media image these two newspapers painted of Iraq in the period limited to the first quarter of 2019, and also to determine the nature of the content these newspapers promoted about the Iraqi reality. The method of content analysis was used as an ap
In this work, a model of a source generating the truly random quadrature phase shift keying (QPSK) signal constellation required for a quantum key distribution (QKD) system based on the BB84 protocol with phase coding is implemented using the software package OPTISYSTEM9. The randomness of the generated sequence is achieved by building an optical setup based on a weak laser source, beam splitters, and single-photon avalanche photodiodes operating in Geiger mode. The random string obtained from the optical setup is used to generate the QPSK signal constellation required for phase coding in the BB84-based quantum key distribution system at a bit rate of 2 Gbit/s.
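The BB84 logic that consumes such a random string can be sketched in a few lines: Alice encodes random bits in randomly chosen bases, Bob measures in his own random bases, and only the positions where the bases agree survive into the sifted key. A toy simulation with an ideal, noiseless channel; `random.Random` stands in for the optical single-photon randomness source described in the paper:

```python
import random

def bb84_sift(n, seed=0):
    """Toy BB84 sifting sketch (ideal channel, no eavesdropper): keep only
    the bit positions where Alice's and Bob's basis choices coincide."""
    rng = random.Random(seed)
    alice_bits  = [rng.randrange(2) for _ in range(n)]
    alice_bases = [rng.randrange(2) for _ in range(n)]  # 0: rectilinear, 1: diagonal
    bob_bases   = [rng.randrange(2) for _ in range(n)]
    # With matching bases (and an ideal channel), Bob reads Alice's bit exactly.
    return [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sift(16)
```

On average half of the transmitted bits survive sifting, which is why the raw-bit generation rate must exceed the target key rate; in the four-state phase-coding variant, the basis choice maps onto the QPSK constellation points.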