Image compression is a type of data compression applied to digital images to reduce the cost of their storage and/or transmission. Image compression algorithms can take advantage of visual perception and the statistical properties of image data to deliver better results than generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. Each block is first serialized into a string and then encoded with arithmetic coding, a lossless entropy coding algorithm. Frequently occurring pixel values are coded with fewer bits than rarely occurring values by mapping them to sub-intervals of the range [0, 1). Finally, the compressed streams are reassembled for decompression (image restoration). The results showed a compression gain of 10-12% and lower time consumption when applying this coding to each block rather than to the entire image. To improve the compression ratio, the second approach is based on the YCbCr colour model. Images are decomposed into four sub-bands (low-low, high-low, low-high, and high-high) using the discrete wavelet transform (DWT). The low-low sub-band is then further decomposed into low- and high-frequency components by a second application of the DWT. Next, these components are quantized using scalar quantization and then scanned in a zigzag order. The resulting compression ratios range from 15.1 to 27.5 for magnetic resonance imaging, with varying peak signal-to-noise ratio and mean square error; 25 to 43 for X-ray images; 32 to 46 for computed tomography scan images; and 19 to 36 for magnetic resonance imaging brain images. The second approach improved on the first in terms of compression ratio, peak signal-to-noise ratio, and mean square error.
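As a rough illustration of the second approach's pipeline, the following is a minimal NumPy sketch assuming a single-level Haar wavelet, a uniform quantization step of 4, and a conventional zigzag scan; the paper's actual wavelet, step sizes, and colour-channel handling are not specified here and may differ.

import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns LL, LH, HL, HH sub-bands (dimensions assumed even)."""
    a = img.astype(float)
    # Row transform: averages and differences of adjacent columns
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Column transform on each result
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def scalar_quantize(band, step):
    """Uniform scalar quantization: map coefficients to integer indices (dequantize by multiplying back by step)."""
    return np.round(band / step).astype(int)

def zigzag(block):
    """Scan a 2D block in zigzag order, as is typically done before entropy coding."""
    h, w = block.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1], -p[1] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])

# Example: decompose a synthetic 8x8 "image", re-decompose its LL band,
# quantize the second-level components, and zigzag-scan them.
img = np.arange(64).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)          # first-level sub-bands
ll2, lh2, hl2, hh2 = haar_dwt2(ll)       # second DWT applied to the low-low band
coeffs = [scalar_quantize(b, step=4) for b in (ll2, lh2, hl2, hh2)]
stream = np.concatenate([zigzag(b) for b in coeffs])   # sequence handed to the entropy coder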
After 2003, Iraq went through multiple waves of violence at different levels: security, intellectual, political, and social. Behind these stood several motives and incentives enabling violence, which constitute the first axis of this research. The most important are the political motives, which created an atmosphere that turned politics against society and transformed power into a field of political brutality against the individual and the group at once. There are also cultural, intellectual, media, and economic motives, such as weak cultural independence, poverty, marginalization, unemployment, and deprivation, and the absence of a media discourse that rejects violence rather than inciting it, on the other hand
The Hartley transform generalizes to the fractional Hartley transform (FRHT), which has various applications in different fields of image encryption. Unfortunately, the available literature on the fractional Hartley transform does not provide its inversion theorem, so the original function cannot be retrieved directly, which restricts its applications. The intention of this paper is to propose an inversion theorem for the fractional Hartley transform to overcome this drawback. Moreover, some properties of the fractional Hartley transform are discussed in this paper.
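For context, the ordinary Hartley transform that the FRHT generalizes uses the cas kernel (cas t = cos t + sin t) and is its own inverse up to a scale factor. Below is a minimal discrete sketch of that ordinary transform only; the paper's fractional kernel, order parameter, and proposed inversion theorem are not reproduced here, since they are not given in the abstract.

import numpy as np

def dht(x):
    """Discrete Hartley transform: X[k] = sum_n x[n] * cas(2*pi*k*n/N), with cas(t) = cos(t) + sin(t)."""
    x = np.asarray(x, dtype=float)
    n = np.arange(x.size)
    angles = 2.0 * np.pi * np.outer(n, n) / x.size
    cas = np.cos(angles) + np.sin(angles)
    return cas @ x

def idht(X):
    """Inverse DHT: the same kernel applied again, scaled by 1/N (the DHT is involutive up to 1/N)."""
    return dht(X) / X.size

x = np.array([1.0, 2.0, 0.5, -1.0])
assert np.allclose(idht(dht(x)), x)   # round trip recovers the original signal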
Background subtraction is a leading technique for detecting moving objects in video surveillance systems. Various background subtraction models have been applied to tackle different challenges in many surveillance environments. In this paper, we propose a model based on pixel-wise color histograms and Fuzzy C-means (FCM) clustering to obtain the background model, using cosine similarity (CS) to measure the closeness between the current pixel and the background model and ultimately classify each pixel as background or foreground according to a tuned threshold. The performance of this model is benchmarked on the CDnet2014 dynamic scenes dataset using statistical metrics. The results show better performance compared with the state of the art.
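To illustrate the pixel-classification step only, here is a minimal sketch assuming a per-pixel background model stored as a normalized color histogram and a hypothetical similarity threshold of 0.9; the paper's actual histogram construction, FCM clustering, and threshold tuning are not shown.

import numpy as np

def color_histogram(patch, bins=8):
    """Normalized per-channel color histogram of a small pixel neighbourhood (H x W x 3 array)."""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 256))[0] for c in range(3)
    ]).astype(float)
    return hist / (hist.sum() + 1e-12)

def cosine_similarity(a, b):
    """Cosine similarity between two histogram vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_pixel(current_hist, background_hist, threshold=0.9):
    """Label a pixel as background when its histogram stays close to the background model."""
    return "background" if cosine_similarity(current_hist, background_hist) >= threshold else "foreground"

# Example with a synthetic 5x5 neighbourhood around one pixel
rng = np.random.default_rng(0)
bg_patch = rng.integers(100, 120, size=(5, 5, 3))   # stable background colours
fg_patch = rng.integers(0, 256, size=(5, 5, 3))     # a very different current observation
bg_model = color_histogram(bg_patch)
print(classify_pixel(color_histogram(bg_patch), bg_model))   # background
print(classify_pixel(color_histogram(fg_patch), bg_model))   # likely foreground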
Ischemic stroke is a significant cause of morbidity and mortality worldwide. Autophagy, a process of intracellular degradation, has been shown to play a crucial role in the pathogenesis of ischemic stroke. Long non-coding RNAs (lncRNAs) have emerged as essential regulators of autophagy in various diseases, including ischemic stroke. Recent studies have identified several lncRNAs that modulate autophagy in ischemic stroke, including MALAT1, MIAT, SNHG12, H19, AC136007.2, C2dat2, MEG3, KCNQ1OT1, SNHG3, and RMRP. These lncRNAs regulate autophagy by interacting with key proteins involved in the autophagic process, such as Beclin-1, ATG7, and LC3. Understanding the role of lncRNAs in regulating autophagy
In this paper, we focus on one of the recent applications of PU-algebras in coding theory, namely the construction of codes by soft set PU-valued functions. First, we introduce the notion of soft set PU-valued functions on a PU-algebra and investigate some of their related properties. Moreover, the codes generated by a soft set PU-valued function are constructed and several examples are given. Furthermore, an example, with graphs, of a binary block code constructed from a soft set PU-valued function is presented.
Most of the Weibull models studied in the literature are appropriate for modelling a continuous random variable, assuming the variable takes real values over the interval [0, ∞). One of the newer directions in statistics concerns variables that take on discrete values. The idea was first introduced by Nakagawa and Osaki, who introduced the discrete Weibull distribution with two shape parameters q and β, where 0 < q < 1 and β > 0. Weibull models for modelling discrete random variables assume only non-negative integer values. Such models are useful, for example, for modelling the number of cycles to failure when components are subjected to cyclical loading. Discrete Weibull models can be obtained
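A minimal sketch of the Nakagawa-Osaki (Type I) discrete Weibull distribution follows, assuming the usual parameterization in which the survival function is S(x) = q^(x^β) for x = 0, 1, 2, ...; the specific variants discussed in the paper may differ.

import numpy as np

def dweibull_pmf(x, q, beta):
    """P(X = x) = q**(x**beta) - q**((x + 1)**beta) for x = 0, 1, 2, ... (Nakagawa-Osaki Type I)."""
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dweibull_sample(size, q, beta, rng=None):
    """Draw samples as the floor of a continuous Weibull draw with the same survival function q**(t**beta)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    return np.floor((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int)

q, beta = 0.8, 1.5
x = np.arange(10)
print(dweibull_pmf(x, q, beta).sum())        # close to 1 once the tail is negligible
print(dweibull_sample(5, q, beta, rng=0))    # small non-negative integer counts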
This article aims to estimate the partially linear model by using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study small-sample behaviour for different functions, sample sizes, and variances. The results show that the wavelet smoother is the best according to the mean average squared error criterion in all cases considered.
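As a point of reference for the kernel-smoother alternative, here is a minimal Nadaraya-Watson sketch for the nonparametric component of a partially linear model y = xβ + g(t) + ε, using a Speckman-style two-step estimate of β; the Gaussian kernel, bandwidth value, and simulated functions are illustrative assumptions, and the paper's actual estimators and wavelet counterpart are not reproduced.

import numpy as np

def nadaraya_watson(t_train, y_train, t_eval, bandwidth=0.1):
    """Nadaraya-Watson kernel smoother with a Gaussian kernel: weighted average of nearby responses."""
    d = (t_eval[:, None] - t_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)              # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

# Simulated data from a partially linear model y = x*beta + g(t) + noise, with g(t) = sin(2*pi*t)
rng = np.random.default_rng(1)
n = 200
t = np.sort(rng.uniform(0, 1, n))
x = rng.normal(size=n)
y = 2.0 * x + np.sin(2 * np.pi * t) + rng.normal(scale=0.2, size=n)

# Two-step estimate: partial out the smooth trend in x and y, then regress the residuals
x_tilde = x - nadaraya_watson(t, x, t)
y_tilde = y - nadaraya_watson(t, y, t)
beta_hat = np.sum(x_tilde * y_tilde) / np.sum(x_tilde ** 2)
g_hat = nadaraya_watson(t, y - x * beta_hat, t)   # estimated nonparametric component
print(round(beta_hat, 2))                          # close to the true value 2.0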
The concept of employee voice has received a great deal of attention from researchers in the fields of organizational behaviour and human resource management, especially in the last three decades of the twentieth century. This importance has deep roots in the history of its discussion, so it has become a behavioural variable receiving great attention and care in managerial and organizational studies and a basic pillar in the success and excellence of organizations in maintaining their human resources. The research explains the concept of employee voice and the benefits of paying attention to it in business organizations, and the theories that interpret employee voice and clarify the motivations behind it, and dis
The research offers an analytical approach to new media and traditional media in light of the changes imposed by technology, which has been able to change a number of common concepts in the field of communication and media. The researcher tries to find an analytical explanation of the relationship between technology, as an influential factor in building the information society that underpins new media, and the technical output that has influenced the forms of social relations and linguistic construction as a tool of human communication.