The problem of the high peak-to-average power ratio (PAPR) in OFDM signals is investigated, with a brief presentation of the various methods used to reduce the PAPR and special attention to the clipping method. An alternative clipping approach is presented, in which clipping is performed immediately after the IFFT stage, unlike conventional clipping, which is performed at the power amplifier stage and causes undesirable out-of-band spectral growth. In the proposed method, individual samples are clipped rather than the continuous waveform, so spectral distortion is avoided. Coding is required to correct the errors introduced by the clipping, and the overall system is tested for two types of modulation: QPSK as a constant-amplitude modulation and 16-QAM as a varying-amplitude modulation.
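The following minimal sketch illustrates sample-level clipping of the IFFT output of the kind described above; the clipping ratio, block size, and QPSK mapping are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_samples(x, clipping_ratio=1.5):
    """Clip each IFFT output sample's magnitude to clipping_ratio * RMS,
    keeping the phase unchanged (sample-level clipping, before the amplifier).
    The clipping ratio is an assumed illustrative value."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    threshold = clipping_ratio * rms
    mag = np.maximum(np.abs(x), 1e-12)
    scale = np.minimum(1.0, threshold / mag)
    return x * scale

# Illustrative QPSK-OFDM block with 64 subcarriers (assumed parameters).
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
time_block = np.fft.ifft(qpsk)

print("PAPR before clipping: %.2f dB" % papr_db(time_block))
print("PAPR after  clipping: %.2f dB" % papr_db(clip_samples(time_block)))
```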
The undetected error probability is an important measure for assessing the communication reliability provided by any error coding scheme. Two error coding schemes, namely Joint Crosstalk Avoidance and Triple Error Correction (JTEC) and JTEC with Simultaneous Quadruple Error Detection (JTEC-SQED), provide both crosstalk reduction and multi-bit error correction/detection features. The available undetected error probability model yields an upper-bound value, which does not give an accurate estimate of the reliability provided. This paper presents an improved mathematical model to estimate the undetected error probability of these two joint coding schemes. According to the decoding algorithm, the errors are classified into patterns and their decoding outcomes are analyzed.
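For context, a standard way such an upper bound is written for a generic (n, k) binary code over a binary symmetric channel is given below in terms of the code's weight distribution; this is the conventional bound form, assumed for illustration rather than the refined pattern-based model the abstract refers to.

```latex
% Generic weight-distribution bound for an (n,k) binary code over a BSC with
% crossover probability p; A_w is the number of codewords of Hamming weight w
% and d_{\min} the minimum distance (standard form, assumed for illustration).
P_{ud}(p) \;\le\; \sum_{w=d_{\min}}^{n} A_w \, p^{w} (1-p)^{\,n-w}
```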
Ultrasound imaging is often preferred over other medical imaging modalities because it is non-invasive, non-ionizing, and low-cost. However, the main weakness of medical ultrasound is the poor quality of its images, due to the presence of speckle noise and blurring. Speckle is a characteristic phenomenon in ultrasound images that can be described as random multiplicative noise whose occurrence is often undesirable, since it hinders human interpretation and diagnosis. Blurring is a form of bandwidth reduction of an ideal image caused by the imperfect image formation process. Image denoising involves processing the image data to produce a visually high-quality image. The denoising algorithms may be classified into two categories.
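As a minimal sketch of the multiplicative speckle model mentioned above, the snippet below simulates speckle and applies a simple log-domain (homomorphic) low-pass filter; the filter choice and parameters are assumptions for illustration, not the denoising algorithms evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_speckle(image, sigma=0.2, rng=None):
    """Simulate multiplicative speckle: g = f * n, with n ~ N(1, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(loc=1.0, scale=sigma, size=image.shape)
    return image * noise

def homomorphic_despeckle(noisy, sigma=1.5, eps=1e-6):
    """Log-transform turns multiplicative speckle into additive noise,
    which a simple Gaussian low-pass filter can then attenuate."""
    log_img = np.log(noisy + eps)
    smoothed = gaussian_filter(log_img, sigma=sigma)
    return np.exp(smoothed) - eps

# Illustrative use on a synthetic "phantom" image with values in [0, 1].
rng = np.random.default_rng(1)
phantom = np.clip(gaussian_filter(rng.random((128, 128)), 4) * 3, 0, 1)
noisy = add_speckle(phantom, sigma=0.25, rng=rng)
restored = homomorphic_despeckle(noisy)
```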
Drag has long been identified as the main reason for energy loss in fluid transport systems such as pipelines and similar transportation channels. The main contributors to this drag are viscosity and friction against the pipe walls, which result in higher pumping power consumption.
The aim of this study was, first, to understand the role of additives in viscosity reduction and, second, to evaluate the drag reduction efficiency when blending with different solvents.
This research investigated the flow increase (%FI) of heavy oil at different flow rates (2 to 10 m³/hr) in two pipes (0.0381 m and 0.0508 m ID) using different additives (toluene and naphtha) at different concentrations.
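For reference, the percentage flow increase is conventionally defined as below; this is the standard definition used in drag-reduction studies and is assumed here rather than quoted from the paper.

```latex
% Percentage flow increase at a fixed pressure drop, where Q_a is the flow rate
% with additive and Q_b the untreated (base) flow rate (assumed standard form).
\%FI = \frac{Q_a - Q_b}{Q_b} \times 100
```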
DNA methylation is one of the main epigenetic mechanisms in cancer development and progression. Aberrant DNA methylation of CpG islands within promoter regions contributes to the dysregulation of various tumor suppressors and oncogenes; this leads to the appearance of malignant features, including rapid proliferation, metastasis, stemness, and drug resistance. The discovery of two important protein families, DNA methyltransferases (DNMTs) and Ten-eleven translocation (TET) dioxygenases, which are responsible for the deregulated transcription of genes that play pivotal roles in tumorigenesis, led to a further understanding of DNA methylation-related pathways. However, how these enzymes target specific genes in different malignancies remains less well understood.
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms can exploit the visual sensitivity and statistical properties of image data to deliver superior results compared with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy coding algorithm known as arithmetic coding, so that frequently occurring pixel values are coded in fewer bits than rarely occurring ones.
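The sketch below illustrates the block partitioning and the symbol-probability model that an arithmetic coder exploits (frequent values get high probability, hence short codes); the block size and helper names are illustrative assumptions, and the arithmetic-coding step itself is omitted.

```python
import numpy as np
from collections import Counter

def split_into_blocks(image, block=16):
    """Partition a grayscale image into non-overlapping block x block tiles
    (edges are simply cropped here for clarity)."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    tiles = image[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, block, block)

def block_to_symbols(tile):
    """Flatten one block into the symbol string fed to the entropy coder."""
    return tile.flatten().tolist()

def symbol_model(symbols):
    """Probability model an arithmetic coder would use: frequent pixel values
    receive high probability and therefore short code lengths."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

# Illustrative use on a random 8-bit image (real images give skewed histograms).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
blocks = split_into_blocks(img, block=16)
model = symbol_model(block_to_symbols(blocks[0]))
```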
Action films employ many artistic and literary elements that contribute greatly to building the overall meaning of the film and drive its narrative forward, and mystery and suspense are used as two basic elements in action films. The cinematic language of action films depends on universal coding that is not built from fixed models but from logic: units that aspire to a common form rather than to physical homogeneity, a logical harmony of interpretive authority and illumination. Action films thus act as a field of communication, a field in which the signifier contrasts with perceptions of meaning and in which a certain number of units stand in opposition to one another.
Temporomandibular disorders (TMD) refer to a group of symptoms in which pain is the leading reason patients seek treatment. Light therapies are currently of great importance owing to their biosafety and non-invasive nature when used for the management of TMD symptoms. This study aimed to evaluate the efficacy of red LED light with low-level laser in treating TMD patients.
A double-blind randomized clinical study was conducted and included 60 patients across 3 groups (20 for each group).
This paper presents a study of a syndrome coding scheme for different binary linear error-correcting codes from families such as BCH, BKLC, Golay, and Hamming. The study is implemented on Wyner's wiretap channel model, where the main channel is error-free and the eavesdropper channel is a binary symmetric channel with crossover error probability (0 < Pe ≤ 0.5), to show the security performance of error-correcting codes used in the single-stage syndrome coding scheme in terms of equivocation rate. Generally, these codes are not designed for secure information transmission, and they have low equivocation rates when used in the syndrome coding scheme; therefore, improvements are needed when they are used for secure transmission.
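The snippet below is a minimal sketch of single-stage syndrome (coset) coding over a wiretap channel, using the (7,4) Hamming code as an example; the code choice, Pe value, and rejection-sampling encoder are assumptions made for clarity and are not the constructions evaluated in the paper.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; columns are 1..7 in binary.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def syndrome(x):
    """Syndrome s = H x^T over GF(2)."""
    return H.dot(x) % 2

def encode_secret(s, rng):
    """Syndrome (coset) coding: transmit a uniformly random 7-bit word whose
    syndrome equals the 3-bit secret message s (rejection sampling for clarity)."""
    while True:
        x = rng.integers(0, 2, 7)
        if np.array_equal(syndrome(x), s):
            return x

rng = np.random.default_rng(0)
secret = np.array([1, 0, 1])          # 3-bit secret message
x = encode_secret(secret, rng)

# Error-free main channel: the legitimate receiver recovers the secret exactly.
assert np.array_equal(syndrome(x), secret)

# The eavesdropper observes x through a BSC with crossover probability Pe,
# so the syndrome it computes is perturbed and the secret remains uncertain.
Pe = 0.1
e = (rng.random(7) < Pe).astype(int)
eavesdropper_view = (x + e) % 2
print("Eve's syndrome:", syndrome(eavesdropper_view))
```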