Image compression is a form of data compression applied to digital images to reduce their storage and/or transmission cost. Image compression algorithms can exploit visual sensitivity and the statistical properties of image data to deliver results superior to generic data compression schemes designed for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. Each block is first converted into a string and then encoded with arithmetic coding, a lossless entropy-coding algorithm. Pixel values that occur more frequently are coded in fewer bits than rarer values by mapping them to subintervals of the range [0, 1]. Finally, the stream of compressed tables is reassembled for decompression (image restoration). The results showed a compression gain of 10-12% and lower time consumption when applying this coding to each block rather than to the entire image. To improve the compression ratio, a second approach based on the YCbCr colour model was used. Images were decomposed into four sub-bands (low-low, high-low, low-high, and high-high) using the discrete wavelet transform. The low-low sub-band was then further transformed into low- and high-frequency components via the discrete wavelet transform. These components were quantized with scalar quantization and scanned in a zigzag order. The resulting compression ratios were 15.1 to 27.5 for magnetic resonance images (with varying peak signal-to-noise ratio and mean square error), 25 to 43 for X-ray images, 32 to 46 for computed tomography scans, and 19 to 36 for brain magnetic resonance images. The second approach improved on the first in terms of compression ratio, peak signal-to-noise ratio, and mean square error.
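The sub-band decomposition and zigzag scan described in the second approach can be sketched as follows. This is a minimal illustration using the Haar wavelet as a stand-in (the abstract does not specify which wavelet family was used), with the quantization step omitted:

```python
import numpy as np

def haar_dwt2(image):
    """One level of a 2-D Haar wavelet transform.

    Returns the four sub-bands (LL, LH, HL, HH) mentioned in the
    abstract. Image dimensions are assumed even.
    """
    a = image.astype(float)
    # Transform rows: low-pass (pairwise averages) and high-pass (differences).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform the columns of each intermediate result.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def zigzag(block):
    """Scan a square block in zigzag order, as done before entropy coding."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])
```

Applying `haar_dwt2` recursively to the LL sub-band yields the second-level decomposition of the low-low band described above.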
DNA methylation is one of the main epigenetic mechanisms in cancer development and progression. Aberrant DNA methylation of CpG islands within promoter regions contributes to the dysregulation of various tumor suppressors and oncogenes, leading to malignant features such as rapid proliferation, metastasis, stemness, and drug resistance. The discovery of two important protein families, the DNA methyltransferases (DNMTs) and the Ten-eleven translocation (TET) dioxygenases, whose dysregulation drives aberrant transcription of genes with pivotal roles in tumorigenesis, has deepened our understanding of DNA methylation-related pathways. But how these enzymes target specific genes in different malignancies…
Image classification is the process of finding common features in images from various classes and using them to categorize and label the images. The key obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new “hybrid learning” approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class…
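The hybrid-learning pipeline sketched in the abstract (deep features in, classical classifier out) can be illustrated without the network itself. Below, random vectors stand in for VGG-16 convolutional features (in practice they would come from the pretrained network's final pooling layer), and a nearest-centroid rule stands in for whichever of the study's classifiers is chosen; both substitutions are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for VGG-16 features: 512-D vectors for two classes.
n_per_class, dim = 50, 512
feats_a = rng.normal(0.0, 1.0, (n_per_class, dim))
feats_b = rng.normal(1.0, 1.0, (n_per_class, dim))
X = np.vstack([feats_a, feats_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

# A minimal machine-learning classifier (nearest centroid) trained on the
# extracted features; any conventional classifier would slot in here.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([predict(x) for x in X])
accuracy = (preds == y).mean()
```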
The problem of the high peak-to-average power ratio (PAPR) in OFDM signals is investigated, with a brief presentation of the various methods used to reduce PAPR and special attention to the clipping method. An alternative clipping approach is presented in which clipping is performed immediately after the IFFT stage, unlike conventional clipping performed at the power amplifier stage, which causes undesirable out-of-band spectral growth. In the proposed method, samples are clipped rather than the waveform, so spectral distortion is avoided. Coding is required to correct the errors introduced by clipping, and the overall system is tested with two types of modulation: QPSK as a constant-amplitude modulation…
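The idea of clipping time-domain samples right after the IFFT can be sketched as follows. The 1.4x-RMS clipping threshold is an illustrative choice, not a value taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# One OFDM symbol: QPSK on 256 subcarriers, taken to the time domain by the IFFT.
n = 256
bits = rng.integers(0, 2, size=(n, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
time_signal = np.fft.ifft(symbols)

# Clip the time-domain samples right after the IFFT, limiting the
# magnitude while preserving the phase of each sample.
rms = np.sqrt(np.mean(np.abs(time_signal) ** 2))
threshold = 1.4 * rms
mag = np.abs(time_signal)
clipped = time_signal * np.minimum(1.0, threshold / np.maximum(mag, 1e-12))

papr_before = papr_db(time_signal)
papr_after = papr_db(clipped)
```

Because the clipping acts on discrete samples before any filtering or amplification, the nonlinearity it introduces shows up as in-band distortion (to be corrected by coding) rather than out-of-band spectral growth at the amplifier.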
This study proposes a hybrid predictive maintenance framework that integrates the Kolmogorov-Arnold Network (KAN) with the Short-Time Fourier Transform (STFT) for intelligent fault diagnosis in industrial rotating machinery. The method is designed to address the challenges posed by non-linear and non-stationary vibration signals under varying operational conditions. Experimental validation on the FALEX multi-specimen test bench demonstrated a high classification accuracy of 97.5%, outperforming traditional models such as SVM, Random Forest, and XGBoost. The approach maintained robust performance across dynamic load scenarios and noisy environments, with precision and recall exceeding 95%. Key contributions include a hardware-accelerated K…
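The STFT front end of such a framework can be sketched as follows. This is a minimal version (Hann window; the paper's exact windowing and frame parameters are assumptions), applied to a synthetic non-stationary signal whose frequency jumps mid-way, mimicking a developing fault signature:

```python
import numpy as np

def stft(signal, win_len=256, hop=128):
    """Magnitude STFT with a Hann window: the time-frequency features
    that would feed the downstream classifier."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (n_frames, win_len // 2 + 1)

# Synthetic non-stationary "vibration": a 200 Hz tone that jumps to 800 Hz.
fs = 8192
t = np.arange(fs) / fs
sig = np.where(t < 0.5,
               np.sin(2 * np.pi * 200 * t),
               np.sin(2 * np.pi * 800 * t))
spec = stft(sig)
```

Each row of `spec` is a local spectrum; the frequency jump appears as a shift of the dominant bin between early and late frames, which is exactly the kind of time-localized feature a stationary Fourier transform would blur.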
In information security, fingerprint verification is one of the most common recent approaches to verifying human identity through a distinctive pattern. The verification process works by comparing a pair of fingerprint templates and measuring the similarity between them. Several research studies have applied different techniques to the matching process, such as fuzzy vault and image filtering approaches, yet these approaches still suffer from imprecise articulation of the biometrics' distinctive patterns. Deep learning architectures such as the Convolutional Neural Network (CNN) have been used extensively for image processing and object detection tasks and have shown outstanding performance compared…
Exploring the B-Spline Transform for Estimating Lévy Process Parameters: Applications in Finance and Biomodeling. Letters in Biomathematics · Jul 7, 2025. This paper presents the application of the B-spline transform as an effective and precise technique for estimating key parameters of Lévy processes, i.e., drift, volatility, and jump intensity. Lévy processes are powerful tools for representing phenomena that combine continuous trends with abrupt changes. The proposed approach is validated through a simulated biological case study on animal migration, in which movements are mo…
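The estimation problem the paper addresses can be illustrated with a naive baseline (NOT the B-spline transform, whose details are not reproduced here): simulate a jump-diffusion, separate jumps from diffusive increments by thresholding, and recover drift, volatility, and jump intensity. All numeric values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a simple Lévy process (jump-diffusion): drift + Brownian
# motion + compound Poisson jumps with Gaussian jump sizes.
mu, sigma, lam = 0.5, 0.2, 3.0      # true drift, volatility, jump intensity
T, n = 10.0, 100_000
dt = T / n
jump_mask = rng.random(n) < lam * dt
increments = (mu * dt
              + sigma * np.sqrt(dt) * rng.standard_normal(n)
              + rng.normal(2.0, 0.1, n) * jump_mask)

# Naive threshold estimator: increments far beyond the diffusive scale
# are treated as jumps; the remainder estimate drift and volatility.
threshold = 5 * sigma * np.sqrt(dt)
is_jump = np.abs(increments) > threshold
cont = increments[~is_jump]
lam_hat = is_jump.sum() / T                  # jumps per unit time
mu_hat = cont.mean() / dt                    # drift from diffusive part
sigma_hat = cont.std(ddof=1) / np.sqrt(dt)   # volatility from diffusive part
```

The threshold rule works here only because the simulated jumps dwarf the diffusive increments; distinguishing small jumps from noise is precisely where more refined transforms earn their keep.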