In this study, an efficient compression system is introduced. It is based on the wavelet transform and two types of three-dimensional (3D) surface representation: Cubic Bezier Interpolation (CBI) and first-order polynomial approximation. Each is applied at a different scale of the image. CBI is applied to wide areas of the image in order to prune the components that show large-scale variation, while the first-order polynomial is applied to small areas of the residue component (i.e., after subtracting the cubic Bezier surface from the image) in order to prune the locally smooth components and achieve better compression gain. First, the produced cubic Bezier surface is subtracted from the image signal to obtain the residue component, and the bi-orthogonal wavelet transform is applied to this Bezier residue. The resulting transform coefficients are quantized using progressive scalar quantization; the first-order polynomial is then applied to the quantized LL subband to produce the polynomial surface, which is subtracted from the LL subband to obtain the residue (high-frequency) component. Finally, the quantized values are represented using quadtree encoding to prune the sparse blocks, followed by a high-order shift coding algorithm to handle the remaining statistical redundancy and attain efficient compression performance. The conducted tests indicated that the introduced system leads to promising compression gain.
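As a rough illustration of the first stages of this pipeline, a minimal Python sketch is given below. The 4x4 control grid, the block-mean sampling of control points, the 'bior4.4' wavelet, and the quantization step are illustrative assumptions rather than the paper's exact settings, and the quadtree and shift-coding stages are omitted.

```python
# Minimal sketch: Bezier-surface removal -> bi-orthogonal DWT -> scalar
# quantization -> planar fit removed from the quantized LL subband.
# Parameter choices here are assumptions, not the paper's settings.
import numpy as np
import pywt
from math import comb

def bezier_surface(ctrl, H, W):
    """Evaluate a bicubic Bezier surface from a 4x4 control grid on an HxW lattice."""
    u = np.linspace(0.0, 1.0, H)
    v = np.linspace(0.0, 1.0, W)
    # Cubic Bernstein basis B_i(t) = C(3, i) * t^i * (1 - t)^(3 - i), i = 0..3
    Bu = np.stack([comb(3, i) * u**i * (1 - u)**(3 - i) for i in range(4)], axis=1)  # H x 4
    Bv = np.stack([comb(3, j) * v**j * (1 - v)**(3 - j) for j in range(4)], axis=1)  # W x 4
    return Bu @ ctrl @ Bv.T                                                          # H x W

def plane_fit(block):
    """Least-squares first-order (planar) approximation z = a + b*x + c*y of a 2D block."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)

def encode_stage(image, step=8):
    img = image.astype(np.float64)
    H, W = img.shape                      # assumed divisible by 4 for this sketch
    # 1) Large-scale component: bicubic Bezier surface whose control points are
    #    taken as block means (the paper's fitting procedure may differ).
    ctrl = img.reshape(4, H // 4, 4, W // 4).mean(axis=(1, 3))
    residue = img - bezier_surface(ctrl, H, W)
    # 2) Bi-orthogonal wavelet transform of the Bezier residue.
    LL, (LH, HL, HH) = pywt.dwt2(residue, 'bior4.4')
    # 3) Scalar quantization (shown here as plain uniform quantization).
    qLL, qLH, qHL, qHH = (np.round(b / step).astype(np.int32) for b in (LL, LH, HL, HH))
    # 4) First-order polynomial surface removed from the quantized LL subband.
    ll_residue = qLL - np.round(plane_fit(qLL.astype(np.float64))).astype(np.int32)
    return ctrl, ll_residue, qLH, qHL, qHH

demo = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(np.float64)
parts = encode_stage(demo)
```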
This study aims to shed light on the linguistic significance of collocation networks in the academic writing context. Following Firth's principle that "you shall know a word by the company it keeps," the study examines the shared collocations of three selected nodes (i.e., research, study, and paper) in an academic context. This is achieved by using the corpus-linguistic tool GraphColl in #LancsBox software version 5, released in June 2020, to analyze the selected nodes. The study focuses on the academic writing of two corpora designed and collected especially to serve the purpose of the study. The corpora consist of a collection of abstracts extracted from two different academic journals that publish for writ…
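For readers unfamiliar with collocation networks, the sketch below shows the core idea of span-based collocation counting around a node word. The window size, tokenization, and toy text are assumptions for illustration only and do not reproduce the settings used with GraphColl/#LancsBox in the study.

```python
# Illustrative span-based collocation counting around a node word.
import re
from collections import Counter

def collocates(text, node, window=5):
    """Count words co-occurring with `node` within +/- `window` tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(t for t in tokens[lo:hi] if t != node)
    return counts

toy_abstracts = "This paper reports a corpus study; the study examines academic writing."
for node in ("research", "study", "paper"):
    print(node, collocates(toy_abstracts, node).most_common(3))
```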
The present study aims to investigate the various request constructions used in Classical Arabic and Modern Arabic by identifying the differences in their usage across these two genres. The study also attempts to trace cases of felicitous and infelicitous requests in the Arabic language. Methodologically, the current study employs a web-based corpus tool (Sketch Engine) to analyze two corpora: the first is Classical Arabic, represented by the King Saud University Corpus of Classical Arabic, while the second is the Arabic Web Corpus "arTenTen", representing Modern Arabic. To do so, the study relies on felicity conditions to qualitatively interpret the quantitative data, i.e., following a mixed-mode method…
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that divides unlabeled data into clusters; it partitions an input space into several homogeneous zones and can be achieved with a variety of algorithms, of which k-means and fuzzy c-means (FCM) are two examples. This study used three models to cluster a brain tumor dataset. The first model uses FCM, which…
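A minimal sketch of the two algorithms named above is given below: a NumPy implementation of fuzzy c-means alongside scikit-learn's k-means. The synthetic 2-D data and parameter values are placeholders, not the brain tumor dataset or the settings used in the study.

```python
# Fuzzy c-means (FCM) in NumPy next to scikit-learn's k-means, on toy data.
import numpy as np
from sklearn.cluster import KMeans

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Return (centers, membership) for fuzzy c-means with fuzzifier m."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # membership rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances to each center, floored to avoid division by zero
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)
        U = 1.0 / (d2 ** (1.0 / (m - 1)))          # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(100, 2)) for loc in ((0, 0), (3, 0), (0, 3))])
fcm_centers, membership = fuzzy_c_means(X, c=3)
hard_labels = membership.argmax(axis=1)            # defuzzified FCM assignment
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(fcm_centers.round(2), km.cluster_centers_.round(2), sep="\n")
```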
Background: The main aim of the present study is to qualify and quantify void formation in root canals obturated with GuttaCore (GC) and an experimental hydroxyapatite/polyethylene (HA/PE) composite as new carrier-based root canal fillings, using micro-computed tomography scanning. Materials and methods: Eight straight, single-rooted human permanent premolar teeth were selected, disinfected, and then stored in distilled water. The teeth were decoronated, leaving a root length of 12 mm each. The root canals were instrumented using the crown-down technique, and the apical diameter of each root canal was prepared to size #30/0.04 to achieve standardized measurements. 5 mL of 17% EDTA was used to remove the smear layer, followed by 5 mL of 2.5% NaOCl and r…