Image pattern classification is considered a significant step in image and video processing. Although various image pattern algorithms proposed so far achieve adequate classification, attaining higher accuracy while reducing computation time remains challenging. A robust image pattern classification method is essential to obtain the desired accuracy: such a method must accurately classify image blocks into plain, edge, and texture (PET) using an efficient feature extraction mechanism. Moreover, most existing studies evaluate their methods on specific orthogonal moments, which limits the understanding of their potential application to various Discrete Orthogonal Moments (DOMs). Therefore, finding a fast PET classification method that accurately classifies image patterns is crucial. To this end, this paper proposes a new scheme for accurate and fast image pattern classification using an efficient DOM. To reduce the computational complexity of feature extraction, an election mechanism is proposed that reduces the number of processed block patterns. In addition, a support vector machine is used to classify the extracted features of the different block patterns. The proposed scheme is evaluated by comparing its accuracy with that of state-of-the-art methods; the performance of the proposed method is also compared across different DOMs to identify the most robust one. The results show that the proposed method achieves the highest classification accuracy among the compared methods in all the scenarios considered.
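To make the described pipeline concrete, the sketch below extracts low-order DOM features from small image blocks and feeds them to an SVM for PET classification. It is a minimal illustration, not the paper's implementation: the Tchebichef basis construction (via QR orthogonalization, which yields the discrete Chebyshev polynomials for a uniform grid weight), the moment order, and the synthetic plain/edge/texture blocks are all our assumptions.

```python
# Minimal sketch of DOM features + SVM for plain/edge/texture (PET) blocks.
# Basis construction, moment order, and toy data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def tchebichef_basis(n_points, order):
    """Orthonormal discrete Chebyshev (Tchebichef) basis on {0..n_points-1}.

    QR on the monomial Vandermonde matrix gives, up to sign, the orthonormal
    discrete Tchebichef polynomials (the orthogonal family for the uniform
    weight on a regular grid).
    """
    x = np.arange(n_points, dtype=float)
    V = np.vander(x, order + 1, increasing=True)   # columns: x^0 .. x^order
    Q, _ = np.linalg.qr(V)
    return Q                                        # shape: (n_points, order+1)

def dom_features(block, order=3):
    """2-D moments M = T^T B T of an image block, flattened to a feature vector."""
    T = tchebichef_basis(block.shape[0], order)
    return (T.T @ block @ T).ravel()

# Toy training data: synthetic plain / edge / texture blocks (assumed labels).
rng = np.random.default_rng(0)
def make_block(kind, n=8):
    if kind == 0:                                   # plain: near-constant
        return np.full((n, n), 0.5) + 0.01 * rng.standard_normal((n, n))
    if kind == 1:                                   # edge: vertical step
        b = np.zeros((n, n)); b[:, n // 2:] = 1.0
        return b + 0.01 * rng.standard_normal((n, n))
    return rng.random((n, n))                       # texture: high variation

X = np.array([dom_features(make_block(k)) for k in range(3) for _ in range(100)])
y = np.repeat([0, 1, 2], 100)

clf = SVC(kernel="rbf").fit(X, y)                   # PET classifier
print(clf.predict([dom_features(make_block(1))]))   # expect edge -> [1]
```

A real moment-based scheme would use the closed-form recurrence of the chosen DOM family and apply the election mechanism before feature extraction; the QR construction here simply stands in for any orthonormal discrete basis.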
Steganography is defined as hiding confidential information in some chosen carrier medium without leaving any clear evidence of changing the medium's features. Most traditional hiding methods embed the message directly in the cover media (text, image, audio, or video). Some hiding techniques degrade the cover image, so the change in the carrier medium can sometimes be detected by humans or machines; the purpose of information hiding is to make this change undetectable. The current research focuses on a method based on spiral search to prevent the detection of hidden information by humans and machines; the Structural Similarity Index (SSIM) measure is used to assess accuracy and quality.
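Since the abstract does not spell out the exact spiral-search embedding, the following is only a plausible sketch: least-significant-bit embedding along a spiral pixel order that starts from the image centre, with SSIM used to confirm the stego image stays visually close to the cover. The traversal rule, payload, and LSB scheme are all illustrative assumptions.

```python
# Illustrative spiral-order LSB embedding with an SSIM check; not the paper's scheme.
import numpy as np
from itertools import cycle
from skimage.metrics import structural_similarity as ssim

def spiral_coords(h, w):
    """All (row, col) pairs of an h x w grid, walking outward from the centre."""
    r, c = h // 2, w // 2
    moves = cycle([(0, 1), (1, 0), (0, -1), (-1, 0)])  # right, down, left, up
    out, run = [(r, c)], 1
    while len(out) < h * w:
        for _ in range(2):                      # two legs per run length
            dr, dc = next(moves)
            for _ in range(run):
                r, c = r + dr, c + dc
                if 0 <= r < h and 0 <= c < w:   # skip cells outside the image
                    out.append((r, c))
        run += 1
    return out[:h * w]

def embed(cover, bits):
    """LSB-embed a bit string along the spiral order (illustrative scheme)."""
    stego = cover.copy()
    for (r, c), b in zip(spiral_coords(*cover.shape), bits):
        stego[r, c] = (stego[r, c] & 0xFE) | int(b)
    return stego

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
message = bin(int.from_bytes(b"secret", "big"))[2:]   # payload as a bit string
stego = embed(cover, message)
print("SSIM:", ssim(cover, stego, data_range=255))    # near 1.0 -> imperceptible
```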
This study aims at shedding light on the linguistic significance of collocation networks in the academic writing context. Following Firth's principle that "you shall know a word by the company it keeps", the study examines the shared collocations of three selected nodes (i.e., research, study, and paper) in an academic context. This is achieved by analyzing the selected nodes with the corpus linguistics tool GraphColl in #LancsBox software version 5, released in June 2020. The study focuses on the academic writing of two corpora designed and collected specifically to serve the purpose of the study. The corpora consist of a collection of abstracts extracted from two different academic journals that publish for writ…
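As a toy illustration of what a collocation-network tool like GraphColl computes (this is not #LancsBox code; the window size, tokenization, and sample text are assumptions), one can count the words co-occurring with each node within a fixed window and intersect the results to obtain the nodes' shared collocates:

```python
# Toy collocation counting; window size, tokenization, and corpus are assumptions.
from collections import Counter

def collocates(tokens, node, window=5):
    """Count words co-occurring with `node` within +/- `window` tokens."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i)
    return counts

text = ("this study reports new findings the research presents data "
        "the paper presents findings the study presents data")
tokens = text.split()

# Shared collocates of the three nodes, mirroring a shared-collocation network.
nodes = ["research", "study", "paper"]
shared = set.intersection(*(set(collocates(tokens, n)) for n in nodes))
print(shared)   # e.g. {'the', 'presents', ...} depending on the toy text
```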
This paper proposes feedback linearization control (FBLC) based on the function approximation technique (FAT) to regulate the vibrational motion of a smart thin plate, considering the effect of axial stretching. The FBLC involves designing a nonlinear control law that stabilizes the target dynamic system while the closed-loop dynamics are linear with ensured stability. The objective of the FAT is to estimate the cubic nonlinear restoring force vector using a linear parameterization of weighting and orthogonal basis function matrices. Orthogonal Chebyshev polynomials are used as strong approximators for adaptive schemes. The proposed control architecture is applied to a thin plate with a large deflection that stimulates the axial loading…
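The core FAT idea is that the unknown nonlinearity is written as f(x) ≈ Wᵀφ(x), with φ a fixed orthogonal basis and W a weight matrix to be adapted. The snippet below illustrates that linear parameterization with a Chebyshev basis on an assumed cubic restoring force; batch least-squares weights stand in for the online adaptation law, and the force, domain, and basis order are our assumptions.

```python
# FAT-style linear parameterization with a Chebyshev basis (illustrative only).
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: 2.0 * x**3                 # assumed cubic restoring force
x = np.linspace(-1.0, 1.0, 200)          # Chebyshev polynomials live on [-1, 1]
Phi = C.chebvander(x, 5)                 # basis matrix phi(x), shape (200, 6)

# Least-squares weights W so that f(x) ~ Phi @ W (the linear parameterization).
W, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
print("max approximation error:", np.abs(Phi @ W - f(x)).max())
```

In an adaptive FAT controller the weights would instead be updated online by a stability-based adaptation law rather than fitted in batch as above.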
The present study aims to investigate the various request constructions used in Classical Arabic and Modern Arabic by identifying the differences in their usage across these two genres. The study also attempts to trace cases of felicitous and infelicitous requests in the Arabic language. Methodologically, the study employs a web-based corpus tool (Sketch Engine) to analyze two corpora: the first is Classical Arabic, represented by the King Saud University Corpus of Classical Arabic, while the second is the Arabic Web Corpus "arTenTen", representing Modern Arabic. To do so, the study relies on felicity conditions to qualitatively interpret the quantitative data, i.e., it follows a mixed-methods approach.
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that divides unlabeled data into clusters; the k-means and fuzzy c-means (FCM) algorithms are two examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic…
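For reference, FCM differs from k-means in that each point receives a graded membership in every cluster rather than a hard assignment. Below is a compact NumPy sketch of the standard FCM iteration (alternating centre and membership updates); the toy 2-D blobs stand in for the brain-tumor features, and the cluster count, fuzzifier m, and iteration budget are illustrative choices.

```python
# Compact fuzzy c-means; toy data and hyperparameters are illustrative.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns (centers, U) with U[i, j] = membership of X[i] in cluster j."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # each row sums to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return centers, U

# Three synthetic 2-D blobs as stand-in data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (50, 2)) for mu in ([0, 0], [3, 3], [0, 3])])
centers, U = fcm(X)
labels = U.argmax(axis=1)                             # hard labels, k-means-style
print(centers.round(2))
```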