The emergence of SARS-CoV-2, the virus responsible for the COVID-19 pandemic, has resulted in a global health crisis, leading to widespread illness, death, and disruptions to daily life. A COVID-19 vaccine is crucial to controlling the spread of the virus, ending the pandemic, and restoring normalcy to society. Messenger RNA (mRNA) vaccines have led the way as the swiftest vaccine candidates for COVID-19, but they face key potential limitations, including spontaneous deterioration of the mRNA molecule. To address this mRNA degradation issue, Stanford University researchers and the Eterna community sponsored a Kaggle competition. This study aims to build a deep learning (DL) model that predicts the deterioration rate at each base of the mRNA molecule. A sequence DL model based on a bidirectional gated recurrent unit (GRU) is implemented. The model is applied to the Stanford COVID-19 mRNA vaccine dataset to predict mRNA sequence deterioration by estimating five values for every base in the sequence: reactivity, and deterioration rates at high pH, at high temperature, at high pH with magnesium, and at high temperature with magnesium. The dataset is split into training, validation, and test sets. The bidirectional GRU model minimizes the mean column-wise root mean squared error (MCRMSE) of the deterioration rates at each base of the mRNA sequence, achieving a value of 0.32086 on the test set and outperforming the winning models by a margin of 0.02112. This study should help other researchers better understand how to forecast mRNA sequence properties in order to develop a stable COVID-19 vaccine.
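As a rough illustration of the modelling approach described in this abstract, the following Python/TensorFlow sketch builds a bidirectional GRU that outputs five values per base and is trained with an MCRMSE loss. The sequence length, feature size, layer widths, and dropout rate are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact architecture): a bidirectional GRU that
# maps an encoded mRNA sequence to five per-base targets, trained with MCRMSE.
# Hyperparameters and the input encoding are illustrative assumptions.
import tensorflow as tf

SEQ_LEN = 107        # assumed padded sequence length
NUM_FEATURES = 14    # assumed size of the per-base feature encoding
NUM_TARGETS = 5      # reactivity + four deterioration rates

def mcrmse(y_true, y_pred):
    """Mean column-wise root mean squared error over the five targets."""
    colwise_mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=(0, 1))
    return tf.reduce_mean(tf.sqrt(colwise_mse))

def build_model():
    inputs = tf.keras.Input(shape=(SEQ_LEN, NUM_FEATURES))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.GRU(128, return_sequences=True, dropout=0.3))(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.GRU(128, return_sequences=True, dropout=0.3))(x)
    outputs = tf.keras.layers.Dense(NUM_TARGETS, activation="linear")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=mcrmse)
    return model

model = build_model()
model.summary()
```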
The economy is exceptionally reliant on agricultural productivity. Therefore, in the domain of agriculture, plant disease detection is a vital task because it provides a promising step towards increasing agricultural production. In this work, a framework for potato disease classification based on a feed-forward neural network is proposed. The objective of this work is to present a system that can detect and classify four kinds of potato tuber diseases from their images: black dot, common scab, potato virus Y, and early blight. The presented PDCNN framework comprises three levels: the first level is pre-processing, which is based on the K-means clustering algorithm to detect the infected area in the potato image. The s…
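To illustrate the K-means pre-processing step described above, the sketch below segments a potato image into colour clusters and keeps one cluster as the candidate infected area. The file name, number of clusters, and the rule for selecting the "infected" cluster are assumptions for illustration, not the PDCNN framework's actual settings.

```python
# Illustrative sketch of K-means pre-processing: segment a potato image into k
# colour clusters and keep the cluster assumed to correspond to the lesion.
import cv2
import numpy as np
from sklearn.cluster import KMeans

image = cv2.imread("potato_tuber.jpg")                  # hypothetical input image
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)            # LAB often separates lesions better
pixels = lab.reshape(-1, 3).astype(np.float32)

k = 3                                                   # assumed number of clusters
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(lab.shape[:2])

# Assumption: the infected area is the darkest cluster (lowest mean lightness).
darkest = int(np.argmin(kmeans.cluster_centers_[:, 0]))
mask = (labels == darkest).astype(np.uint8) * 255

segmented = cv2.bitwise_and(image, image, mask=mask)    # infected region only
cv2.imwrite("infected_region.png", segmented)
```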
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This places a significant burden on scientists and computing resources. Some genomics studies rely on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabelled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, which…
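As a minimal sketch of one of the clustering options mentioned above, the NumPy code below implements standard fuzzy c-means; it is not the study's exact implementation, and the toy data stand in for the brain tumor dataset.

```python
# Minimal NumPy sketch of fuzzy c-means (FCM) clustering.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster rows of X into c fuzzy clusters with fuzzifier m."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample

    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every sample to every centre (tiny eps avoids division by 0).
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage on random data (stand-in for the brain tumor dataset).
X = np.random.default_rng(1).normal(size=(200, 10))
centers, memberships = fuzzy_c_means(X, c=3)
hard_labels = memberships.argmax(axis=1)
```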
This paper proposes feedback linearization control (FBLC) based on a function approximation technique (FAT) to regulate the vibrational motion of a smart thin plate, considering the effect of axial stretching. The FBLC involves designing a nonlinear control law that stabilizes the target dynamic system while the closed-loop dynamics remain linear with guaranteed stability. The objective of the FAT is to estimate the cubic nonlinear restoring force vector using a linear parameterization of weighting and orthogonal basis function matrices. Orthogonal Chebyshev polynomials are used as strong approximators for the adaptive schemes. The proposed control architecture is applied to a thin plate with a large deflection that stimulates the axial loading…
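To make the linear-parameterization idea concrete, the sketch below fits a weighting vector over an orthogonal Chebyshev basis to approximate a cubic nonlinear restoring-force term, i.e. f(x) ≈ Wᵀ Φ(x). The target function, basis order, and domain are illustrative assumptions and do not reproduce the paper's plate model or adaptive controller.

```python
# Sketch of FAT-style linear parameterization f(x) ≈ W^T * Phi(x) with an
# orthogonal Chebyshev basis. Target function, order, and domain are assumptions.
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical cubic nonlinear restoring-force term to approximate on [-1, 1].
def restoring_force(x):
    return 0.5 * x + 2.0 * x**3

order = 7                                    # assumed highest basis order
x = np.linspace(-1.0, 1.0, 400)
Phi = C.chebvander(x, order)                 # basis matrix, shape (400, order + 1)

# Least-squares estimate of the weighting vector W.
W, *_ = np.linalg.lstsq(Phi, restoring_force(x), rcond=None)

approx = Phi @ W
print("max approximation error:", np.max(np.abs(approx - restoring_force(x))))
```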
This study employs wavelet transforms to address the issue of boundary effects. Additionally, it utilizes probit transform techniques, based on probit functions, to estimate the copula density function. This estimation depends on the empirical distribution functions of the variables, and the density is estimated in the transformed domain. Recent research indicates that early implementations of this strategy may have been more efficient. Nevertheless, in this work, we implemented two novel methodologies utilizing the probit transform and the wavelet transform. We then evaluated and contrasted these methodologies using three criteria: root mean square error (RMSE), Akaike information criterion (AIC), and log…
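The sketch below illustrates the probit-transform idea in general terms: pseudo-observations are mapped through the standard normal quantile function, a density is estimated in the unbounded transformed domain, and the copula density is recovered via the Jacobian. A Gaussian kernel estimator stands in for the wavelet estimator studied here, and the simulated data and settings are assumptions.

```python
# Sketch of probit-transform copula density estimation: map pseudo-observations
# to R^2 with the normal quantile function, estimate a density there (KDE used
# as a stand-in for the wavelet estimator), then map back with the Jacobian.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
# Simulated dependent pair (Gaussian copula with rho = 0.6) as a stand-in dataset.
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)

# Pseudo-observations from the empirical distribution functions (ranks / (n + 1)).
u = stats.rankdata(z[:, 0]) / (n + 1)
v = stats.rankdata(z[:, 1]) / (n + 1)

# Probit transform to R^2, which removes the [0, 1]^2 boundary effects.
s, t = stats.norm.ppf(u), stats.norm.ppf(v)
kde = stats.gaussian_kde(np.vstack([s, t]))

def copula_density(uu, vv):
    """Estimated copula density c(u, v) on the unit square."""
    ss, tt = stats.norm.ppf(uu), stats.norm.ppf(vv)
    jac = stats.norm.pdf(ss) * stats.norm.pdf(tt)
    return kde(np.vstack([ss, tt])) / jac

print(copula_density(np.array([0.5]), np.array([0.5])))
```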
Classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and they have been used repeatedly in many fields. These algorithms face the problem of imbalanced data, where some classes contain far more instances than others. Imbalanced data result in poor performance and a bias toward one class at the expense of the other classes. In this paper, we propose three techniques based on the Over-Sampling (O.S.) approach for processing an imbalanced dataset, redistributing it, and converting it into a balanced dataset. These techniques are the Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border…
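The snippet below shows the general over-sampling step with standard SMOTE from the imbalanced-learn library on a synthetic imbalanced dataset; it illustrates the baseline technique rather than the improved variants proposed here, and the dataset and class ratio are assumptions.

```python
# Illustration of over-sampling an imbalanced dataset with standard SMOTE
# (imbalanced-learn). Synthetic data and the 9:1 imbalance are assumptions.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Synthetic two-class dataset with a 9:1 class imbalance.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=42)
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
```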
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method of concealing data within various carrier objects such as text, can be employed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in its linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harnesses…
Blockchain has garnered considerable attention as an important new technology supporting digital transactions in e-government. The most critical challenge for public e-government systems is reducing bureaucracy and increasing the efficiency and performance of administrative processes; blockchain technology can address this by operating in a decentralized environment and executing transactions with a high level of security and transparency. The main objectives of this work are therefore to survey different proposed models for e-government system architectures based on blockchain technology and to examine how these models are validated. This work studies and analyzes research trends focused on blockchain…
Free-Space Optical (FSO) communication can provide high-speed links when the effect of turbulence is not severe, and Space-Time Block Coding (STBC) is a good candidate for mitigating this effect. This paper proposes a hybrid of Optical Code Division Multiple Access (OCDMA) and STBC in FSO communication for last-mile solutions, where access to remote areas is complicated. The main weakness affecting an FSO link is atmospheric turbulence, and STBC is employed in OCDMA to mitigate its effects. The current work evaluates the Bit-Error-Rate (BER) performance of OCDMA operating under the scintillation effect, which can be described by the gamma-gamma model. The most obvious finding to emerge from the analysis…
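For reference, the gamma-gamma model used above to describe scintillation has a well-known irradiance probability density; the sketch below evaluates it numerically for assumed turbulence parameters α and β (the values are illustrative, not the paper's).

```python
# Numerical sketch of the gamma-gamma irradiance PDF used to model scintillation:
# f(I) = 2*(a*b)^((a+b)/2) / (Gamma(a)*Gamma(b)) * I^((a+b)/2 - 1) * K_{a-b}(2*sqrt(a*b*I)).
# The turbulence parameters below are illustrative assumptions.
import numpy as np
from scipy.special import gamma as gamma_fn, kv

def gamma_gamma_pdf(I, a, b):
    """Gamma-gamma PDF of the normalized irradiance I with parameters a (alpha), b (beta)."""
    coeff = 2.0 * (a * b) ** ((a + b) / 2.0) / (gamma_fn(a) * gamma_fn(b))
    return coeff * I ** ((a + b) / 2.0 - 1.0) * kv(a - b, 2.0 * np.sqrt(a * b * I))

# Example: moderate turbulence (assumed alpha = 4.0, beta = 1.9).
I = np.linspace(0.01, 3.0, 300)
pdf = gamma_gamma_pdf(I, a=4.0, b=1.9)
# Trapezoid check that the density integrates to roughly 1 over the grid.
print("integral over grid ≈", np.sum((pdf[1:] + pdf[:-1]) / 2 * np.diff(I)))
```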
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that the proposed method surpasses the accuracy of the other methods, achieving best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it…
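The sketch below reproduces only the evaluation protocol described above (an SVM with an 80/20 split and 5-fold cross-validation) on a synthetic feature matrix; the OP moment extraction and sparse filtering steps are not shown, and the kernel and C value are assumptions.

```python
# Sketch of the evaluation protocol: SVM classification with an 80/20
# train/test split and 5-fold cross-validation on stand-in features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

# Stand-in for moment features extracted from EEG segments (two classes).
X, y = make_classification(n_samples=500, n_features=30, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)                        # assumed kernel and C
cv_scores = cross_val_score(clf, X_train, y_train, cv=5)
print("5-fold CV accuracy: %.3f ± %.3f" % (cv_scores.mean(), cv_scores.std()))

clf.fit(X_train, y_train)
print("held-out test accuracy: %.3f" % clf.score(X_test, y_test))
```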