The precise classification of DNA sequences is pivotal in genomics, with significant implications for personalized medicine. The stakes are particularly high when classifying key genetic markers such as BRAC, related to breast cancer susceptibility; BRAF, associated with various malignancies; and KRAS, a recognized oncogene. Conventional machine learning techniques often require intricate feature engineering and may not capture the full spectrum of sequence dependencies. To address these limitations, this study employs an adapted U-Net architecture, originally designed for biomedical image segmentation, to classify DNA sequences. An attention mechanism was also tested alongside the U-Net architecture to classify DNA sequences precisely into the BRAC, BRAF, and KRAS categories. Our comprehensive methodology includes rigorous data preprocessing, model training, and a multi-faceted evaluation approach. The adapted U-Net model exhibited exceptional performance, achieving an overall accuracy of 0.96. The model also achieved high precision and recall across the classes, with precision ranging from 0.93 to 1.00 and recall between 0.95 and 0.97 for the key markers BRAC, BRAF, and KRAS; the F1-score for these critical markers ranged from 0.95 to 0.98. These empirical results substantiate the architecture's capability to capture both local and global features in DNA sequences, affirming its applicability to critical, sequence-based bioinformatics challenges.
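The abstract does not describe its preprocessing in detail; a common first step for feeding DNA sequences to a 1-D convolutional model such as an adapted U-Net is one-hot encoding. The following is a minimal sketch of that step (not the authors' code), assuming an A/C/G/T alphabet with zero-padding to a fixed length:

```python
# Minimal sketch (not the study's implementation): one-hot encoding of
# DNA strings into fixed-length matrices, the usual preprocessing step
# before a 1-D convolutional classifier.

NUCLEOTIDES = "ACGT"

def one_hot_encode(seq, max_len):
    """Map a DNA string to a max_len x 4 one-hot matrix.
    Unknown bases (e.g. 'N') become all-zero rows; sequences longer
    than max_len are truncated, shorter ones are zero-padded."""
    index = {base: i for i, base in enumerate(NUCLEOTIDES)}
    matrix = [[0.0] * 4 for _ in range(max_len)]
    for pos, base in enumerate(seq[:max_len].upper()):
        i = index.get(base)
        if i is not None:
            matrix[pos][i] = 1.0
    return matrix

encoded = one_hot_encode("ACGTN", 6)
# row 0 encodes 'A'; row 4 ('N') and row 5 (padding) are all zeros
```

The resulting max_len x 4 matrices can be stacked into a batch tensor for any sequence model; the zero rows for unknown bases and padding let the network learn to ignore them.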
DeepFake content is a concern for celebrities and the public alike because it is simple to create. DeepFake images, especially high-quality ones, are difficult to detect by humans, by local descriptors, and by current automated approaches. Video manipulation detection, on the other hand, is more accessible than image detection, and many state-of-the-art systems offer it; ultimately, though, detecting video manipulation depends entirely on detecting manipulation in the individual images. Much prior work on DeepFake detection in images involved complex mathematical calculations in the preprocessing steps and carried many limitations, including that the face must be frontal, the eyes must be open, and the mouth should be open with teeth visible. Also, the accuracy of their counterfeit detection …
The work reported in this study focuses on the abrasive wear behavior of three types of pipes used in the oil industry (carbon steel, alloy steel, and stainless steel), using a wear apparatus for dry and wet tests manufactured according to ASTM G65. Silica sand with a hardness of 1000-1100 HV was used as the abrasive material. The abrasive wear of these pipes was measured experimentally as the wear rate for each case under different sliding speeds, applied loads, and sand conditions (dry or wet). All tests were conducted using sand with a particle size of 200-425 µm, at an ambient temperature of 34.5 °C and 22% humidity (laboratory conditions).
The results show that the material loss due to abrasive wear increased monotonically with …
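The abstract reports wear as a rate derived from measured material loss. As a hedged illustration of the underlying arithmetic (the study's exact formulas are not given), ASTM G65-style results are commonly expressed as volume loss converted from mass loss via density, alongside a simple mass-per-distance wear rate:

```python
def volume_loss_mm3(mass_loss_g, density_g_cm3):
    """ASTM G65-style conversion of measured mass loss to volume loss.
    1 cm^3 = 1000 mm^3, so volume [mm^3] = mass [g] / density [g/cm^3] * 1000."""
    return mass_loss_g / density_g_cm3 * 1000.0

def wear_rate(mass_loss_g, sliding_distance_m):
    """Simple mass-based wear rate in grams per metre of sliding."""
    return mass_loss_g / sliding_distance_m

# Illustrative numbers only (not the study's data): a carbon-steel
# specimen (density ~7.85 g/cm^3) losing 0.157 g over 1000 m of sliding.
v = volume_loss_mm3(0.157, 7.85)   # ~20.0 mm^3
r = wear_rate(0.157, 1000.0)       # ~1.57e-4 g/m
```

Reporting volume loss rather than mass loss lets materials of different densities (here carbon, alloy, and stainless steel) be compared on an equal footing.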
The research aimed to: 1. define family climate for university students; 2. determine the statistical significance of differences in the family climate variable by sex (male/female) and specialization (scientific/humanities); 3. define academic adjustment for university students; and 4. determine the correlation between family climate and academic adjustment. The research sample consisted of 300 male and female students: 150 males from the scientific and humanities specializations and 150 females from the scientific and humanities specializations, randomly selected from the research community. To achieve the objectives of the research, the researcher prepared a tool to measure family climate and adopted the measure of (Azzam, 2010).
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique, creating a random distribution for the model parameters, which are dependent on time t. The LHS technique gives the MLHFD method the advantage of producing fast variation of the parameter values over a number of multidimensional simulations (100, 1000, and 5000). The generated Latin hypercube sample, which is random (non-deterministic) in nature, is further integrated with the FD method t…
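The LHS component of the method above can be sketched in a few lines. This is a generic, stratified Latin hypercube sampler on the unit cube (not the authors' MLHFD code); the sampled points would then parameterize repeated FD solves, with results averaged across the 100/1000/5000 simulations:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Generate a Latin hypercube sample on [0, 1)^n_dims: each
    dimension is split into n_samples equal strata, exactly one point
    is drawn per stratum, and the strata are shuffled per dimension so
    the dimensions are decorrelated."""
    rng = rng or random.Random()
    columns = []
    for _ in range(n_dims):
        # one uniform draw inside each of the n_samples strata
        col = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # transpose the per-dimension columns into sample points
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

points = latin_hypercube(100, 3, random.Random(42))
```

Unlike plain Monte Carlo, every stratum of every parameter is guaranteed to be visited exactly once, which is what gives LHS its fast coverage of the parameter space at small sample counts.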
This study includes estimating the scale parameter, location parameter, and reliability function of the Extreme Value (EXV) distribution by two methods, namely:
- Maximum Likelihood Method (MLE).
- Probability Weighted Moments Method (PWM).
Simulation was used to generate the samples required to estimate the parameters and the reliability function at different sample sizes (n = 10, 25, 50, 100), with specified true values for the parameters, and the simulation experiments were replicated (RP = 1000) times.
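For context, the reliability function being estimated has a closed form once the location and scale parameters are known. The sketch below assumes the largest-extreme-value (Gumbel) form of the EXV distribution, which may differ from the exact parameterization used in the study:

```python
import math

def reliability_exv(t, mu, sigma):
    """Reliability R(t) = 1 - F(t) for the largest Extreme Value
    (Gumbel) distribution with location mu and scale sigma, where
        F(t) = exp(-exp(-(t - mu) / sigma))."""
    z = (t - mu) / sigma
    return 1.0 - math.exp(-math.exp(-z))

# At t = mu the CDF equals exp(-1) ~ 0.368, so R(mu) ~ 0.632.
r = reliability_exv(0.0, 0.0, 1.0)
```

In a simulation study such as this one, the MLE and PWM estimates of mu and sigma would be plugged into this expression and the resulting curve compared against the reliability computed from the true parameter values.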
This study investigates the application of hydraulic acid fracturing to enhance oil production in the Mishrif Formation of the Al-Fakkah oilfield, where flow rates and wellhead pressures have declined as a result of asphaltene deposition and inadequate permeability. Acid fracturing, an established technique for low-permeability carbonate reservoirs, became essential after prior solvent cleaning and acidizing efforts proved inadequate. The paper outlines the protocols established before and after the treatment, emphasizing the importance of careful oversight to guarantee safety and efficacy. In the MiniFrac treatment, 150 barrels of #30 cross-linked gel were injected at 25 barrels per minute, followed by an overflush wi…
Aggregate production planning (APP) is one of the most significant and complicated problems in production planning; it aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to make decisions concerning hiring, firing, overtime, subcontracting, and inventory levels. In this paper, we present a simulated annealing (SA) approach for a multi-objective linear programming formulation of APP. SA is considered a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and req…
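The SA procedure referenced above follows a standard accept/cool loop. The sketch below is a generic simulated-annealing skeleton with a toy, APP-flavoured cost (a quadratic demand-deviation penalty plus a linear production cost); the demand figure, step size, and cooling schedule are illustrative assumptions, not the paper's model:

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, rng,
                        t0=100.0, cooling=0.95, iters=2000):
    """Generic SA loop: always accept improving neighbours, accept
    worsening ones with probability exp(-delta/T), and cool the
    temperature geometrically."""
    current, c_cur = initial, cost(initial)
    best, c_best = current, c_cur
    temp = t0
    for _ in range(iters):
        cand = neighbor(current, rng)
        c_cand = cost(cand)
        delta = c_cand - c_cur
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        temp *= cooling
    return best, c_best

# Toy single-variable cost (illustrative only): linear production cost
# plus a quadratic penalty for missing a demand of 120 units.
demand = 120.0
cost = lambda x: 2.0 * x + (x - demand) ** 2
neighbor = lambda x, rng: max(0.0, x + rng.uniform(-5.0, 5.0))
best, c = simulated_annealing(cost, 0.0, neighbor, random.Random(1))
```

A real APP instance would replace the toy cost with the multi-objective cost over per-period production, workforce, overtime, subcontracting, and inventory variables, and the neighbour move would perturb one of those decision vectors.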