Automated extraction of essential data from electrocardiography (ECG) recordings has long been a significant research topic. The main focus of digital processing is to locate the fiducial points that mark the onset and offset of the P, QRS, and T waves based on their waveform properties. Unavoidable noise during ECG acquisition and inherent physiological differences among individuals make it challenging to identify these reference points accurately, resulting in suboptimal performance. Our method proceeds in several primary stages built on preliminary processing of the ECG signal: the raw data are prepared and converted into readable files, empty records are removed, and the signal is resampled to a uniform length of 250 samples so that noise can be removed accurately; the QRS complex is then identified first and the P and T waves implicitly; finally, the required peak is determined and the signal is cut around it. A pre-trained U-Net model is used for deep learning: it takes an ECG signal with a customisable sampling rate as input and outputs a list of the beginning and end points of the P and T waves as well as the QRS complexes. The distinguishing features of our segmentation method are its high speed, minimal parameter requirements, and strong generalization capability, and its output can be used in diagnosing diseases or in biometric systems.
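As a rough illustration of the preprocessing described above (resampling each beat to a fixed length of 250 samples, locating a peak, and cutting a segment around it), the sketch below uses placeholder logic: the function names, the naive amplitude-based peak picker, and the window width are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch of the ECG preprocessing steps: resample to 250 samples,
# pick a peak, cut a window. All names and thresholds are illustrative.

def resample(signal, target_len=250):
    """Linearly interpolate a 1-D signal to a fixed length (here 250)."""
    n = len(signal)
    if n == target_len:
        return list(signal)
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

def find_qrs_peak(signal):
    """Naive QRS peak pick: index of the maximum absolute amplitude."""
    return max(range(len(signal)), key=lambda i: abs(signal[i]))

def cut_around_peak(signal, peak, half_width=40):
    """Cut a fixed window centred on the detected peak."""
    lo = max(0, peak - half_width)
    hi = min(len(signal), peak + half_width)
    return signal[lo:hi]

beat = [0.0] * 100
beat[50] = 1.0                       # synthetic R peak
fixed = resample(beat, 250)          # unify signal width at 250 samples
peak = find_qrs_peak(fixed)
segment = cut_around_peak(fixed, peak)
```

In the actual method, the peak picking and boundary detection would be performed by the pre-trained U-Net rather than a simple amplitude rule; this sketch only shows the shape of the data flow.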
In low-latitude areas (latitude angle less than 10°), the solar radiation entering the solar still increases as the cover slope approaches the latitude angle. However, the amount of condensed water that falls back toward the solar-still basin also increases in this case. Consequently, the solar-still yield is significantly decreased, and the accuracy of the prediction method is affected. This reduction in the yield and in the accuracy of the prediction method depends on the time the condensed water stays on the inner side of the condensing cover without collection, because more drops will fall back into the basin of the solar still. Different numbers of scraper motions per hour (NSM), that is
Grabisch and Labreuche recently proposed a generalization of capacities called bi-capacities. More recently, the author proposed a new approach for studying bi-capacities by introducing a notion of ternary-element sets. In this paper, we establish several results based on this approach, including the bipolar Möbius transform, the importance index, and the interaction index of bi-capacities.
Graphite coated electrodes (GCE) based on molecularly imprinted polymers were fabricated for the selective potentiometric determination of Risperidone (Ris). The molecularly imprinted (MIP) and non-imprinted (NIP) polymers were synthesized by bulk polymerization using Ris as a template, acrylic acid (AA) and acrylamide (AAm) as monomers, ethylene glycol dimethacrylate (EGDMA) as a cross-linker, and benzoyl peroxide (BPO) as an initiator. The imprinted and non-imprinted membranes were prepared using dioctyl phthalate (DOP) and dibutyl phthalate (DBP) as plasticizers in a PVC matrix, and the membranes were coated on graphite electrodes. The MIP electrodes using
Abstract: In this research, nanofibers were prepared by electrospinning from poly(vinyl alcohol) (PVA)/TiO2. The emission spectrum of the solution was studied at 772 nm. Several process parameters were investigated: the concentration of PVA, the distance from the nozzle tip to the grounded collector (gap distance), and finally the applied high voltage. The optimum condition for preparing narrow nanofibers was found at a PVA concentration of 16 gm, with a fiber diameter of 20 nm.
The majority of the environmental outputs from gas refineries are oily wastewater. This research presents a novel combination of response surface methodology and artificial neural networks to model and optimize the oil content concentration in oily wastewater. Response surface methodology based on a central composite design yielded a highly significant linear model with a P value < 0.0001, a determination coefficient R2 of 0.747, an adjusted R2 of 0.706, and a predicted R2 of 0.643. In addition, analysis of variance showed flow to be the most effective parameter among the others, and verification of the optimization results revealed a minimum oil content of 8.5 ± 0.7 ppm at an initial oil content of 991 ppm, tempe
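The fit statistics quoted above (R2, adjusted R2) follow from their standard definitions. The sketch below implements them on made-up data; the example values are illustrative and are not the study's measurements.

```python
# Standard coefficient-of-determination formulas, illustrated on toy data.

def r_squared(y, y_hat):
    """R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p model terms (excluding intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def fit_line(x, y):
    """Ordinary least-squares fit of a straight line (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy data lying exactly on a line, so R^2 comes out as 1.0.
x = [0, 1, 2, 3]
y = [1, 3, 5, 7]
slope, intercept = fit_line(x, y)
y_hat = [slope * v + intercept for v in x]
r2 = r_squared(y, y_hat)
```

The adjusted R2 is always below R2 for any model with at least one term, which is why the study reports 0.706 adjusted against 0.747 raw.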
In this research, a pulsed Nd:YAG laser with a 300 µs pulse duration, operating in the TEM00 mode at a 1.06 µm wavelength and at energies of 0.5-3 J, was employed to drill brass, a material used in industrial applications. The drilling process was assisted by an electric field. This resulted in a 45% increase in the hole aspect ratio and a 25% decrease in the hole taper relative to their values under ordinary drilling conditions using the same input energy.
Abstract
The research aims to identify the level of effectiveness of the teaching practices of science and mathematics teachers in light of the national framework for future skills in Omani schools. To achieve the objectives of the study, the researchers used the descriptive approach and designed an observation card consisting of (30) items distributed across three axes: basic skills, practical skills, and technical skills. After verifying the validity and reliability of the tools, they were applied to a sample of (116) teachers. The results of the research revealed that the level of effectiveness of the teaching practices of mathematics teachers recorded a medium degree, with a mean of (3.05). The results a
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones, and it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic
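To make the clustering idea concrete, here is a minimal one-dimensional k-means sketch. The toy data and k = 2 are assumptions for illustration only, not the study's brain-tumor dataset, and the study's first model uses FCM rather than hard k-means.

```python
# Minimal 1-D k-means: alternate between assigning points to the nearest
# centroid and recomputing each centroid as its cluster's mean.

def kmeans_1d(values, k, iters=20):
    vs = sorted(values)
    # Initialise centroids on evenly spaced sample points (assumes k >= 2).
    centroids = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated groups of toy "expression" values.
data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
centroids, clusters = kmeans_1d(data, k=2)
```

FCM differs from this hard assignment by giving each point a membership degree in every cluster and weighting the centroid update by those degrees.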