An experimental and numerical study has been carried out to investigate forced convection heat transfer by clean or dusty air in a two-dimensional annular enclosure filled with a porous medium (glass beads) between two vertical concentric cylinders. The outer cylinder has an outside diameter of 82 mm and the inner cylinder an outside diameter of 27 mm. Under steady-state conditions, the inner cylinder surface is maintained at a high temperature by applying a uniform heat flux, while the outer cylinder surface is held at ambient temperature. The investigation covered input powers of 6.3, 4.884, 4.04, and 3.26 W; Reynolds numbers of 300, 700, 1000, 1500, and 2000; and dust ratios (density number N) of 2, 4, 6, and 8. A MATLAB program was built to carry out the numerical solution by discretizing the governing equations with the finite difference method. The local Nusselt number, the average Nusselt number, and the contours of the temperature and velocity fields are presented to show the flow and heat transfer characteristics. The results show that for clean air flow the wall temperature gradually increases along the cylinder length in the flow direction, decreases as the Reynolds number increases, and increases with input power. For dusty air flow, the results show that the wall temperature gradually increases along the axial direction and increases with both Reynolds number and input power; the maximum reduction in heat transfer is 30% for N = 8 at Re = 2000. Comparison between the present experimental and numerical results shows good agreement: the experimental and numerical Nusselt numbers follow the same behavior, with a mean deviation of 12%.
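As a rough illustration of the finite-difference approach this abstract mentions, the sketch below solves only the steady 1-D radial conduction problem across the annular gap, in Python rather than the original MATLAB. The gap radii come from the cylinder diameters above; the grid size, effective conductivity, heated length, and boundary treatment are assumptions, and the paper's full porous-medium, dusty-flow convection model is not reproduced.

```python
import numpy as np

# Minimal 1-D radial finite-difference sketch for steady conduction in the
# annular gap (inner radius 13.5 mm, outer 41 mm, from the 27/82 mm diameters).
r_in, r_out = 0.0135, 0.041          # m
k = 0.18                             # W/m.K, assumed effective conductivity
q = 6.3 / (2 * np.pi * r_in * 0.5)   # W/m^2, assuming a 0.5 m heated length
T_out = 25.0                         # deg C, ambient outer-wall temperature

n = 51
r = np.linspace(r_in, r_out, n)
h = r[1] - r[0]

# Discretize (1/r) d/dr (r dT/dr) = 0 with central differences.
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = 1.0 / h**2 - 1.0 / (2 * h * r[i])
    A[i, i]     = -2.0 / h**2
    A[i, i + 1] = 1.0 / h**2 + 1.0 / (2 * h * r[i])

# Inner wall: uniform heat flux, -k dT/dr = q (one-sided difference).
A[0, 0], A[0, 1], b[0] = -1.0 / h, 1.0 / h, -q / k
# Outer wall: fixed ambient temperature.
A[-1, -1], b[-1] = 1.0, T_out

T = np.linalg.solve(A, b)
print(f"inner-wall temperature ~ {T[0]:.1f} C")
```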
In this work, a fourth-order pseudoparabolic problem is investigated to identify a time-dependent potential term under periodic conditions, namely an integral condition and an overdetermination condition. The existence and uniqueness of the solution to the inverse problem are established. The proposed method discretizes the pseudoparabolic equation with a finite difference scheme and applies an iterative optimization algorithm to the inverse problem, which is viewed as a nonlinear least-squares minimization. The optimization algorithm minimizes the difference between the numerically computed solution and the measured data. Tikhonov's regularization method is also applied to obtain stable results. Two …
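The inverse step described here can be sketched generically as a Tikhonov-regularized nonlinear least-squares fit of a discrete time-dependent potential. In the sketch below, `forward()` is a hypothetical placeholder standing in for the finite-difference solution of the fourth-order pseudoparabolic equation, which is not reproduced; only the regularized minimization structure is illustrated, with an assumed regularization parameter.

```python
import numpy as np
from scipy.optimize import least_squares

# Recover a time-dependent potential p(t) by Tikhonov-regularized nonlinear
# least squares. forward() is a placeholder for the PDE solver.
t = np.linspace(0.0, 1.0, 41)

def forward(p):
    # Hypothetical forward model: maps the discrete potential p(t) to the
    # overdetermination data (e.g. an integral measurement of u over time).
    return np.cumsum(np.exp(-p)) * (t[1] - t[0])

p_true = 1.0 + 0.5 * np.sin(2 * np.pi * t)
data = forward(p_true) + 1e-3 * np.random.default_rng(0).standard_normal(t.size)

alpha = 1e-4  # Tikhonov parameter (chosen by trial or a discrepancy rule)

def residual(p):
    # Stacked residual: its squared 2-norm equals
    # ||forward(p) - data||^2 + alpha * ||p||^2.
    return np.concatenate([forward(p) - data, np.sqrt(alpha) * p])

sol = least_squares(residual, x0=np.ones_like(t))
print("data misfit:", np.linalg.norm(forward(sol.x) - data))
```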
The electrocardiogram (ECG) is the recording of the electrical potential of the heart versus time. The analysis of ECG signals has been widely used in cardiac pathology to detect heart disease. ECGs are non-stationary signals that are often contaminated by different types of noise from different sources. In this study, simulated noise models were proposed for power-line interference (PLI), electromyogram (EMG) noise, baseline wander (BW), white Gaussian noise (WGN), and composite noise. Various processing techniques have recently been proposed for suppressing noise and extracting the essential morphology of an ECG signal. In this paper, the wavelet transform (WT) is applied to noisy ECG signals. The graphical user interface (GUI) …
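A minimal sketch of wavelet-threshold denoising of the kind this abstract applies to noisy ECG is shown below, using PyWavelets. The signal is synthetic, and the wavelet (db4), decomposition level, and universal soft threshold are assumptions; the paper's GUI, noise models, and wavelet choices are not reproduced.

```python
import numpy as np
import pywt

# Synthetic "ECG-like" signal plus white Gaussian noise.
fs = 360
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
noisy = clean + 0.2 * np.random.default_rng(1).standard_normal(t.size)

# Multilevel DWT, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))          # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

print("RMSE before/after:",
      np.sqrt(np.mean((noisy - clean) ** 2)),
      np.sqrt(np.mean((denoised - clean) ** 2)))
```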
Generally, radiologists analyse Magnetic Resonance Imaging (MRI) by visual inspection to detect and identify the presence of tumours or abnormal tissue in brain MR images. The huge number of such MR images makes this visual interpretation process not only laborious and expensive but often erroneous. Furthermore, the sensitivity of the human eye and brain in elucidating such images decreases as the number of cases increases, especially when only some slices contain information about the affected area. Therefore, an automated system for the analysis and classification of MR images is mandatory. In this paper, we propose a new method for abnormality detection from T1-weighted MRI of human head scans using three planes, including the axial, coronal, and sagittal planes …
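The three-plane view this abstract refers to amounts to slicing a 3-D head volume along its three orthogonal axes. The sketch below illustrates only that slicing step with NumPy; the volume is random stand-in data (a real study would load a T1-weighted scan, e.g. with nibabel), and the axis order is an assumption.

```python
import numpy as np

# Stand-in 3-D volume; assumed axis order (sagittal, coronal, axial).
volume = np.random.default_rng(2).random((181, 217, 181))

i, j, k = (s // 2 for s in volume.shape)   # central slice indices
sagittal = volume[i, :, :]   # fixed left-right position
coronal  = volume[:, j, :]   # fixed front-back position
axial    = volume[:, :, k]   # fixed top-bottom position
print(sagittal.shape, coronal.shape, axial.shape)
```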
Background: The main objective was to compare the outcome of single-layer interrupted extra-mucosal sutures with that of double-layer suturing in the closure of colostomies.
Subjects and Methods: Sixty-seven patients undergoing colostomy closure were assigned in a prospective randomized fashion to either single-layer extra-mucosal anastomosis (Group A) or double-layer anastomosis (Group B). Primary outcome measures included the mean time taken for anastomosis, immediate postoperative complications, and the mean duration of hospital stay. Secondary outcome measures assessed the postoperative return of bowel function and the overall mean cost. Statistical analysis was performed with the chi-square test and Student's t-test.
Results: …
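For reference, the two tests named above can be run as follows with SciPy. The counts and anastomosis times below are hypothetical, made up purely for illustration; they are not the study's data.

```python
import numpy as np
from scipy import stats

# Chi-square test on a 2x2 table: complications vs. none, per group
# (hypothetical counts).
table = np.array([[5, 28],    # Group A
                  [9, 25]])   # Group B
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Student's t-test on mean anastomosis time in minutes (hypothetical values).
time_a = np.array([22, 25, 21, 24, 23, 26])
time_b = np.array([31, 29, 33, 30, 28, 32])
t_stat, p_t = stats.ttest_ind(time_a, time_b)

print(f"chi-square p = {p_chi:.3f}, t-test p = {p_t:.4f}")
```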
An impressed current cathodic protection (ICCP) system requires measurements of extremely low-level electrical quantities. The present experimental work utilized the Adafruit INA219 sensor module to acquire the voltage, current, and power of a test load that consumes very little power and simulates an ICCP system. The main problem is adapting the INA219 sensor to the LabVIEW environment, owing to the absence of a library for this sensor. This work is devoted to adapting the Adafruit INA219 sensor module to the LabVIEW environment by creating, developing, and successfully testing a SubVI ready for employment in an ICCP system. The sensor output was monitored with an Arduino …
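For comparison with the LabVIEW SubVI route described above, the INA219 can also be read directly over I²C from Python using its datasheet register map. The sketch below assumes the Adafruit breakout's 0.1 Ω shunt and default address 0x40 on I²C bus 1, and requires actual hardware to run; it is not the paper's implementation.

```python
from smbus2 import SMBus

INA219_ADDR = 0x40
REG_SHUNT, REG_BUS = 0x01, 0x02   # shunt LSB = 10 uV, bus LSB = 4 mV

def read_reg(bus, reg):
    # INA219 registers are 16-bit big-endian.
    hi, lo = bus.read_i2c_block_data(INA219_ADDR, reg, 2)
    return (hi << 8) | lo

with SMBus(1) as bus:
    shunt_raw = read_reg(bus, REG_SHUNT)
    if shunt_raw & 0x8000:               # sign-extend two's complement
        shunt_raw -= 1 << 16
    shunt_v = shunt_raw * 10e-6
    bus_v = (read_reg(bus, REG_BUS) >> 3) * 4e-3  # value sits in bits 15..3
    current = shunt_v / 0.1              # Adafruit board: 0.1 ohm shunt
    print(f"bus {bus_v:.3f} V, current {current * 1e3:.2f} mA, "
          f"power {bus_v * current * 1e3:.2f} mW")
```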
Background: The aim of this study was to assess the 2-year pulp survival of teeth with deep carious lesions excavated using a self-limiting protocol in a single-blind randomized controlled clinical trial. Methods: At baseline, 101 teeth with deep carious lesions in 86 patients were excavated randomly using self-limiting or control protocols. Standardized clinical examinations and periapical radiographs of the teeth were performed after 1- and 2-year follow-ups (REC 14/LO/0880). Results: During the 2-year period of the study, 24 teeth failed (16 and 8 at T12 and T24, respectively). The final analysis shows that 39/63 (61.9%) of teeth were deemed successful (16/33 (48.4%) and 23/30 (76.6%) in the control and experimental groups, respectively), with …
This research presents an online cognitive tuning control algorithm for the nonlinear path-tracking controller of a dynamic wheeled mobile robot, to stabilize it and make it follow a continuous reference path with minimum tracking pose error. The goal of the proposed hybrid Bees-PSO algorithm is to find and tune the control gains of the nonlinear (neural and back-stepping) controllers online, with a simple and fast tuning technique, in order to obtain the best wheel torque actions for the cart mobile robot from the two proposed controllers. Simulation results (MATLAB 2012a) show that the nonlinear neural controller with the hybrid Bees-PSO cognitive algorithm is m…
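To make the gain-tuning idea concrete, the sketch below runs a plain particle swarm optimization over two controller gains on a toy second-order plant. The paper's hybrid Bees-PSO, its neural and back-stepping controllers, and the mobile-robot dynamics are not reproduced; `cost()`, the swarm size, and the PSO coefficients are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(gains):
    # Toy double-integrator plant under PD-like gains; the accumulated
    # absolute tracking error (reference = 0) is the fitness to minimize.
    kp, kd = gains
    x, v, err, dt = 1.0, 0.0, 0.0, 0.01
    for _ in range(500):
        u = -kp * x - kd * v
        v += u * dt
        x += v * dt
        err += abs(x) * dt
    return err

n, dim = 20, 2
pos = rng.uniform(0.0, 20.0, (n, dim))       # candidate (kp, kd) pairs
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pcost.argmin()]

for _ in range(50):
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 20.0)
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    gbest = pbest[pcost.argmin()]

print("tuned gains (kp, kd):", gbest, "cost:", pcost.min())
```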
Each language in the world has its own conventions for the articles that accompany nouns. Some languages have no articles at all, while in others the articles belong to a single class, that is, they do not distinguish masculine and feminine, as in our Arabic language. Also, in some languages the noun comes after the article.
The main aim of our research is to analyze the usage of these articles, and their presence or absence in sentence structure, for learners of Spanish as a foreign language.
The usage of these articles in Spanish is one of the problems that students face in Spanish grammar; at the same time, it poses a problem in translation because …
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework must usually be fed a significant amount of labeled data in order to learn representations automatically. Ultimately, more data generally yields a better DL model, though performance is also application-dependent. This issue is the main barrier for …