This paper studies the effect of a circular arrangement of Y-shaped fins on improving the low thermal response rate of a double-tube heat exchanger containing paraffin as the phase change material (PCM). ANSYS software is employed to perform computational fluid dynamics (CFD) simulations of the heat exchanger, including the fluid flow, heat transfer, and phase change processes. The optimum fin configuration is derived through a sensitivity analysis of the geometrical parameters of the Y-shaped fin. For the same fin height (10 mm), using Y-shaped fins instead of straight fins reduces the solidification time by almost 22% and enhances the discharging rate by almost 26%. The results demonstrate that the solidification time is inversely proportional to the fin length. The heat release rate for the case with the longest fins (stem length of 10 mm) is 39 W, almost 2.8 times higher than that for fins with a stem length of 5 mm. The case with a tributary angle of 22.5° solidified in 55 min, faster than the other studied angles. Increasing the number of fins significantly affects the solidification time and discharging rate: raising the number of fins from 3 to 9 improves the heat transfer rate by 194%. The advantages of Y-shaped fins are well known in heat transfer applications; in this study, their circular arrangement is characterized toward higher performance during the solidification process for the first time.
The electrocardiogram (ECG) is the recording of the electrical potential of the heart versus time. The analysis of ECG signals has been widely used in cardiac pathology to detect heart disease. ECGs are non-stationary signals that are often contaminated by different types of noise from different sources. In this study, simulated noise models were proposed for power-line interference (PLI), electromyogram (EMG) noise, baseline wander (BW), white Gaussian noise (WGN), and composite noise. Various processing techniques have recently been proposed for suppressing these noises and extracting the essential morphology of an ECG signal. In this paper, the wavelet transform (WT) is applied to noisy ECG signals. The graphical user interface (GUI)
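To make the denoising step concrete, the following is a minimal Python sketch of wavelet-based ECG denoising, assuming a synthetic ECG-like signal, illustrative noise amplitudes, the "db4" wavelet, and the universal soft-threshold rule; none of these choices are taken from the paper itself.

```python
import numpy as np
import pywt

fs = 360                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s record
rng = np.random.default_rng(0)

# Synthetic stand-in for a clean ECG: sharp periodic peaks plus a slow component
clean = 1.2 * np.sin(2 * np.pi * 1.0 * t) ** 64 + 0.25 * np.sin(2 * np.pi * 1.0 * t + 0.7)

# Simulated noise models (amplitudes are illustrative assumptions)
pli = 0.2 * np.sin(2 * np.pi * 50 * t)     # power-line interference (50 Hz)
bw  = 0.3 * np.sin(2 * np.pi * 0.3 * t)    # baseline wander
wgn = 0.1 * rng.normal(size=t.size)        # white Gaussian noise
noisy = clean + pli + bw + wgn             # composite noise

# Wavelet denoising: decompose, soft-threshold the detail coefficients, reconstruct.
# Note: baseline wander lives mostly in the approximation band and would need a
# separate correction step; this sketch only thresholds the detail bands.
coeffs = pywt.wavedec(noisy, "db4", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest details
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE after :", np.sqrt(np.mean((denoised - clean) ** 2)))
```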
The main objectives of this study are to investigate the enhancement of the load-carrying capacity of asymmetrical castellated beams encased in reactive powder concrete (RPC) with lacing reinforcement, the effect of the gap between the top and bottom parts of the asymmetrical castellated steel beam at the web post, and the serviceability of the confined asymmetrical castellated steel beams. This study presents two-concentrated-load test results for four asymmetrical castellated beam specimens encased in RPC with laced reinforcement. The encasement of the asymmetrical castellated steel beam consists of the unstiffened flange element height filled with RPC on each side and laced reinforcement which are use
This research is an attempt to study aspects of syntactic deviation in Abdul-Wahhab Al-Bayyati's poetry with reference to English. It reviews this phenomenon from an extra-linguistic viewpoint. It adopts a functional approach based on the stipulations of Systemic Functional Grammar as developed by M.A.K. Halliday and others working within this approach. Within a related perspective, Fairley's taxonomy (1975) has been chosen to analyze the types of syntactic deviation because it has been found suitable and relevant for describing this phenomenon. The research hypothesizes that syntactic deviation is pervasive in Arabic poetry in general, and in Abdul-Wahhab Al-Bayyati's poetry in particular, and can be analyzed in the light of Systemic Functional Grammar
Background: The present study aimed to assess the distribution, prevalence, and severity of malocclusion in Baghdad governorate in relation to gender and residency. Materials and Methods: A multi-stage stratified sampling technique was used in this investigation to make the sample representative of the target population. The sample consisted of 2700 intermediate school students (1349 males and 1351 females) aged 13 years, representing 3% of the total target population. A questionnaire was used to determine the students' perception of occlusion and demand for orthodontic treatment, and the occlusal features were assessed by direct intraoral measurement using a vernier caliper and an instrument to measure rotated and displaced teeth. Results a
Multiple linear regression is concerned with studying and analyzing the relationship between a dependent variable and a set of explanatory variables, and from this relationship the values of the dependent variable are predicted. In this paper, a multiple linear regression model with three covariates was studied in the presence of autocorrelated errors, where the random errors follow an exponential distribution. Three methods were compared: generalized least squares, the M robust method, and the Laplace robust method. We employed simulation studies and calculated the mean squared error for sample sizes of 15, 30, 60, and 100. Further, we applied the best method to real experimental data representing the varieties of
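As an illustration of this type of comparison (not the paper's actual simulation design), the Python sketch below generates data with three covariates and AR(1) errors driven by centred exponential innovations, then contrasts generalized least squares, an M robust fit (Huber), and a Laplace-type fit (median/LAD regression) by the mean squared error of their coefficient estimates; the coefficient values, AR(1) parameter, and number of replications are all assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
beta_true = np.array([2.0, 1.5, -1.0, 0.5])   # intercept + three covariates (assumed values)
n, rho, reps = 60, 0.6, 200                    # sample size, AR(1) coefficient, replications

def ar1_exponential_errors(n, rho):
    """AR(1) errors whose innovations are centred exponential variates."""
    innov = rng.exponential(scale=1.0, size=n) - 1.0   # mean-zero exponential innovations
    e = np.empty(n)
    e[0] = innov[0]
    for k in range(1, n):
        e[k] = rho * e[k - 1] + innov[k]
    return e

mse = {"GLS": 0.0, "M robust": 0.0, "Laplace (LAD)": 0.0}
for _ in range(reps):
    X = sm.add_constant(rng.normal(size=(n, 3)))       # three covariates plus intercept
    y = X @ beta_true + ar1_exponential_errors(n, rho)

    # GLS with the (here, known) AR(1) correlation structure
    sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    fits = {
        "GLS": sm.GLS(y, X, sigma=sigma).fit().params,
        "M robust": sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().params,
        "Laplace (LAD)": sm.QuantReg(y, X).fit(q=0.5).params,  # median regression ~ Laplace ML
    }
    for name, b in fits.items():
        mse[name] += np.mean((b - beta_true) ** 2) / reps

for name, val in mse.items():
    print(f"{name:14s} MSE of coefficients: {val:.4f}")
```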
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique to create a random distribution for the model parameters, which are dependent on time. The LHS technique gives the MLHFD method the advantage of producing fast variation of the parameter values via a number of multidimensional simulations (100, 1000, and 5000). The generated Latin hypercube sample, which is random or non-deterministic in nature, is further integ
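The general mechanics can be sketched as follows (a hedged illustration only): Latin hypercube sampling draws parameter sets, a finite difference scheme is run for each set, and the resulting solutions are averaged. A simple logistic-type ODE is used here as a hypothetical stand-in; the actual cocaine abuse model, its parameters, and their ranges are not reproduced.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-in model: du/dt = a*u*(1 - u) - b*u, solved by explicit finite differences.
def fd_solve(a, b, u0=0.1, t_end=20.0, dt=0.01):
    steps = int(t_end / dt)
    u = np.empty(steps + 1)
    u[0] = u0
    for k in range(steps):
        u[k + 1] = u[k] + dt * (a * u[k] * (1.0 - u[k]) - b * u[k])   # forward Euler FD update
    return u

# Latin hypercube sample of the two parameters over assumed ranges
n_sims = 1000
sampler = qmc.LatinHypercube(d=2, seed=0)
params = qmc.scale(sampler.random(n=n_sims), l_bounds=[0.2, 0.01], u_bounds=[0.8, 0.1])

# Run the FD solver for every sampled parameter set, then average (the "mean" in MLHFD)
runs = np.array([fd_solve(a, b) for a, b in params])
mean_solution = runs.mean(axis=0)
lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)

print("mean solution at t_end:", mean_solution[-1])
print("95% band at t_end    : [%.4f, %.4f]" % (lo[-1], hi[-1]))
```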
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have only small or inadequate datasets for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data to automatically learn representations. Ultimately, a larger amount of data generally yields a better DL model, and its performance is also application dependent. This issue is the main barrier for