Double-layer micro-perforated panels (MPPs) have been studied extensively as sound absorption systems that increase the absorption performance of single-layer MPPs. However, existing models indicate that there is still room for improvement in the absorption frequency bands of the double-layer MPP. This study presents a double-layer MPP formed from two single MPPs with inhomogeneous perforation, backed by multiple cavities of varying depths. The theoretical formulation is developed using the electrical equivalent circuit method to calculate the absorption coefficient under normal sound incidence. The simulation results show that the proposed model can produce an absorption coefficient with a wider absorption bandwidth than conventional double- and even triple-layer MPPs. The bandwidth can be extended toward higher frequencies by decreasing the cavity depth behind a sub-MPP with a small hole diameter and a high perforation ratio, and toward lower frequencies by increasing the cavity depth behind a sub-MPP with a large hole diameter and a small perforation ratio. The experimental data, measured with an impedance tube, are in good agreement with the predicted results.
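The abstract does not reproduce the circuit equations; as a rough illustration of the equivalent-circuit approach for normal-incidence absorption, the sketch below implements Maa's classical single-MPP impedance together with rigid-backed cavity impedances for a conventional (homogeneous) double-layer arrangement. The functions `mpp_impedance` and `cavity_impedance` and every parameter value are illustrative assumptions, not the paper's inhomogeneous multi-cavity model.

```python
import numpy as np

# Physical constants for air at room conditions
RHO = 1.21        # air density [kg/m^3]
C = 343.0         # speed of sound [m/s]
ETA = 1.81e-5     # dynamic viscosity of air [Pa.s]

def mpp_impedance(f, d, t, sigma):
    """Normalized specific impedance of one MPP layer (Maa's classical model).
    d: hole diameter [m], t: panel thickness [m], sigma: perforation ratio [-]."""
    omega = 2 * np.pi * f
    k = d * np.sqrt(omega * RHO / (4 * ETA))          # perforate constant
    r = (32 * ETA * t) / (sigma * RHO * C * d**2) * (
        np.sqrt(1 + k**2 / 32) + np.sqrt(2) / 32 * k * d / t)
    m = (omega * t) / (sigma * C) * (
        1 + 1 / np.sqrt(9 + k**2 / 2) + 0.85 * d / t)
    return r + 1j * m

def cavity_impedance(f, depth):
    """Normalized impedance of a rigid-backed air cavity of given depth [m]."""
    k0 = 2 * np.pi * f / C
    return -1j / np.tan(k0 * depth)

def alpha_double_layer(f, layer1, layer2):
    """Normal-incidence absorption coefficient of a double-layer MPP.
    Equivalent circuit: Z = Z1 + Zc1 || (Z2 + Zc2)."""
    d1, t1, s1, D1 = layer1
    d2, t2, s2, D2 = layer2
    z1 = mpp_impedance(f, d1, t1, s1)
    z2 = mpp_impedance(f, d2, t2, s2)
    zc1 = cavity_impedance(f, D1)
    zc2 = cavity_impedance(f, D2)
    z_back = zc1 * (z2 + zc2) / (zc1 + z2 + zc2)       # parallel branch of circuit
    z = z1 + z_back
    return 4 * z.real / ((1 + z.real)**2 + z.imag**2)

# Illustrative (not the paper's) parameters: (hole diameter, thickness, ratio, cavity depth)
f = np.linspace(100, 4000, 500)
alpha = alpha_double_layer(f, (0.4e-3, 0.5e-3, 0.01, 0.03),
                              (0.8e-3, 0.5e-3, 0.005, 0.05))
print(f"Peak absorption {alpha.max():.2f} at {f[alpha.argmax()]:.0f} Hz")
```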
The main objectives of this study are to examine the enhancement of the load-carrying capacity of asymmetrical castellated beams when the beams are encased in reactive powder concrete (RPC) with lacing reinforcement, the effect of the gap between the top and bottom parts of the asymmetrical castellated steel beam at the web post, and the serviceability of the confined asymmetrical castellated steel beam. This study presents two-concentrated-load test results for four asymmetrical castellated beam specimens encased in RPC with laced reinforcement. The encasement of the asymmetrical castellated steel beam consists of filling the height of the unstiffened flange elements with RPC on each side, together with laced reinforcement, which is used …
This research is an attempt to study aspects of syntactic deviation in Abdul-Wahhab Al-Bayyati's poetry with reference to English. It reviews this phenomenon from an extra-linguistic viewpoint and adopts a functional approach based on the stipulations of Systemic Functional Grammar as developed by M.A.K. Halliday and others. Within this perspective, Fairly's taxonomy (1975) has been chosen to analyze the types of syntactic deviation because it has been found suitable and relevant for describing this phenomenon. The research hypothesizes that syntactic deviation is pervasive in Arabic poetry in general, and in Abdul-Wahhab Al-Bayyati's poetry in particular, and that it can be analyzed in the light of Systemic Functional Grammar.
Background: The present study aimed to assess the distribution, prevalence, and severity of malocclusion in Baghdad governorate in relation to gender and residency. Materials and Methods: A multi-stage stratified sampling technique was used to make the sample representative of the target population. The sample consisted of 2700 intermediate school students (1349 males and 1351 females) aged 13 years, representing 3% of the total target population. A questionnaire was used to determine the students' perception of occlusion and their orthodontic treatment demand, and occlusal features were assessed by direct intraoral measurement using a vernier and an instrument to measure rotated and displaced teeth. Results: …
Multiple linear regression is concerned with studying and analyzing the relationship between a dependent variable and a set of explanatory variables, from which the values of the dependent variable are predicted. In this paper, a multiple linear regression model with three covariates was studied in the presence of autocorrelated errors, where the random errors follow an exponential distribution. Three methods were compared: general least squares, M-robust, and the Laplace robust method. Simulation studies were employed, and the statistical criterion of mean squared error was calculated for sample sizes of 15, 30, 60, and 100. The best method was then applied to real experimental data representing the varieties of …
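As a hedged sketch of the comparison described above (assuming ordinary least squares as a stand-in baseline for the paper's general least squares, a Huber-type M-estimator, and an IRLS approximation to the Laplace/LAD estimator, with AR(1)-correlated centred-exponential errors), the following Python code estimates coefficient mean squared error over the stated sample sizes. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, beta, rho=0.5):
    """Three covariates; AR(1)-correlated errors with centred exponential innovations."""
    X = np.column_stack([np.ones(n)] + [rng.normal(size=n) for _ in range(3)])
    e = rng.exponential(1.0, n) - 1.0          # zero-mean exponential innovations
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]           # autocorrelated error process
    return X, X @ beta + u

def irls(X, y, weight_fn, iters=25):
    """Generic iteratively reweighted least squares."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ b
        W = np.diag(weight_fn(r))
        b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return b

huber = lambda r, c=1.345: np.minimum(1.0, c / np.maximum(np.abs(r), 1e-8))  # M-robust weights
laplace = lambda r: 1.0 / np.maximum(np.abs(r), 1e-8)                        # Laplace (LAD) weights

beta_true = np.array([1.0, 2.0, -1.0, 0.5])
reps = 200
for n in (15, 30, 60, 100):
    mse = {"OLS": 0.0, "M-robust": 0.0, "Laplace": 0.0}
    for _ in range(reps):
        X, y = simulate(n, beta_true)
        est = {"OLS": np.linalg.lstsq(X, y, rcond=None)[0],
               "M-robust": irls(X, y, huber),
               "Laplace": irls(X, y, laplace)}
        for name, b in est.items():
            mse[name] += np.sum((b - beta_true) ** 2) / reps
    print(n, {k: round(v, 3) for k, v in mse.items()})
```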
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique to create a random distribution for the model parameters, which are dependent on time. The LHS technique gives the MLHFD method the advantage of producing fast variation of the parameter values via a number of multidimensional simulations (100, 1000, and 5000). The generated Latin hypercube sample, which is random or non-deterministic in nature, is further integrated …
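The cocaine abuse model equations are not given in the abstract; the sketch below only illustrates the general MLHFD idea, combining an explicit finite-difference (forward Euler) solver for a stand-in two-compartment model with Latin hypercube sampling of its parameters over the stated simulation counts. The function `fd_solve`, its parameters, and their bounds are placeholders, not the paper's model.

```python
import numpy as np
from scipy.stats import qmc

def fd_solve(beta, gamma, T=20.0, dt=0.01):
    """Explicit (forward Euler) finite-difference solution of a simple
    two-compartment susceptible/user model (stand-in for the paper's model)."""
    steps = int(T / dt)
    S, U = 0.9, 0.1                            # initial proportions
    for _ in range(steps):
        dS = -beta * S * U
        dU = beta * S * U - gamma * U
        S, U = S + dt * dS, U + dt * dU
    return U                                   # proportion of users at final time

# Latin hypercube sample of the uncertain parameters (beta, gamma)
for n_sim in (100, 1000, 5000):
    sampler = qmc.LatinHypercube(d=2, seed=1)
    unit = sampler.random(n_sim)
    params = qmc.scale(unit, l_bounds=[0.2, 0.05], u_bounds=[0.6, 0.2])
    results = np.array([fd_solve(b, g) for b, g in params])
    print(f"{n_sim:5d} simulations: mean U(T) = {results.mean():.4f}, "
          f"std = {results.std():.4f}")
```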
The electrocardiogram (ECG) is the recording of the electrical potential of the heart versus time. The analysis of ECG signals has been widely used in cardiac pathology to detect heart disease. ECGs are non-stationary signals that are often contaminated by different types of noise from different sources. In this study, simulated noise models were proposed for power-line interference (PLI), electromyogram (EMG) noise, baseline wander (BW), white Gaussian noise (WGN), and composite noise. Various processing techniques have recently been proposed for suppressing noise and extracting the essential morphology of an ECG signal. In this paper, the wavelet transform (WT) is applied to noisy ECG signals. The graphical user interface (GUI) …
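As an illustrative sketch of the workflow described (simulated PLI, baseline wander, and white Gaussian noise added to a crude synthetic ECG, followed by wavelet-transform denoising), the code below uses the PyWavelets library with a universal soft threshold. The signal model, wavelet choice (`db6`), and all amplitudes are assumptions for demonstration, not the study's settings.

```python
import numpy as np
import pywt

fs = 360                                            # sampling frequency [Hz]
t = np.arange(0, 10, 1 / fs)

# Crude synthetic ECG: periodic Gaussian "QRS" spikes (placeholder signal)
ecg = sum(np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
          for beat in np.arange(0.5, 10, 0.8))

# Simulated noise models: power-line interference, baseline wander, white Gaussian noise
pli = 0.2 * np.sin(2 * np.pi * 50 * t)
bw  = 0.3 * np.sin(2 * np.pi * 0.3 * t)
wgn = 0.05 * np.random.default_rng(0).normal(size=t.size)
noisy = ecg + pli + bw + wgn                        # composite noise

# Wavelet denoising: decompose, soft-threshold detail coefficients, reconstruct
coeffs = pywt.wavedec(noisy, "db6", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest level
thr = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
coeffs[0] = np.zeros_like(coeffs[0])                # crudely suppress baseline wander band
denoised = pywt.waverec(coeffs, "db6")[: noisy.size]

snr = lambda x: 10 * np.log10(np.sum(ecg**2) / np.sum((x - ecg) ** 2))
print(f"SNR before: {snr(noisy):.1f} dB, after: {snr(denoised):.1f} dB")
```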
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have too little or inadequate data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data in order to learn representations automatically. Ultimately, a larger amount of data generally produces a better DL model, although performance is also application-dependent. This issue is the main barrier for …