In this study, plain concrete simply supported beams subjected to two-point loading were analyzed in flexure. The numerical model of the beam was constructed as a meso-scale representation of concrete as a two-phase material (aggregate and mortar). The fracture process of the concrete beams under loading was investigated both in the laboratory and with the numerical models. The Extended Finite Element Method (XFEM) was employed to treat the discontinuities that appeared during the fracture process in concrete. The finite element method with the standard/explicit feature was utilized for the numerical analysis. Aggregate particles were assumed to be of elliptic shape. Other properties, such as the grading and sizes of the aggregate particles, were taken from standard laboratory tests conducted on aggregate samples. Two different concrete beams were experimentally and numerically investigated; the difference between the beams was the maximum size of the aggregate particles. The comparison between experimental and numerical results showed that the meso-scale model provides a good basis for representing concrete in the numerical approach. It was concluded that XFEM is a powerful technique for the analysis of the fracture process and crack propagation in concrete.
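As an illustration of the meso-scale representation described above, the following is a minimal sketch (not the authors' code) of placing random, non-overlapping elliptical aggregate particles inside a 2-D beam domain before meshing; the beam dimensions, size grading, and the crude bounding-circle overlap test are assumptions for demonstration only.

```python
import random
import math

# Hypothetical beam domain (mm) and aggregate size grading; values are illustrative only.
BEAM_W, BEAM_H = 500.0, 100.0
SIZE_FRACTIONS = [(20.0, 10.0, 4), (10.0, 5.0, 12)]  # (max dia, min dia, particle count)

def overlaps(e1, e2):
    """Crude overlap test: treat each ellipse as its bounding circle."""
    (x1, y1, a1, _, _), (x2, y2, a2, _, _) = e1, e2
    return math.hypot(x1 - x2, y1 - y2) < (a1 + a2)

def place_aggregates(seed=0):
    random.seed(seed)
    ellipses = []  # each entry: (x, y, semi-major a, semi-minor b, rotation theta)
    for d_max, d_min, count in SIZE_FRACTIONS:
        placed = 0
        while placed < count:
            a = random.uniform(d_min, d_max) / 2.0
            b = a * random.uniform(0.5, 0.9)            # flattened elliptic shape
            x = random.uniform(a, BEAM_W - a)
            y = random.uniform(a, BEAM_H - a)
            theta = random.uniform(0.0, math.pi)
            cand = (x, y, a, b, theta)
            if all(not overlaps(cand, e) for e in ellipses):
                ellipses.append(cand)
                placed += 1
    return ellipses

if __name__ == "__main__":
    particles = place_aggregates()
    print(f"placed {len(particles)} elliptical aggregates")
```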
This study describes a phytotoxicity experiment using kerosene as a model total petroleum hydrocarbon (TPH) pollutant at different concentrations (1% and 6%), with aeration rates of 0 and 1 L/min and retention times of 7, 14, 21, 28 and 42 days, carried out in a subsurface-flow (SSF) constructed wetland planted with barley. The greatest elimination, 95.7%, was recorded at the 1% kerosene level and an aeration rate of 1 L/min after 42 days of exposure, whereas it was 47% in the control test without plants. Furthermore, the elimination efficiencies of hydrocarbons from the soil ranged between 34.155% and 95.7% for all TPH (kerosene) concentrations at aeration rates of 0 and 1 L/min. The Barley c
Lowpass spatial filters are adopted to match the noise statistics of the degradation, seeking good-quality smoothed images. This study employs smoothing windows of different sizes and shapes. The study shows that using a square frame-shaped window gives good-quality smoothing while at the same time preserving a certain level of high-frequency components, in comparison with standard smoothing filters.
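As a rough illustration of the frame-shaped smoothing window mentioned above, the sketch below builds a square kernel whose weights lie only on the outer frame and convolves it with an image; the kernel size and the use of SciPy's convolution are assumptions, not the study's implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def frame_kernel(size=5):
    """Square 'frame' smoothing window: uniform weights on the border, zero inside."""
    k = np.ones((size, size), dtype=float)
    k[1:-1, 1:-1] = 0.0            # hollow out the interior, keeping only the frame
    return k / k.sum()             # normalize so the filter preserves mean intensity

def frame_smooth(image, size=5):
    return convolve(image.astype(float), frame_kernel(size), mode="reflect")

if __name__ == "__main__":
    noisy = np.random.rand(64, 64)
    smoothed = frame_smooth(noisy, size=5)
    print(smoothed.shape)
```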
The electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern ECG monitoring devices are increasingly used and generate vast amounts of data requiring huge storage capacity. In order to decrease storage costs and make ECG signals suitable for transmission through common communication channels, the ECG data volume must be reduced, so an effective data compression method is required. This paper presents an efficient technique for the compression of ECG signals in which different transforms are used. At first, the 1-D ECG data was segmented and aligned into a 2-D data array, then a 2-D mixed transform was implemented to compress the
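The abstract is cut off above, but as a hedged sketch of the first two steps it describes, the code below segments a 1-D ECG signal into fixed-length rows of a 2-D array and applies a 2-D DCT as a stand-in for the paper's 2-D mixed transform; the segment length and the choice of DCT are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def ecg_to_2d(signal, seg_len=128):
    """Segment a 1-D ECG signal into rows of a 2-D array (truncating the tail)."""
    n_rows = len(signal) // seg_len
    return np.asarray(signal[:n_rows * seg_len]).reshape(n_rows, seg_len)

def dct2(block):
    """2-D DCT applied along both axes (stand-in for the paper's mixed transform)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

if __name__ == "__main__":
    ecg = np.sin(np.linspace(0, 200 * np.pi, 10_000))   # synthetic placeholder signal
    coeffs = dct2(ecg_to_2d(ecg))
    print(coeffs.shape)
```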
The wavelet transform has become a useful computational tool for a variety of signal and image processing applications.
The aim of this paper is to present a comparative study of various wavelet filters. Eleven different wavelet filters (Haar, Mallat, Symlets, Integer, Coiflet, Daubechies 1, Daubechies 2, Daubechies 4, Daubechies 7, Daubechies 12 and Daubechies 20) are used to compress seven true-color 256×256 images as samples. Image quality parameters such as the peak signal-to-noise ratio (PSNR) and the normalized mean square error have been used to evaluate the performance of the wavelet filters.
In our work, PSNR is used as the measure of accuracy performance
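As a brief illustration of the quality measures cited above, the following sketch computes PSNR and normalized mean square error between an original and a reconstructed image; the 8-bit peak value of 255 is an assumption.

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean square error between two images."""
    original = original.astype(float)
    reconstructed = reconstructed.astype(float)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit images (peak = 255)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```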
Simulation experiments are a means of solving problems in many fields; they involve designing a model of a real system in order to follow it and identify its behavior through models and formulas implemented as software repeated for a number of iterations. The aim of this study is to build a model that deals with behavior exhibiting heteroskedasticity by studying the APGARCH and NAGARCH models using Gaussian and non-Gaussian distributions for different sample sizes (500, 1000, 1500, 2000) through the stages of time series analysis (identification, estimation, diagnostic checking and prediction). The data was generated using the estimates of the parameters resulting f
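As a minimal sketch of the kind of data-generation step described above (not the study's code), the following simulates an APGARCH(1,1) series with Gaussian innovations; the parameter values and starting condition are illustrative assumptions.

```python
import numpy as np

def simulate_apgarch(n=1000, omega=0.05, alpha=0.1, gamma=0.3, beta=0.85,
                     delta=1.5, seed=0):
    """Simulate an APGARCH(1,1) series with standard-normal (Gaussian) innovations.

    sigma_t^delta = omega + alpha * (|eps_{t-1}| - gamma * eps_{t-1})**delta
                          + beta * sigma_{t-1}^delta
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    sigma_d = np.empty(n)                       # sigma_t raised to the power delta
    eps = np.empty(n)
    sigma_d[0] = omega / (1.0 - beta)           # rough unconditional starting value
    eps[0] = sigma_d[0] ** (1.0 / delta) * z[0]
    for t in range(1, n):
        sigma_d[t] = (omega
                      + alpha * (abs(eps[t - 1]) - gamma * eps[t - 1]) ** delta
                      + beta * sigma_d[t - 1])
        eps[t] = sigma_d[t] ** (1.0 / delta) * z[t]
    return eps

if __name__ == "__main__":
    series = simulate_apgarch(n=2000)
    print(series.std())
```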
The Boizil equation was used to give an approximate value of bladder pressure for 25 healthy people, compared with the routine indirect methods that use a catheter, and this method was found to be cheap, harmless and easy.
In this paper, two main stages for image classification are presented. The training stage consists of collecting images of interest and applying BOVW to these images (feature extraction and description using SIFT, and vocabulary generation), while the testing stage classifies a new unlabeled image using a nearest-neighbor classifier on the feature descriptors. The supervised bag of visual words gives good results, presented clearly in the experimental part, where unlabeled images are classified correctly although a small number of images is used in the training process.
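As a hedged sketch of the two stages described above (not the paper's implementation), the code below extracts SIFT descriptors, builds a visual vocabulary with k-means, represents each image as a histogram of visual words, and classifies a test image with a nearest-neighbor rule; the vocabulary size and the OpenCV/scikit-learn choices are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_descriptors(image_gray):
    """Extract SIFT descriptors from a grayscale image (may return None)."""
    _, desc = sift.detectAndCompute(image_gray, None)
    return desc

def build_vocabulary(train_images, k=50):
    """Cluster all training descriptors into k visual words."""
    all_desc = np.vstack([d for img in train_images
                          if (d := sift_descriptors(img)) is not None])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(image_gray, vocab):
    """Represent an image as a normalized histogram of visual-word occurrences."""
    desc = sift_descriptors(image_gray)
    hist = np.zeros(vocab.n_clusters)
    if desc is not None:
        for w in vocab.predict(desc):
            hist[w] += 1
        hist /= hist.sum()
    return hist

def train_and_classify(train_images, train_labels, test_image, k=50):
    """Training stage (vocabulary + histograms), then 1-NN classification of a test image."""
    vocab = build_vocabulary(train_images, k)
    features = np.array([bovw_histogram(img, vocab) for img in train_images])
    clf = KNeighborsClassifier(n_neighbors=1).fit(features, train_labels)
    return clf.predict([bovw_histogram(test_image, vocab)])[0]
```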
Gypsum plaster is an important building material, owing to the availability of its raw materials. In this research, the effect of various additives on the properties of plaster was studied: Polyvinyl Acetate, Furfural and Fumed Silica at different rates of addition, and two types of fibers, Carbon Fiber and Polypropylene Fiber, added to the plaster at different volumetric rates. Analysis of the results showed that adding Furfural to the plaster at 2.5% is the optimum addition ratio, as it improved the flexural strength by 3.18%.
When using Polyvinyl Acetate, it was found that an additive ratio of 2% is the optimum addition ratio for the plaster, because it improved the value of the flexural stre
Several problems need to be solved in image compression to make the process workable and more efficient. Much work has been done in the field of lossy image compression based on the wavelet transform and the Discrete Cosine Transform (DCT). In this paper, an efficient image compression scheme is proposed, based on a common encoding transform scheme. It consists of the following steps: 1) a bi-orthogonal (tap 9/7) wavelet transform to split the image data into sub-bands, 2) DCT to de-correlate the data, 3) scalar quantization of the combined transform stage's output, with the result mapped to positive values, and 4) LZW encoding to produce the compressed data. The peak signal-to-noise ratio (PSNR), compression ratio (CR), and compression gain (CG) measures were used t
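As a hedged sketch of the transform-coding pipeline enumerated above (not the paper's code), the following applies a single-level bi-orthogonal 9/7 wavelet decomposition with PyWavelets, a DCT to each sub-band, uniform scalar quantization with a shift to non-negative values, and zlib as a stand-in for the LZW entropy coder; the quantization step size and the zlib substitution are assumptions.

```python
import numpy as np
import pywt
import zlib
from scipy.fftpack import dct

def compress(image, q_step=8):
    """Wavelet -> DCT -> scalar quantization -> entropy coding (zlib stand-in for LZW)."""
    # 1) single-level bi-orthogonal 9/7 wavelet decomposition into four sub-bands
    #    (PyWavelets' "bior4.4" uses the 9/7-tap analysis/synthesis filter pair)
    ll, (lh, hl, hh) = pywt.dwt2(image.astype(float), "bior4.4")
    symbols = []
    for band in (ll, lh, hl, hh):
        # 2) DCT to further de-correlate each sub-band
        d = dct(dct(band, axis=0, norm="ortho"), axis=1, norm="ortho")
        # 3) uniform scalar quantization, then shift so all symbols are non-negative
        q = np.round(d / q_step).astype(np.int32)
        symbols.append(q - q.min())
    # 4) entropy-code the flattened symbol stream (zlib used here instead of LZW)
    stream = np.concatenate([s.ravel() for s in symbols]).tobytes()
    return zlib.compress(stream)

if __name__ == "__main__":
    img = np.random.rand(64, 64) * 255.0
    print("compressed size:", len(compress(img)), "bytes")
```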
In this paper, a decoder for a binary BCH code is implemented on a PIC microcontroller for code length n = 127 bits with multiple-error-correction capability; results are presented for correcting up to 13 errors. The Berlekamp-Massey decoding algorithm was chosen for its efficiency. The PIC18F45K22 microcontroller was chosen for the implementation and programmed in assembly language to achieve the highest performance. This makes the BCH decoder implementable as a low-cost module that can be used as part of larger systems. The performance evaluation is presented in terms of the total number of instructions and the bit rate.
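As a rough illustration of the algorithm named above (not the paper's assembly implementation), the following is a minimal Berlekamp-Massey LFSR-synthesis sketch over GF(2); a full BCH(127, k) decoder would additionally need GF(2^7) arithmetic, syndrome computation, and a Chien search, which are omitted here.

```python
def berlekamp_massey_gf2(bits):
    """Find the shortest LFSR (connection polynomial) generating a GF(2) sequence.

    Returns (taps, L) where taps lists the coefficients of C(x) with c0 = 1
    and L is the LFSR length.
    """
    n = len(bits)
    c = [0] * n
    b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the sequence and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return c[:L + 1], L

if __name__ == "__main__":
    # Example: sequence produced by the LFSR x^3 + x + 1 (taps at positions 1 and 3)
    seq = [1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
    taps, length = berlekamp_massey_gf2(seq)
    print(taps, length)
```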