This paper introduces a fast lossless compression method for medical images. The method splits the image into blocks according to their nature, applies polynomial approximation to decompose the image signal, and then applies run-length coding to the residual part of the image, i.e., the error introduced by the polynomial approximation. Finally, Huffman coding is applied to encode the polynomial coefficients and the run-length codes. Test results indicate that the suggested method achieves promising performance.
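As a rough illustration of the pipeline described above (polynomial approximation of a block, then run-length coding of the residual), here is a minimal sketch; the block values, polynomial degree, and helper names are hypothetical, and the final Huffman stage is omitted:

```python
import numpy as np

def compress_block(block, degree=2):
    """Fit a low-degree polynomial to a 1-D block of pixel values and
    return the coefficients plus the integer residual (prediction error)."""
    x = np.arange(len(block))
    coeffs = np.polyfit(x, block, degree)
    approx = np.rint(np.polyval(coeffs, x)).astype(int)
    residual = np.asarray(block) - approx      # this part is run-length coded
    return coeffs, residual

def run_length_encode(seq):
    """Simple run-length coding of the residual sequence."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

# A smooth (here exactly linear) block compresses to two coefficients
# and an all-zero residual, which run-length coding collapses to one run.
block = [10, 12, 14, 16, 18, 20, 22, 24]
coeffs, residual = compress_block(block, degree=1)
print(run_length_encode(residual))
```

In the full method, both the coefficients and the run-length pairs would then be entropy-coded with Huffman coding.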
Introduction to Medical Physics for Pharmacy Students and Medical Groups
Fe3O4:Ce thin films were deposited on glass and Si substrates by the pulsed laser deposition (PLD) technique. X-ray diffraction confirms the polycrystalline nature of the cubic structure with a preferred (311) orientation. SEM measurements reveal the nanoscale grain size of the prepared films. Both undoped iron oxide films and films doped with different Ce concentrations exhibit a direct allowed transition band gap of 2.15±0.1 eV, which is confirmed by photoluminescence (PL) measurements. The PL spectra consist of emission bands at two sets of peaks, set (A) at 579±2 nm and set (B) at 650 nm, when excited at a wavelength of 280 nm at room temperature. I-V characteristics have been studied in the dark and under v…
The current work investigated the combustion efficiency of biodiesel engines under diverse compression ratios (15.5, 16.5, 17.5, and 18.5) and different biodiesel fuels produced from apricot oil, papaya oil, sunflower oil, and tomato seed oil. The combustion process of the biodiesel fuel inside the engine was simulated using ANSYS Fluent v16 (CFD). Numerical simulations were conducted on an AV1 diesel engine (Kirloskar) at 1500 rpm. The simulation outcomes demonstrated that increasing the compression ratio (CR) led to higher peak temperatures and pressures in the combustion chamber, as well as elevated CO2 and NO mass fractions and decreased CO emission values un…
A database is characterized as an arrangement of data that is organized and distributed in a way that allows the client to access the stored data easily and conveniently. In the era of big data, however, traditional data-analytics methods may not be able to manage and process such large volumes of data. To develop an efficient way of handling big data, this work studies the use of the MapReduce technique to handle big data distributed on the cloud. The approach was evaluated on a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r…
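A minimal sketch of the MapReduce pattern underlying this approach, assuming hypothetical (channel, sample) records rather than the actual EEG format; in a real deployment the mapper and reducer would run as distributed Hadoop jobs rather than in one process:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical records: (channel_id, sample_value) pairs from an EEG file.
records = [("C3", 2.0), ("C4", 1.0), ("C3", 4.0), ("C4", 3.0), ("C3", 3.0)]

def mapper(record):
    """Emit a (key, value) pair per record; here the key is the channel."""
    channel, value = record
    yield channel, value

def reducer(channel, values):
    """Aggregate all values for one key; here the mean amplitude per channel."""
    values = list(values)
    return channel, sum(values) / len(values)

# Shuffle/sort phase: group mapper output by key, as Hadoop does between stages.
mapped = sorted(kv for rec in records for kv in mapper(rec))
result = dict(reducer(k, (v for _, v in grp))
              for k, grp in groupby(mapped, key=itemgetter(0)))
print(result)
```

The point of the pattern is that the mapper and reducer are independent per key, so the same two functions scale out across a Hadoop cluster without change in logic.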
A nonlinear filter for smoothing color and gray images corrupted by Gaussian noise is presented in this paper. The proposed filter is designed to reduce the noise in the R, G, and B bands of color images while preserving the edges. The filter is applied in order to prepare images for further processing such as edge detection and image segmentation. Computer-simulation results show that the proposed filter gives satisfactory results compared with conventional filters, such as the Gaussian low-pass filter and the median filter, under the cross-correlation coefficient (CCC) criterion.
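The cross-correlation coefficient criterion mentioned above can be sketched as follows; this is the generic normalized cross-correlation between two images, not necessarily the exact formula used in the paper:

```python
import numpy as np

def ccc(a, b):
    """Normalized cross-correlation coefficient between two images:
    values near 1 indicate the filtered image closely matches the original."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()          # remove mean so the measure ignores brightness offset
    b = b - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Toy comparison: an image against itself scores 1; against a noisy copy, less.
rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, (32, 32))
noisy = clean + rng.normal(0, 20, clean.shape)
print(ccc(clean, clean))
print(ccc(clean, noisy))
```

A filter is then judged by how close the CCC between the original (noise-free) image and the filtered image is to 1.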
It is well known that sonography is not the first choice for detecting early breast tumors. Improving the resolution of breast sonographic images is the goal of many researchers, so that sonography can become a first-choice examination, being a safe, easy, and cost-effective procedure. In this study, the breast was exposed to infrared light prior to ultrasound examination to assess the effect on the resolution of the sonographic image. Results showed that a significant improvement was obtained in 60% of cases.
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). The compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while discarding the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and predictor error. The compressed files contain the LP coefficients and the previous sample, and are very small compared with the original signals. The compression ratio is calculated from the size of th…
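The Levinson-Durbin step mentioned above can be sketched as follows; this is the textbook recursion operating on an autocorrelation sequence, not the paper's exact implementation, and the test sequence below is a hypothetical first-order example:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations by the Levinson-Durbin recursion.
    r is the autocorrelation sequence r[0..order]; returns the LP
    coefficients a (with a[0] = 1), the reflection coefficients k,
    and the final prediction-error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        # Correlation of the current predictor with the next sample.
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k[i - 1] = -acc / err
        # Order update: combine the old coefficients with their reversal.
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k[i - 1] * a_prev[i - j]
        a[i] = k[i - 1]
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err

# Autocorrelation of a unit-power AR(1)-like process (rho = 0.5): the
# recursion recovers a single nonzero predictor coefficient.
a, k, err = levinson_durbin([1.0, 0.5, 0.25], order=2)
print(a, k, err)
```

Each reflection coefficient has magnitude below 1 for a valid autocorrelation sequence, which is what makes the resulting predictor stable.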