In the literature, several correlations have been proposed for predicting bubble size in bubble columns. However, these correlations fail to predict bubble diameter over a wide range of conditions. Based on a data bank of around 230 measurements collected from the open literature, a correlation for bubble size in the homogeneous regime of bubble columns was derived using Artificial Neural Network (ANN) modeling. The bubble diameter was found to be a function of six parameters: gas velocity, column diameter, orifice diameter, liquid density, liquid viscosity, and liquid surface tension. Statistical analysis showed that the proposed correlation has an Average Absolute Relative Error (AARE) of 7.3% and a correlation coefficient of 92.2%. A comparison with selected correlations from the literature showed that the developed ANN correlation noticeably improves the prediction of bubble size, and it also gives better predictions over a wide range of operating parameters in bubble columns.
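The two evaluation statistics reported in the abstract above, AARE and the correlation coefficient, are standard; the following minimal sketch shows how they are typically computed. The arrays `d_measured` and `d_predicted` are hypothetical placeholders, not values from the paper's data bank.

```python
import numpy as np

def aare(measured, predicted):
    """Average Absolute Relative Error, in percent."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - measured) / measured)

def correlation_coefficient(measured, predicted):
    """Pearson correlation coefficient between measured and predicted values, in percent."""
    return 100.0 * np.corrcoef(measured, predicted)[0, 1]

# Hypothetical example values (not from the paper's data bank)
d_measured = [4.1, 5.3, 3.8, 6.0]   # bubble diameters, mm
d_predicted = [4.3, 5.0, 4.0, 5.7]  # ANN-predicted diameters, mm
print(f"AARE = {aare(d_measured, d_predicted):.1f}%")
print(f"R = {correlation_coefficient(d_measured, d_predicted):.1f}%")
```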
Texture synthesis using genetic algorithms, proposed in previous research, is one way to synthesize texture quickly and easily. In genetic texture synthesis algorithms, the chromosome consists of random blocks selected manually by the user. However, this method of selection is highly dependent on the user's experience; hence, a wrong selection of blocks will greatly affect the synthesized texture. In this paper, a new method is suggested for selecting the blocks automatically, without user participation. The results show that this method of selection eliminates some of the blending artifacts caused by the previous manual selection method.
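The abstract does not state the automatic selection criterion; purely as an illustration, the sketch below ranks candidate blocks of a sample texture by a simple variance score and keeps the highest-scoring ones in place of manual picking. The block size, score, and function names are assumptions, not the paper's method.

```python
import numpy as np

def select_blocks(texture, block=32, n_keep=8):
    """Illustrative automatic block selection: rank non-overlapping blocks of a
    grayscale texture by variance and return the top-scoring ones.
    This is NOT the paper's criterion, only a stand-in for manual selection."""
    h, w = texture.shape
    candidates = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = texture[y:y + block, x:x + block]
            candidates.append((patch.var(), patch))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return [patch for _, patch in candidates[:n_keep]]

# Hypothetical usage with a random "texture"
sample = np.random.rand(128, 128)
blocks = select_blocks(sample)
print(len(blocks), blocks[0].shape)
```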
The Hopfield network is one of the simplest types of neural network; its architecture is such that every neuron connects to every other neuron, so it is called a fully connected neural network. It is also considered an auto-associative memory, because the network returns a stored pattern immediately upon recognition. This network has many limitations, including memory capacity, discrepancy, orthogonality between patterns, weight symmetry, and local minima. This paper proposes a new strategy for designing the Hopfield network based on the XOR operation; the strategy addresses these limitations through a new algorithm in the network design and improves the performance of the Hopfield network by modifying its architecture.
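For reference, a minimal classical Hopfield network with Hebbian learning is sketched below as a baseline; the paper's XOR-based design is not reproduced here, and the class and pattern values are illustrative assumptions.

```python
import numpy as np

class Hopfield:
    """Minimal classical Hopfield network with Hebbian learning.
    Baseline only; the paper's XOR-based modification is not shown."""

    def __init__(self, n):
        self.w = np.zeros((n, n))

    def train(self, patterns):
        # Hebbian rule over bipolar (+1/-1) patterns, zero diagonal
        for p in patterns:
            p = np.asarray(p, dtype=float)
            self.w += np.outer(p, p)
        np.fill_diagonal(self.w, 0)

    def recall(self, state, steps=10):
        s = np.asarray(state, dtype=float).copy()
        for _ in range(steps):
            s = np.sign(self.w @ s)
            s[s == 0] = 1
        return s

# Hypothetical usage: store one pattern and recover it from a noisy copy
net = Hopfield(6)
pattern = np.array([1, -1, 1, -1, 1, -1])
net.train([pattern])
noisy = pattern.copy(); noisy[0] = -1
print(net.recall(noisy))
```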
In this study, data of X-ray images in the Flexible Image Transport System (FITS) format were analyzed. Energy was collected from each object by several sensors, each receiving energy within a specific range; when the energy from all sensors was combined, an image was formed carrying information about that object. Such images can be transferred and stored easily. The images were analyzed using the DS9 program to obtain a spectrum for each object, i.e., the energy corresponding to the photons collected per second. This study analyzed images of two types of objects (globular and open clusters). The results showed that the five open star clusters contain roughly …
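As a minimal sketch of working with such FITS data in Python (the paper itself uses the DS9 program), the snippet below reads a FITS file with the astropy package; the filename is a hypothetical placeholder.

```python
from astropy.io import fits  # pip install astropy
import numpy as np

# Hypothetical filename; any FITS X-ray image would do.
with fits.open("cluster_image.fits") as hdul:
    hdul.info()                      # list the HDUs in the file
    data = hdul[0].data              # primary image array (counts)
    header = hdul[0].header          # metadata (instrument, exposure, ...)

# Simple summary of the pixel values in the image
print("shape:", data.shape)
print("total counts:", np.nansum(data))
print("exposure keyword present:", "EXPTIME" in header)
```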
In this paper, frequentist and Bayesian approaches to the linear regression model are used to predict future observations of unemployment rates in Iraq. Parameters are estimated using the ordinary least squares method for the frequentist approach and the Markov Chain Monte Carlo (MCMC) method for the Bayesian approach. Calculations are carried out using the R program. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), were used to compare the performance of the estimates. The analysis showed that the linear regression model under the Bayesian approach performs better and can be used as an alternative to the frequentist approach. The results also indicate that unemployment rates will continue to increase in the next two decades.
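Although the paper's calculations are done in R, the frequentist baseline and the two comparison criteria can be illustrated with a short Python sketch; the yearly series below is synthetic, not the paper's Iraqi unemployment data, and the linear trend is an assumption for the example.

```python
import numpy as np

# Hypothetical yearly unemployment-rate series (not the paper's data)
rng = np.random.default_rng(0)
years = np.arange(2005, 2021)
rate = 8.0 + 0.3 * (years - 2005) + rng.normal(0, 0.5, years.size)

# Ordinary least squares fit of rate on year (frequentist baseline)
X = np.column_stack([np.ones_like(years, dtype=float), years.astype(float)])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
fitted = X @ beta

# Comparison criteria named in the paper
residuals = rate - fitted
rmse = np.sqrt(np.mean(residuals ** 2))
mad = np.median(np.abs(residuals))          # median of absolute residuals; definitions of MAD vary
print(f"OLS slope per year: {beta[1]:.3f}, RMSE: {rmse:.3f}, MAD: {mad:.3f}")
```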
With the wide development of computer applications and networks, information security has received great attention in many areas of everyday life. One of the most important issues is how to control and prevent unauthorized access to secure information; therefore, this paper presents a combination of two efficient encryption algorithms to achieve information security by adding a new level of encryption to the Rijndael-AES algorithm. It proposes a Rijndael encryption and decryption process combined with the NTRU algorithm; Rijndael is widely accepted due to its strong encryption, complex processing, and resistance to brute-force attack. The proposed modifications are implemented in the Rijndael encryption and decryption …
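As a sketch of the AES (Rijndael) layer only, using the pycryptodome package, the snippet below encrypts and decrypts a short message; the paper's additional NTRU layer is not reproduced here, and conceptually it would wrap around this step (for example, by encrypting the AES key or ciphertext).

```python
from Crypto.Cipher import AES          # pip install pycryptodome
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)              # 128-bit AES key
plaintext = b"secure information"

# Encrypt
cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(plaintext)

# Decrypt and verify integrity
decipher = AES.new(key, AES.MODE_EAX, nonce=cipher.nonce)
recovered = decipher.decrypt_and_verify(ciphertext, tag)
assert recovered == plaintext
```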
In this paper, a robust invisible watermarking system for digital video encoded with MPEG-4 is presented. The proposed scheme hides the watermark by embedding a secret message in the sprite area allocated in the reference frame (I-frame). The proposed system consists of two main units: (i) an embedding unit and (ii) an extraction unit. In the embedding unit, the system locates the sprite blocks using motion compensation information; the sprite area allocated in each I-frame is used as the hosting area for embedding the watermark data. In the extraction unit, the system extracts the watermark data in order to check the authentication and ownership of the video. The watermark embedding method is block-average modulation applied in the RGB domain.
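The idea of block-average modulation can be illustrated with a small sketch that embeds a single bit by shifting a block's mean intensity; this is an assumption-laden illustration, not the paper's exact scheme, and the block size and delta are hypothetical.

```python
import numpy as np

def embed_bit(block, bit, delta=4.0):
    """Illustrative block-average modulation: shift the block mean up or down
    by delta depending on the watermark bit. Not the paper's exact scheme."""
    return np.clip(block + (delta if bit else -delta), 0, 255)

def extract_bit(block, reference_mean):
    """Recover the bit by comparing the block mean to the original (reference) mean."""
    return int(block.mean() > reference_mean)

# Hypothetical 8x8 block from one RGB channel of an I-frame
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
ref = block.mean()
marked = embed_bit(block, bit=1)
print("embedded bit recovered:", extract_bit(marked, ref))
```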
... Show More<p class="0abstract">Image denoising is a technique for removing unwanted signals called the noise, which coupling with the original signal when transmitting them; to remove the noise from the original signal, many denoising methods are used. In this paper, the Multiwavelet Transform (MWT) is used to denoise the corrupted image by Choosing the HH coefficient for processing based on two different filters Tri-State Median filter and Switching Median filter. With each filter, various rules are used, such as Normal Shrink, Sure Shrink, Visu Shrink, and Bivariate Shrink. The proposed algorithm is applied Salt& pepper noise with different levels for grayscale test images. The quality of the denoised image is evaluated by usi
The main aim of this paper is to study how different estimators of the two unknown parameters (the shape and scale parameters) of the generalized exponential distribution behave for different sample sizes and different parameter values. In particular, the Maximum Likelihood, Percentile, and Ordinary Least Squares estimators were implemented for different sample sizes (small, medium, and large), and several contrasting initial values were assumed for the two parameters. Two performance indicators, the Mean Square Error and the Mean Percentile Error, were used, and the comparisons between the different estimation methods were carried out using the Monte Carlo simulation technique. It was observed …
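A minimal sketch of such a Monte Carlo study for the Maximum Likelihood estimator is given below, assuming the generalized exponential CDF F(x) = (1 - exp(-lambda*x))**alpha; the true parameter values, sample size, and replication count are illustrative, and only the MSE criterion and the MLE are shown.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def sample_ge(alpha, lam, n):
    """Generalized exponential samples via the inverse CDF F(x) = (1 - exp(-lam*x))**alpha."""
    u = rng.random(n)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

def neg_log_lik(theta, x):
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    return -(x.size * (np.log(alpha) + np.log(lam))
             - lam * x.sum()
             + (alpha - 1.0) * np.log1p(-np.exp(-lam * x)).sum())

# Illustrative Monte Carlo assessment of the MLE: 200 replications, n = 30
alpha_true, lam_true, n, reps = 2.0, 1.5, 30, 200
estimates = []
for _ in range(reps):
    x = sample_ge(alpha_true, lam_true, n)
    res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
    estimates.append(res.x)
estimates = np.array(estimates)
mse = np.mean((estimates - [alpha_true, lam_true]) ** 2, axis=0)
print("MSE(alpha_hat, lambda_hat):", mse)
```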