Electrocoagulation is an electrochemical method for treating various types of wastewater, in which sacrificial anodes corrode to release an active coagulant (usually aluminium or iron cations) into solution, while simultaneous evolution of hydrogen at the cathode allows pollutant removal by flotation or settling. The Taguchi method was applied as the experimental design and used to determine the best conditions for chromium (VI) removal from wastewater. Several parameters were investigated in a batch stirred tank with iron electrodes: pH, initial chromium concentration, current density, distance between electrodes, and KCl concentration; the results were analyzed using the signal-to-noise (S/N) ratio. The removal efficiency of chromium increased with increasing current density and KCl concentration, decreased with increasing initial chromium concentration and inter-electrode distance, and showed a peak with respect to pH. Experimental work was performed on synthetic solutions and on real industrial effluent. The results showed that the removal efficiency for the synthetic solution was higher than for the industrial wastewater: under the same conditions, the maximum removal was 91.72% for the prepared solution and 73.54% for the industrial wastewater.
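Since removal efficiency is a larger-the-better response, the S/N ratio in a Taguchi analysis of this kind is conventionally computed as S/N = −10·log10((1/n)·Σ 1/yᵢ²). A minimal sketch of that computation follows; the replicate values are illustrative, not taken from the paper:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better signal-to-noise ratio (dB):
    S/N = -10 * log10(mean(1 / y_i^2)).
    Higher values indicate a larger, more stable response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Replicated removal efficiencies (%) for one hypothetical factor combination
print(sn_larger_is_better([90.1, 91.7, 89.5]))
```

The factor level with the highest mean S/N ratio across the orthogonal-array trials is then selected as optimal.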
This paper proposes a method for estimating missing values of the explanatory variables in a non-parametric multiple regression model and compares it with arithmetic-mean imputation. The idea is to exploit the causal relationship between the variables to obtain an efficient estimate of each missing value. We use the Nadaraya–Watson kernel estimator, with least-squares cross-validation (LSCV) to select the bandwidth, and compare the two methods in a simulation study.
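The Nadaraya–Watson estimator with an LSCV-chosen bandwidth can be sketched as follows. The Gaussian kernel, the bandwidth grid, and the toy data are illustrative assumptions, not choices stated in the abstract:

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at x0,
    using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def lscv_bandwidth(x, y, grid):
    """Least-squares cross-validation: pick the bandwidth that
    minimizes the leave-one-out squared prediction error."""
    best_h, best_err = None, np.inf
    for h in grid:
        err = 0.0
        for i in range(len(x)):
            mask = np.arange(len(x)) != i
            err += (y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 60)
h = lscv_bandwidth(x, y, np.linspace(0.02, 0.5, 25))
# Estimate the response at a "missing" location from the observed pairs
print(nw_estimate(0.25, x, y, h))
```

In the imputation setting, the kernel smooth of the observed cases is evaluated at the covariate values of the incomplete case to fill the gap.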
This paper proposes a new method for functional non-parametric regression based on conditional expectation in the case where the covariates are functional; Principal Component Analysis was utilized to de-correlate the multivariate response variables. It uses the Nadaraya–Watson (k-nearest-neighbour, KNN) estimator for prediction with different types of semi-metrics (based on the second derivative and on Functional Principal Component Analysis, FPCA) for measuring the closeness between curves. Root mean square error (RMSE) is used to assess the model, which is then compared with the independent-response method. The R language is used for the data analysis.
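One of the closeness measures mentioned above, the second-derivative semi-metric, can be sketched as the L2 distance between the curves' second derivatives. This is a generic illustration (finite-difference derivatives on a shared grid), not the paper's implementation:

```python
import numpy as np

def semimetric_d2(curve_a, curve_b, t):
    """Second-derivative semi-metric between two curves sampled on a
    common grid t: L2 distance between their finite-difference second
    derivatives. Curves differing only by a linear trend get distance
    ~0, which is why this is a semi-metric rather than a metric."""
    d2a = np.gradient(np.gradient(curve_a, t), t)
    d2b = np.gradient(np.gradient(curve_b, t), t)
    dt = t[1] - t[0]
    return np.sqrt(np.sum((d2a - d2b) ** 2) * dt)

t = np.linspace(0, 1, 200)
f = np.sin(2 * np.pi * t)
g = f + 0.7 * t + 0.3          # same shape, shifted by a linear trend
h = np.sin(4 * np.pi * t)      # different curvature
print(semimetric_d2(f, g, t), semimetric_d2(f, h, t))
```

A KNN predictor then averages the responses of the k curves closest to the query curve under such a semi-metric.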
Reliability is an essential measure and an important component of all power-system planning and operation procedures, and it is one of the key design factors for complex, critical, and expensive systems. This paper presents a fuzzy-logic approach for reliability-improvement planning. The major goals of the paper are to evaluate the reliability of the large, complex planned Iraqi super grid, into which the Al-Khairat generating station with its tie set is intended to be connected, and to determine the value of the given reliability-improvement project. Results show that the reliability of the Iraqi super grid is improved by 9.64%. In the proposed technique, fuzzy set theory is used to include imprecise indices of the different components.
Sentiment analysis refers to the task of identifying the positive or negative polarity of a text that conveys an opinion. The use of Arabic has expanded dramatically in the last decade, especially with the emergence of social websites (e.g. Twitter, Facebook). Several studies have addressed sentiment analysis for Arabic using various techniques; according to the literature, the most efficient are machine-learning techniques, owing to their ability to build a training model. Yet there are still issues facing Arabic sentiment analysis with machine-learning techniques. Such issues relate to employing robust features that have the ability to discriminate
In the field of data security, the critical challenge is preserving sensitive information during its transmission through public channels. Steganography, a method of concealing data within carrier objects such as text, can be employed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganography.
The purpose of this work is to estimate the UV-visible spectra of binary combinations of piroxicam and mefenamic acid concurrently using a chemometric approach. To create the model, spectral data from 73 samples (wavelengths between 200 and 400 nm) were employed. A two-layer artificial neural network was created, with two neurons in the output layer and fourteen in the hidden layer, and trained to model the concentrations and spectra of piroxicam and mefenamic acid. The Levenberg-Marquardt algorithm with feed-forward back-propagation learning produced root mean square errors of prediction of 0.1679 μg/mL for piroxicam and 0.1154 μg/mL for mefenamic acid.
Deep-learning convolutional neural networks have been widely used to recognize or classify voice. Various techniques have been combined with convolutional neural networks to prepare voice data before the training process when developing a classification model. However, not all models produce good classification accuracy, as there are many types of voice and speech. Classification of Arabic-alphabet pronunciation is one such task, and accurate pronunciation is required when learning Qur'an recitation; thus, processing the pronunciation data and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep-learning convolutional neural network is proposed.
In high-dimensional semiparametric regression, balancing accuracy and interpretability often requires combining dimension reduction with variable selection. This study introduces two novel methods for dimension reduction in additive partial linear models: (i) minimum average variance estimation (MAVE) combined with the adaptive least absolute shrinkage and selection operator (MAVE-ALASSO) and (ii) MAVE with smoothly clipped absolute deviation (MAVE-SCAD). These methods leverage the flexibility of MAVE for sufficient dimension reduction while incorporating adaptive penalties to ensure sparse and interpretable models. The performance of both methods is evaluated through simulations using the mean squared error and variable selection criteria.
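For reference, the two penalties these methods attach to MAVE can be sketched as follows: the SCAD penalty in Fan and Li's piecewise form (with the conventional a = 3.7) and the adaptive-lasso weighting scheme. This is a generic sketch of the penalty functions, not the paper's estimation code:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty: behaves like the lasso near zero but flattens out
    for large coefficients, reducing bias on strong signals."""
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,                                       # lasso-like zone
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),  # transition
            lam**2 * (a + 1) / 2,                      # constant tail
        ),
    )

def alasso_weights(beta_init, gamma=1.0):
    """Adaptive-lasso weights from an initial estimate: coefficients
    that start large receive small penalties and are shrunk less."""
    return 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
```

Both devices aim at the oracle property: strong predictors survive with little shrinkage while weak ones are driven exactly to zero.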