Solid-waste management, particularly of aluminum (Al), is a challenge confronted around the world. It is therefore valuable to explore methods, such as recycling, that minimize the exploitation of natural resources. This study discusses the use of hazardous Al waste as the main electrodes in the electrocoagulation (EC) process for dye removal from wastewater. The EC process is considered one of the most efficient, promising, and cost-effective ways of handling various toxic effluents. The effects of current density (10, 20, and 30 mA/cm2), electrolyte concentration (1 and 2 g/L), and initial concentration of Brilliant Blue dye (15 and 30 mg/L) on the efficiency of the EC process were examined. The results show that removal efficiency increased with current density and sodium chloride (NaCl) concentration and decreased with initial dye concentration. The electrical power and electrode material consumed increased with current density and decreased notably with increased NaCl. The optimum current density and NaCl dose for the highest removal of E133 Brilliant Blue dye were 20 mA/cm2 and 2 g/L, respectively. The EC process was examined using adsorption isotherm and kinetic models; the Langmuir isotherm matched the experimental data, and the kinetics followed the Elovich model.
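As a minimal sketch of the isotherm analysis mentioned above, the Langmuir model can be fitted to equilibrium data via its common linearized form, Ce/qe = Ce/qmax + 1/(KL·qmax). The data and parameter values below are synthetic and purely illustrative; they are not taken from the study.

```python
import numpy as np

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Synthetic equilibrium data (illustrative values only, not from the study)
qmax_true, kl_true = 25.0, 0.4               # mg/g, L/mg
ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium dye conc., mg/L
qe = langmuir(ce, qmax_true, kl_true)        # adsorbed amount, mg/g

# Linearized Langmuir form: Ce/qe = (1/qmax) * Ce + 1/(KL * qmax)
slope, intercept = np.polyfit(ce, ce / qe, 1)
qmax_fit = 1.0 / slope
kl_fit = 1.0 / (intercept * qmax_fit)
print(qmax_fit, kl_fit)
```

A straight-line fit of Ce/qe against Ce recovering the assumed qmax and KL indicates the data are Langmuir-consistent, which is the kind of check the study describes.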
The proliferation of cellular networks has enabled users to track locations through various positioning tools, and location information is continuously captured from mobile phones. This work presents a prototype that detects location using the two invariant models for the Global System for Mobile Communications (GSM) and the Universal Mobile Telecommunications System (UMTS). The smartphone application, built on the Android platform, runs location sensing as a background process, with localization based on cell information. The proposed application is associated with a remote server and can track a smartphone without permissions or an internet connection. The mobile device stores location information in a local database (SQLite) and then transfers it to the location AP
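The local buffering step described above can be sketched as follows. This is a hedged illustration in Python rather than Android code; the table schema and the cell-identity values (MCC, MNC, LAC, CID) are assumptions, not the prototype's actual design.

```python
# Minimal sketch of buffering cell-based location fixes in SQLite
# before transfer to a remote server (schema and values are assumed).
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # on a device this would be a file-backed DB

conn.execute("""CREATE TABLE IF NOT EXISTS fixes (
    ts  REAL,      -- capture timestamp
    mcc INTEGER,   -- mobile country code
    mnc INTEGER,   -- mobile network code
    lac INTEGER,   -- location area code
    cid INTEGER    -- cell identity
)""")

# A cell-identity fix as a GSM/UMTS modem might report it (illustrative values)
conn.execute("INSERT INTO fixes VALUES (?, ?, ?, ?, ?)",
             (time.time(), 418, 5, 1234, 56789))
conn.commit()

rows = conn.execute("SELECT mcc, mnc, lac, cid FROM fixes").fetchall()
print(rows)
```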
Used automobile oils were filtered to remove solid material and dehydrated by vacuum distillation under moderate pressure to remove water, gasoline, and light components; the dehydrated waste oil was then extracted with liquid solvents. Two solvents, n-butanol and n-hexane, were used to extract base oil from the used automobile oil so that the expensive base oil can be reused.
The base oil recovered with the n-butanol solvent gave an 88.67% reduction in carbon residue, a 75.93% reduction in ash content, 93.73% oil recovery, 95% solvent recovery, and a viscosity index of 100.62 at a 5:1 solvent-to-used-oil ratio and an extraction temperature of 40 °C, while the n-hexane solvent gave (6
Investigating human mobility patterns is a highly interesting field in the 21st century, attracting vast attention from multi-disciplinary scientists in physics, economics, social science, computer science, engineering, etc., building on the concept that relates human mobility patterns to their communications. Hence, the necessity for a rich repository of data has emerged. The most powerful solution is the use of GSM network data, which yields millions of Call Detail Records gathered from urban regions. However, the available data still have shortcomings, because they provide spatio-temporal information only at the moments of mobile communication activity. In th
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for object classification and detection; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, the gray images were then enhanced using the histogram equalization method, and each image was resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods were applied.
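The pipeline described above (split, preprocess, PCA, then the four classifiers) can be sketched with scikit-learn. This is an illustrative stand-in: random synthetic vectors replace the MCOCO images, and the grayscale conversion and histogram equalization are assumed to have happened upstream; the hyperparameters are placeholders, not the paper's settings.

```python
# Sketch of the described pipeline on synthetic data (MCOCO loading,
# grayscale conversion, and histogram equalization are assumed upstream).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in for 20x20 equalized gray images flattened to 400 features
X = rng.normal(size=(300, 400))
y = rng.integers(0, 3, size=300)
X[y == 1] += 0.5   # give the synthetic classes some separation
X[y == 2] -= 0.5

# 7:3 train/test split, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA for feature extraction
pca = PCA(n_components=20).fit(X_tr)
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SGD": SGDClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=500),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0),
}
scores = {name: clf.fit(X_tr_p, y_tr).score(X_te_p, y_te)
          for name, clf in classifiers.items()}
print(scores)
```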
Background: This study aimed to determine the gender of a sample of Iraqi adults using the mesio-distal width of the mandibular canines, the inter-canine width, and the standard mandibular canine index, and to determine the percentage of dimorphism as an aid in forensic dentistry. Materials and methods: The sample included 200 sets of study models belonging to 200 subjects (100 males and 100 females) aged 17-23 years. The mesio-distal crown dimension was measured manually from the contact points of the mandibular canines (both sides), in addition to the inter-canine width, using a digital vernier caliper. Descriptive statistics were obtained for the measurements for both genders; a paired-sample t-test was used to evaluate the side difference of
In this paper, a visible image watermarking algorithm based on the biorthogonal wavelet transform is proposed. A binary watermark (logo) image can be embedded in a gray host image using the coefficient bands of the host image in the biorthogonal transform domain. The logo can be embedded in the top-left corner or spread over the whole host image. A scaling value (α) in the frequency domain is introduced to control the perception of the watermarked image. Experimental results show that this watermarking algorithm gives a visible logo with no losses in the recovery process of the original image, as the calculated PSNR values support. Good robustness against attempts to remove the watermark was shown.
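The top-left-corner embedding step can be sketched with PyWavelets. This is a minimal illustration, not the paper's algorithm: the host image, logo, wavelet (`bior2.2`), and α value are all assumptions, and the blend is applied only to the approximation band.

```python
# Minimal sketch of visible embedding in the approximation band of a
# biorthogonal DWT; image sizes, wavelet, and alpha are illustrative.
import numpy as np
import pywt

alpha = 0.3                                    # perceptual scaling value
host = np.ones((64, 64)) * 128.0               # stand-in gray host image
logo = (np.eye(16) > 0).astype(float) * 255.0  # stand-in binary logo

# Forward biorthogonal wavelet transform of the host
cA, (cH, cV, cD) = pywt.dwt2(host, 'bior2.2')

# Blend the logo into the top-left corner of the approximation band
h, w = logo.shape
cA[:h, :w] = (1 - alpha) * cA[:h, :w] + alpha * logo

# Inverse transform yields the visibly watermarked image
watermarked = pywt.idwt2((cA, (cH, cV, cD)), 'bior2.2')
print(watermarked.shape)
```

Larger α makes the logo more prominent at the cost of host fidelity, which is the perceptual trade-off the abstract describes.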
In this paper we find the exact solution of Burgers' equation after reducing it to a Bernoulli equation. We compare this solution with the one given by Kaya, who used the Adomian decomposition method; the one given by Chakrone, who used the variational iteration method (VIM); and the one given by Eq. (5) in the paper of M. Javidi. We observe that our solution is better than their solutions.
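One common route from Burgers' equation to a Bernoulli equation, offered here only as an illustrative sketch since the abstract does not state the paper's exact reduction, is the traveling-wave ansatz:

```latex
u_t + u\,u_x = \nu\,u_{xx}, \qquad u(x,t) = f(\xi),\quad \xi = x - ct
\;\Longrightarrow\; -c f' + f f' = \nu f''.
```

Integrating once and taking the integration constant to be zero gives

```latex
\nu f' = \tfrac{1}{2}f^2 - c f
\quad\Longleftrightarrow\quad
f' + \frac{c}{\nu} f = \frac{1}{2\nu} f^2,
```

which is a Bernoulli equation with exponent $n = 2$, solvable in closed form by the standard substitution $w = f^{1-n} = 1/f$.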
The area of character recognition has received considerable attention from researchers all over the world during the last three decades. This research explores the best sets of feature-extraction techniques and studies the accuracy of well-known classifiers for Arabic numerals using statistical methods in two approaches, with a comparative study between them. The first method, a linear discriminant function, yields accuracy as high as 90% of original grouped cases correctly classified. In the second method we propose an algorithm; the results show the efficiency of the proposed algorithms, achieving recognition accuracies of 92.9% and 91.4%, which is more efficient than the first method.
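The first method, linear discriminant classification of numerals, can be sketched with scikit-learn. As a hedged stand-in, the example below uses scikit-learn's built-in digits dataset rather than the Arabic-numeral data the study used, so the accuracy will differ from the reported 90%.

```python
# Illustrative sketch: linear discriminant classification of numerals,
# using scikit-learn's digits dataset as a stand-in for the study's data.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit the linear discriminant function and score held-out numerals
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
acc = lda.score(X_te, y_te)
print(f"accuracy: {acc:.3f}")
```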
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image-processing techniques used to perform automatic identification of text. Existing image-processing techniques need to manage many parameters in order to recognize the text in such pictures clearly; segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. According to the proposed method, the images were first filtered using the Wiener filter, then the active contour algorithm could be applied
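The two preprocessing steps named above, Wiener filtering followed by an active-contour fit, can be sketched with SciPy and scikit-image. The synthetic test image, the circular initial snake, and the contour parameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the described preprocessing: Wiener filtering followed by an
# active-contour fit; the test image and snake parameters are illustrative.
import numpy as np
from scipy.signal import wiener
from skimage.segmentation import active_contour

# Synthetic noisy image containing a bright disk as the "object"
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:100, :100]
image = ((xx - 50) ** 2 + (yy - 50) ** 2 < 20 ** 2).astype(float)
noisy = image + rng.normal(scale=0.2, size=image.shape)

# Step 1: Wiener filter to suppress noise
denoised = wiener(noisy, mysize=5)

# Step 2: active contour initialized as a circle around the object
s = np.linspace(0, 2 * np.pi, 100)
init = np.column_stack([50 + 30 * np.sin(s), 50 + 30 * np.cos(s)])  # (row, col)
snake = active_contour(denoised, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)
```

The fitted snake then delimits the region handed to recognition, which is where the paper's segmentation-accuracy question arises.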