The goal of this research is to develop a numerical model that simulates the sedimentation process under two scenarios: first, with the flocculation unit in service, and second, with the flocculation unit out of commission. The general equations of flow and sediment transport were solved using the finite difference method and coded in Matlab. The removal efficiencies predicted by the coded model and those of the operational model were very close for each particle size dataset, with a difference of +3.01%, indicating that the model can be used to predict the removal efficiency of a rectangular sedimentation basin. The study also revealed that the critical particle size was 0.01 mm: most particles with diameters larger than 0.01 mm settled under physical (gravitational) force, while most particles with diameters smaller than 0.01 mm settled through the flocculation process. At 10 m from the inlet zone, the removal efficiency already exceeded 60% of the total removal rate, indicating that increasing basin length is not a cost-effective way to improve removal efficiency. The influence of the flocculation process appears at particle sizes smaller than 0.01 mm, which constitute a small percentage (10%) of the sieve analysis test. When that percentage reaches 20%, the difference in cumulative removal efficiency rises from +3.57% to 11.1% at the AL-Muthana sedimentation unit.
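The physics behind the critical-size result above can be sketched with Stokes' law and ideal (Hazen) basin theory: a discrete particle is fully removed when its settling velocity exceeds the surface overflow rate Q/A. The parameter values below (particle density, overflow rate) are illustrative assumptions, not the AL-Muthana unit's actual operating data.

```python
# Stokes settling velocity and ideal-basin removal fraction.
# Assumed constants: silica-like particle density, water at 20 C.

G = 9.81          # gravity, m/s^2
RHO_P = 2650.0    # assumed particle density, kg/m^3
RHO_W = 998.0     # water density at 20 C, kg/m^3
MU = 1.002e-3     # dynamic viscosity of water at 20 C, Pa*s

def stokes_velocity(d):
    """Settling velocity (m/s) of a sphere of diameter d (m), laminar regime."""
    return G * (RHO_P - RHO_W) * d ** 2 / (18.0 * MU)

def removal_fraction(d, overflow_rate):
    """Ideal-basin removal fraction for a given surface overflow rate Q/A (m/s)."""
    return min(1.0, stokes_velocity(d) / overflow_rate)

# Example: the 0.01 mm critical size against an assumed overflow rate of 1 m/h.
d = 0.01e-3                 # 0.01 mm in metres
q = 1.0 / 3600.0            # 1 m/h in m/s
print(stokes_velocity(d))   # settling velocity on the order of 1e-4 m/s
print(removal_fraction(d, q))
```

A particle of this size is only partially removed by gravity alone under these assumed conditions, which is consistent with the abstract's finding that sub-0.01 mm particles rely on flocculation.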
Big data analysis has important applications in many areas such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data, providing a manageable data structure that holds a scalable summarization for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such a
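One of the building blocks named above, entropy-based discretization, can be sketched in a minimal binary form: find the cut point on a numeric attribute that minimizes the class entropy of the two resulting bins. Function and variable names here are illustrative; this is not the paper's multi-resolution algorithm itself.

```python
# Minimal sketch of entropy-based binary discretization (illustrative only).
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the cut point minimizing the weighted entropy of the two bins."""
    pairs = sorted(zip(values, labels))
    best, best_e = None, float("inf")
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no valid cut between equal attribute values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        e = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if e < best_e:
            best_e = e
            best = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best

# Example: the two classes separate cleanly around 3.0.
print(best_cut([1, 2, 4, 5], ["a", "a", "b", "b"]))  # → 3.0
```

In a multi-resolution summarization setting, the same criterion would be evaluated over summary statistics of bins rather than over raw records.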
Data-Driven Requirement Engineering (DDRE) represents a vision for a shift from static, traditional methods of doing requirements engineering to dynamic, data-driven, user-centered methods. Given the data available and the increasingly complex requirements of software systems whose functions must adapt to changing needs to gain the trust of their users, such an approach is needed within a continuous software engineering process. This need drives the emergence of new challenges in the discipline of requirements engineering to meet the required changes. The problem addressed in this study was that discrepancies in the data hampered the requirements elicitation process, so that ultimately the developed software exhibited discrepancies and could not meet the need
Polyaniline, an organic semiconductor polymer, was prepared by oxidative polymerization: hydrochloric acid at a concentration of 0.1 M and potassium persulfate at a concentration of 0.2 M were added to 0.1 M aniline at room temperature. The polymer was deposited on a glass substrate, and its structural and optical properties were studied through UV-VIS, IR, and XRD measurements. The films were operated as sensors for H2SO4 and HCl acid vapors.
Gypseous soils are spread across several regions of the world, including Iraq, where they cover more than 28.6% [1] of the surface area of the country. These soils, with their high gypsum content, cause various problems in construction and strategic projects. As water flows through the soil mass, the permeability and chemical composition of these soils vary over time due to the solubility and leaching of gypsum. In this study, a soil of 36% gypsum content was taken from a location about 100 km (62 mi) southwest of Baghdad, sampled at a depth of 0.5 - 1 m below the natural ground surface, and mixed with 3%, 6%, and 9% of copolymer and styrene-butadiene rubber to improve t
Adsorption techniques are widely used to remove certain classes of pollutants from wastewater, and phenolic compounds represent one of the problematic groups. Na-Y zeolite has been synthesized from locally available Iraqi kaolin clay. The prepared zeolite was characterized by XRD and by surface area measurement using N2 adsorption. Both the synthetic Na-Y zeolite and the kaolin clay were tested for adsorption of 4-nitrophenol in batch-mode experiments. Maximum removal efficiencies of 90% and 80% were obtained using the prepared zeolite and the kaolin clay, respectively. Kinetics and equilibrium adsorption isotherms were investigated; the investigation showed that both the Langmuir and Freundlich isotherms fit the experimental data quite well. On the
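The isotherm fitting mentioned above is commonly done through the linearized forms of the two models: Langmuir, Ce/qe = Ce/qmax + 1/(KL·qmax), and Freundlich, log qe = log KF + (1/n)·log Ce. The sketch below fits both by ordinary least squares; the equilibrium data it uses are synthetic illustrations, not the paper's measurements.

```python
# Hedged sketch: fitting Langmuir and Freundlich isotherms to batch
# equilibrium data (Ce = equilibrium concentration, qe = uptake).
import math

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def langmuir_params(ce, qe):
    """qmax and KL from the linearized Langmuir plot Ce/qe vs Ce."""
    slope, intercept = linfit(ce, [c / q for c, q in zip(ce, qe)])
    qmax = 1.0 / slope
    return qmax, 1.0 / (intercept * qmax)

def freundlich_params(ce, qe):
    """KF and n from the log-log Freundlich plot."""
    slope, intercept = linfit([math.log10(c) for c in ce],
                              [math.log10(q) for q in qe])
    return 10 ** intercept, 1.0 / slope

# Synthetic data generated from a Langmuir model with qmax = 50, KL = 0.2.
ce = [1.0, 5.0, 10.0, 20.0, 50.0]
qe = [50 * 0.2 * c / (1 + 0.2 * c) for c in ce]
print(langmuir_params(ce, qe))  # recovers approximately (50.0, 0.2)
```

Comparing the goodness of fit of the two linearizations on real batch data is what supports a statement like "both isotherms fit the experimental data quite well."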
The assessment of data quality from different sources can be considered a key challenge in supporting effective geospatial data integration and promoting collaboration in mapping projects. This paper presents a methodology for assessing the positional and shape quality of authoritative large-scale data, such as Ordnance Survey (OS) UK data and General Directorate for Survey (GDS) Iraq data, and of Volunteered Geographic Information (VGI), such as OpenStreetMap (OSM) data, with the intention of assessing possible integration. It is based on the measurement of discrepancies among the datasets, addressing positional accuracy and shape fidelity using standard procedures as well as directional statistics. Line feature comparison has been und
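The directional-statistics idea referred to above can be sketched as follows: displacements between matched points in two datasets are treated as vectors, and the circular mean direction and mean resultant length R summarize whether positional errors are systematic (R near 1) or randomly oriented (R near 0). The coordinates below are invented for illustration and are not drawn from the OS, GDS, or OSM data.

```python
# Sketch of circular statistics on displacement vectors between matched
# features in two geospatial datasets (illustrative data only).
import math

def circular_stats(dx, dy):
    """Mean direction (radians) and mean resultant length R of displacements."""
    angles = [math.atan2(b, a) for a, b in zip(dx, dy)]
    c = sum(math.cos(t) for t in angles) / len(angles)
    s = sum(math.sin(t) for t in angles) / len(angles)
    return math.atan2(s, c), math.hypot(c, s)

# Nearly parallel displacements -> R close to 1 (a systematic shift).
theta, r = circular_stats([1.0, 1.1, 0.9], [0.1, 0.0, -0.1])
print(round(r, 3))
```

A high R with a consistent mean direction would suggest a correctable datum or registration offset rather than random digitizing error.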
E-Learning packages are content and instructional methods delivered on a computer (whether over the Internet or an intranet), designed to build knowledge and skills related to individual or organizational goals. This definition addresses the what (training delivered in digital form), the how (content and instructional methods that help learn the content), and the why (improving organizational performance by building job-relevant knowledge and skills in workers).

In this paper, a learning package for the Prolog programming language has been designed and implemented. This was done using the Visual Basic.NET 2010 programming language in conjunction with Microsoft Office Access 2007. The package also introduces several fac
This research aims to investigate the color distribution of a large sample of 613,654 galaxies from the Sloan Digital Sky Survey (SDSS). These galaxies lie at redshifts of 0.001 - 0.5 and have magnitudes of g = 17 - 20. Five subsamples of galaxies at redshifts of 0.001 - 0.1, 0.1 - 0.2, 0.2 - 0.3, 0.3 - 0.4 and 0.4 - 0.5 were extracted from the main sample. The color distributions (u-g), (g-r) and (u-r) were produced and analysed using a Matlab code for the main sample as well as all five subsamples. Bimodal Gaussian fits to the color distributions were then carried out by minimum chi-square in Microsoft Office Excel. The results showed that the color distributions of the main sample and
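The fitting step described above can be sketched in miniature: a two-component Gaussian model is fitted to a binned color distribution by minimizing a chi-square statistic, with a crude grid search standing in for Excel's solver. The bin values below are synthetic, not SDSS data.

```python
# Minimal sketch of a bimodal Gaussian fit by chi-square minimization
# over a binned color distribution (synthetic histogram for illustration).
import math

def gauss(x, a, mu, sigma):
    return a * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def bimodal(x, p):
    a1, m1, s1, a2, m2, s2 = p
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

def chi_square(xs, counts, p):
    return sum((c - bimodal(x, p)) ** 2 / max(bimodal(x, p), 1e-9)
               for x, c in zip(xs, counts))

# Synthetic "blue cloud" + "red sequence" histogram.
truth = (100, 0.8, 0.3, 60, 2.2, 0.25)
xs = [i * 0.1 for i in range(31)]
counts = [bimodal(x, truth) for x in xs]

# Grid search over the two component means, other parameters held fixed.
best = min(((chi_square(xs, counts, (100, m1, 0.3, 60, m2, 0.25)), m1, m2)
            for m1 in [0.6, 0.7, 0.8, 0.9]
            for m2 in [2.0, 2.1, 2.2, 2.3]))
print(best[1], best[2])  # → 0.8 2.2
```

In practice all six parameters would be varied simultaneously, but the objective being minimized is the same chi-square over histogram bins.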