Deep learning has recently received a lot of attention as a feasible solution to a variety of artificial intelligence problems. Convolutional neural networks (CNNs) outperform other deep learning architectures, as well as other machine learning methods, in object identification and recognition applications. Speech recognition, pattern analysis, and image identification all benefit from deep neural networks. Operations on noisy images, such as fog removal or low-light enhancement, require image processing methods such as filtering or image enhancement. This study shows the effect of using a multi-scale deep learning context aggregation network (CAN) as a bilateral filtering approximation (BFA) for de-noising noisy CCTV images. A datastore is used to manage the dataset: an object for collections of data too large to fit in memory, which allows data located in multiple files to be read, managed, and processed as a single entity. The CAN architecture combines standard deep learning layers, such as input, convolution, batch normalization, and Leaky ReLU layers, to construct the multi-scale network. Custom layers, such as adaptive normalization with learnable weights (µ and λ), can also be added to the network. The performance of the developed CAN approximation operator for bilateral filtering is demonstrated by improving both a noisy reference image and a foggy CCTV image. Three image evaluation metrics (SSIM, NIQE, and PSNR) assess the developed CAN approximation visually and quantitatively by comparing the de-noised image with the reference image. Relative to the input noisy image, these metrics for the developed CAN de-noised image were 0.92673/0.76253 (SSIM), 6.18105/12.1865 (NIQE), and 26.786/20.3254 (PSNR), respectively.
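As a rough illustration of the architecture this abstract describes, the PyTorch sketch below stacks dilated 3x3 convolutions with exponentially increasing dilation, each followed by adaptive normalization (a learnable blend λ·x + µ·BN(x)) and a Leaky ReLU. The channel width, depth, and class names are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    """Adaptive normalization: lambda * x + mu * BN(x), with learnable scalars."""
    def __init__(self, channels):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(1.0))  # weight on the identity path
        self.mu = nn.Parameter(torch.tensor(0.0))   # weight on the batch-norm path
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.lam * x + self.mu * self.bn(x)

class ContextAggregationNet(nn.Module):
    """Multi-scale CAN: 3x3 convolutions whose dilation doubles at every layer."""
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers, in_ch = [], 3
        for d in range(depth):
            dilation = 2 ** d  # receptive field grows exponentially with depth
            layers += [
                nn.Conv2d(in_ch, channels, 3, padding=dilation, dilation=dilation),
                AdaptiveNorm(channels),
                nn.LeakyReLU(0.2),
            ]
            in_ch = channels
        layers.append(nn.Conv2d(channels, 3, 1))  # 1x1 projection back to RGB
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, 3, H, W) noisy image
        return self.net(x)
```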
This study aimed to use response surface methodology (RSM) to evaluate the effects of various experimental conditions on the removal of levofloxacin (LVX) from aqueous solution by the electrocoagulation (EC) technique with stainless steel electrodes. The EC process was achieved successfully, with a LVX removal efficiency of 90%. Regression analysis showed that the experimental data are best fitted by a second-order polynomial model, with a predicted correlation coefficient (pred. R2) of 0.723, an adjusted correlation coefficient (adj. R2) of 0.907, and a correlation coefficient (R2) of 0.952. This shows that the predicted model and the experimental values are in good agreement.
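For reference, the second-order polynomial model that RSM fits has the standard form below, where ŷ is the predicted LVX removal, the x_i are the k coded experimental factors, the β are regression coefficients, and ε is the error term; the actual factors and fitted coefficients are those of the study and are not reproduced here.

```latex
\hat{y} = \beta_0
        + \sum_{i=1}^{k} \beta_i x_i
        + \sum_{i=1}^{k} \beta_{ii} x_i^{2}
        + \sum_{i<j} \beta_{ij} x_i x_j
        + \varepsilon
```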
Electrocoagulation is an electrochemical method for the treatment of different types of wastewater, whereby sacrificial anodes corrode to release an active coagulant (usually aluminium or iron cations) into solution, while the simultaneous evolution of hydrogen at the cathode allows pollutant removal by flotation or settling. The Taguchi method was applied as an experimental design to determine the best conditions for chromium (VI) removal from wastewater. Various parameters were investigated in a batch stirred tank with iron electrodes: pH, initial chromium concentration, current density, distance between electrodes, and KCl concentration; the results were analyzed using the signal-to-noise (S/N) ratio. It was found that the r…
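Since Cr(VI) removal efficiency is a response to be maximized, the analysis most likely uses Taguchi's larger-the-better S/N ratio, shown below in its standard form (an assumption based on common practice; the abstract does not say which criterion was used). Here the y_i are the n replicate responses of a trial, and a larger S/N indicates better, more robust conditions.

```latex
S/N = -10 \log_{10}\!\left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^{2}} \right)
```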
The present study investigates the relation between biliteral and triliteral roots, as an introduction to understanding the nature of Semitic roots during their early stage of development, when they did not conform to a single pattern. The present research is not meant to settle the question of biliteral roots in the Semitic languages; rather, it is meant to confirm the predominance of triliteral roots in these languages, which is partially attributable to the analogy adopted by the majority of linguists. This tendency is frequently seen in languages that incline to overgeneralize the triliteral phenomenon, i.e., to transfer biliteral roots into the triliteral category, subjecting them to the predominant pattern regarding the r…
This study investigated the impact of different nanoparticles on diesel fuel characteristics. Iraqi diesel fuel was supplied from the al-Dura refinery and treated to enhance engine performance by improving its characteristics. Two types of nanoparticles were mixed with the Iraqi diesel fuel at weight fractions of 30, 60, 90, and 120 ppm. The diesel engine was tested at a constant speed of 1600 rpm to evaluate its performance and determine emissions. In general, the performance analysis showed that ZnO additives are more efficient for diesel engines than CeO. The engine tests showed that nanoparticle weight fractions of 90 and 120 ppm give similar performance, …
Realizing the full potential of wireless sensor networks (WSNs) raises many design issues, particularly trade-offs among conflicting objectives such as maximizing route overlap for efficient data aggregation and minimizing total link cost. While data aggregation routing protocols and link cost functions in WSNs have been considered comprehensively in the literature, a trade-off between the two has not yet been addressed. In this paper, a comprehensive weight for trading off the different objectives is employed: the so-called weighted data aggregation routing strategy (WDARS), which aims to maximize route overlap for efficient data aggregation while minimizing link cost.
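WDARS's actual weighting is defined in the paper itself; the sketch below is only a minimal illustration of the idea of scalarizing the two objectives, assuming both are normalized to [0, 1]. The function and parameter names are hypothetical.

```python
def route_weight(link_cost: float, overlap: float, alpha: float = 0.5) -> float:
    """Composite routing weight (lower is better), blending two objectives.

    alpha trades off the link cost to be minimized against the route
    overlap to be rewarded; both inputs are assumed normalized to [0, 1].
    """
    return alpha * link_cost - (1.0 - alpha) * overlap

# Example: a slightly costlier route can still win if it overlaps more.
print(route_weight(0.6, 0.9))  # -0.15
print(route_weight(0.4, 0.1))  #  0.15
```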
The Dirichlet process is a fundamental object in nonparametric Bayesian modelling, applied to a wide range of problems in machine learning, statistics, and bioinformatics, among other fields. This flexible stochastic process models rich data structures with an unknown or evolving number of clusters. It is a valuable tool for encoding the true complexity of real-world data in computer models. Our results show that the Dirichlet process improves, in both distribution density and signal-to-noise ratio, with larger sample size; achieves a slow decay rate to its base distribution; has improved convergence and stability; and thrives with a Gaussian base distribution, which performs much better than the Gamma distribution. The performance depends …
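A minimal sketch of simulating such a Dirichlet process via the standard truncated stick-breaking construction, with the Gaussian base distribution the abstract reports working best; the truncation level and base parameters are illustrative assumptions.

```python
import numpy as np

def dp_stick_breaking(alpha=1.0, n_atoms=100, rng=None):
    """Draw a truncated sample from DP(alpha, G0) with G0 = N(0, 1).

    Returns mixture weights (summing to ~1) and their atom locations.
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=n_atoms)          # stick fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                         # broken stick lengths
    atoms = rng.normal(0.0, 1.0, size=n_atoms)          # draws from the base G0
    return weights, atoms

weights, atoms = dp_stick_breaking(alpha=2.0)
print(weights.sum())  # close to 1; the remainder is lost to truncation
```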
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that impairs speech, social interaction, and behavior. Machine learning is a field of artificial intelligence that focuses on creating algorithms that can learn patterns and classify ASD from input data. The results of using machine learning algorithms to categorize ASD have been inconsistent, and more research is needed to improve classification accuracy. To address this, deep learning, in the form of a one-dimensional convolutional neural network (1D CNN), is proposed as an alternative for ASD detection and classification. The proposed techniques are evaluated on three publicly available ASD datasets (children, adolescents, and adults). Results strongly suggest that the 1D CNN …
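A minimal PyTorch sketch of a 1D CNN classifier of the kind described; the layer sizes and binary output are illustrative assumptions, not the paper's reported architecture.

```python
import torch.nn as nn

class ASD1DCNN(nn.Module):
    """Small 1D CNN: two conv blocks, global pooling, linear classifier."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse to one value per channel
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 1, n_features) screening features
        return self.classifier(self.features(x).squeeze(-1))
```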
This paper proposes a new method for network self-fault management (NSFM) based on two technologies: intelligent agents to automate fault management tasks, and Windows Management Instrumentation (WMI) to identify faults faster when resources are independent (different types of devices). The proposed NSFM reduces network traffic load by reducing requests and responses between the server and clients, which achieves less downtime for each node when a fault occurs in the client. The performance of the proposed system is measured by three metrics: efficiency, availability, and reliability. A high average efficiency is obtained, depending on the faults occurring in the system, reaching …
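The abstract does not define its three measures; as an assumption based on standard practice, availability and reliability are commonly computed from the mean time between failures (MTBF) and mean time to repair (MTTR) as below, with reliability given under an exponential failure model.

```latex
A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}},
\qquad
R(t) = e^{-t/\mathrm{MTBF}}
```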