Deep learning has recently received a lot of attention as a feasible solution to a variety of artificial intelligence problems. Convolutional neural networks (CNNs) outperform other deep learning architectures, as well as other machine learning methods, in object identification and recognition applications. Speech recognition, pattern analysis, and image identification all benefit from deep neural networks. When performing image operations on noisy images, such as fog removal or low-light enhancement, image processing methods such as filtering or image enhancement are required. The study shows the effect of using a multi-scale deep learning Context Aggregation Network (CAN) on a Bilateral Filtering Approximation (BFA) for de-noising noisy CCTV images. A datastore is used to manage our dataset, which is an object or collection of data too large to fit in memory; it allows data located in multiple files to be read, managed, and processed as a single entity. The CAN architecture provides integral deep learning layers such as input, convolution, batch normalization, and Leaky ReLU layers to construct the multi-scale network. It is also possible to add custom layers, such as adaptive normalization layers with parameters (µ) and (λ), to the network. The performance of the developed CAN approximation operator of bilateral filtering on noisy images is demonstrated by improving both a noisy reference image and a foggy CCTV image. Three image evaluation metrics (SSIM, NIQE, and PSNR) evaluate the developed CAN approximation visually and quantitatively by comparing the created de-noised image with the reference image. Compared with the input noisy image, these evaluation metrics for the developed CAN de-noised image were (0.92673/0.76253, 6.18105/12.1865, and 26.786/20.3254), respectively.
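For concreteness, a minimal sketch of a CAN-style multi-scale block of the kind described above, assuming a PyTorch setting; the layer width, depth, and dilation schedule are illustrative assumptions, not the exact configuration used in the study.

```python
# Sketch of a Context Aggregation Network (CAN) for operator approximation
# (e.g. approximating a bilateral filter); widths/depth are illustrative.
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    """Adaptive normalization: lambda * x + mu * BatchNorm(x),
    with lambda and mu learned scalars (CAN-style custom layer)."""
    def __init__(self, channels):
        super().__init__()
        self.lam = nn.Parameter(torch.ones(1))   # lambda
        self.mu = nn.Parameter(torch.zeros(1))   # mu
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.lam * x + self.mu * self.bn(x)

def can_block(in_ch, out_ch, dilation):
    # 3x3 convolution with increasing dilation aggregates multi-scale context
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
        AdaptiveNorm(out_ch),
        nn.LeakyReLU(0.2),
    )

class CAN(nn.Module):
    def __init__(self, channels=24, depth=5):
        super().__init__()
        layers = [can_block(3, channels, 1)]
        for d in range(1, depth):
            layers.append(can_block(channels, channels, 2 ** d))  # dilation 2, 4, 8, ...
        layers.append(nn.Conv2d(channels, 3, kernel_size=1))      # map back to RGB
        self.net = nn.Sequential(*layers)

    def forward(self, x):   # x: (N, 3, H, W) noisy image in [0, 1]
        return self.net(x)
```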
The population has been trying to use clean energy instead of combustion fuels. The choice was to use liquefied petroleum gas (LPG) for domestic use, especially for cooking, owing to its advantages: it is a light gas, lower in cost, and a clean energy source. Residential complexes are supplied with liquefied petroleum gas for each housing unit, transported by pipes from LPG tanks to the equipment. This research aims to simulate the design and performance of the LPG system in a building, applied to a residential complex in Baghdad taken as a case study with eight buildings. The building has 11 floors, and each floor has four apartments. The design in this study has been done in two parts: part one is the design of an LPG system for one building, an...
This paper presents the results of experimental investigations to predict the bearing capacity of a square footing on geogrid-reinforced loose sand by performing model tests. The effects of several parameters were studied in order to examine the general behavior of improving the soil by using geogrid. These parameters include the eccentricity value, the depth of the first layer of reinforcement, and the vertical spacing of reinforcement layers. The results of the experimental work indicated that there was an optimum reinforcement embedment depth at which the bearing capacity was the highest when single-layer reinforcement was used. Increasing (z/B) (vertical spacing of reinforcement layers/width of footing) above 1.5 has no effect on the re...
Realizing the full potential of wireless sensor networks (WSNs) highlights many design issues, particularly the trade-offs between multiple conflicting objectives such as maximizing route overlapping for efficient data aggregation and minimizing the total link cost. While the issues of data aggregation routing protocols and link cost functions in WSNs have been comprehensively considered in the literature, a trade-off between these two has not yet been addressed. In this paper, a comprehensive weight for the trade-off between different objectives has been employed in the so-called weighted data aggregation routing strategy (WDARS), which aims to maximize the overlap of routes for efficient data aggregation and link cost...
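As an illustration of the trade-off idea (not the WDARS protocol itself), a small sketch of scoring candidate next hops by combining a normalized route-overlap term and a normalized link-cost term under a single weight; the node names and values are hypothetical.

```python
# Illustrative weighted trade-off between route overlap and link cost.
def combined_score(overlap, link_cost, max_overlap, max_cost, w=0.5):
    """Higher is better: reward overlap with existing aggregation routes,
    penalize link cost, trading the two off with weight w in [0, 1]."""
    norm_overlap = overlap / max_overlap if max_overlap else 0.0
    norm_cost = link_cost / max_cost if max_cost else 0.0
    return w * norm_overlap - (1.0 - w) * norm_cost

# Candidate next hops: (route overlap with existing paths, link cost)
candidates = {"node_a": (3, 2.0), "node_b": (1, 0.5), "node_c": (2, 1.0)}
max_ov = max(v[0] for v in candidates.values())
max_c = max(v[1] for v in candidates.values())
best = max(candidates, key=lambda n: combined_score(*candidates[n], max_ov, max_c, w=0.6))
print("selected next hop:", best)
```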
The need for an intellectual understanding of context from many aspects dictates understanding the ways in which the graphic designer simulates the intent of the design process and elevates it to levels of communicative perception that convey the idea to the recipient. It is thus a need closely related to the context, whether historical, cultural, or social, and to the mechanisms of selecting and deploying the elements and units of the graphic and design achievement. On this basis, the role of context in graphic design can be studied.
The research comprises four chapters: the first chapter presents the research problem and the need for it, and the aim of the research was (discovering the...
Neural cryptography deals with the problem of “key exchange” between two neural networks by using the mutual learning concept. The two networks exchange their outputs (in bits), and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
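A common instantiation of this mutual-learning key exchange is a pair of tree parity machines; the sketch below assumes that setting, and the parameters K, N, L and the Hebbian update rule are illustrative, not necessarily those used here.

```python
# Mutual learning of two tree parity machines (TPMs) on a common public input.
import numpy as np

K, N, L = 3, 10, 3  # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1                    # break ties
    return sigma, int(np.prod(sigma))         # hidden signs, overall output bit

def hebbian_update(w, x, sigma, tau):
    # Update only the hidden units that agree with the network output
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

rng = np.random.default_rng(0)
wA = rng.integers(-L, L + 1, size=(K, N))     # party A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))     # party B's secret weights

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))      # common public input
    sA, tA = tpm_output(wA, x)
    sB, tB = tpm_output(wB, x)
    if tA == tB:                              # learn only when outputs agree
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    steps += 1

print("synchronized after", steps, "exchanged outputs; key =", wA.flatten())
```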
Many authors have investigated the problem of the early visibility of the new crescent moon after conjunction and proposed many criteria addressing this issue in the literature. This article presents a proposed criterion for early crescent moon sighting based on the performance of a deep-learned pattern-recognizer artificial neural network (ANN). Moon sighting datasets were collected from various sources and used to train the ANN. The new criterion relies on the crescent width and the arc of vision from the edge of the crescent's bright limb. The result of that criterion is a control value indicating the moon's visibility condition, which separates the datasets into four regions: invisible, telescope only, probably visible, and certainly visible.
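A minimal sketch of the kind of pattern-recognizer ANN described, mapping the two features (crescent width, arc of vision) to the four visibility classes; the training samples below are hypothetical placeholders, not the collected datasets.

```python
# Two-input, four-class visibility classifier (placeholder data for shape only).
import numpy as np
from sklearn.neural_network import MLPClassifier

# [crescent width (arcmin), arc of vision (deg)] -- hypothetical samples
X = np.array([[0.1, 2.0], [0.2, 5.0], [0.4, 8.0], [0.7, 12.0]])
y = np.array(["invisible", "telescope only", "probably visible", "certainly visible"])

clf = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.5, 9.0]]))   # predicted visibility class for a new record
```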
In most manufacturing processes, and in spite of statistical control, several process capability indices indicate non-conformance of the true mean (µc) to the target mean (µT), and the variation is also high. In this paper, data have been analyzed and studied for a blow-molded plastic product, the Zahi Bottle (ZB). WinQSB software was used to facilitate statistical process control, process capability analysis, and some capability indices. The relationships between the different process capability indices and the true mean of the process were represented, and then with the standard deviation (σ), to achieve a process capability value that can reduce the standard deviation and improve production out of theoretical con...
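For reference, the standard definitions of the common capability indices, written in terms of the specification limits (USL, LSL), the standard deviation σ, the true mean µc, and the target mean µT (a general reminder, not values specific to this study):

```latex
C_p = \frac{USL - LSL}{6\sigma}, \qquad
C_{pk} = \min\!\left(\frac{USL - \mu_c}{3\sigma},\; \frac{\mu_c - LSL}{3\sigma}\right), \qquad
C_{pm} = \frac{USL - LSL}{6\sqrt{\sigma^{2} + (\mu_c - \mu_T)^{2}}}
```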
This work deals with the thermal cracking of slack wax produced as a byproduct of the solvent dewaxing process of a medium lubricating oil fraction in the AL-Dura refinery. The thermal cracking process was carried out at temperatures ranging from 480 to 540 °C and atmospheric pressure. The liquid hourly space velocity (LHSV) for thermal cracking was varied between 1.0 and 2.5 h⁻¹. It was found that the conversion increased (61-83%) with increasing reaction temperature (480-540 °C) and decreased (83-63%) with increasing liquid hourly space velocity (1.0-2.5 h⁻¹).
The maximum gasoline yield obtained by the thermal cracking process (48.52 wt.% of feed) was obtained at 500 °C and a liquid hourly space velocity of 1.0 h⁻¹. The liquid product obtained at the best op...
The paper proposes a methodology for predicting packet flow at the data plane in a smart SDN based on an intelligent spiking neural network (SNN) controller. This methodology is applied to predict the subsequent step of the packet flow, consequently reducing the congestion that might occur. In the proposed model, the centralized controller acts as a reactive controller for managing the cluster-head selection process in the Software Defined Network data layer. The simulation results show the capability of the spiking neural network controller in the SDN control layer to improve the quality of service (QoS) of the whole network in terms of minimizing the packet loss ratio and increasing the buffer utilization ratio.
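A minimal sketch of the leaky integrate-and-fire neuron that underlies a spiking neural network of this kind, driven by a hypothetical normalized packet-arrival rate as the input signal; the threshold and decay values are illustrative only.

```python
# Leaky integrate-and-fire (LIF) neuron: the basic unit of a spiking network.
import numpy as np

def lif_spikes(input_current, threshold=1.0, decay=0.9):
    """Integrate input over time, leak with `decay`, emit a spike (1) and
    reset the membrane potential whenever the threshold is crossed."""
    mem, spikes = 0.0, []
    for i in input_current:
        mem = decay * mem + i
        if mem >= threshold:
            spikes.append(1)
            mem = 0.0          # reset after firing
        else:
            spikes.append(0)
    return spikes

# Encode a (hypothetical) normalized packet-arrival rate as the input current
rates = np.clip(np.random.default_rng(1).normal(0.4, 0.2, size=20), 0, 1)
print(lif_spikes(rates))
```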