Dust is a frequent contributor to health risks and climate change, one of the most dangerous issues facing people today. Desertification, drought, agricultural practices, and sand and dust storms from neighboring regions bring on this issue. A deep learning (DL) regression model based on long short-term memory (LSTM) is proposed to increase the accuracy of dust forecasting and monitoring. The proposed system has two parts for detecting and monitoring dust: in the first step, LSTM and dense layers are used to build the dust-detection model, while in the second step, the proposed Wireless Sensor Network (WSN) and Internet of Things (IoT) model is used for forecasting and monitoring. The DL system was trained and tested on historical dust data collected from the Iraqi Meteorological Organization and Seismology (IMOS) raw dataset of 17,023 rows and 10 columns. The LSTM model keeps runtime, computational complexity, and the number of layers small while remaining effective and accurate for dust prediction. The simulation results reveal that the model's test mean square error (MSE) reaches 0.12877 and its test mean absolute error (MAE) is 0.07411 at the same learning rates and exact feature-vector values in the dense layer, a fully connected neural network layer attached to the proposed LSTM training model. Finally, the suggested model enhances monitoring performance.
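A minimal sketch of the detection part described above (LSTM plus dense layers feeding a regression output), assuming windowed inputs over the 10 IMOS columns; the window length, layer sizes, and variable names are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed shapes: windows of 24 past observations over the 10 IMOS
# columns, predicting the next dust reading (hypothetical setup).
TIMESTEPS, FEATURES = 24, 10

model = keras.Sequential([
    keras.Input(shape=(TIMESTEPS, FEATURES)),
    layers.LSTM(64),                       # recurrent feature extractor
    layers.Dense(32, activation="relu"),   # dense layer from the abstract
    layers.Dense(1),                       # regression output (dust level)
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="mse", metrics=["mae"])

# X_train: (n_samples, TIMESTEPS, FEATURES), y_train: (n_samples,)
# model.fit(X_train, y_train, validation_split=0.2, epochs=50)
```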
In this research we study the multi-level (partial pooling) model, one of the most important and most widely applied multi-level models in data analysis. This model is characterized by treatments that take a hierarchical, or structural, form. Full maximum likelihood (FML) was used to estimate the parameters of the partial pooling model (fixed and random), and the fitted models were compared for preference. The application was to suspended dust data in Iraq covering four and a half years; eight stations were selected randomly from among the stations in Iraq. We used Akaike's Information Criterion (AIC) for the comparison.
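A minimal sketch of a partial pooling fit under FML, assuming hypothetical column names ('dust', 'month', 'station') and using statsmodels' MixedLM, where reml=False requests full maximum likelihood rather than REML:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical columns: 'dust' (suspended dust reading), 'month'
# (time predictor), 'station' (one of the eight sampled stations).
df = pd.read_csv("suspended_dust.csv")

# Partial pooling: a fixed effect for the trend plus a random
# intercept per station; reml=False gives full ML (FML) estimates.
model = sm.MixedLM.from_formula("dust ~ month", groups="station", data=df)
result = model.fit(reml=False)

print(result.summary())
print("AIC:", result.aic)  # used to compare candidate models
```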
Concerns about the environment, the cost of energy, and safety make low-energy cold-mix asphalt materials very attractive as a potential replacement for present-day hot mix asphalt. The main disadvantage of cold bituminous emulsion mixtures is their poor early-life strength, meaning they require a long time to achieve mature strength. This research work aims to study the potential utilization of waste and by-product materials as a filler in cold emulsion mixtures with mechanical properties comparable to those of traditional hot mix asphalt. Accordingly, cold mix asphalt was prepared utilizing paper sludge ash (PSA) and cement kiln dust (CKD) as a substitution for conventional mineral filler with percentages ranging fro
In this paper, some relations between flows and the enveloping semigroup are studied. These relations allow associating properties of the topological compactification with any pointed flow, and they enable us to study a number of properties of flows through the corresponding algebraic properties. Proofs of some theorems about these relations are also given.
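For reference, the standard definition of the enveloping (Ellis) semigroup of a flow, stated here in common notation rather than the paper's own:

```latex
% Enveloping (Ellis) semigroup of a flow (X, T): the closure of the
% acting transformations inside the product space X^X.
\[
  E(X) \;=\; \overline{\{\, \pi^{t} : X \to X \mid t \in T \,\}}
  \;\subseteq\; X^{X},
\]
% with the closure taken in the product topology; composition of maps
% makes E(X) a compact right-topological semigroup.
```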
Today in the digital realm, where images constitute a massive share of the social media base but unfortunately suffer from the two issues of size and transmission, compression is the ideal solution. Pixel-based techniques are modern, spatially optimized modeling techniques on deterministic and probabilistic bases that employ a mean, an index, and a residual. This paper introduces adaptive pixel-based coding techniques for the probabilistic part of a lossy scheme by incorporating the MMSA of the C321 base, along with the lossless utilization of the deterministic part. The tested results achieved higher size-reduction performance compared to the traditional pixel-based techniques and the standard JPEG by about 40% and 50%, respectively.
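A generic sketch of the mean/index/residual decomposition that pixel-based coders build on; the paper's MMSA and C321-base specifics are not reproduced here, and the 8x8 block size is an assumption:

```python
import numpy as np

def encode_block(block: np.ndarray):
    """Split a pixel block into (mean, sign index, residual magnitude)."""
    mean = int(block.mean())
    residual = block.astype(np.int16) - mean   # small signed values
    index = (residual >= 0).astype(np.uint8)   # binary sign map (index)
    return mean, index, np.abs(residual)

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # stand-in block
mean, index, mag = encode_block(block)

# The decoder reverses the split: pixel = mean +/- magnitude by sign.
restored = mean + (2 * index.astype(np.int16) - 1) * mag
assert np.array_equal(restored.astype(np.uint8), block)
```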
This study investigates the feasibility of a mobile robot navigating and discovering its location in unknown environments, followed by the creation of maps of these navigated environments for future use. First, a real mobile robot named TurtleBot3 Burger was used to achieve the simultaneous localization and mapping (SLAM) technique for a complex environment with 12 obstacles of different sizes, based on the Rviz library, which is built on the robot operating system (ROS) booted in Linux. It is possible to control the robot and perform this process remotely by using an Amazon Elastic Compute Cloud (Amazon EC2) instance service. Then, the map was uploaded to the Amazon Simple Storage Service (Amazon S3) cloud. This provides a database
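A minimal sketch of the map-upload step, assuming the map was saved with ROS's map_saver (which typically produces a .pgm image plus a .yaml metadata file); the bucket and key names are hypothetical:

```python
import boto3

# Hypothetical file, bucket, and key names; credentials are assumed
# to be configured in the environment (e.g., on the EC2 instance).
s3 = boto3.client("s3")
for name in ("slam_map.pgm", "slam_map.yaml"):
    s3.upload_file(name, "turtlebot3-maps", f"runs/latest/{name}")
print("SLAM map uploaded to S3")
```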
This study includes adding chemicals to gypseous soil to improve its collapse characteristics. The collapse behavior of gypseous soil brought from the north of Iraq (Salah El-Deen governorate), with a gypsum content of 59%, was investigated using five types of additives (cement dust, powder sodium meta-silicate, powder activated carbon, sodium silicate solution, and granular activated carbon). The soil was mixed by weight with cement dust (10, 20, and 30%), powder sodium meta-silicate (6%), powder activated carbon (10%), sodium silicate solution (3, 6, and 9%), and granular activated carbon (5, 10, and 15%). The collapse potential is reduced by 86, 71, 43, 37, and 35% when 30% cement dust, 6% powder sodium meta-silicate, 10% powder activated
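For reference, the collapse potential quoted above is conventionally computed from an oedometer wetting test (as in ASTM D5333); this definition is standard and not specific to this paper:

```latex
% Collapse potential from the change in void ratio upon wetting
% at a given stress, relative to the initial void ratio.
\[
  CP(\%) \;=\; \frac{\Delta e}{1 + e_{0}} \times 100 ,
\]
% where \Delta e is the reduction in void ratio caused by soaking
% and e_0 is the natural (initial) void ratio.
```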
Watermarking can be defined as the process of embedding special, reversible information in important, secure files to protect the ownership or information of the cover file, based here on the proposed singular value decomposition (SVD) watermark. The proposed digital watermark method has a very large domain for constructing the final number, which protects the watermark from collisions. The cover file is the important image that needs to be protected. A hidden watermark is a unique number extracted from the cover file by performing the proposed related and successive operations, starting by dividing the original image into four parts of unequal size. Each of these four parts is treated as a separate matrix, and applying SVD
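A sketch of the per-part step the abstract describes: split the cover image into four unequal parts and take each part's SVD. The split ratios and the way singular values combine into the final number are assumptions, not the paper's exact procedure:

```python
import numpy as np

img = np.random.rand(256, 256)          # stand-in cover image
r, c = 96, 160                          # unequal split point (assumed)
parts = [img[:r, :c], img[:r, c:], img[r:, :c], img[r:, c:]]

signature = []
for part in parts:
    s = np.linalg.svd(part, compute_uv=False)  # singular values only
    signature.append(s[0])              # keep the largest per part

watermark = round(sum(signature), 6)    # one number derived from the image
print("extracted watermark:", watermark)
```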