Dust is a frequent contributor to health risks and climate change, one of the most serious issues facing people today. The problem is driven by desertification, drought, agricultural practices, and sand and dust storms arriving from neighboring regions. A deep learning (DL) regression model based on long short-term memory (LSTM) is proposed to improve the accuracy of dust forecasting and monitoring. The proposed system has two parts: in the first, LSTM and dense layers are used to build a dust-detection model; in the second, a Wireless Sensor Network (WSN) and Internet of Things (IoT) model is used for forecasting and monitoring. The DL system was trained and tested on historical dust data collected from the Iraqi Meteorological Organization and Seismology (IMOS) raw dataset of 17,023 rows and 10 columns. The LSTM model achieves low running time, computational complexity, and layer count while remaining effective and accurate for dust prediction. Simulation results show that the model reaches a test mean square error (MSE) of 0.12877 and a test mean absolute error (MAE) of 0.07411 at the same learning rates and feature-vector values in the dense layer, a fully connected neural network layer attached to the proposed LSTM training model. Finally, the suggested model enhances monitoring performance.
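As an illustrative sketch of the detection part, the forward pass of one LSTM cell followed by a dense regression head can be written directly in NumPy. All dimensions, weights, and the random input window below are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b                  # stacked pre-activations for i, f, o, g
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))           # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))        # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))      # output gate
    g = np.tanh(z[3*n:])                   # candidate cell state
    c_new = f * c + i * g                  # cell state update
    h_new = o * np.tanh(c_new)             # hidden state output
    return h_new, c_new

rng = np.random.default_rng(0)
n_features, n_hidden, n_steps = 10, 8, 5   # 10 columns as in the dataset; rest invented
W = rng.normal(0, 0.1, (4 * n_hidden, n_features))
U = rng.normal(0, 0.1, (4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
w_dense = rng.normal(0, 0.1, n_hidden)     # dense regression head

h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for _ in range(n_steps):                   # run over a window of past observations
    x_t = rng.normal(size=n_features)
    h, c = lstm_step(x_t, h, c, W, U, b)
y_hat = w_dense @ h                        # scalar dust-level prediction
```

In practice the weights would be learned by backpropagation through time; this sketch only shows the recurrence that a framework such as Keras or PyTorch implements internally.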
To take measures for controlling soil erosion, soil loss must be estimated over the area of interest. Soil loss due to erosion can be estimated using predictive models such as the Universal Soil Loss Equation (USLE). The accuracy of these models depends on the parameters used in their equations. One of the most important parameters is the C factor, which represents the effects of vegetation and other land cover. Estimating land cover from remote sensing imagery involves the Normalized Difference Vegetation Index (NDVI), an indicator of vegetation cover. The aim of this study is to estimate C factor values for part of Baghdad city using NDVI derived from a Landsat-7 satellite image.
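The NDVI computation, together with one commonly used empirical NDVI-to-C relation (attributed to Van der Knijff et al., shown here only as an example; the study's exact relation may differ), can be sketched as:

```python
import numpy as np

# Hypothetical reflectance arrays standing in for Landsat-7 ETM+ band 3 (red)
# and band 4 (near-infrared); real values would come from the satellite image.
red = np.array([[0.10, 0.20], [0.15, 0.30]])
nir = np.array([[0.40, 0.25], [0.45, 0.35]])

ndvi = (nir - red) / (nir + red)          # NDVI in [-1, 1]; high = dense vegetation

# Example empirical relation: C = exp(-alpha * NDVI / (beta - NDVI)),
# with alpha = 2 and beta = 1 as commonly assumed parameter values.
alpha, beta = 2.0, 1.0
c_factor = np.exp(-alpha * ndvi / (beta - ndvi))
```

Dense vegetation (high NDVI) maps to a small C factor, i.e. strong protection against erosion, while bare soil maps to a C factor near 1.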
The object of the present study was to monitor the changes that had occurred in the main features (water, vegetation, and soil) of the Al-Hammar Marsh region. To fulfill this goal, satellite images from different times were used: MSS 1973, TM 1990, ETM+ 2000, and MODIS 2010. K-Means, an unsupervised classification, and Neural Net, a supervised classification, were used to classify the satellite images; finally, an adaptive classification was applied, which performs supervised classification on the result of the unsupervised classification. The ENVI software was used in this study.
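The unsupervised step can be sketched with a minimal K-Means implementation over pixel spectral vectors; the toy image, band count, and class centres below are invented for illustration:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-Means: assign each pixel's spectral vector to the nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Distance of every pixel to every centroid, then nearest-centroid labels.
        d = np.linalg.norm(pixels[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # keep old centroid if cluster emptied
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# Toy "image": 200 pixels with 4 spectral bands, drawn around 3 class centres
# standing in for water, vegetation, and soil signatures.
rng = np.random.default_rng(1)
classes = np.array([[0.1, 0.1, 0.1, 0.1],
                    [0.5, 0.6, 0.4, 0.5],
                    [0.9, 0.8, 0.9, 0.7]])
pixels = classes[rng.integers(0, 3, 200)] + rng.normal(0, 0.02, (200, 4))
labels, centroids = kmeans(pixels, k=3)
```

In the adaptive scheme described above, the resulting cluster labels would then serve as training regions for a subsequent supervised (e.g. neural-network) classifier.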
This study investigates adding chemicals to gypseous soil to improve its collapse characteristics. The collapse behavior of gypseous soil brought from the north of Iraq (Salah El-Deen governorate), with a gypsum content of 59%, was investigated using five types of additives (cement dust, powder sodium meta-silicate, powder activated carbon, sodium silicate solution, and granular activated carbon). The soil was mixed by weight with cement dust (10, 20, and 30%), powder sodium meta-silicate (6%), powder activated carbon (10%), sodium silicate solution (3, 6, and 9%), and granular activated carbon (5, 10, and 15%). The collapse potential is reduced by 86, 71, 43, 37, and 35% when 30% cement dust, 6% powder sodium meta-silicate, 10% powder activated
Palm vein recognition is a biometric system used for identification and verification, since each person has unique vein characteristics. In this paper, an improved palm vein recognition system is presented. The system is based on centerline extraction of the veins and employs the Difference-of-Gaussian (DoG) function to construct the feature vector. Test results on our database show an identification rate of 100%, with a minimum error rate of 0.333.
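The Difference-of-Gaussian step can be sketched in one dimension on a toy intensity profile containing a dark, vein-like dip; the band-pass response peaks around the dip. The profile, sigmas, and kernel radius are illustrative assumptions, and real palm images would use the 2-D analogue:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel on [-radius, radius]."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def dog_filter(signal, s1, s2, radius=10):
    """Difference-of-Gaussian: band-pass response that highlights vein-like features."""
    g1 = gaussian_kernel(s1, radius)
    g2 = gaussian_kernel(s2, radius)
    padded = np.pad(signal, radius, mode="edge")   # avoid zero-padding edge artifacts
    b1 = np.convolve(padded, g1, mode="valid")     # fine-scale blur
    b2 = np.convolve(padded, g2, mode="valid")     # coarse-scale blur
    return b1 - b2

# Toy 1-D intensity profile with a dark "vein" dip in the middle
profile = np.ones(64)
profile[28:36] = 0.3
response = dog_filter(profile, s1=1.0, s2=3.0)
```

The magnitude of the DoG response is largest near the dip, which is why thresholding such responses is a common way to localize vein centerlines before building the feature vector.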
Watermarking can be defined as the process of embedding special, reversible information in important secure files to protect the ownership or contents of a cover file; here it is based on a proposed singular value decomposition (SVD) watermark. The proposed digital watermark has a very large domain for constructing the final number, which protects the watermark from collisions. The cover file is the important image that needs to be protected. The hidden watermark is a unique number extracted from the cover file by a sequence of proposed, successive operations, starting by dividing the original image into four parts of unequal size. Each of these four parts is treated as a separate matrix, to which SVD is applied
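A minimal sketch of the idea, assuming a hypothetical 8x8 cover image, four unequal parts, and an invented rule for combining the largest singular values (the paper's exact partitioning and combination rules are not reproduced here):

```python
import numpy as np

# Hypothetical 8x8 "cover image" with random pixel intensities.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (8, 8)).astype(float)

# Four parts of unequal size: 3x3, 3x5, 5x3, and 5x5 blocks.
parts = [img[:3, :3], img[:3, 3:], img[3:, :3], img[3:, 3:]]

# Largest singular value of each part; singular values are stable under small
# perturbations, which is why they are attractive for watermark construction.
sigmas = [np.linalg.svd(p, compute_uv=False)[0] for p in parts]

# Illustrative combination rule (not the paper's): sum and truncate to an integer.
watermark = int(sum(sigmas))
```

Verification would recompute the same number from a candidate image and compare it against the stored watermark.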
In this research we present the signature as a key for biometric authentication. Moment invariants are used as a tool to decide whether a given signature belongs to a certain person. Eighteen volunteers gave 108 signatures as samples to test the proposed system, six samples from each person. Moment invariants are used to build a feature vector stored in the system. A Euclidean distance measure is used to compute the distance between each person's stored signature and a newly acquired sample from the same person, in order to make a decision about the new signature. Each signature was acquired by a scanner in JPG format at 300 DPI. MATLAB was used to implement the system.
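Translation-invariant moment features and the Euclidean matching rule can be sketched with the first two Hu moment invariants; the two toy "signature" images below are assumptions for illustration:

```python
import numpy as np

def hu_invariants(img):
    """First two Hu moment invariants of a grayscale/binary image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid

    def mu(p, q):                         # central moment (translation-invariant)
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

    def eta(p, q):                        # normalized central moment (scale-invariant)
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

# Two toy "signature" images: the same shape, translated by one row.
a = np.zeros((16, 16)); a[4:12, 6:10] = 1.0
b = np.zeros((16, 16)); b[5:13, 6:10] = 1.0
dist = np.linalg.norm(hu_invariants(a) - hu_invariants(b))   # near zero
```

Because the invariants are built from central moments, the translated copy yields an essentially identical feature vector, so the distance falls below any reasonable acceptance threshold; a forged or different signature would produce a larger distance.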
The huge number of documents on the internet has led to a rapid need for text classification (TC), which is used to organize these text documents. In this research paper, a new model based on the Extreme Learning Machine (ELM) is presented. The proposed model consists of several phases: preprocessing, feature extraction, Multiple Linear Regression (MLR), and ELM. The basic idea of the proposed model is to calculate feature weights using MLR; these feature weights, together with the extracted features, are introduced as input to the ELM, producing a weighted Extreme Learning Machine (WELM). The results show the clear superiority of the proposed WELM over the standard ELM.
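The MLR-weighting plus ELM pipeline can be sketched as follows; the synthetic documents, feature dimensions, and hidden-layer size are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
n_docs, n_feats, n_hidden = 60, 5, 20
X = rng.normal(size=(n_docs, n_feats))            # extracted text features (synthetic)
y = (X @ np.array([1.0, -0.5, 0.8, 0.0, 0.3]) > 0).astype(float)  # binary labels

# Step 1: multiple linear regression gives one weight per feature.
w_mlr, *_ = np.linalg.lstsq(X, y, rcond=None)
Xw = X * w_mlr                                     # weighted features fed to the ELM

# Step 2: ELM -- random, untrained hidden layer; output weights solved analytically
# via the Moore-Penrose pseudo-inverse instead of iterative backpropagation.
W_in = rng.normal(size=(n_feats, n_hidden))
H = np.tanh(Xw @ W_in)                             # hidden-layer activations
beta = np.linalg.pinv(H) @ y                       # closed-form output weights

pred = (H @ beta > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The single pseudo-inverse solve is what makes ELM training fast; the MLR weighting simply rescales each feature's influence before the random projection.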