Prediction of daily rainfall is important for flood forecasting, reservoir operation, and many other hydrological applications. Artificial intelligence (AI) algorithms are generally used for stochastic rainfall forecasting, but they are not capable of simulating unseen extreme rainfall events, which are becoming common due to climate change. A new model is developed in this study for predicting daily rainfall at different lead times based on sea level pressure (SLP), which is physically related to rainfall on land and is therefore able to predict unseen rainfall events. Daily rainfall on the east coast of Peninsular Malaysia (PM) was predicted using SLP data over the climate domain. Five advanced AI algorithms, namely extreme learning machine (ELM), Bayesian regularized neural networks (BRNNs), Bayesian additive regression trees (BART), extreme gradient boosting (xgBoost), and the hybrid neural fuzzy inference system (HNFIS), were used considering the complex relationship of rainfall with sea level pressure. Principal components of the SLP domain correlated with daily rainfall were used as predictors. The results revealed the efficacy of the AI models in predicting daily rainfall one day ahead. The relative performance of the models showed that BRNN performed best, with a normalized root mean square error (NRMSE) of 0.678, compared with HNFIS (NRMSE = 0.708), BART (NRMSE = 0.784), xgBoost (NRMSE = 0.803), and ELM (NRMSE = 0.915). Visual inspection of predicted rainfall during model validation using density-scatter plots and other novel visual comparisons confirmed the ability of BRNN to predict daily rainfall one day ahead reliably.
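The predictor pipeline described above (principal components of a gridded SLP field feeding a neural regression model) can be sketched as follows. This is a minimal illustration with synthetic data standing in for the real SLP and rainfall records; the grid size, number of components, and the use of scikit-learn's MLPRegressor (rather than the paper's BRNN) are all illustrative assumptions.

```python
# Sketch: PCA of a gridded SLP field -> neural-network rainfall regression.
# Synthetic data only; shapes and model choices are illustrative, not the paper's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_days, n_grid = 500, 64                  # 500 days on an assumed 8x8 SLP grid
slp = rng.normal(size=(n_days, n_grid))   # stand-in for daily SLP anomalies
rain = slp[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_days)  # toy target

pca = PCA(n_components=10)                # retain the leading SLP modes
pcs = pca.fit_transform(slp)              # principal components as predictors

X_tr, X_te, y_tr, y_te = train_test_split(pcs, rain, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
nrmse = rmse / y_te.std()                 # normalized RMSE, as in the abstract
print(f"NRMSE: {nrmse:.3f}")
```

The same skeleton applies with any of the five regressors compared in the study swapped in for the network.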
The Weibull distribution is one of the three types of the generalized extreme value (GEV) family (the Type-III form), and it plays a crucial role in modeling extreme events in various fields, such as hydrology, finance, and environmental sciences. Bayesian methods play a strong, decisive role in estimating the parameters of the GEV distribution due to their ability to incorporate prior knowledge and handle small sample sizes effectively. In this research, we compare several shrinkage Bayesian estimation methods based on the squared error and the linear exponential (LINEX) loss functions; they were adopted and compared by the Monte Carlo simulation method. The performance of these methods is assessed based on their accuracy and computational efficiency in estimating the parameters.
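The kind of Monte Carlo comparison of Bayes estimators described above can be sketched as follows. For tractability this sketch uses a conjugate gamma prior on an exponential rate as a stand-in for the Weibull/GEV setting; the true parameter, prior hyperparameters, and LINEX shape parameter are illustrative assumptions, not the paper's values.

```python
# Sketch: Monte Carlo comparison of Bayes estimators under squared-error
# vs. LINEX loss, on a conjugate exponential-gamma model (a simple stand-in).
import numpy as np

rng = np.random.default_rng(3)
theta, n, a = 2.0, 10, 1.0           # true rate, small sample, LINEX shape a
alpha0, beta0 = 1.0, 1.0             # gamma prior hyperparameters (assumed)
n_sims = 5000

se_err, linex_err = [], []
for _ in range(n_sims):
    x = rng.exponential(scale=1 / theta, size=n)
    alpha, beta = alpha0 + n, beta0 + x.sum()      # gamma posterior
    d_se = alpha / beta                             # posterior mean (SE loss)
    d_linex = (alpha / a) * np.log(1 + a / beta)    # LINEX-optimal estimate
    se_err.append((d_se - theta) ** 2)
    linex_err.append((d_linex - theta) ** 2)

print("MSE (SE loss):   ", np.mean(se_err))
print("MSE (LINEX loss):", np.mean(linex_err))
```

The LINEX estimator here follows the standard closed form d = -(1/a) ln E[e^{-aθ}], evaluated under the gamma posterior; for a > 0 it shrinks below the posterior mean, penalizing overestimation asymmetrically.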
In this paper, we illustrate a gamma regression model, assuming that the dependent variable (Y) follows a gamma distribution and that its mean (μ) is related to a linear predictor through the identity link function g(μ) = μ. The model also contains a shape parameter which is not constant but depends on its own linear predictor through a log link function. We estimate the parameters of the gamma regression using two estimation methods, maximum likelihood and Bayesian estimation, and compare these methods using the standard criterion of mean squared error (MSE); the two methods were applied to real data.
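The maximum-likelihood side of the gamma regression above can be sketched directly: with an identity link, the mean is the linear predictor itself, and the likelihood is maximized numerically. This sketch uses synthetic data with a single covariate and a constant shape parameter for simplicity; the variable names and optimizer choice are illustrative assumptions.

```python
# Sketch: maximum-likelihood gamma regression with identity link g(mu) = mu.
# Synthetic single-covariate data; constant shape for simplicity.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(1.0, 3.0, size=n)
true_b0, true_b1, true_shape = 2.0, 1.5, 5.0
mu = true_b0 + true_b1 * x
y = rng.gamma(shape=true_shape, scale=mu / true_shape)

def neg_loglik(params):
    b0, b1, log_k = params
    k = np.exp(log_k)                 # shape parameter, kept positive
    m = b0 + b1 * x                   # identity link: mean = linear predictor
    if np.any(m <= 0):
        return np.inf                 # mean must stay positive for a gamma
    return -np.sum(gamma.logpdf(y, a=k, scale=m / k))

res = minimize(neg_loglik, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
b0_hat, b1_hat, k_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(b0_hat, b1_hat, k_hat)
```

A Bayesian fit of the same model would place priors on (b0, b1, k) and sample the posterior, after which the two sets of estimates can be compared by MSE as in the paper.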
Abstract:
Research topic: the ruling on the sale of big data.
Objectives: to state what big data is, its importance, its sources, and the rulings governing it.
Methodology: inductive, comparative, and critical.
Among the most important results: big data is valuable property that may not be infringed upon, and its sale is permissible so long as it does not contain data of users who have not consented to its sale.
Recommendation: follow-up studies addressing the rulings on this issue.
Subject Terms
Ruling, Sale, Data, Big, Opinions, Jurists
The current research examines psychological pressure (educational, environmental, and emotional) at the secondary level for 2013-2014. The research compares students who are trained in physical education with those who are not. The sample comprised 126 students of each gender from the Al-Karkh first education directorate. The research found that physical education has a large effect in lessening emotional and educational pressure among secondary students, which affects their study positively.
In this paper, we propose two new predictor-corrector methods for solving Kepler's equation in the hyperbolic case using a quadrature formula, which plays an important and significant role in the evaluation of the integrals. The two procedures solve the hyperbolic orbit equation in two or three iterations in a very efficient manner, to an accuracy that proves to be always better than 10⁻¹⁵. The solution is examined over a grid of values of the eccentricity e and the hyperbolic mean anomaly M, using first guesses for the hyperbolic eccentric anomaly.
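As a baseline for the predictor-corrector schemes discussed above, the hyperbolic Kepler equation M = e·sinh(H) − H can be solved with a plain Newton iteration; the sketch below is this standard baseline, not the paper's own method, and the starting guess H₀ = asinh(M/e) is one common illustrative choice.

```python
# Sketch: Newton iteration for the hyperbolic Kepler equation
# M = e*sinh(H) - H, solved for the hyperbolic eccentric anomaly H.
import math

def solve_hyperbolic_kepler(M, e, tol=1e-15, max_iter=50):
    """Solve M = e*sinh(H) - H for H, with e > 1 (hyperbolic orbit)."""
    H = math.asinh(M / e)                 # common first guess
    for _ in range(max_iter):
        f = e * math.sinh(H) - H - M      # residual of Kepler's equation
        fp = e * math.cosh(H) - 1.0       # derivative df/dH (always > 0)
        dH = f / fp
        H -= dH
        if abs(dH) < tol:
            break
    return H

H = solve_hyperbolic_kepler(M=5.0, e=1.5)
residual = 1.5 * math.sinh(H) - H - 5.0
print(residual)
```

Newton's method typically needs a handful of iterations here; the paper's quadrature-based correctors aim to reach comparable accuracy in only two or three.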
In this paper we describe several different training algorithms for feed-forward neural networks (FFNNs). All of these algorithms use the gradient of the performance (energy) function to determine how to adjust the weights so that the performance function is minimized, where the back-propagation algorithm has been used to increase the speed of training. The algorithms differ in their computation, and thus in the form of the search direction and the storage requirements; however, none of them has global properties suited to all problems.
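The common core of these algorithms, gradient descent on an energy function with gradients computed by back-propagation, can be sketched on a toy problem. The layer sizes, learning rate, and XOR task below are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: gradient-descent training of a one-hidden-layer FFNN via
# backpropagation on XOR; hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradient of the mean-squared-error energy function
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # steepest-descent weight update
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

mse = np.mean((out - y) ** 2)
print(mse)
```

The variants compared in the paper (e.g., conjugate-gradient or quasi-Newton methods) change only how the search direction is built from this gradient, at the cost of different storage requirements.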
A cut-off low is a closed low with a low value of geopotential height at upper atmospheric levels that has been fully detached (cut off) from the westerly flow and moves independently. Cut-off lows cause extreme rainfall events in mid-latitude regions. The main aim of this paper is to investigate the cut-off low at 500 hPa over Iraq from a synoptic point of view, along with the behavior of geopotential height at 500 hPa. To examine the association of the 500 hPa cut-off low with rainfall events across Iraq, two case studies of heavy rainfall events from different times were conducted. The results showed that a cut-off low at 500 hPa with a low value of geopotential height will strengthen the low-pressure system at the surface, leading to heavy rainfall events.
Different ANN architectures of the MLP type have been trained by back-propagation (BP) and used to analyze Landsat TM images. Two different training approaches have been applied: an ordinary approach (one hidden layer, M-H1-L, and two hidden layers, M-H1-H2-L) and a one-against-all strategy (one hidden layer, (M-H1-1)xL, and two hidden layers, (M-H1-H2-1)xL). Classification accuracy of up to 90% has been achieved using the one-against-all strategy with the two-hidden-layer architecture. The performance of the one-against-all approach is slightly better than that of the ordinary approach.
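The one-against-all strategy above, training one binary M-H1-1 network per class and picking the most confident output, can be sketched as follows. The synthetic features stand in for Landsat TM pixel spectra, and the layer size and class count are illustrative assumptions.

```python
# Sketch: one-against-all multi-class classification with one small MLP
# per class; synthetic data stands in for Landsat TM pixel features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# OneVsRestClassifier fits one binary network per class (the (M-H1-1)xL idea)
clf = OneVsRestClassifier(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"accuracy: {acc:.2f}")
```

Replacing the wrapper with a single multi-output network corresponds to the ordinary M-H1-L approach, which is how the two strategies were compared.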
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining and learning algorithms.
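The multi-resolution idea can be illustrated with a simple stand-in: summarize one-dimensional data as histograms at several bin widths, built once, and let a query choose its resolution to trade accuracy for speed. This is only an illustration of the general principle, not the paper's actual aggregation structure.

```python
# Sketch: one-dimensional data summarized at several resolutions; a query
# picks a resolution to trade accuracy for efficiency. Illustration only.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=100_000)

# build once: histogram counts at several resolutions over a fixed range
edges = {r: np.linspace(-5, 5, 2**r + 1) for r in (4, 6, 8)}
counts = {r: np.histogram(data, bins=e)[0] for r, e in edges.items()}

def approx_mean(r):
    """Estimate the mean from bin centers at resolution r (coarser = cheaper)."""
    centers = 0.5 * (edges[r][:-1] + edges[r][1:])
    return float(np.average(centers, weights=counts[r]))

for r in (4, 6, 8):
    print(r, approx_mean(r))
```

Incremental update amounts to adding each new batch's counts into the same bins, so the summary never has to be rebuilt from scratch.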