In this paper, we propose using the extreme value distribution as the rate of occurrence of a non-homogeneous Poisson process, in order to better model the process's rate of occurrence; the resulting model is called the extreme value process. To estimate the parameters of this process, we use the maximum likelihood method, the method of moments, and a swarm intelligence method, the Artificial Bee Colony (ABC) algorithm, in order to obtain the estimator that best represents the data. The results of the three methods are compared through a simulation of the model, and it is concluded that the ABC estimator is better than the maximum likelihood and method-of-moments estimators at estimating the time rate of occurrence of the proposed extreme value process. The research also includes a real application dealing with the operating periods between successive stoppages of the raw materials plant of the General Company for Northern Cement / Badush Cement Factories (new) during the period from 1/4/2018 to 31/1/2019, in order to estimate the time rate of factory stoppages.
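As an illustration of simulating such a process, the following is a minimal Python sketch using the Lewis-Shedler thinning algorithm with a Gumbel (type-I extreme value) density as the intensity of the non-homogeneous Poisson process. The Gumbel form and the parameter values are assumptions made for illustration; the abstract does not specify the exact extreme value parameterization used.

    import numpy as np

    def gumbel_intensity(t, mu, sigma):
        # Gumbel (type-I extreme value) density taken as the NHPP rate of occurrence
        z = (t - mu) / sigma
        return np.exp(-z - np.exp(-z)) / sigma

    def simulate_nhpp(mu, sigma, t_max, rng):
        # Lewis-Shedler thinning: propose events from a homogeneous process at
        # the maximum intensity (attained at t = mu), then accept each proposal
        # with probability lambda(t) / lambda_max
        lam_max = gumbel_intensity(mu, mu, sigma)
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > t_max:
                break
            if rng.random() < gumbel_intensity(t, mu, sigma) / lam_max:
                times.append(t)
        return np.array(times)

    rng = np.random.default_rng(0)
    events = simulate_nhpp(mu=5.0, sigma=2.0, t_max=20.0, rng=rng)
    print(len(events), events[:5])

Simulated event times like these can then be fed to each of the three estimators to check how well it recovers mu and sigma.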
In real situations, observations and measurements are not exact numbers but are more or less imprecise, i.e., fuzzy. In this paper, we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and the Expectation-Maximization (EM) techniques. In addition, the obtained estimates of the parameters and the reliability function are compared numerically through a Monte Carlo simulation study …
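A minimal sketch of the Newton-Raphson route is given below for crisp (non-fuzzy) data, which is a deliberate simplification: the fuzzy-data likelihood in the paper is more involved. The inverse Weibull density is assumed here as f(x) = beta * lam * x**-(beta + 1) * exp(-lam * x**-beta), so lam has a closed-form MLE given beta, and the profile score in beta is solved by root finding (note that scipy's newton falls back to the secant method when no derivative is supplied).

    import numpy as np
    from scipy.optimize import newton

    def iw_profile_score(beta, x):
        # d(log-likelihood)/d(beta) after substituting the closed-form
        # lam_hat = n / sum(x**-beta) into the inverse Weibull likelihood
        n = len(x)
        s = x ** (-beta)
        return n / beta - np.log(x).sum() + n * (s * np.log(x)).sum() / s.sum()

    def iw_mle(x, beta0=1.0):
        beta_hat = newton(iw_profile_score, beta0, args=(x,))
        lam_hat = len(x) / (x ** (-beta_hat)).sum()
        return lam_hat, beta_hat

    def iw_reliability(t, lam, beta):
        # R(t) = 1 - F(t), with CDF F(t) = exp(-lam * t**-beta)
        return 1.0 - np.exp(-lam * t ** (-beta))

    rng = np.random.default_rng(1)
    u = rng.random(500)
    x = (2.0 / -np.log(u)) ** (1.0 / 1.5)   # inverse Weibull sample, lam=2, beta=1.5
    lam_hat, beta_hat = iw_mle(x)
    print(lam_hat, beta_hat, iw_reliability(1.0, lam_hat, beta_hat))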
This study examines the impact of adopting International Financial Reporting Standards (IFRS) on the value of economic units. Given the global push toward standardized financial reporting to enhance the transparency, comparability, and reliability of financial statements, this research seeks to understand the implications of these standards for economic valuation within a region characterized by unique economic and regulatory challenges. A questionnaire was distributed to 86 Iraqi academics specializing in economics, accounting, and finance to collect their views on the impact of adopting the standards. Through careful statistical analysis, the study concluded that applying International Financial Reporting Standards …
In this paper, a Bayesian analysis is carried out to estimate the reliability of two stress-strength systems: first, the reliability of a one-component system with strength X under a single stress Y; second, the reliability of a one-component strength under three stresses. Here X and Y are independent generalized exponential-Poisson random variables with parameters (α, λ, θ) and (β, λ, θ), respectively. The analysis is based on doubly type-II censored samples using a gamma prior under four different loss functions, namely the quadratic, weighted, linear exponential, and non-linear exponential loss functions. The estimators are compared by the mean squared error criterion in a simulation study. We also find that the mean squared error is …
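For intuition, the stress-strength reliability R = P(Y < X) can be approximated by plain Monte Carlo, as in the sketch below. The sampler assumes the generalized exponential-Poisson CDF form F(x) = [(1 - exp(-theta + theta*exp(-lam*x))) / (1 - exp(-theta))]**alpha; if the paper uses a different parameterization, the inverse-CDF step changes accordingly.

    import numpy as np

    def gep_rvs(alpha, lam, theta, size, rng):
        # Inverse-CDF sampling under the assumed GEP form F(x) = G(x)**alpha,
        # where G(x) = (1 - exp(-theta + theta*exp(-lam*x))) / (1 - exp(-theta))
        u = rng.random(size) ** (1.0 / alpha)
        inner = 1.0 + np.log(1.0 - u * (1.0 - np.exp(-theta))) / theta
        return -np.log(inner) / lam

    def stress_strength_R(alpha, beta, lam, theta, n=200_000, seed=0):
        # Monte Carlo estimate of R = P(Y < X) with X ~ GEP(alpha, lam, theta)
        # and Y ~ GEP(beta, lam, theta), independent
        rng = np.random.default_rng(seed)
        x = gep_rvs(alpha, lam, theta, n, rng)
        y = gep_rvs(beta, lam, theta, n, rng)
        return np.mean(y < x)

    print(stress_strength_R(alpha=2.0, beta=1.0, lam=1.0, theta=0.5))

Because X and Y share the same baseline G with shapes alpha and beta, the estimate should land near the exact value alpha / (alpha + beta) = 2/3 for these inputs, which gives a quick sanity check on the sampler.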
This research aims to study the radiation concentration distribution of the old district of Najaf (Iraq), where 15 samples were taken from prominent sites in the district representing archaeological, religious, and heritage locations. The CR-39 track detector was used to calculate the concentration for three different soil weights from each sample site after a one-month exposure. Geographic information systems (GIS) were used to map the radioactive concentration over the sample sites, where two interpolation methods, namely the inverse distance weighting (IDW) method and the triangulated irregular network (TIN) method, were applied to study the distribution of the radioactivity concentration. The study showed that the western part of the district …
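The IDW step is simple enough to sketch directly: each query (grid) point receives a weighted average of the measured concentrations, with weights 1/d**p for distance d and power p. The site coordinates and concentration values below are hypothetical placeholders, not the study's data.

    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
        # Inverse distance weighting: weights fall off as 1/d**power;
        # eps avoids division by zero when a query point coincides with a site
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)
        return (w @ z_known) / w.sum(axis=1)

    sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # hypothetical site coords
    conc = np.array([12.0, 8.0, 15.0])                       # hypothetical concentrations
    grid = np.array([[0.5, 0.5], [0.1, 0.9]])
    print(idw(sites, conc, grid))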
The map of permeability distribution in a reservoir is considered one of the most essential steps of geologic model building, because permeability governs fluid flow through the reservoir, which makes it the parameter with the greatest influence on history matching; consequently, it is the petrophysical property most often tuned during history matching. Unfortunately, predicting the relationship between static petrophysics (porosity) and dynamic petrophysics (permeability) from conventional well logs is a difficult problem for conventional statistical methods in heterogeneous formations. This paper therefore examines the ability and performance of an artificial intelligence method in permeability prediction …
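As a hedged illustration of this kind of artificial intelligence approach, the sketch below fits a small neural network (scikit-learn's MLPRegressor) mapping well-log features to log-permeability (permeability is commonly modeled on a log scale). The feature set, the synthetic porosity-permeability relationship, and the network size are assumptions, not the paper's actual model.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)
    n = 400
    porosity = rng.uniform(0.05, 0.30, n)        # hypothetical porosity log
    gamma_ray = rng.uniform(20.0, 120.0, n)      # hypothetical gamma-ray log
    # Synthetic nonlinear log-permeability relationship plus noise
    log_perm = 3.0 + 8.0 * porosity - 0.01 * gamma_ray + rng.normal(0.0, 0.2, n)

    X = np.column_stack([porosity, gamma_ray])
    X_tr, X_te, y_tr, y_te = train_test_split(X, log_perm, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))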
Two unsupervised classifiers for optimal multilevel thresholding are presented: fast Otsu and k-means. These nonparametric methods provide an efficient procedure for separating the regions (classes) by selecting optimal levels, either on the gray levels of the image histogram (the Otsu classifier) or on the gray levels of the image intensities (the k-means classifier); these levels represent the threshold values of the classes. To compare the experimental results of the two classifiers, the computation time is recorded, along with the number of iterations the k-means classifier needs to converge to the optimal class centers. The variation in the recorded computation time of the k-means classifier is discussed.
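A minimal sketch of the k-means variant follows: one-dimensional k-means on the pixel intensities, with the thresholds taken midway between the sorted class centers (a common construction; the paper's exact rule may differ, and the sample intensities below are synthetic).

    import numpy as np

    def kmeans_thresholds(gray, k=3, iters=100, seed=0):
        # 1-D k-means on intensities; thresholds are the midpoints between
        # the sorted class centers
        rng = np.random.default_rng(seed)
        x = np.asarray(gray, dtype=float).ravel()
        centers = rng.choice(x, size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            new = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break   # converged to stable class centers
            centers = new
        c = np.sort(centers)
        return (c[:-1] + c[1:]) / 2.0

    rng = np.random.default_rng(1)
    # Synthetic "image": three intensity populations around 40, 120, 200
    img = np.concatenate([rng.normal(m, 10.0, 2000) for m in (40, 120, 200)]).clip(0, 255)
    print(kmeans_thresholds(img, k=3))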
Optimizing system performance in dynamic and heterogeneous environments, together with the efficient management of computational tasks, is crucial. This paper therefore examines task scheduling and resource allocation algorithms in some depth. The work evaluates five algorithms: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Firefly Algorithm (FA), and Simulated Annealing (SA), across various workloads produced by varying the task-to-node ratio. The paper identifies Finish Time and Deadline as two key performance metrics for gauging the efficacy of an algorithm, and a comprehensive investigation of the behavior of these algorithms across the different workloads was carried out. Results from the experiment …
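As one concrete example from the set of algorithms compared, the sketch below applies simulated annealing to a task-to-node assignment problem, minimizing the finish time (makespan). The neighborhood move, linear cooling schedule, task costs, and node speeds are illustrative assumptions rather than the paper's experimental setup.

    import numpy as np

    def makespan(assign, task_cost, node_speed):
        # Finish time = load of the most loaded node under the assignment
        loads = np.zeros(len(node_speed))
        for task, node in enumerate(assign):
            loads[node] += task_cost[task] / node_speed[node]
        return loads.max()

    def sa_schedule(task_cost, node_speed, iters=20000, t0=10.0, seed=0):
        rng = np.random.default_rng(seed)
        n_tasks, n_nodes = len(task_cost), len(node_speed)
        cur = rng.integers(n_nodes, size=n_tasks)
        cur_cost = makespan(cur, task_cost, node_speed)
        best, best_cost = cur.copy(), cur_cost
        for i in range(iters):
            temp = t0 * (1.0 - i / iters) + 1e-9      # linear cooling
            cand = cur.copy()
            cand[rng.integers(n_tasks)] = rng.integers(n_nodes)  # move one task
            cost = makespan(cand, task_cost, node_speed)
            # Accept improvements always, worse moves with Boltzmann probability
            if cost < cur_cost or rng.random() < np.exp((cur_cost - cost) / temp):
                cur, cur_cost = cand, cost
                if cur_cost < best_cost:
                    best, best_cost = cur.copy(), cur_cost
        return best, best_cost

    rng = np.random.default_rng(7)
    tasks = rng.uniform(1.0, 10.0, 40)     # hypothetical task costs
    nodes = np.array([1.0, 1.5, 2.0])      # hypothetical node speeds
    print("best finish time:", sa_schedule(tasks, nodes)[1])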