Fuzzy C-means (FCM) is a clustering method that groups similar data elements according to a chosen similarity measure. Tabu search is a heuristic algorithm. In this paper, a Probabilistic Tabu Search for FCM is implemented to find a global clustering based on the minimum value of the fuzzy objective function. Experiments were designed for different networks and numbers of clusters; the results show the best performance based on a comparison of the objective-function values obtained with standard FCM and with Tabu-FCM, averaged over ten runs.
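As an illustration only (not the authors' implementation), the sketch below shows the fuzzy objective and the standard FCM updates that a Tabu-guided search would repeatedly evaluate while perturbing memberships to escape local minima; the data, parameter values, and function names are assumptions.

```python
import numpy as np

def fcm_objective(X, centers, U, m=2.0):
    """Fuzzy objective J_m = sum_i sum_k u_ik^m * ||x_i - c_k||^2."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.sum((U ** m) * d2)

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means; a probabilistic Tabu search would restart or
    perturb the memberships to escape local minima of this objective."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    for _ in range(iters):
        centers = (U.T ** m) @ X / (U.T ** m).sum(axis=1, keepdims=True)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))      # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U, fcm_objective(X, centers, U, m)

X = np.random.default_rng(1).random((200, 2))  # toy data, 200 points in 2-D
centers, U, J = fcm(X, c=3)
print("objective value:", J)
```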
In this paper, we present a new method for solving fully fuzzy multi-objective linear programming problems and for finding their fuzzy optimal solutions. Numerical examples are provided to illustrate the method.
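For orientation only, a fully fuzzy multi-objective linear program is usually stated with every cost coefficient, constraint coefficient, right-hand side, and decision variable taken as a fuzzy number (e.g., triangular); the generic form below is an assumption about notation, not the authors' specific model.

```latex
\begin{aligned}
\max \ & \tilde{z}_r = \tilde{c}_r \otimes \tilde{x}, \qquad r = 1,\dots,k, \\
\text{s.t. } \ & \tilde{A} \otimes \tilde{x} \preceq \tilde{b}, \qquad \tilde{x} \succeq \tilde{0},
\end{aligned}
```

where ⊗ denotes fuzzy arithmetic and ⪯ a fuzzy ranking relation.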
In this paper, a suggested formula as well as a conventional method for estimating the two parameters (shape and scale) of the Generalized Rayleigh distribution is proposed. A percentile estimator was used for different sample sizes (small, medium, and large) and for several assumed contrasts of the two parameters. Mean square error was applied as the performance indicator, and the performance of the suggested formulas versus the studied formula was compared through data analysis and computer simulation according to this indicator. The results show that the suggested method, which is applied here for the first time (as far as we know), has a clear advantage over the studied formula.
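For illustration only, a two-percentile estimator for the Generalized Rayleigh distribution can be sketched as below, using the common parameterization F(x) = (1 - exp(-(x/λ)²))^α; the percentile pair, sample size, and simulation settings are assumptions, not the paper's design.

```python
import numpy as np

def gr_ppf(u, alpha, lam):
    """Quantile function of the Generalized Rayleigh (Burr X) distribution
    with CDF F(x) = (1 - exp(-(x/lam)**2))**alpha, x > 0."""
    return lam * np.sqrt(-np.log(1.0 - u ** (1.0 / alpha)))

def percentile_estimates(x, p1=0.25, p2=0.75):
    """Two-percentile estimator: match sample quantiles q1, q2 to the
    theoretical quantile function and solve for (alpha, lam)."""
    q1, q2 = np.quantile(x, [p1, p2])
    # From F(q)=p:  (q/lam)**2 = -ln(1 - p**(1/alpha)), so the ratio
    # (q1/q2)**2 = ln(1 - p1**(1/alpha)) / ln(1 - p2**(1/alpha));
    # solve this one-dimensional equation for alpha by bisection.
    def ratio(alpha):
        return (np.log(1 - p1 ** (1 / alpha)) / np.log(1 - p2 ** (1 / alpha))
                - (q1 / q2) ** 2)
    lo, hi = 0.05, 100.0
    for _ in range(100):                      # simple bisection
        mid = 0.5 * (lo + hi)
        if ratio(lo) * ratio(mid) <= 0:
            hi = mid
        else:
            lo = mid
    alpha = 0.5 * (lo + hi)
    lam = q2 / np.sqrt(-np.log(1 - p2 ** (1 / alpha)))
    return alpha, lam

# Monte Carlo check of the mean square error for one setting (illustrative).
rng = np.random.default_rng(1)
true_alpha, true_lam, n, reps = 2.0, 1.5, 50, 1000
est = np.array([percentile_estimates(gr_ppf(rng.random(n), true_alpha, true_lam))
                for _ in range(reps)])
mse = ((est - [true_alpha, true_lam]) ** 2).mean(axis=0)
print("MSE(alpha), MSE(lam):", mse)
```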
To ascertain the stability or instability of time series, three versions of the Dickey-Fuller test were used in this paper. The aim of this study is to explain the extent of the impact of some economic variables, such as the money supply, gross domestic product, and national income, after establishing the stationarity of these variables. The results show that the money supply variable, the GDP variable, and the exchange rate variable were all stationary at the first difference of the time series, meaning that each series is integrated of order one. Hence, the gross fixed capital formation variable, the national income variable, and the interest rate variable
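A minimal sketch of the unit-root check described above, using the augmented Dickey-Fuller test from statsmodels on a synthetic random walk; the series, its name, and the test settings are assumptions, not the study's data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
money_supply = np.cumsum(rng.normal(size=200))   # synthetic random walk, I(1)

for name, series in [("level", money_supply),
                     ("first difference", np.diff(money_supply))]:
    stat, pvalue, *_ = adfuller(series, regression="c")  # constant, no trend
    print(f"ADF on {name}: statistic={stat:.3f}, p-value={pvalue:.3f}")
# A large p-value at the level and a small one at the first difference
# indicate a series integrated of order one, as reported in the paper.
```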
The research aims to apply one of the techniques of management accounting, the quality function deployment (QFD) technique, to the men's leather shoe product (Model 79043) at the General Company for Textile and Leather Industries. The basic requirements of the customer are determined, and the characteristics and specifications of the product are then designed according to the customer's preferences, so that the customer's voice is reflected in the technical characteristics of the product, taking the products of competing companies into account, in order to achieve maximum customer satisfaction, the highest quality, and the lowest costs. Hence, the importance of the research has emerged, which indicat
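As an illustration of the core QFD calculation only (not the company's actual study data), the sketch below turns hypothetical customer importance weights and a requirement-characteristic relationship matrix into technical priorities.

```python
import numpy as np

# Hypothetical customer requirements and their importance weights (1-5 scale).
requirements = ["comfort", "durability", "appearance", "price"]
weights = np.array([5, 4, 3, 4])

# Hypothetical technical characteristics of the shoe.
characteristics = ["leather grade", "sole material", "stitch density", "finish"]

# Relationship matrix: how strongly each characteristic serves each
# requirement (0 = none, 1 = weak, 3 = moderate, 9 = strong); rows follow
# the requirements list and columns the characteristics list.
R = np.array([[9, 9, 1, 3],
              [9, 3, 9, 1],
              [3, 0, 1, 9],
              [3, 3, 1, 1]])

# Absolute and relative technical importance: weighted column sums.
absolute = weights @ R
relative = 100 * absolute / absolute.sum()
for c, a, r in zip(characteristics, absolute, relative):
    print(f"{c:15s} importance={a:4d} ({r:4.1f}%)")
```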
This paper presents a statistical study of a suitable probability distribution for rainfall in the provinces of Iraq, using two candidate distributions for the period 2005-2015. The researcher fitted the lognormal distribution and the mixed exponential distribution to the data of each province in order to determine the optimal distribution of rainfall in Iraq. The distribution is selected on the basis of minimizing the criteria produced by several goodness-of-fit measures, namely the consistent Akaike information criterion (CAIC), the Bayesian information criterion (BIC), and the Akaike information criterion (AIC). These criteria were applied to the fitted distributions to find the distribution best suited to the rainfall data of the provinces of Iraq; the maximum likelihood method was used
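A minimal sketch of the model-selection step on synthetic rainfall data: candidate distributions are fitted by maximum likelihood with scipy and compared by AIC, BIC, and CAIC. A plain exponential stands in here for the paper's mixed exponential distribution, and the data and sample size are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rain = rng.lognormal(mean=3.0, sigma=0.6, size=120)   # synthetic rainfall amounts

def criteria(loglik, k, n):
    """AIC, BIC and consistent AIC (CAIC) from a maximized log-likelihood."""
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    caic = k * (np.log(n) + 1) - 2 * loglik
    return aic, bic, caic

n = len(rain)
candidates = {
    "lognormal": (stats.lognorm, stats.lognorm.fit(rain, floc=0)),
    "exponential": (stats.expon, stats.expon.fit(rain, floc=0)),
}
for name, (dist, params) in candidates.items():
    loglik = np.sum(dist.logpdf(rain, *params))
    k = len(params) - 1                 # fixed location not counted
    print(name, ["%.1f" % v for v in criteria(loglik, k, n)])
# The distribution with the smallest criteria values is retained, as in the paper.
```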
In real situations, all observations and measurements are not exact numbers but are more or less non-exact, also called fuzzy. So, in this paper, we use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimators are obtained as non-Bayesian estimators. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and the Expectation-Maximization techniques. In addition, a Monte Carlo simulation study is provided to compare numerically the obtained estimates of the parameters and the reliability function.
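As a simplified, crisp-data sketch (the paper works with fuzzy observations), the Newton-Raphson step for the inverse Weibull likelihood can be written as below; the parameterization F(x) = exp(-alpha * x^-beta), the starting value, and the simulated data are assumptions.

```python
import numpy as np

def invweibull_mle(x, beta0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson MLE for the inverse Weibull pdf
    f(x) = alpha * beta * x**-(beta+1) * exp(-alpha * x**-beta).
    alpha is profiled out analytically; Newton-Raphson runs on beta."""
    n, logx = len(x), np.log(x)
    beta = beta0
    for _ in range(max_iter):
        w = x ** -beta
        s0, s1, s2 = w.sum(), (w * logx).sum(), (w * logx ** 2).sum()
        score = n / beta - logx.sum() + n * s1 / s0        # d loglik / d beta
        hess = -n / beta ** 2 + n * (s1 ** 2 - s2 * s0) / s0 ** 2
        step = score / hess
        beta -= step                    # a production code would safeguard the step
        if abs(step) < tol:
            break
    alpha = n / (x ** -beta).sum()
    return alpha, beta

def reliability(t, alpha, beta):
    """R(t) = 1 - F(t) = 1 - exp(-alpha * t**-beta)."""
    return 1.0 - np.exp(-alpha * t ** -beta)

# Quick check on simulated (crisp) data via inverse-CDF sampling.
rng = np.random.default_rng(2)
true_alpha, true_beta = 1.5, 2.0
sample = (-np.log(rng.random(500)) / true_alpha) ** (-1.0 / true_beta)
a_hat, b_hat = invweibull_mle(sample)
print(a_hat, b_hat, reliability(1.0, a_hat, b_hat))
```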
Coronavirus is considered the first virus to sweep the world in the twenty-first century; it appeared by the end of 2019. It started in the Chinese city of Wuhan and spread to different regions around the world too quickly to control, owing to the lack of medical examinations and their inefficiency. The process of detecting the disease therefore needs accurate and fast detection techniques and tools. X-ray images are good for quick diagnosis of the disease, but an automatic and accurate diagnosis is needed. Therefore, this paper presents an automated methodology based on deep learning for diagnosing COVID-19. The proposed system uses a convolutional neural network, which is considered one of the most widely used deep learning architectures for image analysis.
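A minimal sketch of a convolutional classifier for chest X-ray images, written with Keras; the architecture, layer sizes, image shape, and class count are placeholders, not the network proposed in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 1), num_classes=2):
    """Small CNN for binary chest X-ray classification (COVID / non-COVID)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
# Training would use labelled X-ray images, e.g.
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```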
Real-life scheduling problems require the decision maker to consider a number of criteria before arriving at any decision. In this paper, we consider the multi-criteria scheduling problem of n jobs on a single machine to minimize a function of five criteria: total completion time (∑Cj), total tardiness (∑Tj), total earliness (∑Ej), maximum tardiness (Tmax), and maximum earliness (Emax). The single machine total tardiness problem and total earliness problem are already NP-hard, so the considered problem is strongly NP-hard.
We apply two local search algorithms (LSAs), the descent method (DM) and the simulated annealing method (SM), to the 1//(∑Cj + ∑Tj + ∑Ej + Tmax + Emax) problem.
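A minimal sketch of a simulated annealing search over job sequences for this kind of objective, under the assumption that the five criteria are simply summed; the processing times, due dates, neighbourhood, and cooling schedule are illustrative choices, not the authors' algorithm.

```python
import math
import random

def cost(seq, p, d):
    """Sum of the five criteria for a job sequence: total completion time,
    total tardiness, total earliness, maximum tardiness, maximum earliness."""
    t = sum_c = sum_t = sum_e = tmax = emax = 0
    for j in seq:
        t += p[j]                        # completion time of job j
        tard, early = max(0, t - d[j]), max(0, d[j] - t)
        sum_c, sum_t, sum_e = sum_c + t, sum_t + tard, sum_e + early
        tmax, emax = max(tmax, tard), max(emax, early)
    return sum_c + sum_t + sum_e + tmax + emax

def simulated_annealing(p, d, temp=100.0, cooling=0.995, iters=20000, seed=0):
    """Generic SA over job permutations with an adjacent-swap neighbourhood."""
    rng = random.Random(seed)
    seq = sorted(range(len(p)), key=lambda j: d[j])      # EDD starting sequence
    best, best_cost = seq[:], cost(seq, p, d)
    cur_cost = best_cost
    for _ in range(iters):
        i = rng.randrange(len(seq) - 1)
        cand = seq[:]
        cand[i], cand[i + 1] = cand[i + 1], cand[i]      # swap neighbours
        c = cost(cand, p, d)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            seq, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        temp *= cooling                                   # geometric cooling
    return best, best_cost

rng = random.Random(1)
p = [rng.randint(1, 10) for _ in range(12)]              # processing times
d = [rng.randint(5, 60) for _ in range(12)]              # due dates
print(simulated_annealing(p, d))
```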
Wireless sensor applications are susceptible to energy constraints, and most of the energy is consumed in communication between wireless nodes. Clustering and data aggregation are two widely used strategies for reducing energy usage and increasing the lifetime of wireless sensor networks. In target tracking applications, a large amount of redundant data is produced regularly, so deploying effective data aggregation schemes is vital to eliminate data redundancy. This work conducts a comparative study of research approaches that employ clustering techniques for efficiently aggregating data in target tracking applications, as the selection of an appropriate clustering algorithm may reflect positive results in the data aggregation process.
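For illustration only, the sketch below shows the basic idea shared by the surveyed approaches: cluster members report to a cluster head, which forwards a single aggregate, so far fewer long-range transmissions reach the sink. The deployment and figures are invented, not taken from any surveyed protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deployment: 100 sensor nodes grouped into 5 clusters,
# each node reporting one target reading per round.
n_nodes, n_clusters = 100, 5
cluster_of = rng.integers(0, n_clusters, size=n_nodes)
readings = rng.normal(loc=50.0, scale=2.0, size=n_nodes)   # redundant readings

# Without aggregation, every node would send its reading to the sink.
# With clustering, members send short-range messages to their cluster head,
# which forwards a single aggregate (here the mean) to the sink, so only
# n_clusters long-range transmissions remain; these dominate the energy cost.
aggregates = [readings[cluster_of == c].mean() for c in range(n_clusters)]
print("per-cluster aggregates:", np.round(aggregates, 2))
print("long-range transmissions: without clustering =", n_nodes,
      ", with clustering =", n_clusters)
```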