Fuzzy C-means (FCM) is a clustering method used for grouping similar data elements according to specific measurements. Tabu search is a metaheuristic algorithm. In this paper, a Probabilistic Tabu Search for FCM is implemented to find a global clustering based on the minimum value of the fuzzy objective function. Experiments were designed for different networks and numbers of clusters; the results show the best performance based on a comparison of the objective-function values obtained with standard FCM and with Tabu-FCM, averaged over ten runs.
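As a minimal sketch (not the authors' implementation), the standard FCM baseline the paper compares against alternates membership and centroid updates to minimize the fuzzy objective J_m; the fuzzifier m = 2 and tolerance below are illustrative defaults:

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: alternate centroid and membership updates
    until J_m = sum_i sum_k u_ik^m * ||x_k - v_i||^2 stabilizes."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)     # weighted centroids
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))           # inverse-distance memberships
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    J = np.sum((U ** m) * d ** 2)                        # fuzzy objective value
    return V, U, J
```

A Tabu-FCM variant would restart or perturb these updates while forbidding recently visited solutions; the sketch above covers only the standard FCM iteration.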
In this paper, we derive estimators of the parameters and of the reliability and hazard functions of a new mixed distribution (Rayleigh-Logarithmic) with two parameters and an increasing failure rate, using the Bayes method with the squared-error loss function, the Jeffreys prior, and the conditional probability of the random variable of observation. The main objective of this study is to assess the efficiency of the derived Bayes estimator compared to the maximum likelihood estimator of these functions, using Monte Carlo simulation under different Rayleigh-Logarithmic parameters and sample sizes. The results show that the Bayes estimator is more efficient than the maximum likelihood estimator for all sample sizes, with an application.
This work aims to design a system able to diagnose two types of tumors in the human brain (benign and malignant) using the curvelet transform and a probabilistic neural network. The proposed method follows an approach whose stages are preprocessing using a Gaussian filter, segmentation using fuzzy c-means, and feature extraction using the curvelet transform. These features are used to train and test the probabilistic neural network; the curvelet transform serves to extract features from the MRI images. The proposed screening technique successfully detected brain cancer from MRI images with an almost 100% recognition accuracy.
Nowadays, the development of internet communication, which provides many facilities to users, leads in turn to growing unauthorized access. As a result, an intrusion detection system (IDS) becomes necessary to provide a high level of security for the huge amount of information transferred in the network and to protect it from threats. One of the main challenges for an IDS is the high dimensionality of the feature space and how to select the relevant features that distinguish normal network traffic from attack traffic. In this paper, the multi-objective evolutionary algorithm with decomposition (MOEA/D), and MOEA/D with the injection of a proposed local search operator, are adopted to solve the multi-objective optimization (MOO) problem, followed by Naï
In this paper, one of the machine scheduling problems is studied: scheduling a number of products (n jobs) on a single machine with a multi-criteria objective function combining completion time, tardiness, earliness, and late work. The branch and bound (BAB) method is used as the main method for solving the problem, where four upper bounds and one lower bound are proposed, and a number of dominance rules are considered to reduce the number of branches in the search tree. The genetic algorithm (GA) and particle swarm optimization (PSO) are used to obtain two of the upper bounds. The computational results are calculated by coding (progr
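For a fixed job sequence, the four criteria named in the abstract can be evaluated directly; the sketch below (an illustration, not the paper's code) uses the standard definitions, with late work taken as the tardy part of each job's processing time:

```python
def schedule_criteria(p, d):
    """For jobs given in processing order with processing times p and due
    dates d, return totals of: completion time, tardiness, earliness,
    and late work (min(p_j, tardiness_j))."""
    t = 0                              # running machine time
    C = T = E = V = 0
    for pj, dj in zip(p, d):
        t += pj                        # completion time of this job
        C += t
        T += max(0, t - dj)            # tardiness
        E += max(0, dj - t)            # earliness
        V += min(pj, max(0, t - dj))   # late work
    return C, T, E, V
```

A BAB, GA, or PSO search over sequences would call such an evaluator inside its objective; the bounds and dominance rules of the paper are not reproduced here.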
It is frequently asserted that an advantage of a binary search tree implementation of a set over a linked list implementation is that, for reasonably well-balanced binary search trees, the average search time (to discover whether or not a particular element is present in the set) is O(log N) to the base 2, where N is the number of elements in the set (the size of the tree). This paper presents an experiment for measuring the observed binary search tree time and comparing it with the expected (theoretical) time; the experiment confirmed this hypothesis. The experiment was carried out using a Turbo Pascal program with a recursive implementation and a statistical method to prove th
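The experiment described can be reproduced in miniature (here in Python rather than the original Turbo Pascal): build a tree from random keys, count comparisons per successful search recursively, and compare the average against log2 N. The constants and sizes below are illustrative:

```python
import math
import random

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Recursive BST insertion, ignoring duplicates."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search_cost(root, key):
    """Number of nodes visited to find key (the measured search time)."""
    if key == root.key:
        return 1
    child = root.left if key < root.key else root.right
    return 1 + search_cost(child, key)

random.seed(1)
N = 4096
keys = random.sample(range(10 * N), N)   # distinct random keys
root = None
for k in keys:
    root = insert(root, k)
avg_cost = sum(search_cost(root, k) for k in keys) / N
```

For a randomly built tree the expected successful-search cost is about 1.39 log2 N, i.e. O(log N) with a modest constant, which is what the paper's measurements confirm.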
In this paper we introduce several estimators for the binwidth of the histogram estimator. We use simulation to compare these estimators. In most cases, the results show that the rule-of-thumb estimator is better than the other estimators.
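The abstract does not list the specific estimators compared; as an illustration only, three classical binwidth rules commonly included in such comparisons can be written as:

```python
import numpy as np

def binwidths(x):
    """Three classical histogram binwidth rules for a 1-D sample x."""
    n = x.size
    h_sturges = (x.max() - x.min()) / (1 + np.log2(n))  # Sturges (bin-count form)
    h_scott = 3.49 * x.std(ddof=1) * n ** (-1 / 3)      # Scott's rule of thumb
    iqr = np.subtract(*np.percentile(x, [75, 25]))      # interquartile range
    h_fd = 2 * iqr * n ** (-1 / 3)                      # Freedman-Diaconis
    return {"sturges": h_sturges, "scott": h_scott, "fd": h_fd}
```

Scott's 3.49·s·n^(-1/3) formula is the usual "rule of thumb" for normal-like data; whether it matches the estimator favored in the paper is an assumption here.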
The purpose of this paper is to apply different transportation models in their minimum and maximum values by finding a starting basic feasible solution and then the optimal solution. The requirements of transportation models are presented along with one of their applications in the case of minimizing the objective function, conducted by the researcher on real data collected over one month in 2015 at one of the poultry farms for egg production.
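A starting basic feasible solution for a balanced transportation problem is commonly obtained with the northwest-corner rule; the sketch below shows that step only (the paper does not specify which starting method it uses), with the supply and demand vectors as illustrative inputs:

```python
def northwest_corner(supply, demand):
    """Starting basic feasible solution for a balanced transportation
    problem: allocate greedily from the top-left cell, exhausting one
    row or column at a time."""
    s, d = supply[:], demand[:]                 # work on copies
    alloc = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])                     # ship as much as possible
        alloc[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:                           # row exhausted -> move down
            i += 1
        else:                                   # column exhausted -> move right
            j += 1
    return alloc
```

The resulting allocation satisfies every supply and demand exactly and serves as the initial basis from which an optimal solution is then computed (e.g. by the MODI/stepping-stone method).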
The approach of this research is to simulate residual chlorine decay through the potable water distribution network of Gukook city. EPANET software was used for estimating and predicting chlorine concentration at different points of the water network. Data required as program inputs (pipe properties) were taken from the Baghdad Municipality, and factors that affect the residual chlorine concentration (pH, temperature, pressure, flow rate) were measured. Twenty-five samples were tested from November 2016 to July 2017. The residual chlorine values varied between 0.2 and 2 mg/L, the pH values varied between 7.6 and 8.2, and the pressure was very weak in this region. Statistical analyses were used to evaluate errors. The calculated concentrations by the calib
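EPANET's default water-quality model treats bulk chlorine decay as first-order; a one-line sketch of that relation (with an assumed, illustrative decay coefficient, not a value calibrated in the paper) is:

```python
import math

def residual_chlorine(c0, k, t_hours):
    """First-order bulk decay, C(t) = C0 * exp(-k * t), the default
    bulk-reaction model in EPANET. c0 in mg/L, k in 1/hour."""
    return c0 * math.exp(-k * t_hours)
```

Calibration then amounts to choosing k (and any wall-decay terms) so that concentrations computed along the network's travel times match the sampled residuals.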
In this paper, the time spent on and the frequency of use of Social Network Sites (SNS) in Android applications are investigated. In this approach, we seek to raise awareness of, and to limit but not eliminate, repeated use of SNS by introducing AndroidTrack, an Android application designed to monitor usage and support valid experimental studies on the impact of social media on Iraqi users. Data generated from the app were aggregated and updated periodically in a Google Firebase Realtime Database. Statistical factor analysis (FA) was applied to the users' interactions.