Fuzzy C-means (FCM) is a clustering method that groups similar data elements according to specific distance measures, and Tabu Search is a heuristic algorithm. In this paper, a Probabilistic Tabu Search for FCM is implemented to find a global clustering based on the minimum value of the fuzzy objective function. Experiments were designed for different networks and numbers of clusters; the results show the best performance based on a comparison of the objective function values obtained with standard FCM and with Tabu-FCM, averaged over ten runs.
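A minimal sketch of the fuzzy objective function that both standard FCM and the Tabu-FCM hybrid are assumed to minimize is given below; the fuzzifier m = 2 and the random initialisation are illustrative choices, not the paper's settings.

```python
import numpy as np

def fuzzy_objective(X, centers, U, m=2.0):
    """J_m = sum_i sum_k (u_ik)^m * ||x_i - c_k||^2 (the quantity compared between FCM and Tabu-FCM)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # squared distances, shape (n, c)
    return float(((U ** m) * d2).sum())

def update_memberships(X, centers, m=2.0, eps=1e-9):
    """Standard FCM membership update for fixed centers."""
    d = np.sqrt(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)) + eps
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))    # (n, c, c)
    return 1.0 / ratio.sum(axis=2)

# Example: evaluate J_m for a random partition of 2-D points into 3 clusters.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
centers = rng.normal(size=(3, 2))
U = update_memberships(X, centers)
print(fuzzy_objective(X, centers, U))
```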
Business organizations have faced many challenges in recent times, the most important of which is information technology, because it is widespread and easy to use. Its use has led to an unprecedented increase in the amount of data that business organizations deal with. The amount of data available through the Internet is a problem that many parties seek to solve: why is it available there in such a huge amount, at random? Many forecasts indicated that in 2017 the number of devices connected to the Internet would be an estimated three times the population of the Earth, and that in 2015 more than one and a half billion gigabytes of data were transferred every minute globally. Thus, so-called data mining emerged as a
Extracting knowledge from raw data has delivered beneficial information in several domains. The widespread use of social media has produced extraordinary quantities of social data; simply put, social media provides an accessible platform on which users share information. Data mining is able to reveal relevant patterns that can be useful for users, businesses, and customers. Social media data are noisy, massive, unstructured, and dynamic by nature, so new challenges arise. The purpose of this study is to investigate the data mining methods used on social networks, adopting an investigation plan based on defined criteria and selecting a number of papers to serve as the foundation for this article.
The aim of this paper is to discuss several high-performance training algorithms that fall into two main categories. The first category uses heuristic techniques, which were developed from an analysis of the performance of the standard gradient descent algorithm. The second category of fast algorithms uses standard numerical optimization techniques such as quasi-Newton methods. A further aim is to address the drawbacks of these training algorithms and to propose an efficient training algorithm for FFNN.
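The contrast between the two categories can be illustrated with the update rules themselves; the following is a minimal sketch (not the paper's proposed algorithm) comparing a plain gradient descent step with a quasi-Newton (BFGS-style) step on a toy quadratic loss that stands in for an FFNN error surface.

```python
import numpy as np

def gradient_descent_step(w, grad_f, lr=0.1):
    """First category's baseline: plain gradient descent update."""
    return w - lr * grad_f(w)

def bfgs_step(w, grad_f, H):
    """Second category: one quasi-Newton step using the BFGS inverse-Hessian update."""
    g = grad_f(w)
    w_new = w - H @ g
    s, y = w_new - w, grad_f(w_new) - g
    rho = 1.0 / (y @ s)
    I = np.eye(len(w))
    H_new = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
    return w_new, H_new

# Toy quadratic loss standing in for an FFNN error surface (hypothetical).
A = np.diag([1.0, 10.0])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

w, H = np.array([3.0, -2.0]), np.eye(2)
for _ in range(10):
    w, H = bfgs_step(w, grad, H)
print(w, loss(w))
```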
Calculating similarities between texts written in one language or in multiple languages is still one of the most important challenges facing natural language processing. This work presents several approaches used for text similarity. The proposed system finds the similarity between two Arabic texts by using hybrid similarity measures: a semantic similarity measure, a cosine similarity measure, and an N-gram measure (using the Dice similarity coefficient). In the proposed system we design an Arabic SemanticNet that stores the keywords of a specific field (computer science); through this network we can find the semantic similarity between words according to specific equations. Cosine and N-gram similarity measures are used in order to
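As an illustration only, the two surface measures named above can be computed as follows; the hybrid weighting and the SemanticNet lookup of the proposed system are not shown.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine similarity over word-count vectors."""
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def dice_ngram(text_a, text_b, n=2):
    """Dice coefficient over character n-gram sets."""
    grams = lambda t: {t[i:i + n] for i in range(len(t) - n + 1)}
    ga, gb = grams(text_a), grams(text_b)
    return 2 * len(ga & gb) / (len(ga) + len(gb)) if (ga or gb) else 0.0

print(cosine_similarity("النص الأول", "النص الثاني"))
print(dice_ngram("النص الأول", "النص الثاني"))
```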
In this paper we investigate the use of two local search methods (LSMs), Simulated Annealing (SA) and Particle Swarm Optimization (PSO), to solve the problems ( ) and . The results of the two LSMs are compared with the Branch and Bound method and with good heuristic methods. This work shows the good performance of SA and PSO compared with the exact and heuristic methods in terms of best solutions and CPU time.
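A minimal simulated-annealing sketch is given below; the specific scheduling objectives studied in the paper are not reproduced (they are elided above), so the cost function and swap move here are generic placeholders.

```python
import math, random

def simulated_annealing(initial, cost, neighbour, t0=100.0, alpha=0.95, iters=1000):
    """Generic SA loop: accept worse moves with probability exp(-delta / t), then cool."""
    current = best = initial
    t = t0
    for _ in range(iters):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha                      # geometric cooling schedule
    return best

def swap_neighbour(seq):
    """Placeholder move operator: swap two positions in a job sequence."""
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

jobs = [5, 2, 9, 1, 7]                                   # hypothetical job weights
cost = lambda seq: sum((i + 1) * w for i, w in enumerate(seq))   # toy stand-in objective
print(simulated_annealing(jobs, cost, swap_neighbour))
```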
Human detection represents a main problem of interest in video-based monitoring. In this paper, artificial neural networks, namely the multilayer perceptron (MLP) and the radial basis function (RBF) network, are used to detect humans among different objects in a sequence of frames (images) using a classification approach. The classification is based on the shape of the object instead of depending on the contents of the frame. Initially, background subtraction is used to extract objects of interest from the frame; then statistical and geometric information is obtained from the vertical and horizontal projections of the detected objects to represent the shape of the object. Next to this step, two types
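The projection step can be sketched as follows; this is an assumed form of the pipeline described above, not the authors' code, and the threshold value is illustrative.

```python
import numpy as np

def projection_features(frame, background, threshold=30):
    """Subtract the background, threshold to a binary mask, and return its row/column projections."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = (diff > threshold).astype(np.uint8)
    horizontal = mask.sum(axis=1)   # foreground pixels per row
    vertical = mask.sum(axis=0)     # foreground pixels per column
    return horizontal, vertical

# The projection vectors (or statistics derived from them, such as width/height
# ratio) would then be fed to the MLP or RBF classifier as shape descriptors.
```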
Background: Image processing of medical images is a major method for increasing the reliability of cancer diagnosis.
Methods: The proposed system proceeds in two stages. First, an enhancement stage is performed using a median filter to reduce the noise and artifacts present in a CT image of a human lung with cancer; second, the k-means clustering algorithm is implemented.
Results: The image produced by the k-means algorithm is compared with the image resulting from the implementation of the fuzzy c-means (FCM) algorithm.
Conclusion: We found that the time required to run the k-means algorithm is less than that of the FCM algorithm. The MATLAB package (version 7.3) was used to write the programming code of our work.
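A minimal Python sketch of the two stages described above is given below (the original implementation was in MATLAB 7.3); scipy's median filter and scikit-learn's KMeans stand in for the authors' code, and the file name is a placeholder.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

# Placeholder path to a grayscale lung CT slice.
image = np.asarray(Image.open("lung_ct.png").convert("L"), dtype=float)

denoised = median_filter(image, size=3)                        # stage 1: median filter for noise/artifact reduction
pixels = denoised.reshape(-1, 1)                               # one intensity feature per pixel
labels = KMeans(n_clusters=3, n_init=10).fit_predict(pixels)   # stage 2: k-means clustering of intensities
segmented = labels.reshape(image.shape)                        # cluster map for the slice
```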
Wireless Multimedia Sensor Networks (WMSNs) are a type of sensor network whose nodes are equipped with cameras and microphones; therefore, WMSNs are able to produce multimedia data such as video and audio streams, still images, and scalar data from the surrounding environment. Most multimedia applications typically produce huge volumes of data, and this leads to congestion. To address this challenge, this paper proposes a Modified Spike Neural Network control for the Traffic Load Parameter with an Exponential Weight of Priority Based Rate Control algorithm (MSNTLP with EWBPRC). The Modified Spike Neural Network controller (MSNC) can calculate the appropriate traffic
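The general idea of exponential-weight, priority-based rate control can be hinted at with the following hedged sketch; it is not the MSNTLP/EWBPRC algorithm itself, and all class names and parameter values are hypothetical.

```python
def allocate_rates(loads, smoothed, priorities, capacity, alpha=0.3):
    """Share the link capacity among traffic classes in proportion to an
    exponentially smoothed load estimate weighted by class priority."""
    smoothed = [alpha * l + (1 - alpha) * s for l, s in zip(loads, smoothed)]
    weights = [p * s for p, s in zip(priorities, smoothed)]
    total = sum(weights) or 1.0
    rates = [capacity * w / total for w in weights]
    return smoothed, rates

# Example: video (priority 3), audio (2), scalar data (1) sharing a 10 Mbps link.
smoothed, rates = allocate_rates([4.0, 2.0, 1.0], [3.0, 2.0, 1.0], [3, 2, 1], 10.0)
print(rates)
```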
The Internet of Vehicles (IoV) is one of the most basic branches of the Internet of Things (IoT), providing many advantages for drivers and passengers to ensure safety and traffic efficiency. Most IoV applications are delay-sensitive and require resources for data storage and computation that cannot be afforded by vehicles. Thus, such tasks are always offloaded to more powerful nodes, such as the cloud or fog. Vehicular Fog Computing (VFC), which extends cloud computing and brings resources closer to the edge of the network, has the potential to reduce both traffic congestion and the load on the cloud. The resource management and allocation process is critical for satisfying both user and provider needs. However, th
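The basic offloading decision that motivates VFC can be illustrated as follows; this is not the paper's allocation scheme, and all CPU speeds, link rates, and delays are hypothetical figures.

```python
def completion_delay(task_cycles, task_bits, cpu_hz, link_bps=None, rtt=0.0):
    """Estimated delay = round-trip latency + transmission time + computation time."""
    transmit = task_bits / link_bps if link_bps else 0.0   # no transmission for local execution
    return rtt + transmit + task_cycles / cpu_hz

task_cycles, task_bits = 2e9, 4e6          # hypothetical task size (CPU cycles, bits)
options = {
    "vehicle": completion_delay(task_cycles, task_bits, cpu_hz=1e9),
    "fog":     completion_delay(task_cycles, task_bits, cpu_hz=5e9, link_bps=20e6, rtt=0.01),
    "cloud":   completion_delay(task_cycles, task_bits, cpu_hz=20e9, link_bps=10e6, rtt=0.10),
}
print(min(options, key=options.get), options)   # choose the node with the lowest estimated delay
```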