The emphasis of Master Production Scheduling (MPS), or tactical planning, is on the temporal and spatial disaggregation of aggregate planning targets and forecasts, along with the provision and forecasting of the required resources. This procedure becomes considerably more difficult and slow as the number of resources, products and periods considered increases. A number of studies have been carried out to understand these impediments and to formulate algorithms that optimise the production planning problem, or more specifically the master production scheduling (MPS) problem. These algorithms include an evolutionary algorithm, the Genetic Algorithm (GA); a swarm intelligence method, the Gravitational Search Algorithm (GSA); the Bat Algorithm (BAT); the Teaching Learning Based Optimisation algorithm (TLBO); and the Harmony Search algorithm (HS). This part of the research consists of two parts: the first focuses on the outcomes of the five simulated algorithms and identifies the best of them, while the second compares the best algorithm with the search-algorithm work published in 2009, discusses the applicability of the Bat Algorithm and the Gravitational Search Algorithm (GSA-BAT) to planning problems and the technique used for calculating the master production schedule, and presents the results and recommendations for future studies.
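A minimal sketch of how a genetic algorithm can be applied to a simplified MPS formulation: the chromosome is a vector of per-period production quantities, and the fitness penalises backlog, holding and capacity violations. The demand and capacity figures, penalty weights and GA parameters below are illustrative assumptions, not the model used in the study.

```python
# Hypothetical GA sketch for a toy MPS problem: choose production quantities
# per period so that cumulative production tracks forecast demand without
# exceeding the assumed per-period capacity.
import random

DEMAND   = [120, 90, 150, 110]   # assumed forecast per period
CAPACITY = 140                   # assumed per-period capacity

def fitness(plan):
    # Penalise backlog/holding (running inventory imbalance) and capacity violations.
    inventory, penalty = 0, 0
    for produced, demanded in zip(plan, DEMAND):
        inventory += produced - demanded
        penalty += abs(inventory)                     # holding / backlog cost
        penalty += 10 * max(0, produced - CAPACITY)   # capacity violation
    return penalty

def evolve(pop_size=30, generations=200, mutation_rate=0.2):
    pop = [[random.randint(0, 200) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                         # lower penalty is better
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DEMAND))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.randint(0, 200)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())
```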
Scheduling problems have until recently been treated as single-criterion problems. Many of them are computationally hard to solve even as single-criterion problems. In general, however, real-life scheduling problems require multiple criteria to be considered. In this paper, we study the problem of scheduling jobs on a single machine to minimize total tardiness subject to a bound on the maximum earliness or tardiness of each job. We give an algorithm (ETST) to solve the first problem (P1) and an algorithm (TEST) to solve the second problem (P2), each finding an efficient solution.
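A minimal sketch, not the paper's ETST or TEST procedures, showing how total tardiness and maximum earliness would be evaluated for a given job sequence on a single machine; the processing times and due dates are illustrative.

```python
# Illustrative evaluation of a single-machine schedule: total tardiness and
# maximum earliness for a given job order (not the ETST/TEST algorithms).
jobs = [  # (processing_time, due_date) -- assumed example data
    (4, 6),
    (2, 5),
    (5, 14),
    (3, 9),
]

def evaluate(sequence):
    t, total_tardiness, max_earliness = 0, 0, 0
    for p, d in sequence:
        t += p                                  # completion time of this job
        total_tardiness += max(0, t - d)        # tardiness = max(0, C_j - d_j)
        max_earliness = max(max_earliness, max(0, d - t))
    return total_tardiness, max_earliness

# Earliest-due-date (EDD) order is a common starting point for tardiness criteria.
edd = sorted(jobs, key=lambda j: j[1])
print(evaluate(edd))
```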
Extractive multi-document text summarization, a form of summarization that removes redundant information from a document collection while preserving its salient sentences, has recently attracted considerable interest in the design of automatic models. This paper proposes an extractive multi-document text summarization model based on a genetic algorithm (GA). First, the problem is modeled as a discrete optimization problem and a specific fitness function is designed to cope effectively with the proposed model. Then, a binary-encoded representation together with a heuristic mutation operator and a local repair operator are proposed to characterize the adopted GA. Experiments are conducted on ten topics from the Document Understanding Conference (DUC2002) dataset.
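A hedged sketch of the binary-encoding idea: each bit selects a sentence, and a repair step enforces a summary-length budget. The scoring function here (word-set coverage minus pairwise overlap) and the length budget are assumptions, not the paper's fitness function.

```python
# Toy binary chromosome for sentence selection with a repair operator that
# enforces a maximum number of selected sentences.
import random

sentences = [
    "the storm hit the coast on monday",
    "officials reported heavy flooding near the coast",
    "the storm hit the coast causing flooding",
    "schools will reopen next week",
]
MAX_SENTENCES = 2

def fitness(chromosome):
    chosen = [set(s.split()) for s, bit in zip(sentences, chromosome) if bit]
    if not chosen:
        return 0.0
    coverage = len(set.union(*chosen))                       # distinct words covered
    redundancy = sum(len(a & b) for i, a in enumerate(chosen) for b in chosen[i + 1:])
    return coverage - redundancy

def repair(chromosome):
    # Drop random selections until the length budget is respected.
    selected = [i for i, bit in enumerate(chromosome) if bit]
    while len(selected) > MAX_SENTENCES:
        chromosome[selected.pop(random.randrange(len(selected)))] = 0
    return chromosome

candidate = repair([random.randint(0, 1) for _ in sentences])
print(candidate, fitness(candidate))
```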
Intelligent systems can be used to build systems that simulate human behavior; lip reading is one such system. Lip reading is considered one of the hardest problems in image analysis, so machine learning is used to solve it and achieves remarkable results, especially with deep neural networks, which dive deeply into the texture of any input. Microlearning is a new trend in e-learning that builds on small pieces of information to make the learning process easier and more productive. In this paper, a proposed system for multi-layer lip reading is presented. The proposed system is based on micro content (letters) and performs lip reading using deep learning and auto-correction models.
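A hedged sketch of how letter-level lip reading could be framed as image classification with a small convolutional network; the 64×64 grayscale mouth-crop input, layer sizes and 26-letter output are assumptions for illustration, not the paper's architecture or auto-correction stage.

```python
# Hypothetical CNN that classifies a cropped mouth-region image into one of
# 26 letters; the architecture and input shape are illustrative assumptions.
import tensorflow as tf

def build_letter_model(num_letters=26):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),          # grayscale mouth crop
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_letters, activation="softmax"),
    ])

model = build_letter_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```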
Finding a path solution in a dynamic environment represents a challenge for robotics researchers; it is a central issue for autonomous robots and manipulators, and the field continues to look toward this challenge. Finding a collision-free path for a robot in an environment with moving obstacles, such as various objects, humans, animals or other robots, is a real problem that needs to be solved. In addition, local minima and sharp edges are the most common problems in all path planning algorithms. The main objective of this work is to overcome these problems by demonstrating robot path planning and obstacle avoidance using the D star (D*) algorithm based on Particle Swarm Optimization (PSO).
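A minimal PSO sketch on a toy path-planning cost: a single intermediate waypoint between start and goal is optimised so the path avoids an assumed circular obstacle. This illustrates plain PSO only, not the D*-based hybrid described in the abstract.

```python
# Plain PSO on a toy waypoint-placement cost with a circular obstacle penalty.
import math
import random

START, GOAL = (0.0, 0.0), (10.0, 0.0)
OBSTACLE, RADIUS = (5.0, 0.0), 2.0   # assumed circular obstacle

def cost(waypoint):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    length = dist(START, waypoint) + dist(waypoint, GOAL)
    penalty = 100.0 if dist(waypoint, OBSTACLE) < RADIUS else 0.0
    return length + penalty

def pso(n_particles=20, iterations=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 10), random.uniform(-5, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    best = [p[:] for p in pos]                 # personal bests
    g_best = min(best, key=cost)[:]            # global best
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best[i][d] - pos[i][d])
                             + c2 * r2 * (g_best[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(best[i]):
                best[i] = pos[i][:]
                if cost(best[i]) < cost(g_best):
                    g_best = best[i][:]
    return g_best

print(pso())
```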
The influx of data in bioinformatics comes primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters; the k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for this purpose. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved with a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, which …
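A plain fuzzy c-means (FCM) sketch on synthetic 2-D points; the data, number of clusters and fuzzifier m are assumptions for illustration, not the brain-tumor models used in the study.

```python
# Minimal FCM: alternate between updating cluster prototypes and fuzzy
# membership degrees until the memberships stop changing.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),
               rng.normal(6, 1, (50, 2)),
               rng.normal((0, 6), 1, (50, 2))])

def fcm(X, c=3, m=2.0, iters=100, eps=1e-5):
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # random initial fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        new_U = 1.0 / (dist ** (2 / (m - 1)))
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < eps:
            U = new_U
            break
        U = new_U
    return centers, U

centers, U = fcm(X)
print(centers)                      # cluster prototypes
print(U.argmax(axis=1)[:10])        # hard labels for the first few points
```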
The main focus of this research is to examine the Travelling Salesman Problem (TSP) and the methods used to solve it. The TSP is one of the combinatorial optimization problems that has received wide publicity and attention from researchers, owing to its simple formulation, important applications and connection to other combinatorial problems. It is based on finding the optimal path through a known number of cities, where the salesman visits each city only once before returning to the city of departure. In this research, the FMOLP algorithm is employed as one of the best methods to solve the TSP, and the algorithm is applied in conjunction with …
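A toy illustration of the TSP objective: the tour-length evaluation and a simple nearest-neighbour construction over assumed city coordinates. This is a generic sketch of the problem, not the FMOLP approach used in the research.

```python
# Tour-length evaluation and a nearest-neighbour starting tour for a toy TSP.
import math

CITIES = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 6)]   # assumed coordinates

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour):
    # Sum of leg distances, returning to the starting city at the end.
    return sum(distance(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(start=0):
    unvisited = set(range(len(CITIES))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: distance(CITIES[tour[-1]], CITIES[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour()
print(tour, round(tour_length(tour), 2))
```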
Face blurring is an important and complex process in advanced computer vision. It generally consists of two main steps: the first detects the faces that appear in the frames, while the second tracks the detected faces based on the information extracted during the detection step. In the proposed method, an image is captured by the camera in real time; the Viola-Jones algorithm is then used to detect multiple faces in the captured image, and, to reduce the time consumed in handling the entire captured image, the image background is removed and only the motion areas are processed.
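A minimal OpenCV sketch of the detect-then-blur idea, using the library's Haar cascade (Viola-Jones style) detector on camera frames and blurring each detected face region; the tracking and background-removal steps described in the abstract are omitted.

```python
# Detect faces with a Haar cascade and blur each face region in real time.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                      # real-time capture from the camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imshow("blurred", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```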