Scheduling course timetables for large university departments is a hard problem that has been addressed by many previous works, although the results are often only partially optimal. This work applies an evolutionary algorithm based on genetic principles to the timetabling problem in order to produce a fully optimized timetable, with the ability to generate multiple candidate timetables for each stage in the college. The central idea is to generate course timetables automatically while exploring the constraint space, yielding an optimal and flexible schedule with no redundancy by modifying a feasible course timetable. The main contribution of this work is increased flexibility in generating optimal timetable schedules in several variants, raising the probability of obtaining the best schedule for each stage on campus, together with the ability to replace a timetable when needed. The evolutionary algorithm (EA) used in this paper is the Genetic Algorithm (GA), a common population-based metaheuristic search that can be applied to complex combinatorial problems such as timetabling. In this work, all inputs (courses, teachers, and time slots) are encoded in a single array to support local search, and this encoding is recombined using a heuristic crossover that ensures the essential hard constraints are not violated. The result is a flexible scheduling system that exposes the diversity of possible timetables that can be created according to user conditions and needs.
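As a rough illustration of the encoding described above, the following minimal sketch represents a timetable as a single array of time-slot assignments and applies a crossover that only accepts offspring satisfying the hard constraint. All names, data, and parameters here are hypothetical and not taken from the paper.

import random

# Hypothetical problem data: each course has a fixed teacher and needs one time slot.
COURSES   = ["Math", "Physics", "Algorithms", "Databases"]
TEACHERS  = {"Math": "T1", "Physics": "T2", "Algorithms": "T1", "Databases": "T3"}
TIMESLOTS = list(range(6))  # six available periods

def random_timetable():
    """A chromosome is one array: a time slot index per course."""
    return [random.choice(TIMESLOTS) for _ in COURSES]

def conflicts(timetable):
    """Hard constraint: a teacher cannot teach two courses in the same slot."""
    seen, clashes = set(), 0
    for course, slot in zip(COURSES, timetable):
        key = (TEACHERS[course], slot)
        clashes += key in seen
        seen.add(key)
    return clashes

def heuristic_crossover(parent_a, parent_b, attempts=20):
    """One-point crossover that retries until the child breaks no hard constraint."""
    for _ in range(attempts):
        cut = random.randint(1, len(parent_a) - 1)
        child = parent_a[:cut] + parent_b[cut:]
        if conflicts(child) == 0:
            return child
    return min(parent_a, parent_b, key=conflicts)  # fall back to the better parent

def evolve(pop_size=30, generations=100):
    population = [random_timetable() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=conflicts)          # fewer clashes = fitter
        parents = population[: pop_size // 2]   # simple truncation selection
        children = [heuristic_crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=conflicts)

print(evolve())  # e.g. [3, 0, 5, 2]: one slot index per course

Running evolve() repeatedly yields different conflict-free assignments, which mirrors the paper's goal of offering several interchangeable timetable variants.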
The key to deriving an explicit statistical formula for a physically specified continuum is to consider a derivative expression that identifies the definitive configuration of the continuum itself. Moreover, this statistical formula should reflect the full distribution on which the considered continuum most likely depends. However, reaching the required statistical formula demands a somewhat tedious mathematical and physical derivation. The procedure in the present research is to establish, modify, and implement an optimized combination of the Airy stress function for elastically deformed media and the multi-canonical joint probability density functions for multivariate distribution complet…
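For reference, the standard plane-elasticity relations behind the Airy stress function (a textbook recap in the absence of body forces, not a result of the paper) express the in-plane stresses as second derivatives of a scalar potential satisfying the biharmonic equation:

\[
\sigma_{xx} = \frac{\partial^2 \varphi}{\partial y^2}, \qquad
\sigma_{yy} = \frac{\partial^2 \varphi}{\partial x^2}, \qquad
\sigma_{xy} = -\frac{\partial^2 \varphi}{\partial x\,\partial y},
\qquad\text{with}\qquad
\nabla^4 \varphi \equiv \frac{\partial^4 \varphi}{\partial x^4}
+ 2\,\frac{\partial^4 \varphi}{\partial x^2 \partial y^2}
+ \frac{\partial^4 \varphi}{\partial y^4} = 0 .
\]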
The estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data sets are skewed, estimating the parameters and computing the reliability function in the presence of skewness requires a distribution flexible enough to handle such data. This is the case for the data of Diyala Company for Electrical Industries, where a positive skewness was observed in the data collected from the Power and Machinery Department. This calls for a distribution suited to those data and for methods that accommodate this problem and lead to accurate estimates of the reliability function.
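For context, the reliability function referred to above is the standard survival quantity below; this is a generic definition, not a formula specific to the distribution ultimately chosen in the paper:

\[
R(t) = P(T > t) = 1 - F(t) = \int_t^{\infty} f(u)\,du ,
\]
where \(T\) is the time to failure, \(F\) its cumulative distribution function, and \(f\) its density.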
A condensed study was conducted to compare ordinary estimators, in particular the maximum likelihood estimator and a robust estimator, for estimating the parameters of the mixed model of order one, namely the ARMA(1,1) model.
A simulation study was carried out for several variants of the model using small, moderate, and large sample sizes, and some new results were obtained. MAPE was used as the statistical criterion for comparison.
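A minimal sketch of this kind of simulation experiment is shown below, covering only the maximum likelihood side and the MAPE criterion (the robust estimator is not shown). The true parameter values, sample sizes, and replication count are assumptions for illustration, not values from the paper.

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical true parameters for y_t = phi*y_{t-1} + e_t + theta*e_{t-1}
PHI, THETA = 0.6, 0.3
TRUE = np.array([PHI, THETA])

def mape(true, est):
    """Mean absolute percentage error between true and estimated parameters."""
    return np.mean(np.abs((true - est) / true)) * 100

def simulate_and_fit(n, replications=100, seed=0):
    rng = np.random.default_rng(seed)
    process = ArmaProcess(ar=[1, -PHI], ma=[1, THETA])   # statsmodels lag-polynomial convention
    errors = []
    for _ in range(replications):
        y = process.generate_sample(nsample=n, distrvs=rng.standard_normal)
        res = ARIMA(y, order=(1, 0, 1)).fit()            # maximum likelihood fit
        params = dict(zip(res.model.param_names, np.asarray(res.params)))
        est = np.array([params["ar.L1"], params["ma.L1"]])
        errors.append(mape(TRUE, est))
    return np.mean(errors)

# Small, moderate, and large sample sizes, as in the comparison described above.
for n in (30, 100, 500):
    print(f"n={n:4d}  average MAPE of ML estimates: {simulate_and_fit(n):.2f}%")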
The basic concepts of near open subgraphs and of near rough, near exact, and near fuzzy graphs are introduced and sufficiently illustrated. The Gm-closure space induced by closure operators is used to generalize the basic rough graph concepts. We introduce near exactness and near roughness by applying the near concepts to improve the accuracy of graph definability. We give a new definition of a membership function to identify near interior, near boundary, and near exterior vertices. Moreover, proved results, examples, and counterexamples are provided. The Gm-closure structure suggested in this paper opens the way for applying a rich body of topological facts and methods in granular computing.
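As background, the classical rough set notions that the graph-theoretic "near" variants generalize (these are the standard definitions, not the paper's new ones) are the lower and upper approximations of a vertex subset \(V'\) with respect to an equivalence relation \(R\):

\[
\underline{R}(V') = \{\, v : [v]_R \subseteq V' \,\}, \qquad
\overline{R}(V') = \{\, v : [v]_R \cap V' \neq \emptyset \,\},
\]
with boundary \(\overline{R}(V') \setminus \underline{R}(V')\); a set is exact when this boundary is empty and rough otherwise.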
This paper studies two stratified quantile regression models, of the marginal and the conditional varieties. We estimate the quantile functions of these models using two nonparametric methods: smoothing splines (B-splines) and kernel regression (Nadaraya-Watson). The estimates are obtained by solving the nonparametric quantile regression problem, that is, by minimizing the quantile regression objective functions within the framework of varying coefficient models. The main goal is to compare the estimators produced by the two nonparametric methods and to adopt the better of the two.
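For reference, the quantile regression objective minimized in both methods is built from the standard check (pinball) loss; the kernel-weighted local form shown here is a generic formulation, not the paper's exact estimator:

\[
\rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr), \qquad
\hat{q}_\tau(x) = \arg\min_{q} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)\rho_\tau(y_i - q),
\]
where \(\tau \in (0,1)\) is the quantile level, \(K\) a kernel, and \(h\) the bandwidth; the B-spline variant replaces the local constant \(q\) with a spline expansion of the quantile function.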
The concept of separation axioms plays a key role in general topology and in all generalized forms of topologies. The present authors continue the study of gpα-closed sets by utilizing this concept: new separation axioms, namely gpα-regular and gpα-normal spaces, are studied and their characterizations are established. In addition, new spaces, namely gpα-Tk for k = 0, 1, 2, are studied.
This study applies non-parametric methods to estimate the conditional survival function with the Beran estimator, using both Nadaraya-Watson and Priestley-Chao weights. Interval-censored and right-censored breast cancer data for two types of treatment, chemotherapy and radiotherapy, are used, with age treated as a continuous covariate. The computations were carried out in MATLAB, and the mean squared error (MSE) was used to compare the weights. The results showed that the Nadaraya-Watson weights were superior for estimating the conditional survival function for both chemotherapy and radiotherapy.
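For reference, the Beran conditional survival estimator takes the following standard product-limit form when Nadaraya-Watson weights are used (a textbook formulation, not the paper's specific implementation):

\[
\hat{S}(t \mid x) = \prod_{i:\,T_{(i)} \le t,\ \delta_{(i)} = 1}
\left( 1 - \frac{W_{(i)}(x)}{1 - \sum_{j=1}^{i-1} W_{(j)}(x)} \right),
\qquad
W_i(x) = \frac{K\!\left(\frac{x - X_i}{h}\right)}{\sum_{j=1}^{n} K\!\left(\frac{x - X_j}{h}\right)},
\]
where \(T_{(1)} \le \dots \le T_{(n)}\) are the ordered observed times, \(\delta_{(i)}\) the censoring indicators, \(K\) a kernel, and \(h\) a bandwidth; the Priestley-Chao variant replaces these normalized weights with kernel weights scaled by the spacings of the design points.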
The main aim of this paper is to use the notion introduced in [1] to offer new classes of separation axioms in ideal spaces. We also offer new notions of convergence in ideal spaces via this set. Relations among the several types of separation axioms offered are explained.
The theories of metric spaces and fuzzy metric spaces are crucial topics in mathematics.
Compactness is one of the most important and fundamental properties and is widely used in functional analysis. In this paper, the definition of a compact fuzzy soft metric space is introduced and some of its important theorems are investigated. Also, sequentially compact and locally compact fuzzy soft metric spaces are defined and the relationships between them are studied. Moreover, the relationships between each of these two concepts and several other known concepts are investigated separately. In addition, compact fuzzy soft continuous functions are studied.
In this paper, we used four classification methods to classify objects and compared them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for classification and detection of objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, the gray images were then enhanced using histogram equalization, and each image was resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods were applied.
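A minimal sketch of this pipeline is given below, using synthetic feature vectors in place of the MCOCO images and default hyperparameters; the stand-in data, the number of PCA components, and all settings are assumptions, not the paper's configuration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def equalize(img_flat, bins=256):
    """Histogram equalization of one flattened grayscale image (values in 0..255)."""
    hist, edges = np.histogram(img_flat, bins=bins, range=(0, 255))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img_flat, edges[:-1], cdf * 255)

# Synthetic stand-in for grayscale 20 x 20 images (400 features per image);
# the real pipeline would load MCOCO images, convert to gray, and resize instead.
X, y = make_classification(n_samples=1000, n_features=400, n_informative=40,
                           n_classes=4, random_state=0)
X = np.interp(X, (X.min(), X.max()), (0, 255))   # map features into the 0..255 range
X = np.vstack([equalize(row) for row in X])      # per-image histogram equalization

# 7:3 train/test split, then PCA feature extraction as described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=50).fit(X_tr)
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

classifiers = {
    "KNN": KNeighborsClassifier(),
    "SGD": SGDClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
}
for name, clf in classifiers.items():
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))   # test accuracy per method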