This paper presents a hybrid approach for solving the null values problem by combining rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model: a large set of complete data, called the learning data, is used to find the decision rule sets that are then used to solve the incomplete data problem. The intelligent swarm algorithm performs feature selection, with the bees algorithm serving as the heuristic search combined with rough set theory as the evaluation function. Another feature selection algorithm, ID3, is also presented; it works as a statistical algorithm rather than an intelligent one. The two approaches are compared on their performance in null value estimation when working with rough set theory. The results obtained from most code sets show that the bees algorithm is better than ID3 at decreasing the number of extracted rules without affecting accuracy, and at increasing the accuracy of null value estimation, especially as the number of null values increases.
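The wrapper scheme described above, a bees-algorithm search over attribute subsets scored by a rough-set evaluation function, can be sketched as follows. This is a minimal illustration on a toy decision table, not the paper's actual implementation: the function names, the size penalty in the fitness, and all parameter values are assumptions.

```python
import random

# Toy decision table: each row is an object, the last entry is the decision.
DATA = [
    (0, 1, 0, 1, 'yes'),
    (0, 1, 1, 0, 'yes'),
    (1, 0, 0, 1, 'no'),
    (1, 1, 0, 0, 'no'),
    (0, 0, 1, 1, 'yes'),
    (1, 0, 1, 0, 'no'),
]
N_ATTRS = 4

def dependency(attrs):
    """Rough-set dependency degree gamma(B): the fraction of objects whose
    B-indiscernibility class maps to a single decision (the positive region)."""
    if not attrs:
        return 0.0
    blocks = {}
    for row in DATA:
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(row[-1])
    pos = sum(len(d) for d in blocks.values() if len(set(d)) == 1)
    return pos / len(DATA)

def fitness(subset):
    # Reward full dependency, lightly penalise subset size (assumed weighting).
    return dependency(subset) - 0.01 * len(subset)

def random_subset():
    return tuple(sorted(random.sample(range(N_ATTRS), random.randint(1, N_ATTRS))))

def neighbour(subset):
    s = set(subset)
    s ^= {random.randrange(N_ATTRS)}   # flip one attribute in or out
    return tuple(sorted(s)) or subset  # never return the empty subset

def bees_reduct(n_scouts=10, n_best=3, n_recruits=5, iters=30, seed=1):
    random.seed(seed)
    sites = [random_subset() for _ in range(n_scouts)]
    best = max(sites, key=fitness)
    for _ in range(iters):
        sites.sort(key=fitness, reverse=True)
        new_sites = []
        for site in sites[:n_best]:        # recruit bees around the best sites
            cands = [site] + [neighbour(site) for _ in range(n_recruits)]
            new_sites.append(max(cands, key=fitness))
        while len(new_sites) < n_scouts:   # remaining bees scout at random
            new_sites.append(random_subset())
        sites = new_sites
        best = max([best] + sites, key=fitness)
    return best

print("selected attributes:", bees_reduct())
```

The selected subset preserves the full dependency degree of the complete attribute set while using fewer attributes, which is the property the rule extraction step then exploits.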
Political Discourse Analysis is an important linguistic study approach concerned with how politicians gain public support. The present paper sheds light on the figures of speech of emphasis in the televised debate between the two presidential election candidates, Emmanuel Macron and Marine Le Pen, and on the distinctive effect these figures add to political discourse in winning general public support as well as the presidential elections.
The present paper provides a rudimentary definition and analysis of the terms “discourse” and “political discourse” and traces the significant role played by politically directed televised media and the internet in supporting political parties.
The reaction of 2-amino-benzothiazole with bis[O,O-2,3,O,O-5,6-(chloro(carboxylic)methylidene)]-L-ascorbic acid (L-AsCl2) gave the new product 3-(benzo[d]thiazol-2-yl)-9-oxo-6,7,7a,9-tetrahydro-2H-2,10:4,7-diepoxyfuro[3,2-f][1,5,3]dioxazonine-2,4(3H)-dicarboxylic acid hydrochloride (L-as-am), which was isolated and identified by (C, H, N) elemental microanalysis, FT-IR, UV-Vis, mass spectrometry, and ¹H-NMR techniques. The (L-as-am) ligand complexes were obtained by the reaction of (L-as-am) with the metal ions M(II) = Co, Ni, Cu, and Zn. The synthesized complexes were characterized by UV-Vis, FT-IR, mass spectrometry, molar ratio, molar conductivity, and magnetic susceptibility techniques.
The current work concerns preparing cobalt manganese ferrite (Co0.2Mn0.8Fe2O4) and decorating it with polyaniline (PAni) for supercapacitor applications. The X-ray diffraction (XRD) findings manifested a broad peak for PAni and a cubic structure for the cobalt manganese ferrite with a crystallite size of about 21 nm. Field emission scanning electron microscope (FE-SEM) images evidenced that the PAni, prepared by the hydrothermal method, has a nanofiber (NF) structure with grain sizes of 33–55 nm. The magnetic measurements (VSM) conducted at room temperature showed that the samples had definite magnetic properties. Additionally, it was noted that the saturation magnetization …
Soft sets have been known since 1999, and because of their wide applications and great flexibility in solving problems, we use these concepts to define new types of soft limit points, which we call soft turning points. Finally, we use these points to define new types of soft separation axioms and study their properties.
In this research, a new application to games has been developed using the generalization of the separation axioms in topology, in particular regular, Sg-regular, and SSg-regular spaces. The games under study consist of two players, and the victory of the second player depends on the strategy and choices of the first player. Many regularity, Sg-regularity, and SSg-regularity theorems have been proven using this type of game, and many results and illustrative examples are presented.
The concept of -closedness, a kind of covering property for topological spaces, has already been studied with meticulous care from different angles and via different approaches. In this paper, we continue that investigation in terms of a different concept, viz. grills. The deliberations in the article include certain characterizations and a few necessary conditions for the -closedness of a space; the latter conditions are also shown to be equivalent to -closedness in a -almost regular space. All of this, together with the associated discussion and results, is carried out with grills as the prime supporting tool.
In this paper, some basic notions and facts in b-modular spaces, similar to those in modular spaces, are given as a type of generalization, for example the concepts of convergence, best approximation, and uniform convexity. Then, two results on the relation between semi-compactness and approximation are proved, and these are used to prove a theorem on the existence of a best approximation for a semi-compact subset of a b-modular space.
This paper shows how to estimate the parameter of the generalized exponential Rayleigh (GER) distribution by three estimation methods: the maximum likelihood estimator method (MLE), the moment employing estimation method (MEM), and the rank set sampling estimator method (RSSEM). A simulation technique is used with all three estimation methods to find the parameters of the generalized exponential Rayleigh distribution. Finally, the mean squared error criterion is used to compare these estimation methods and determine which is best.
Data mining has become very important at the present time, especially as the volume of information has grown enormously, so it was necessary to use data mining to manage and exploit it. One data mining technique is association rule mining; here the Pattern Growth method, an enhancement of Apriori, is used. The pattern growth method depends on the FP-tree structure. This paper presents a modification of the FP-tree algorithm, called HFMFFP-Growth, which divides the dataset and, for each part, keeps the most frequent items in the FP-tree, so the final conditional tree has fewer nodes than the original FP-tree and requires less memory space and time.
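The core data structure here is the FP-tree: items are ranked by global frequency, each transaction is sorted by that ranking, and transactions are inserted along shared prefixes. The sketch below builds a plain FP-tree and then a second tree that keeps only the most frequent items, showing the node-count reduction that motivates the approach; the dataset-partitioning step of HFMFFP-Growth itself is not reproduced, and `top_k` is an illustrative parameter, not one from the paper.

```python
from collections import Counter

class Node:
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, min_support=2, top_k=None):
    """Build an FP-tree: keep frequent items (optionally only the top_k most
    frequent), sort each transaction by global frequency, insert by prefix."""
    freq = Counter(i for t in transactions for i in set(t))
    kept = [i for i, c in freq.most_common(top_k) if c >= min_support]
    order = {i: rank for rank, i in enumerate(kept)}
    root = Node(None)
    for t in transactions:
        items = sorted((i for i in set(t) if i in order), key=order.get)
        node = root
        for i in items:                       # walk/extend the shared prefix
            node = node.children.setdefault(i, Node(i))
            node.count += 1
    return root

def n_nodes(node):
    return 1 + sum(n_nodes(c) for c in node.children.values())

TRANSACTIONS = [["a", "b", "c"], ["a", "b"], ["a", "c", "d"],
                ["b", "c"], ["a", "b", "d"]]
full = build_fp_tree(TRANSACTIONS)
pruned = build_fp_tree(TRANSACTIONS, top_k=2)  # keep only the 2 most frequent items
print("full tree nodes:", n_nodes(full))
print("pruned tree nodes:", n_nodes(pruned))
```

Because infrequent items are dropped before insertion, more transactions share prefixes and the tree shrinks, which is the memory and time saving the modified algorithm targets.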