The development of information systems in recent years has contributed various methods of gathering information to evaluate IS performance. The most common approach used to collect information is the survey. This method, however, suffers from one major drawback: decision makers spend considerable time transferring data from survey sheets into analytical programs. This paper therefore proposes a method called the 'survey algorithm based on the R programming language', or SABR, for transferring data from survey sheets into R environments by treating the arrangement of the data as a relational format. R and the relational data format provide an excellent opportunity to manage and analyse the accumulated data. Moreover, a survey system based on structured query language (SQL) and the R programming language is designed to optimise the management of survey systems by combining the large feature sets of multiple data-science languages. The experiments verified enhancements in flexibility, technical tooling, and data-visualisation features for processing the collected data from different aspects, and the proposed approach is demonstrated on a simple case study that covers its evaluation requirements. Finally, the results of this research can be used to improve information-management methods in different areas, such as survey systems and other data models, whether relational or non-relational, using SABR. The method demonstrated improved accuracy of the collected data, reduced data-processing time, and arranged the data into the desired model.
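The core idea of SABR, treating survey-sheet responses as a relational table so they can be queried instead of re-keyed by hand, can be sketched as follows. The abstract combines SQL with R; the sketch below uses Python's built-in sqlite3 only as a stand-in, and the table layout, column names, and scores are invented for illustration, not the paper's actual schema.

```python
import sqlite3

# Hypothetical survey responses: (respondent, question, score).
rows = [
    ("R1", "Q1", 4),
    ("R1", "Q2", 5),
    ("R2", "Q1", 3),
    ("R2", "Q2", 2),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (respondent TEXT, question TEXT, score INTEGER)")
conn.executemany("INSERT INTO survey VALUES (?, ?, ?)", rows)

# One relational query replaces manual per-sheet tallying:
averages = dict(conn.execute(
    "SELECT question, AVG(score) FROM survey GROUP BY question"
))
```

Once the responses live in a relational table, aggregations, filters, and joins against other tables come for free, which is the flexibility the abstract claims for the relational arrangement.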
A simple random-number-generator setup is proposed. The random number generation is based on the shot-noise fluctuations in a p-i-n photodiode. These fluctuations, defined as shot noise, form a stationary random process whose statistical properties reflect the Poisson statistics associated with photon streams. Shot noise has its origin in the quantum nature of light and is related to vacuum fluctuations. Two photodiodes were used and their shot-noise fluctuations were subtracted; the difference was applied to a comparator to obtain the random sequence.
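The subtract-and-compare scheme can be sketched in software, with Gaussian noise standing in for each diode's shot noise (an assumption; real shot noise is Poissonian, well approximated by a Gaussian at high photon flux). The comparator simply tests the sign of the difference:

```python
import random

def random_bits(n, seed=None):
    """Simulate the two-photodiode scheme: independent noise samples stand
    in for each diode's shot noise; the comparator outputs 1 when the
    difference is positive, else 0."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        d1 = rng.gauss(0.0, 1.0)  # diode 1 shot-noise sample
        d2 = rng.gauss(0.0, 1.0)  # diode 2 shot-noise sample
        bits.append(1 if d1 - d2 > 0 else 0)  # comparator decision
    return bits

bits = random_bits(10000, seed=1)
```

Subtracting two identically distributed noise sources centres the difference at zero, which is what keeps the comparator output unbiased even if each diode carries a common DC offset.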
Administrative procedures in various organizations produce numerous crucial records and data. These records and data are also used in other processes, such as customer relationship management and accounting operations. It is incredibly challenging to use and extract valuable and meaningful information from these data and records because they are frequently enormous and continuously growing in size and complexity. Data mining is the act of sorting through large data sets to find patterns and relationships that might aid in resolving business issues. Using data mining techniques, enterprises can forecast future trends and make better business decisions. The Apriori algorithm has bee
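The Apriori algorithm mentioned above finds itemsets that occur together frequently, growing candidates level by level and pruning any candidate with an infrequent subset. A minimal sketch over an invented basket dataset:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: return {itemset: support_count} for every itemset
    appearing in at least min_support transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    frequent = {}
    current = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    while current:
        for s in current:
            frequent[s] = support(s)
        # Join step: combine frequent k-itemsets into (k+1)-candidates.
        candidates = {a | b for a in current for b in current if len(a | b) == len(a) + 1}
        # Prune step: every k-subset must itself be frequent (Apriori property).
        current = [c for c in candidates
                   if all(frozenset(sub) in frequent for sub in combinations(c, len(c) - 1))
                   and support(c) >= min_support]
    return frequent

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
           {"bread", "eggs"}, {"milk", "eggs"}]
freq = apriori(baskets, min_support=2)
```

The prune step is what makes Apriori scale: a (k+1)-itemset is only counted against the data if all of its k-subsets already survived, so most of the exponential candidate space is never scanned.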
Air pollution refers to the release of pollutants into the air that are detrimental to human health and the planet as a whole. In this research, concentrations of air pollutants such as Total Suspended Particles (TSP), Carbon Monoxide (CO), and Carbon Dioxide (CO2), together with meteorological parameters including temperature (T), relative humidity (RH), and wind speed and direction, were measured in Baghdad city at 22 measuring stations located in different regions and classified as industrial, commercial, or residential stations. Using the Arc-GIS program (spatial analysis), different maps have been prepared for the distribution of the different pollutant
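Distribution maps like these are built by interpolating the point measurements at the stations onto a grid. One common interpolator offered by GIS spatial-analysis toolboxes is inverse distance weighting; whether the study used this particular method is an assumption, and the station coordinates and CO readings below are invented:

```python
import math

def idw(stations, x, y, power=2.0):
    """Inverse-distance-weighted estimate of a measured value at (x, y)
    from point samples [(sx, sy, value), ...]."""
    num = den = 0.0
    for sx, sy, value in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return value  # query point sits exactly on a station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Three hypothetical CO readings (mg/m3) at station coordinates:
stations = [(0.0, 0.0, 2.0), (1.0, 0.0, 4.0), (0.0, 1.0, 6.0)]
estimate = idw(stations, 0.5, 0.5)
```

Evaluating the function over a regular grid of (x, y) points yields the raster behind a pollutant-distribution map.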
Chemical pollution is a very important issue that people suffer from; it often affects the health of society and the health of future generations. Consequently, it must be studied in order to discover suitable models and find descriptions that predict its behaviour in the forthcoming years. Chemical pollution data in Iraq have a great scope and manifold sources and kinds, which brands them as Big Data that need to be studied using novel statistical methods. The research focuses on using the Proposed Nonparametric Procedure (NP Method) to develop an (OCMT) test procedure for estimating the parameters of a linear regression model with a large data size (Big Data) which comprises many indicators associated with chemi
The current research discusses the topic of formal data within a methodological framework by defining the research problem, limits, and objectives, and by defining the most important terms mentioned in the research. The first section of the theoretical framework addressed the concept of the Bauhaus school, the philosophy of the Bauhaus school, and the logical bases of this school. The second section dealt with the most important elements and structural bases of the Bauhaus school, which are considered the most important formal data of this school, and their implications for fabric and costume design. The research came up with the most important indicators resulting from the theoretical framework. Chapter three defined the
Linear programming currently occupies a prominent position in various fields and has wide applications; its importance lies in being a means of studying the behavior of a large number of systems. It is also the simplest and easiest type of model that can be built to address industrial, commercial, military, and other problems, and through it an optimal quantitative value can be obtained. In this research we deal with the post-optimality solution, or what is known as sensitivity analysis, using the principle of shadow prices. The scientific treatment of a problem is not complete once the optimal solution is reached: any change in the values of the model constants, or what is known as the inputs of the model, that will chan
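The shadow-price idea behind this kind of sensitivity analysis can be illustrated on a tiny two-variable LP, where the optimum can be found by enumerating vertices. The objective and constraints below are invented for illustration; the shadow price of a constraint is the marginal gain in the optimal objective per unit of extra right-hand side:

```python
# Maximize 3x + 2y  subject to  x + y <= b1,  x <= 3,  x, y >= 0.
# With two variables the feasible region is a polygon, so the optimum
# lies at a vertex (the enumeration below assumes b1 >= 3).

def solve(b1):
    """Optimal objective value as a function of the first RHS b1."""
    vertices = [(0, 0), (3, 0), (3, b1 - 3), (0, b1)]
    feasible = [(x, y) for x, y in vertices
                if x + y <= b1 + 1e-9 and x <= 3 and x >= 0 and y >= 0]
    return max(3 * x + 2 * y for x, y in feasible)

z = solve(4.0)
# Shadow price of the constraint x + y <= 4, estimated by perturbing
# its right-hand side by one unit:
shadow = solve(5.0) - solve(4.0)
```

Here loosening x + y <= 4 by one unit lets the solution add one more unit of y (coefficient 2 in the objective), so the constraint's shadow price is 2: exactly the "price" the post-optimality analysis attaches to one more unit of that resource.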
Visual analytics has become an important approach for discovering patterns in big data. As visualization already struggles with the high dimensionality of data, issues like a concept hierarchy on each dimension add more difficulty and make visualization a prohibitive task. The data cube offers multi-perspective aggregated views of large data sets and has important applications in business and many other areas. It has high dimensionality, concept hierarchies, a vast number of cells, and comes with special exploration operations such as roll-up, drill-down, slicing, and dicing. All these issues make data cubes very difficult to explore visually. Most existing approaches visualize a data cube in 2D space and require preprocessing steps. In this paper, we propose a visu
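The roll-up operation named above aggregates a cube's measures upward by dropping dimensions; a minimal sketch over an invented three-dimensional sales cube (dimension names and values are hypothetical):

```python
from collections import defaultdict

# Tiny fact table: (city, month, product) -> sales amount.
facts = {
    ("Baghdad", "Jan", "A"): 10,
    ("Baghdad", "Jan", "B"): 5,
    ("Baghdad", "Feb", "A"): 7,
    ("Basra",   "Jan", "A"): 3,
}

def roll_up(facts, keep):
    """Roll up the cube by summing away every dimension not in `keep`
    (dimension order: city=0, month=1, product=2)."""
    out = defaultdict(int)
    for dims, measure in facts.items():
        out[tuple(dims[i] for i in keep)] += measure
    return dict(out)

by_city_month = roll_up(facts, keep=(0, 1))  # drop product
by_city = roll_up(facts, keep=(0,))          # drop month and product
```

Drill-down is the inverse (re-introducing a dropped dimension), and slicing fixes one dimension to a single value; together these operations generate the many aggregated views that a cube visualization has to cover.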
The use of parametric models and their associated estimation methods requires that many preliminary conditions be met for those models to represent the population under study adequately, prompting researchers to search for more flexible models; these are the nonparametric models. Many researchers are interested in the study of the permanence function and its estimation methods, among them the non-parametric methods. For the purpose of statistical inference about the parameters of the statistical distribution of lifetimes with censored data, the experimental section of this thesis compares non-parametric methods for the permanence function, the existence
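The permanence function here is the survival function S(t), and the standard nonparametric estimator for it under right-censored lifetimes is the Kaplan-Meier product-limit estimator, shown below as a representative method (the thesis's specific estimators may differ, and the lifetimes are invented):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. events[i] is 1 for an observed failure
    at times[i], 0 for a right-censored observation. Returns a list of
    (t, S(t)) pairs at each observed failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = ties = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]   # count failures at time t
            ties += 1              # count everyone leaving at time t
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= ties
    return curve

# Six hypothetical lifetimes; event=0 marks a censored observation.
curve = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])
```

Censored subjects contribute to the risk set up to their censoring time but never trigger a drop in the curve, which is precisely how the estimator uses incomplete lifetimes without discarding them.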
The paper presents the design of a control structure that enables integration of a kinematic neural controller for trajectory tracking of a nonholonomic differential two-wheeled mobile robot, and then proposes a kinematic neural controller to direct a National Instruments mobile robot (NI Mobile Robot). The controller makes the actual velocity of the wheeled mobile robot approach the required velocity by guaranteeing that the trajectory-tracking mean square error converges to a minimum. The proposed tracking control system consists of two layers: the first layer is a multi-layer perceptron neural network that controls the mobile robot to track the required path; the second layer is an optimization layer, which is impleme
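The kinematic layer of a differential-drive robot like this converts a commanded body velocity (linear v, angular omega) into left/right wheel speeds and back. A standard-kinematics sketch, with the wheel radius and axle length being assumed values rather than the NI robot's actual dimensions:

```python
def wheel_speeds(v, omega, radius=0.05, axle=0.30):
    """Inverse kinematics of a differential-drive robot: body velocities
    v (m/s) and omega (rad/s) to (left, right) wheel angular velocities
    in rad/s. radius and axle are illustrative assumptions."""
    w_right = (v + omega * axle / 2) / radius
    w_left = (v - omega * axle / 2) / radius
    return w_left, w_right

def body_velocity(w_left, w_right, radius=0.05, axle=0.30):
    """Forward kinematics: wheel angular velocities back to (v, omega)."""
    v = radius * (w_right + w_left) / 2
    omega = radius * (w_right - w_left) / axle
    return v, omega

wl, wr = wheel_speeds(0.5, 1.0)   # e.g. 0.5 m/s forward while turning
```

The nonholonomic constraint shows up here directly: the robot has only these two wheel inputs, so it cannot command a sideways velocity, and any trajectory-tracking controller must reach lateral targets through combinations of v and omega.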