Every so often, a confluence of novel technologies emerges that radically transforms every aspect of industry, the global economy, and ultimately the way we live. These sharp leaps of human ingenuity are known as industrial revolutions, and we are currently in the midst of the fourth such revolution, coined Industry 4.0 by the World Economic Forum. Building on its guideline set of technologies that encompass Industry 4.0, we present a full set of pillar technologies on which Industry 4.0 project portfolio management rests, as well as the foundation technologies that support these pillars. A complete model of an Industry 4.0 factory that relies on these pillar technologies is presented. The full set of pillars encompasses cyber-physical systems and the Internet of Things (IoT), artificial intelligence (AI), machine learning (ML) and big data, robots and drones, cloud computing, 5G and 6G networks, 3D printing, virtual and augmented reality, and blockchain technology. These technologies rest on a set of foundation technologies which include advances in computing, nanotechnology, biotechnology, materials, energy, and cube satellites. We illustrate the confluence of all these technologies in a single model factory. This new factory model succinctly demonstrates the advancements in manufacturing introduced by these modern technologies, which qualify this as a seminal revolutionary event in industrial history.
Logistic regression is one of the statistical methods used to describe and estimate the relationship between a binary response variable (Y) and explanatory variables (X). The dependent variable is a binary response taking two values (one when a specific event occurs and zero when it does not), such as injured/uninjured or married/unmarried. A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used in estimating a binary-response logistic regression model by adopting the Jackknife…
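The estimation approach the abstract describes can be sketched in a minimal form: fit a binary-response logistic regression by maximum likelihood (here via plain gradient ascent on the log-likelihood), then re-estimate the slope with leave-one-out jackknife resampling. The synthetic data, true coefficients, learning rate, and iteration count below are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch: maximum-likelihood logistic regression + jackknife re-estimates.
# All data and tuning values are assumed for illustration.
import math
import random

def fit_logistic(xs, ys, lr=0.1, iters=1500):
    """Fit P(y=1|x) = 1/(1+exp(-(b0 + b1*x))) by gradient ascent on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)       # score component for the intercept
            g1 += (y - p) * x   # score component for the slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def jackknife_slope(xs, ys):
    """Leave-one-out re-estimates of the slope coefficient."""
    return [fit_logistic(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])[1]
            for i in range(len(xs))]

# Simulated data from an assumed true model b0 = -0.5, b1 = 2.0.
rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(40)]
ys = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(-0.5 + 2.0 * x))) else 0 for x in xs]

b0_hat, b1_hat = fit_logistic(xs, ys)
loo_slopes = jackknife_slope(xs, ys)
```

The jackknife re-estimates can then feed a bias correction or a variance estimate for the slope, which is the usual motivation for combining it with maximum likelihood.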
Phenomena often suffer from disturbances in their data, as well as difficulty of formulation, especially when the response is unclear or when many essential differences plague the experimental units from which the data were taken. Thus emerged the need for an estimation method that implicitly rates these experimental units, either by discrimination or by creating blocks for groups of these experimental units, in the hope of controlling their responses and making them more homogeneous. With the development of computing, and following the principle of integrating the sciences, it has been found that modern algorithms from computer science, such as the genetic algorithm or ant colony…
This paper shows how to estimate the three parameters of the generalized exponential Rayleigh distribution using three estimation methods, namely the moment estimation method (MEM), the ordinary least squares estimation method (OLSEM), and the maximum entropy estimation method (MEEM). A simulation technique is used with all three estimation methods to find the parameters of the generalized exponential Rayleigh distribution. To identify the best method, we use the mean squared error criterion. Finally, to extract the experimental results, one of the object-oriented programming languages, Visual Basic .NET, was used.
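The comparison workflow described above (simulate samples, estimate by several methods, rank by mean squared error) can be sketched as follows. The ordinary Rayleigh distribution stands in here for the generalized exponential Rayleigh distribution, whose estimators are more involved; the true parameter, sample size, and replication count are illustrative assumptions.

```python
# Hedged sketch of an MSE-based comparison of two estimation methods,
# using the plain Rayleigh distribution as a stand-in.
import math
import random

SIGMA = 2.0  # assumed true scale parameter for the simulation

def rayleigh_sample(sigma, n, rng):
    # Inverse-CDF sampling: X = sigma * sqrt(-2 ln(1 - U)), U ~ Uniform(0, 1)
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random())) for _ in range(n)]

def mle(xs):
    # Maximum-likelihood estimator: sigma^2 = sum(x^2) / (2n)
    return math.sqrt(sum(x * x for x in xs) / (2 * len(xs)))

def moments(xs):
    # Method-of-moments estimator, from E[X] = sigma * sqrt(pi/2)
    return (sum(xs) / len(xs)) * math.sqrt(2.0 / math.pi)

def mse(estimator, reps=2000, n=50):
    """Monte Carlo mean squared error of an estimator at the true SIGMA."""
    rng = random.Random(1)  # same seed so both methods see identical samples
    errs = [(estimator(rayleigh_sample(SIGMA, n, rng)) - SIGMA) ** 2
            for _ in range(reps)]
    return sum(errs) / reps

mse_mle = mse(mle)
mse_mom = mse(moments)
# The method with the smaller MSE is declared the better estimator.
```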
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique, creating a random distribution for the model parameters, which depend on time t. The LHS technique gives the MLHFD method the advantage of producing fast variation of the parameters' values across a number of multidimensional simulations (100, 1000, and 5000). The generated Latin hypercube sample, which is random or non-deterministic in nature, is further integrated with the FD method…
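The sampling step the abstract combines with the finite-difference solver can be sketched in a minimal form: Latin hypercube sampling splits each parameter's range into n equal-probability strata and places exactly one draw in each stratum per dimension. The parameter bounds and sample count below are illustrative assumptions, not the paper's values.

```python
# Minimal Latin hypercube sampling (LHS) sketch; bounds are assumed for illustration.
import random

def latin_hypercube(n, bounds, rng):
    """Return n samples; bounds is a list of (low, high) intervals, one per parameter."""
    d = len(bounds)
    samples = [[0.0] * d for _ in range(n)]
    for j, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)  # assign one stratum per sample in this dimension
        for i in range(n):
            u = (strata[i] + rng.random()) / n  # uniform draw inside the stratum
            samples[i][j] = lo + u * (hi - lo)
    return samples

rng = random.Random(42)
# Two hypothetical model parameters, sampled 100 times.
pts = latin_hypercube(100, [(0.0, 1.0), (0.1, 0.5)], rng)
```

Each of the 100 points would then parameterize one run of the FD scheme, and the runs are averaged to obtain the "mean" part of MLHFD.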
In this paper, we propose a method to estimate missing values of the explanatory variables in a non-parametric multiple regression model and compare it with the arithmetic-mean imputation method. The idea is based on employing the causal relationship between the variables to find an efficient estimate of the missing value. We rely on the kernel estimate given by the Nadaraya–Watson estimator, and on least squares cross-validation (LSCV) to estimate the bandwidth, and we use a simulation study to compare the two methods.
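The imputation idea above can be sketched as follows: predict the missing value of one variable from a related variable using the Nadaraya–Watson kernel regression estimator, with the bandwidth chosen by leave-one-out least squares cross-validation. The data-generating model, bandwidth grid, and query point are illustrative assumptions.

```python
# Hedged sketch: Nadaraya-Watson imputation with an LSCV-selected bandwidth.
import math
import random

def gauss_kernel(u):
    return math.exp(-0.5 * u * u)

def nw_estimate(x0, xs, ys, h, skip=None):
    """Nadaraya-Watson: m(x0) = sum_i K((x0-xi)/h) yi / sum_i K((x0-xi)/h)."""
    num = den = 0.0
    for i, (x, y) in enumerate(zip(xs, ys)):
        if i == skip:
            continue  # leave-one-out mode for cross-validation
        w = gauss_kernel((x0 - x) / h)
        num += w * y
        den += w
    return num / den if den > 0 else 0.0

def lscv_bandwidth(xs, ys, grid):
    """Pick the bandwidth minimising the leave-one-out squared prediction error."""
    def cv(h):
        return sum((ys[i] - nw_estimate(xs[i], xs, ys, h, skip=i)) ** 2
                   for i in range(len(xs)))
    return min(grid, key=cv)

# Simulated pair of related variables (assumed relationship: y = sin(x) + noise).
rng = random.Random(7)
xs = [rng.uniform(0, 3) for _ in range(60)]
ys = [math.sin(x) + rng.gauss(0, 0.1) for x in xs]

grid = [0.05, 0.1, 0.2, 0.4, 0.8]
h = lscv_bandwidth(xs, ys, grid)
imputed = nw_estimate(1.5, xs, ys, h)  # impute the missing value at x = 1.5
```

The arithmetic-mean baseline the paper compares against would simply replace the missing value with the sample mean of the observed values, ignoring the relationship between the variables.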
Purpose: The present study seeks to examine the various historical stages undergone by the concept of scenarios and the development of this concept toward integration with strategic management practices.
Methodology: The current study relied on a literature review approach to provide a total picture of the different stages undergone by this concept.
Main results: Scenarios have not yet reached maturity in their quest for integration with strategic management; a great effort is still needed to mature this thought within the framework of strategic management, through which it can contribute to creating an important knowledge evolution.
Originality and value: Providing a contemporary model linking the roots of this concept with its current…
Quality is the key to success in today's world, which is based mainly on competition in providing high-quality services through the application of the modern management method called total quality management in organizations. This includes describing the provision of health services and patient satisfaction.
The study aimed to reveal the possibility of predicting academic procrastination from both cognitive distortions and time management among students of Al-Aqsa Community College, as well as to reveal the levels of cognitive distortions, time management, and academic procrastination. Additionally, it aimed to identify the size of the correlation between cognitive distortions, time management, and academic procrastination. The study sample consisted of 250 students of Al-Aqsa Community College. The results concluded that the mean levels of cognitive distortions and academic procrastination are average, while the mean level of time management is high. There is a statistically significant positive relationship…