The rapid growth and widespread adoption of the Internet of Things has produced massive quantities of data that must be processed and sent to the cloud. The latency of processing this data and transmitting it to the cloud led to the emergence of fog computing, a new generation of the cloud in which fog nodes extend cloud services to the edge of the network, reducing latency and traffic. Distributing computational resources so as to minimize makespan and running costs is one of the challenges of fog computing. This paper provides a new approach to the task scheduling problem in a Cloud-Fog environment that improves execution time (makespan) and operating costs for Bag-of-Tasks applications. An evolutionary task scheduling algorithm is proposed, for which a custom representation of the problem and a uniform crossover were designed. Furthermore, the individual initialization and perturbation operators (crossover and mutation) were designed to repair any infeasible solution produced or reached by the evolutionary search. The proposed ETS (Evolutionary Task Scheduling) algorithm was evaluated on 11 datasets of varying size in number of tasks. The experimental results show that ETS outperforms the Bee Life Algorithm (BLA), Modified Particle Swarm Optimization (MPSO), and Round Robin (RR) algorithms in terms of makespan and operating costs.
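The scheduling scheme this abstract describes can be illustrated with a minimal evolutionary-algorithm sketch. Only the overall structure (assignment-vector representation, uniform crossover, mutation, a makespan-plus-cost fitness) follows the description above; all problem data — task lengths, node speeds, cost rates, the fitness weighting — are illustrative assumptions, not values from the paper.

```python
import random

random.seed(42)

# Hypothetical problem instance (NOT from the paper): task lengths in MI,
# and nodes given as (processing rate in MIPS, cost per second).
TASKS = [400, 250, 600, 300, 500, 350, 450, 200]
NODES = [(100, 0.5), (150, 0.9), (250, 2.0)]  # two fog nodes, one cloud node

def evaluate(assign):
    """Makespan = latest node finish time; cost = busy time * node cost rate."""
    busy = [0.0] * len(NODES)
    for task, node in zip(TASKS, assign):
        busy[node] += task / NODES[node][0]
    makespan = max(busy)
    cost = sum(b * NODES[i][1] for i, b in enumerate(busy))
    return makespan + 0.1 * cost  # aggregate fitness; the weight is an assumption

def uniform_crossover(a, b):
    # Each gene is drawn from either parent with equal probability.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, p=0.1):
    # Reassign each task to a random node with probability p.
    return [random.randrange(len(NODES)) if random.random() < p else g for g in ind]

def evolve(pop_size=30, gens=100):
    pop = [[random.randrange(len(NODES)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate)
        parents = pop[: pop_size // 2]                # truncation selection
        children = [mutate(uniform_crossover(random.choice(parents),
                                             random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=evaluate)

best = evolve()
print(best, round(evaluate(best), 2))
```

Because every chromosome is a vector of valid node indices, uniform crossover and the mutation above always yield feasible assignments; the paper's repair-style operators address richer representations where that property does not hold automatically.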
The performance evaluation process requires a set of criteria for measuring the level of performance achieved by the unit and the actual development of its activities. In view of the rapid and continuous changes surrounding it, performance is a reflection of the unit's ability to achieve its objectives, as these units are designed to achieve their objectives by exploiting the range of economic resources available to them. The performance evaluation process is a form of oversight, focusing on analyzing the results obtained from all the unit's activities with a view to determining the extent to which the unit has achieved its objectives using the resources available to it
In this research, several estimators are introduced for estimating the hazard function using a nonparametric method, namely the kernel function, for censored data with varying bandwidth and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic, and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most cases, the results show that the local bandwidth is the best
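As a sketch of the kind of estimator discussed above, the following computes a kernel-smoothed Nelson-Aalen hazard estimate for right-censored data using the Epanechnikov kernel with a single global bandwidth. The sample, the bandwidth value, and the exponential lifetime model are illustrative assumptions; the paper's local-bandwidth and boundary-kernel variants are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative right-censored sample (an assumption, not the paper's data):
# true lifetimes ~ Exponential(1), censoring times ~ Uniform(0, 2).
life = rng.exponential(1.0, 300)
cens = rng.uniform(0.0, 2.0, 300)
t_obs = np.minimum(life, cens)           # observed time
delta = (life <= cens).astype(float)     # 1 = event observed, 0 = censored

def epanechnikov(u):
    # K(u) = 0.75 (1 - u^2) on [-1, 1], zero outside.
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def hazard_kernel(t, bandwidth):
    """Kernel-smoothed Nelson-Aalen hazard estimate at time t."""
    order = np.argsort(t_obs)
    times, events = t_obs[order], delta[order]
    at_risk = np.arange(len(times), 0, -1)   # n, n-1, ..., 1 subjects at risk
    increments = events / at_risk            # Nelson-Aalen jump sizes
    u = (t - times) / bandwidth
    return np.sum(epanechnikov(u) * increments) / bandwidth

est = hazard_kernel(0.5, bandwidth=0.3)
```

For the Exponential(1) lifetimes assumed here the true hazard is constant at 1, so `est` should land near 1 away from the boundaries; it is precisely near t = 0 that the boundary kernels studied in the paper matter.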
Extension of the high-reflectance zone bandwidth for the spectral region (8-14 µm) was studied by adapting the concept of contiguous and overlapping high-reflectance stacks. Computations were carried out using the modified characteristic matrix theory, restricted to near-normal incidence of light on a dielectric, homogeneous, and isotropic symmetrical stack. Certain precautions must be taken in the choice of stacks to avoid deep reflectance minima from developing within the extended high-reflectance region. The results illustrate that the techniques for extending high-reflectance regions are applicable not only to mirrors, but also to short- and long-edge filters and to narrow band-pass filters.
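The characteristic matrix computation referred to above can be sketched as follows for near-normal incidence. The layer refractive indices and the (HL)^6 quarter-wave design below are illustrative assumptions, not the paper's stacks; only the matrix method itself follows the text.

```python
import numpy as np

def stack_reflectance(wavelength, layers, n_inc=1.0, n_sub=1.5):
    """Near-normal-incidence reflectance of a dielectric multilayer via the
    characteristic matrix method. `layers` is a list of (index, thickness)."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength   # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])            # [B, C] from substrate side
    r = (n_inc * B - C) / (n_inc * B + C)
    return abs(r) ** 2

# Hypothetical (HL)^6 quarter-wave stack centred at 10 um; the indices are
# rough stand-ins for typical IR coating materials, not values from the paper.
lam0 = 10.0           # micrometres
hi_n, lo_n = 4.0, 2.2
layers = [(hi_n, lam0 / (4 * hi_n)), (lo_n, lam0 / (4 * lo_n))] * 6
R = stack_reflectance(lam0, layers)
```

At the design wavelength each layer contributes a quarter-wave phase, so the stack's reflectance approaches unity; sweeping `wavelength` over 8-14 µm would expose exactly the band edges, and reflectance minima between overlapped stacks, that the abstract warns about.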
This study investigates the application of hydraulic acid fracturing to enhance oil production in the Mishrif Formation of the Al-Fakkah oilfield, where flow rates and wellhead pressures had declined because of asphaltene deposition and inadequate permeability. Acid fracturing, an established technique for low-permeability carbonate reservoirs, was essential because prior solvent cleaning and acidizing efforts had proved inadequate. The paper outlines the protocols established before and after the treatment, emphasizing the importance of careful oversight to guarantee safety and efficacy. In the MiniFrac treatment, 150 barrels of #30 cross-linked gel were injected at 25 barrels per minute, followed by an overflush
An experimental analysis was carried out to study the mass-transport behavior of cupric ion reduction as the main reaction in the presence of 0.5 M H2SO4, using the weight-difference technique (WDT). The experiments were performed in an electrochemical cell with a rotating cylinder electrode as the cathode. The impacts of different operating conditions on the mass transfer coefficient were analyzed: rotation speeds of 100-500 rpm, electrolyte temperatures of 30-60 °C, and cupric ion concentrations of 250-750 ppm. The order of the copper reduction reaction was investigated and showed first-order behavior. The mass transfer coefficient for the described system was correlated with the aid of dimensionless groups
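The abstract is cut off before giving the final correlation, but the classic rotating-cylinder form of Eisenberg, Tobias, and Wilke (Sh = 0.079 Re^0.70 Sc^0.356) illustrates how a mass transfer coefficient is recovered from dimensionless groups. The correlation choice and all property values below are assumptions for illustration, not the paper's fitted result.

```python
import math

def mass_transfer_coefficient(rpm, d_cyl, diffusivity, kin_visc):
    """k [m/s] from Sh = 0.079 Re^0.70 Sc^0.356 (Eisenberg-Tobias-Wilke
    rotating-cylinder correlation), with Re based on peripheral velocity."""
    u = math.pi * d_cyl * rpm / 60.0          # peripheral velocity, m/s
    Re = u * d_cyl / kin_visc                  # Reynolds number
    Sc = kin_visc / diffusivity                # Schmidt number
    Sh = 0.079 * Re**0.70 * Sc**0.356          # Sherwood number
    return Sh * diffusivity / d_cyl

# Cu2+ in dilute H2SO4 near 30 °C: order-of-magnitude property assumptions.
k = mass_transfer_coefficient(rpm=300, d_cyl=0.02,
                              diffusivity=6e-10, kin_visc=1e-6)
```

For these assumed conditions k comes out on the order of 10^-5 m/s, the typical magnitude for liquid-phase metal-ion deposition on a rotating cylinder.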
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, a condition that places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters; the k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for this purpose. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM
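A minimal sketch of the FCM algorithm named above, alternating the standard membership and centroid updates. The two-dimensional toy data stand in for gene-expression features and, like the fuzzifier m = 2, are assumptions for illustration rather than details from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D data with two well-separated blobs (synthetic stand-in only).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means: alternate membership and centroid updates."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)             # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # fuzzily weighted centroids
        # Distances from every point to every centre, shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

centers, U = fuzzy_c_means(X)
```

Unlike k-means, each point ends up with a membership degree in every cluster (each row of `U` sums to 1), which is why FCM is often preferred for genes whose expression profiles straddle groups.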
Logistic regression is one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X), in which the dependent variable is a binary response taking two values (one when a specific event occurs and zero when it does not), such as (injured and uninjured, married and unmarried). A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate a binary-response logistic regression model, adopting the Jackknife
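The two estimation methods named above can be sketched together: an L2 (ridge) penalty added to the logistic log-likelihood, fitted by plain gradient ascent, against the unpenalized maximum-likelihood fit. The synthetic collinear data, penalty strength, and optimizer settings are all assumptions for illustration; the abstract's Jackknife step is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binary-response data with highly collinear predictors
# (illustrative only; not the study's data).
n, p = 200, 4
base = rng.normal(size=(n, 1))
X = base + 0.1 * rng.normal(size=(n, p))   # columns share one latent factor
true_beta = np.array([1.0, -1.0, 0.5, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta)))).astype(float)

def ridge_logistic(X, y, lam=1.0, lr=0.1, iters=2000):
    """Logistic regression by gradient ascent on the penalized log-likelihood;
    lam=0 recovers the ordinary maximum-likelihood estimator."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p_hat = 1 / (1 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p_hat) - lam * beta   # score minus ridge penalty
        beta += lr * grad / len(y)
    return beta

beta_ridge = ridge_logistic(X, y, lam=5.0)
beta_mle = ridge_logistic(X, y, lam=0.0)
```

With collinear predictors the unpenalized coefficients are poorly determined along the shared direction, while the ridge penalty shrinks the estimate toward zero and stabilizes it, which is exactly the motivation the abstract gives for combining the two methods.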