The rapid growth and widespread adoption of the Internet of Things have produced massive quantities of data that must be processed and sent to the cloud. The delay in processing these data and the time required to transmit them to the cloud have led to the emergence of fog computing, a new generation of the cloud in which the fog serves as an extension of cloud services at the edge of the network, reducing latency and traffic. Distributing computational resources so as to minimize makespan and running costs remains one of the main challenges of fog computing. This paper provides a new approach for improving task scheduling in a Cloud-Fog environment in terms of execution time (makespan) and operating costs for Bag-of-Tasks applications. An evolutionary task scheduling algorithm is proposed, for which a custom representation of the problem and a uniform crossover were built. Furthermore, individual initialization and perturbation operators (crossover and mutation) were created to repair any infeasible solution found or reached by the proposed evolutionary algorithm. The proposed ETS (Evolutionary Task Scheduling) algorithm was evaluated on 11 datasets of varying size in terms of the number of tasks. According to the experimental results, ETS outperformed the Bee Life (BLA), Modified Particle Swarm (MPSO), and RR algorithms in terms of makespan and operating costs.
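The abstract above does not include pseudocode; the following minimal Python sketch only illustrates the kind of representation (one gene per task, assigning it to a fog or cloud node), uniform crossover, and makespan/cost evaluation such an evolutionary scheduler might use. The task lengths, node speeds, prices, and weighted-sum fitness are hypothetical and not taken from the paper.

import random

# Hypothetical data: task lengths (MI) and nodes as (speed in MIPS, cost per second).
TASKS = [400, 250, 800, 120, 560, 300]
NODES = [(500, 0.02), (350, 0.01), (1000, 0.09)]  # two fog nodes and one cloud node

def makespan_and_cost(assignment):
    """Makespan = latest node finish time; cost = sum of execution costs."""
    finish = [0.0] * len(NODES)
    cost = 0.0
    for length, node in zip(TASKS, assignment):
        speed, price = NODES[node]
        t = length / speed
        finish[node] += t
        cost += t * price
    return max(finish), cost

def fitness(assignment, w=0.5):
    m, c = makespan_and_cost(assignment)
    return w * m + (1 - w) * c  # weighted sum; lower is better

def uniform_crossover(p1, p2):
    return [random.choice(genes) for genes in zip(p1, p2)]

def mutate(ind, rate=0.1):
    return [random.randrange(len(NODES)) if random.random() < rate else g for g in ind]

def evolve(pop_size=20, generations=100):
    pop = [[random.randrange(len(NODES)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = [mutate(uniform_crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, makespan_and_cost(best))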
Due to the great evolution of digital commercial cameras, several studies have addressed the use of such cameras in civil and close-range applications such as the generation of 3D models. However, previous studies have not established a precise relationship between camera resolution and the accuracy of the models generated from that camera's images. Therefore, the current study aims to evaluate the accuracy of 3D building models derived from images captured by cameras of different resolutions. Digital photogrammetric methods were used to derive 3D models from the data of the various resolution cameras and to analyze their accuracies. This investigation involves selecting three cameras of different resolution (low, medium and
Prediction of accurate values of the residual entropy (SR) is a necessary step in the calculation of entropy. In this paper, different equations of state were tested against the available 2791 experimental data points of 20 pure superheated-vapor compounds (14 pure nonpolar compounds + 6 pure polar compounds). The Average Absolute Deviation (AAD) in SR over the 2791 experimental data points of all 20 pure compounds (nonpolar and polar) when using the equations of Lee-Kesler, Peng-Robinson, the Virial equation truncated to the second and to the third term, and Soave-Redlich-Kwong was 4.0591, 4.5849, 4.9686, 5.0350, and 4.3084 J/mol.K respectively. It was found from these results that the Lee-Kesler equation was the best (most accurate) one.
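The abstract does not reproduce the deviation measure itself; assuming the standard definition of the average absolute deviation over the N = 2791 data points, the figures quoted above would correspond to

\mathrm{AAD} \;=\; \frac{1}{N}\sum_{i=1}^{N}\bigl|S^{R}_{\mathrm{calc},\,i}-S^{R}_{\mathrm{exp},\,i}\bigr|, \qquad N = 2791,

with the calculated residual entropy taken from each equation of state in turn and the experimental value from the data set.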
Logistic regression is considered one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X); a second consideration is the homogeneity of the variance. The dependent variable is a binary response that takes two values (one when a specific event occurred and zero when it did not), such as (injured and uninjured, married and unmarried). A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate a binary-response logistic regression model by adopting the Jackknife
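The abstract is cut off mid-sentence, so the exact estimator is not shown. Assuming the intended combination is a ridge (L2) penalized binary logistic fit with a leave-one-out Jackknife correction, a minimal Python sketch (the simulated data, penalty strength C, and the use of scikit-learn's LogisticRegression are illustrative assumptions, not the paper's method) could look like:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: n observations, p correlated explanatory variables, binary response.
rng = np.random.default_rng(0)
n, p = 100, 8
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)   # induce multicollinearity
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n) > 0).astype(int)

def ridge_logit_coefs(X, y, C=1.0):
    """Ridge-penalized (L2) logistic regression coefficients."""
    model = LogisticRegression(penalty="l2", C=C, max_iter=1000).fit(X, y)
    return model.coef_.ravel()

# Jackknife: refit leaving out one observation at a time, then bias-correct.
full = ridge_logit_coefs(X, y)
loo = np.array([ridge_logit_coefs(np.delete(X, i, axis=0), np.delete(y, i))
                for i in range(n)])
jackknife_estimate = n * full - (n - 1) * loo.mean(axis=0)
print(jackknife_estimate)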
In this paper, previous studies on fuzzy regression are reviewed. Fuzzy regression is a generalization of the traditional regression model that formulates the relationship between independent and dependent variables in a fuzzy environment. It can be introduced through a non-parametric model as well as a semi-parametric model. Moreover, the results obtained from the previous studies and their conclusions are put forward in this context. We then suggest a novel method of estimation via new weights instead of the old weights and introduce another suggestion based on artificial neural networks.
Paper Type: Review article.
Empirical equations for estimating the thickening time and compressive strength of bentonitic class "G" cement slurries were derived as a function of the water-to-cement ratio and the apparent viscosity (for any ratio). The availability of such equations makes it easy to obtain thickening time and compressive strength values in the oil field and saves time, without the need for the standard laboratory tests such as the pressurized consistometer test for thickening time and the compressive strength test of hydraulic cement mortars including a water bath (24 hours), which may take more than one day.
In this paper, a design of a broadband thin metamaterial absorber (MMA) is presented. Compared with previously reported metamaterial absorbers, the proposed structure provides a wide bandwidth with a comparable overall size. The designed absorber consists of a combination of an octagonal disk and a split octagonal resonator to provide a wide bandwidth over the Ku- and K-band frequency range. Cheap FR-4 material is chosen as the substrate of the proposed absorber, with a 1.6 mm thickness and a 6.5×6.5 mm² overall unit cell size. CST Studio Suite was used for the simulation of the proposed absorber. The proposed absorber provides a wide absorption bandwidth of 14.4 GHz over the frequency range of 12.8-27.5 GHz with more than 90% absorption
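The abstract does not state how absorption is quantified; assuming the usual definition for a ground-plane-backed metamaterial absorber (transmission suppressed, so S21 is negligible), absorptivity is typically evaluated from the simulated scattering parameters as

A(\omega) \;=\; 1 - \lvert S_{11}(\omega)\rvert^{2} - \lvert S_{21}(\omega)\rvert^{2} \;\approx\; 1 - \lvert S_{11}(\omega)\rvert^{2},

so the quoted bandwidth would correspond to the frequency range over which A exceeds 0.9.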
Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnosis process, ophthalmologists must review demographic data and clinical ophthalmic examinations; the latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) method based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Then, our EDTL method combines the output probabilities of each of the five classifiers to obtain a decision b
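The abstract is cut off before the combination rule is given; a minimal Python sketch of one common way to fuse the output probabilities of several fine-tuned networks (simple probability averaging over the four CNN members only, using torchvision's pretrained models; the two-class heads, averaging rule, and omission of the PI classifier are assumptions of this illustration) is:

import torch
import torch.nn.functional as F
from torchvision import models

# Load pretrained backbones and replace their heads for 2 classes (KCN vs. normal).
def make_model(name, num_classes=2):
    if name == "squeezenet":
        m = models.squeezenet1_0(weights="DEFAULT")
        m.classifier[1] = torch.nn.Conv2d(512, num_classes, kernel_size=1)
    elif name == "alexnet":
        m = models.alexnet(weights="DEFAULT")
        m.classifier[6] = torch.nn.Linear(4096, num_classes)
    elif name == "shufflenet":
        m = models.shufflenet_v2_x1_0(weights="DEFAULT")
        m.fc = torch.nn.Linear(m.fc.in_features, num_classes)
    else:  # mobilenet_v2
        m = models.mobilenet_v2(weights="DEFAULT")
        m.classifier[1] = torch.nn.Linear(m.last_channel, num_classes)
    return m.eval()

ensemble = [make_model(n) for n in ("squeezenet", "alexnet", "shufflenet", "mobilenet_v2")]

@torch.no_grad()
def ensemble_predict(image_batch):
    """Average the softmax probabilities of all member networks."""
    probs = [F.softmax(m(image_batch), dim=1) for m in ensemble]
    return torch.stack(probs).mean(dim=0)  # fused class probabilities

# Example: a dummy batch of one 224x224 RGB topographic-map image.
fused = ensemble_predict(torch.randn(1, 3, 224, 224))
print(fused)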