Most Internet of Vehicles (IoV) applications are delay-sensitive and require resources for data storage and task processing that vehicles can rarely afford on their own. Such tasks are therefore often offloaded to more powerful entities, such as cloud and fog servers. Fog computing is a decentralized infrastructure located between the data sources and the cloud; it offers several benefits that make it a valuable extension of the cloud. The high volume of data generated by vehicles' sensors, together with the limited computation capabilities of vehicles, imposes several challenges on VANET systems. Therefore, VANETs are integrated with fog computing to form a paradigm named Vehicular Fog Computing (VFC), which provides low-latency services to mobile vehicles. Several studies have tackled the task offloading problem in the VFC field. However, recent studies have not carefully addressed the transmission path to the destination node, nor have they considered the energy consumption of vehicles. This paper aims to optimize the task offloading process in the VFC system with respect to latency and energy objectives under a deadline constraint by adopting a Multi-Objective Evolutionary Algorithm (MOEA). A Road Side Units (RSUs) x-Vehicles Multi-Objective Computation offloading method (RxV-MOC) is proposed, where an elite set of vehicles is utilized as fog nodes for task execution and all vehicles in the system are utilized for task transmission. The well-known Dijkstra's algorithm is adopted to find the minimum-cost path between each pair of nodes. The simulation results show that RxV-MOC significantly reduces the energy consumption and latency of the VFC system in comparison with the First-Fit algorithm, the Best-Fit algorithm, and the MOC method.
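The sketch below (a minimal illustration, not the RxV-MOC implementation) shows how Dijkstra's algorithm can supply the transmission path used to score a candidate offloading decision on the two objectives, latency and energy; the topology, link delays, and energy coefficients are assumed values.

```python
# Minimal sketch: Dijkstra-based path delay feeding a bi-objective (latency, energy)
# score for one offloading decision. All numbers and node names are illustrative.
import heapq

def dijkstra(graph, source):
    """Return shortest known delay from source to every node (graph: node -> {neighbor: delay})."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def offloading_objectives(graph, source, fog_node, task_cycles,
                          fog_speed, tx_energy_per_ms, exec_power):
    """Bi-objective score (latency in ms, energy in J) for executing a task on fog_node."""
    path_delay = dijkstra(graph, source)[fog_node]          # transmission latency (ms)
    exec_delay = task_cycles / fog_speed                    # execution latency (ms)
    energy = path_delay * tx_energy_per_ms + exec_delay * exec_power
    return path_delay + exec_delay, energy

# Toy topology: one RSU and three vehicles with assumed link delays in ms.
graph = {
    "RSU": {"V1": 2.0, "V2": 5.0},
    "V1":  {"RSU": 2.0, "V2": 1.5, "V3": 4.0},
    "V2":  {"RSU": 5.0, "V1": 1.5, "V3": 2.0},
    "V3":  {"V1": 4.0, "V2": 2.0},
}
print(offloading_objectives(graph, "RSU", "V3", task_cycles=8e6,
                            fog_speed=2e6, tx_energy_per_ms=0.03, exec_power=0.02))
```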
Optimization of gas lift plays a substantial role in maximizing production and the net present value of oil field projects. However, applying optimization techniques to gas lift projects is complex because many decision variables, objective functions, and constraints are involved in the gas lift optimization problem. In addition, many computational approaches, both traditional and modern, have been employed to optimize gas lift processes. This research aims to present the development of the optimization techniques applied to gas lift. Accordingly, the research classifies the applied optimization techniques and presents the limitations and the range of applications of each one to reach an acceptable level of accuracy...
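As a concrete illustration of the underlying problem, the following sketch (with assumed quadratic gas-lift performance curves and a simple greedy allocation rule, standing in for the techniques surveyed) allocates a limited amount of lift gas among wells to maximize total oil production.

```python
# Minimal sketch of the lift-gas allocation problem. The performance curves are
# assumed quadratic fits q_oil(q_gas) = a + b*q_gas - c*q_gas**2, not field data.
WELLS = {
    "W1": lambda g: 400 + 180 * g - 25 * g**2,
    "W2": lambda g: 300 + 220 * g - 40 * g**2,
    "W3": lambda g: 350 + 150 * g - 20 * g**2,
}

def allocate_gas(total_gas, step=0.01):
    """Greedy rule: give each small gas increment to the well with the largest marginal oil gain."""
    alloc = {w: 0.0 for w in WELLS}
    remaining = total_gas
    while remaining > 1e-9:
        gains = {w: WELLS[w](alloc[w] + step) - WELLS[w](alloc[w]) for w in WELLS}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:        # extra gas no longer adds oil anywhere
            break
        alloc[best] += step
        remaining -= step
    total_oil = sum(WELLS[w](alloc[w]) for w in WELLS)
    return alloc, total_oil

print(allocate_gas(total_gas=6.0))
```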
Multi-agent systems are a subfield of Artificial Intelligence that has experienced rapid growth because of its flexibility and intelligence in solving distributed problems. Multi-agent systems (MAS) have attracted interest from researchers in different disciplines for solving sophisticated problems by dividing them into smaller tasks. These tasks can be assigned to agents: autonomous entities with their own private databases that perceive, process, retain, and recall information from multiple inputs and act on their environment. MAS can be defined as a network of individual agents that share knowledge and communicate with each other in order to solve a problem that is beyond the scope of a single agent. It is imperative to understand the characteristics...
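The toy sketch below (an assumed example, not from the paper) illustrates this definition: two agents hold disjoint private knowledge and exchange messages so that each can answer a query it could not answer alone.

```python
# Minimal MAS sketch: agents with private knowledge bases cooperating by message passing.
class Agent:
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = dict(knowledge)   # private database of this agent
        self.peers = []                    # communication links to other agents

    def ask(self, key):
        """Answer from local knowledge, otherwise query peers (one hop)."""
        if key in self.knowledge:
            return self.name, self.knowledge[key]
        for peer in self.peers:
            if key in peer.knowledge:
                return peer.name, peer.knowledge[key]
        return None, None

# Distributed knowledge: neither agent alone can answer both queries.
a = Agent("A", {"route_to_depot": "via R1"})
b = Agent("B", {"fuel_level": 0.42})
a.peers.append(b)
b.peers.append(a)

print(a.ask("fuel_level"))       # answered by peer B
print(b.ask("route_to_depot"))   # answered by peer A
```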
Today, the role of cloud computing in our day-to-day lives is very prominent. The cloud computing paradigm makes it possible to provide resources on demand. Cloud computing has changed the way organizations manage resources due to its robustness, low cost, and pervasive nature. Data security is usually realized using different methods such as encryption. However, the privacy of data is another important challenge that should be considered when transporting, storing, and analyzing data in the public cloud. In this paper, a new method is proposed to track malicious users who use their private keys to decrypt data in a system, share it with others, and cause system information leakage. Security policies are also considered to be integrated...
In this work, two metaheuristic algorithms were hybridized. The first is the Invasive Weed Optimization algorithm (IWO), a numerical stochastic optimization algorithm, and the second is the Whale Optimization Algorithm (WOA), an algorithm based on swarm and community intelligence. IWO is a nature-inspired algorithm modeled on the colonizing behavior of weeds, first proposed in 2006 by Mehrabian and Lucas. Due to their strength and adaptability, weeds pose a serious threat to cultivated plants, making them a threat to the cultivation process. This behavior has been simulated and used in the Invasive Weed Optimization algorithm...
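The following sketch (an illustrative re-implementation, not the authors' hybrid IWO-WOA code) shows the core IWO loop of fitness-proportional seeding, shrinking dispersal, and competitive exclusion; the benchmark function, population sizes, and dispersal schedule are assumptions.

```python
# Minimal IWO loop on an assumed sphere benchmark (minimization).
import random

def sphere(x):
    return sum(v * v for v in x)

def iwo(dim=5, pop_init=10, pop_max=30, seeds_min=1, seeds_max=5,
        iters=200, sigma_init=1.0, sigma_final=0.01, n_mod=3, bound=5.0):
    pop = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(pop_init)]
    for it in range(iters):
        fits = [sphere(w) for w in pop]
        best, worst = min(fits), max(fits)
        # Standard deviation of seed dispersal shrinks non-linearly over iterations.
        sigma = ((iters - it) / iters) ** n_mod * (sigma_init - sigma_final) + sigma_final
        offspring = []
        for weed, fit in zip(pop, fits):
            # Fitter weeds produce more seeds (linear ranking between seeds_min and seeds_max).
            ratio = (worst - fit) / (worst - best) if worst > best else 1.0
            n_seeds = int(seeds_min + ratio * (seeds_max - seeds_min))
            for _ in range(n_seeds):
                offspring.append([min(bound, max(-bound, x + random.gauss(0, sigma)))
                                  for x in weed])
        pop.extend(offspring)
        # Competitive exclusion: keep only the fittest pop_max weeds.
        pop = sorted(pop, key=sphere)[:pop_max]
    return pop[0], sphere(pop[0])

print(iwo())
```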
Grass trimming operations are widely carried out in Malaysia for the purpose of maintaining highways. A large number of operators engaged in this work encounter high levels of noise generated by the backpack-type grass trimmers used for this purpose. High levels of noise exposure have various ill effects on human operators, but the exact nature of the resulting deterioration in work performance is not known. For predicting the deterioration in work efficiency, a fuzzy tool has been used in the present research. It has been established that a fuzzy computing system helps in the identification and analysis of fuzzy models; a fuzzy system offers a convenient way of representing the relationships between the inputs and outputs of a system in the form of IF-THEN rules. The paper presents...
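A minimal sketch of this kind of rule base is given below; the membership functions, rule consequents, and the zero-order Sugeno-style defuzzification are assumptions of the example rather than the paper's model.

```python
# Minimal fuzzy IF-THEN sketch mapping noise exposure to predicted efficiency deterioration.
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_deterioration(noise_db):
    # Fuzzify the input: noise level is Low / Medium / High (assumed ranges in dB(A)).
    low    = tri(noise_db, 60, 70, 82)
    medium = tri(noise_db, 75, 85, 95)
    high   = tri(noise_db, 88, 100, 115)
    # IF-THEN rules with crisp consequents (% efficiency deterioration, assumed):
    #   IF noise is Low    THEN deterioration is  5 %
    #   IF noise is Medium THEN deterioration is 20 %
    #   IF noise is High   THEN deterioration is 45 %
    weights = [low, medium, high]
    outputs = [5.0, 20.0, 45.0]
    total_w = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total_w if total_w else 0.0

for db in (68, 84, 98):
    print(db, "dB(A) ->", round(predict_deterioration(db), 1), "% deterioration")
```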
Calculating the Inverse Kinematics (IK) equations is a complex problem due to their nonlinearity. The choice of end-effector orientation affects whether the target location can be reached. The Forward Kinematics (FK) of the Humanoid Robotic Legs (HRL) is determined using the Denavit-Hartenberg (DH) method. The HRL has two legs with five Degrees of Freedom (DoF) each. The paper proposes a Particle Swarm Optimization (PSO) algorithm to optimize the orientation angle of the HRL end effector. The selected orientation angle is used to solve the IK equations to reach the target location with minimum error. The performance of the proposed method is measured in six scenarios with different simulated positions of the legs. The proposed...
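To illustrate the approach, the sketch below applies PSO to a reduced, assumed 2-link planar leg (not the paper's 5-DoF model): the swarm searches joint angles so that the forward kinematics reaches an assumed target position with minimum error.

```python
# Minimal PSO sketch: search joint angles of a 2-link chain to reach a target point.
import random, math

L1, L2 = 0.45, 0.40                     # assumed link lengths (m)
TARGET = (0.55, 0.30)                   # assumed target foot position (m)

def fk(theta):                          # forward kinematics of the 2-link chain
    t1, t2 = theta
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def error(theta):                       # Euclidean distance to the target
    x, y = fk(theta)
    return math.hypot(x - TARGET[0], y - TARGET[1])

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-math.pi, math.pi) for _ in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=error)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if error(pos[i]) < error(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=error)
    return gbest, error(gbest)

print(pso())
```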
The Lower Cretaceous Zubair formation comprises sandstones intercalated with shale sequences. The main challenges encountered while drilling into this formation included severe wellbore instability-related issues across the weaker formations overlying the reservoir section (pay zone). These issues have a significant impact on well costs and timelines. In this paper, a comprehensive geomechanical study was carried out to understand the causes of the wellbore failures and to improve drilling design and drilling performance on further development wells in the field. The failure criterion known as Mogi-Coulomb was used to determine the operating mud weight window required for safe drilling. The accuracy of the geomechanical...
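The sketch below shows one common linearized form of the Mogi-Coulomb check for a given effective stress state; the cohesion, friction angle, and stress values are assumed example inputs, not results from the study.

```python
# Minimal Mogi-Coulomb failure check (effective principal stresses in MPa, assumed values).
import math

def mogi_coulomb_failure(s1, s2, s3, cohesion, friction_deg):
    """Return (tau_oct, strength, fails) using the linearized Mogi-Coulomb form:

      tau_oct  = 1/3 * sqrt((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2)
      sigma_m2 = (s1 + s3) / 2
      a = (2*sqrt(2)/3) * c * cos(phi),  b = (2*sqrt(2)/3) * sin(phi)
      failure predicted when tau_oct >= a + b * sigma_m2
    """
    phi = math.radians(friction_deg)
    tau_oct = math.sqrt((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 3.0
    sigma_m2 = (s1 + s3) / 2.0
    a = (2.0 * math.sqrt(2.0) / 3.0) * cohesion * math.cos(phi)
    b = (2.0 * math.sqrt(2.0) / 3.0) * math.sin(phi)
    strength = a + b * sigma_m2
    return tau_oct, strength, tau_oct >= strength

# Example: wellbore-wall stresses for a trial mud weight (assumed values).
print(mogi_coulomb_failure(s1=55.0, s2=40.0, s3=28.0, cohesion=6.0, friction_deg=32.0))
```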
A two-time-step stochastic multi-variable, multi-site hydrological data forecasting model was developed and verified using a case study. The philosophy of this model is to use the cross-variable correlations, cross-site correlations, and two-step time-lag correlations simultaneously to estimate the model parameters, which are then modified using the mutation process of a genetic algorithm optimization model. The objective function to be minimized is the Akaike test value. The case study covers four variables and three sites. The variables are the monthly air temperature, humidity, precipitation, and evaporation; the sites are Sulaimania, Chwarta, and Penjwin, which are located in northern Iraq. The model performance was...
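The sketch below illustrates the parameter-refinement idea on synthetic data: a two-lag model's coefficients are perturbed by a GA-style mutation step and a candidate is kept only if it reduces the Akaike criterion, here taken as AIC = n*ln(RSS/n) + 2k (one common form, an assumption of this example rather than the paper's formulation).

```python
# Minimal sketch: GA-style mutation refining two-lag model parameters to minimize AIC.
import math, random

random.seed(1)
# Synthetic "observed" series generated by a two-lag relation plus noise (assumed data).
true_a, true_b = 0.6, 0.3
series = [10.0, 11.0]
for _ in range(120):
    series.append(true_a * series[-1] + true_b * series[-2] + random.gauss(0, 0.5))

def aic(params, data):
    a, b = params
    residuals = [data[t] - (a * data[t - 1] + b * data[t - 2]) for t in range(2, len(data))]
    n, k = len(residuals), len(params)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * k

def mutate_until_converged(params, data, iters=2000, sigma=0.05):
    best, best_aic = list(params), aic(params, data)
    for _ in range(iters):
        candidate = [p + random.gauss(0, sigma) for p in best]   # GA-style mutation
        cand_aic = aic(candidate, data)
        if cand_aic < best_aic:                                  # keep only improvements
            best, best_aic = candidate, cand_aic
    return best, best_aic

print(mutate_until_converged([0.1, 0.1], series))
```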
A three-stage learning algorithm for a deep multilayer perceptron (DMLP) with effective weight initialisation based on a sparse auto-encoder is proposed in this paper, which aims to overcome difficulties in training deep neural networks with limited training data in a high-dimensional feature space. At the first stage, unsupervised learning is adopted using a sparse auto-encoder to obtain the initial weights of the feature extraction layers of the DMLP. At the second stage, error back-propagation is used to train the DMLP by fixing the weights obtained at the first stage for its feature extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Network structures and...
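A minimal PyTorch sketch of these three stages is shown below, using synthetic data and assumed layer sizes; it illustrates the scheme rather than reproducing the paper's code.

```python
# Stage 1: sparse auto-encoder pre-training; Stage 2: train classifier with frozen
# feature layer; Stage 3: fine-tune all weights. Data and layer sizes are assumed.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 100)                         # limited high-dimensional data (synthetic)
y = torch.randint(0, 3, (256,))

# Stage 1: sparse auto-encoder (L1 penalty on hidden activations).
encoder = nn.Sequential(nn.Linear(100, 32), nn.Sigmoid())
decoder = nn.Linear(32, 100)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    h = encoder(X)
    loss = nn.functional.mse_loss(decoder(h), X) + 1e-3 * h.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# DMLP: feature-extraction layer initialised from the auto-encoder, plus a classifier head.
features = nn.Sequential(nn.Linear(100, 32), nn.Sigmoid())
features[0].load_state_dict(encoder[0].state_dict())
classifier = nn.Linear(32, 3)
model = nn.Sequential(features, classifier)

def train(params, epochs):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(X), y)
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the pre-trained feature layer, train only the classifier.
for p in features.parameters():
    p.requires_grad = False
train(classifier.parameters(), epochs=200)

# Stage 3: unfreeze everything and fine-tune the whole DMLP.
for p in features.parameters():
    p.requires_grad = True
train(model.parameters(), epochs=200)
print("final training loss:", nn.functional.cross_entropy(model(X), y).item())
```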