Evolutionary algorithms (EAs), as global search methods, have proven more robust than their local-heuristic counterparts for detecting protein complexes in protein-protein interaction (PPI) networks. Typically, the robustness of these EAs comes from their components and parameters: solution representation, selection, crossover, and mutation. Unfortunately, almost all EA-based complex detection methods suggested in the literature were designed with only canonical or traditional components. Further, the topological structure of the protein network is the main information used in the design of almost all such components. The main contribution of this paper is to formulate a more robust EA with greater biological consistency. For this purpose, a new crossover operator is suggested whose design incorporates biological information in terms of both gene semantic similarity and protein functional similarity. To reflect the heuristic roles of both semantic and functional similarities, this paper introduces two gene ontology (GO)-aware crossover operators: a direct annotation-aware crossover operator and an inherited annotation-aware crossover operator. The first strategy works with the direct gene ontology annotations of the proteins, while the second works with the directed acyclic graph (DAG) of each gene ontology term in the gene product. In our experiments, the proposed EAs with GO-aware crossover operators are compared against the state-of-the-art heuristic, canonical EAs with the traditional crossover operator, and GO-based EAs. Simulation results are evaluated in terms of recall, precision, and F-measure at both the complex level and the protein level. The results show that the new EA design encourages a more reliable balance of exploration and exploitation and thus improves the detection of more accurate protein complex structures.
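To make the idea of a similarity-guided crossover concrete, the following is a minimal sketch of one plausible scheme: each protein inherits its cluster label from the parent under which its cluster is, on average, more similar to it. The function names and the biasing rule are ours for illustration; they are not the paper's exact GO-aware operators, and `sim(p, q)` stands in for whatever semantic or functional similarity score is used.

```python
import random

def similarity_aware_crossover(parent_a, parent_b, sim, rng=None):
    """Uniform crossover biased by similarity: each protein inherits the
    cluster label of the parent under which it sits in the more cohesive
    (on average, more similar) cluster.  `sim(p, q)` returns a score in
    [0, 1].  Illustrative sketch only, not the paper's exact operator."""
    rng = rng or random.Random(0)
    proteins = list(parent_a)

    def cohesion(assign, p):
        # average similarity of p to the other members of its cluster
        members = [q for q in proteins if q != p and assign[q] == assign[p]]
        return sum(sim(p, q) for q in members) / len(members) if members else 0.0

    child = {}
    for p in proteins:
        ca, cb = cohesion(parent_a, p), cohesion(parent_b, p)
        p_take_a = ca / (ca + cb) if ca + cb > 0 else 0.5
        child[p] = parent_a[p] if rng.random() < p_take_a else parent_b[p]
    return child
```

The child is always a recombination of the two parents' labels, but the coin is weighted toward the biologically more coherent assignment, which is the intuition behind feeding GO information into crossover.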
This paper discusses a method for determining the permeability values of the Tertiary reservoir units (Jeribe, Dhiban, Euphrates) in the Ajeel field. Determining these permeability values is important for estimating the economic value of the oil in the Tertiary Formation. The study is based on core data from nine wells and log data from twelve wells: AJ-1, AJ-4, AJ-6, AJ-7, AJ-10, AJ-12, AJ-13, AJ-14, AJ-15, AJ-22, AJ-25, and AJ-54, of which three (AJ-4, AJ-6, and AJ-10) were chosen for this study. Three methods are used in this work, and the study indicates that one of the best ways of obtaining permeability is the neural network method, because the values of permeability obtained be
The paper proposes a methodology for predicting packet flow at the data plane in smart SDN based on an intelligent spiking neural network (SNN) controller. This methodology is applied to predict the subsequent step of the packet flow, consequently reducing the congestion that might otherwise occur. In the proposed model, the centralized controller acts as a reactive controller for managing the cluster-head process in the data layer of the Software Defined Network. The simulation results show the capability of the SNN controller in the SDN control layer to improve quality of service (QoS) across the whole network by minimizing the packet loss ratio and increasing the buffer utilization ratio.
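The basic unit of a spiking neural network is the spiking neuron itself. The following is a minimal leaky integrate-and-fire (LIF) sketch, the simplest common neuron model: it is illustrative only, with invented parameters, and is not the paper's controller.

```python
import numpy as np

def lif_run(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrates input current with a
    leak toward zero, emits a spike (1) when the membrane potential v
    crosses the threshold v_th, then resets v to v_reset."""
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)   # leaky integration of the input current
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)
```

A constant positive input drives the potential up until it crosses threshold, producing a regular spike train; zero input produces no spikes. An SNN controller reads information out of such spike timings rather than from continuous activations.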
The Tigris River is the lifeline that supplies a great part of Iraq with water from north to south. Throughout its entire length, the river is battered by various types of pollutants, such as wastewater effluents from municipal, industrial, agricultural, and other activities. Hence, water quality assessment of the Tigris River is crucial to ensuring that appropriate and adequate measures are taken to save the river from as much pollution as possible. In this study, six water treatment plants (WTPs) situated on the two banks of the Tigris within Baghdad City (Al Karkh, Sharq Dijla, Al Wathba, Al Karama, Al Doura, and Al Wahda, from northern Baghdad to its south) were selected to determine the removal efficiency of turbidity and
With the widespread use of the internet, and especially of social media, an unusual quantity of information has become available, spanning fields of study such as psychology, entertainment, sociology, business, news, politics, and other cultural domains. Data mining methodologies that deal with social media make it possible to build an insightful picture of human behaviour and interaction. This paper demonstrates the application and precision of sentiment analysis using a traditional feedforward network and two recurrent neural networks (the gated recurrent unit (GRU) and long short-term memory (LSTM)) to find the differences between them. To test the system's performance, a set of tests is applied on two public datasets. The firs
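The key architectural difference between the two recurrent models compared above can be seen in a single time step. The sketch below implements one GRU step with randomly initialized weights (shapes and values are invented for illustration): unlike the LSTM, which carries a separate cell state with input, forget, and output gates, the GRU keeps a single hidden state and merges the input and forget roles into one update gate.

```python
import numpy as np

def gru_step(x, h_prev, p):
    """One GRU time step.  z is the update gate (how much new state to
    take), r is the reset gate (how much old state feeds the candidate)."""
    sigm = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigm(p["Wz"] @ x + p["Uz"] @ h_prev + p["bz"])        # update gate
    r = sigm(p["Wr"] @ x + p["Ur"] @ h_prev + p["br"])        # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h_prev) + p["bh"])
    return (1 - z) * h_prev + z * h_cand                      # new hidden state

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
p = {k: rng.normal(0, 0.3, (d_h, d_in) if k[0] == "W" else (d_h, d_h))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
p.update({b: np.zeros(d_h) for b in ("bz", "br", "bh")})

h = np.zeros(d_h)
for t in range(5):                 # run a short random input sequence
    h = gru_step(rng.normal(size=d_in), h, p)
```

Because the new state is a convex combination of the old state and a tanh candidate, the hidden activations stay bounded in (-1, 1), which is part of why gated units train more stably than plain recurrent layers.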
Dust is a frequent contributor to health risks and to changes in the climate, making it one of the most dangerous issues facing people today. Desertification, drought, agricultural practices, and sand and dust storms from neighboring regions bring on this issue. Deep learning (DL) regression based on long short-term memory (LSTM) was proposed as a solution to increase the accuracy of dust forecasting and monitoring. The proposed system has two parts for detecting and monitoring dust: in the first step, LSTM and dense layers are used to build a system for detecting dust, while in the second step, the proposed Wireless Sensor Network (WSN) and Internet of Things (IoT) model is used for forecasting and monitoring. The experiment DL system
Image compression plays an important role in reducing the size and storage of data while significantly increasing the speed of its transmission through the Internet. Image compression has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, its use in image compression has been growing gradually. Deep neural networks have also achieved great success in processing and compressing various images of different sizes. In this paper, we present a structure for image compression based on the use of a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of human eye
Using a neural network as a type of associative memory is introduced in this paper through the problem of mobile position estimation, where the mobile estimates its location from the signal strengths reaching it from several surrounding base stations; the neural network can be implemented inside the mobile. The traditional methods of time of arrival (TOA) and received signal strength (RSS) are used and compared with two analytical methods, the optimal positioning method and the average positioning method. The data used for training are ideal, since they can be obtained from the geometry of the CDMA cell topology. The tests of the two methods, TOA and RSS, cover many cases along a nonlinear path that the MS can move through in that region. The result
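The RSS idea above can be sketched as a simple fingerprint lookup: predict the RSS vector at each candidate grid position from an assumed propagation model, then pick the position whose predicted vector best matches the measurement. The log-distance path-loss model and all parameter values here are invented stand-ins (the nearest-fingerprint search replaces the paper's associative-memory network), but the geometry-based generation of ideal training data mirrors the abstract's description.

```python
import numpy as np

def rss_from(pos, stations, p0=-40.0, n=2.5):
    """Log-distance path-loss model: RSS (dBm) at `pos` from each base
    station.  p0 and the path-loss exponent n are assumed values."""
    d = np.linalg.norm(stations - pos, axis=1)
    return p0 - 10.0 * n * np.log10(np.maximum(d, 1.0))

def estimate_position(measured_rss, grid, stations):
    """Return the grid point whose predicted RSS vector is closest to the
    measurement (a simple stand-in for the associative-memory recall)."""
    prints = np.array([rss_from(g, stations) for g in grid])
    return grid[np.argmin(np.linalg.norm(prints - measured_rss, axis=1))]

# Four base stations at the corners of a 100 x 100 cell, candidate grid inside.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
grid = np.array([[x, y] for x in range(10, 100, 10)
                        for y in range(10, 100, 10)], dtype=float)
true_pos = np.array([40.0, 70.0])
est = estimate_position(rss_from(true_pos, stations), grid, stations)
```

With noiseless measurements the true grid point matches exactly; with noise, the estimate degrades gracefully to the nearest fingerprint, which is the behaviour a trained network approximates.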
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MS COCO dataset for classifying and detecting the objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, then enhanced using the histogram equalization method and resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally we applied the four classification metho
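The preprocessing pipeline described above (grayscale conversion, histogram equalization, 20 x 20 resize, PCA features, then a classifier) can be sketched end to end in NumPy. This runs on tiny synthetic two-class images rather than MS COCO, uses a crude nearest-neighbour resize as a stand-in for a library call, and shows only the KNN classifier of the four; it is a sketch of the pipeline's shape, not the paper's implementation.

```python
import numpy as np

def to_gray(rgb):
    """Luminance conversion of an H x W x 3 image."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def equalize(gray):
    """Histogram equalization of an 8-bit grayscale image."""
    g = np.clip(gray, 0, 255).astype(np.uint8)
    cdf = np.bincount(g.ravel(), minlength=256).cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    return cdf[g]

def resize_20x20(gray):
    """Nearest-neighbour resize to 20 x 20 (stand-in for a library resize)."""
    h, w = gray.shape
    return gray[np.ix_(np.arange(20) * h // 20, np.arange(20) * w // 20)]

def pca_fit(X, k):
    """Mean and top-k principal directions via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def knn_predict(Xtr, ytr, x, k=3):
    """Majority vote among the k nearest training points."""
    nearest = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
    labels, counts = np.unique(ytr[nearest], return_counts=True)
    return labels[np.argmax(counts)]

def features(img_rgb):
    return resize_20x20(equalize(to_gray(img_rgb))).ravel()

# Synthetic toy data: class 0 = bright left half, class 1 = bright right half.
rng = np.random.default_rng(1)
def make(cls):
    img = np.full((40, 40, 3), 50.0) + rng.uniform(0, 10, (40, 40, 3))
    half = (slice(None), slice(0, 20)) if cls == 0 else (slice(None), slice(20, 40))
    img[half] += 150.0
    return img

X = np.array([features(make(c)) for c in [0] * 6 + [1] * 6])
y = np.array([0] * 6 + [1] * 6)
mu, V = pca_fit(X, 5)
Z = (X - mu) @ V.T            # PCA features fed to the classifier
```

Equalization is a monotone per-image remap, so it preserves spatial structure while normalizing contrast; PCA then compresses the 400-pixel vector to a few discriminative components.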
Drilling deviated wells is a frequently used approach in the oil and gas industry to increase the productivity of wells in reservoirs with a small thickness. Drilling these wells has been challenging due to the low rate of penetration (ROP) and severe wellbore instability issues. The objective of this research is to achieve better drilling performance by reducing drilling time and increasing wellbore stability.
In this work, the first step was to develop a model that predicts the ROP for deviated wells by applying Artificial Neural Networks (ANNs). In the modeling, the azimuth (AZI) and inclination (INC) of the wellbore trajectory, controllable drilling parameters, unconfined compressive strength (UCS), and formation
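An ANN regression of the kind described reduces to fitting a small feedforward network to input features. The sketch below trains a one-hidden-layer network with plain gradient descent on synthetic data; the five inputs, the target function, and all hyperparameters are invented for illustration and have no connection to the paper's dataset or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for the inputs (e.g. AZI, INC, a controllable drilling
# parameter, rotary speed, UCS) and an invented linear-plus-noise target.
X = rng.uniform(0.0, 1.0, size=(200, 5))
rop = 30 + 20 * X[:, 2] + 15 * X[:, 3] - 10 * X[:, 4] + rng.normal(0, 0.5, 200)
y = (rop - rop.mean()) / rop.std()       # standardize the target for training

# One-hidden-layer ANN trained with batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (5, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

losses, lr, N = [], 0.1, len(X)
for _ in range(500):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float((err ** 2).mean()))
    dh = err[:, None] @ W2.T * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * (2 / N) * (h.T @ err[:, None])
    b2 -= lr * 2 * err.mean()
    W1 -= lr * (2 / N) * (X.T @ dh)
    b1 -= lr * 2 * dh.mean(axis=0)
```

In practice the real model would be fed measured AZI, INC, UCS, and drilling parameters, and a held-out test set would be used to check generalization; here the point is only that the training loss falls as the network learns the input-output mapping.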