Link failure refers to the failure of the connection between two nodes in an otherwise working simulation scenario at a particular instant. Transport layer protocols form an important basis for setting up a simulation, with the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) being the principal ones. The research makes use of Network Simulator v2.35 to conduct different simulation experiments on link failure and provide validation results. In this paper, both protocols, TCP and UDP, are compared based on the throughput of packets delivered from one node to the other, under the constraint that the link fails for a certain interval of time and the simulation time remains the same for either protocol. Overall, this analysis determines the performance of both protocols with a fixed packet size and bandwidth. The analysis, performed with the help of NS2 and XGraph, shows that UDP performs better than TCP in terms of throughput. This opens the question for other researchers of how different metrics behave in both cases when a link failure occurs. In UDP, the throughput drops less than in TCP at the time of the link failure, regardless of whether the simulation was executed for 70, 100, 300, 900, or 1000 seconds. The link-failure interval is also varied over 10, 15, 20, 40, 350, and 440 seconds to generalize and validate the performance of the network during the interval.
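As a hedged illustration of how such a throughput comparison can be derived, the sketch below assumes the standard ns-2 wired trace format and hypothetical trace-file names (tcp.tr, udp.tr); it bins the bytes received at the sink node into one-second intervals, so the dip during the link-failure interval becomes visible in the same way an XGraph plot would show it. It is not the authors' script.

```python
# Illustrative sketch (not the authors' script): per-second throughput at the
# sink node from an ns-2 wired trace. Assumed trace field order:
# event time from_node to_node pkt_type pkt_size flags fid src dst seq pkt_id
from collections import defaultdict

def throughput_per_second(trace_file, sink_node):
    bytes_per_sec = defaultdict(int)
    with open(trace_file) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6:
                continue
            event, time, _, to_node, _, size = fields[:6]
            if event == "r" and to_node == str(sink_node):
                bytes_per_sec[int(float(time))] += int(size)
    # throughput in kbit/s for each one-second bin
    return {t: b * 8 / 1000.0 for t, b in sorted(bytes_per_sec.items())}

if __name__ == "__main__":
    # hypothetical trace files produced by separate TCP and UDP runs
    for name in ("tcp.tr", "udp.tr"):
        for t, kbps in throughput_per_second(name, sink_node=3).items():
            print(name, t, kbps)
```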
The traditional centralized network management approach presents severe efficiency and scalability limitations in large-scale networks. The process of data collection and analysis typically involves huge transfers of management data to the manager, which consume considerable network bandwidth and create bottlenecks at the manager side. These problems are addressed using agent technology as a solution to distribute the management functionality over the network elements. The proposed system consists of a server agent that works together with client agents to monitor the logging on and off of the client computers and which user is working on each of them. A file-system-watcher mechanism is used to indicate any change in files. The results were presented
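The abstract does not show the watcher mechanism itself; as a rough analogue, the sketch below uses the third-party Python watchdog package (an assumption, since the original client agents likely rely on a platform-specific watcher such as .NET's FileSystemWatcher) to report file changes that a client agent could forward to the server agent.

```python
# Illustrative analogue of the file-system-watcher used by a client agent.
# Assumes the third-party 'watchdog' package: pip install watchdog
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class ClientAgentHandler(FileSystemEventHandler):
    """Reports file changes; a real agent would send these to the server agent."""
    def on_created(self, event):
        print("created:", event.src_path)

    def on_modified(self, event):
        print("modified:", event.src_path)

    def on_deleted(self, event):
        print("deleted:", event.src_path)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(ClientAgentHandler(), path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)   # keep the agent alive while watching
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```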
The main aim of image compression is to reduce the image size so that it can be transmitted and stored; therefore, many methods have appeared to compress images, one of which is the Multilayer Perceptron. The Multilayer Perceptron (MLP) method is an artificial neural network based on the back-propagation algorithm for compressing the image. Since this algorithm depends only on the number of neurons in the hidden layer, that alone is not enough to reach the desired results, so the criteria on which the compression process depends must also be taken into consideration to get the best results. In our research we trained a group of TIFF images of size (256*256) and compressed them by using MLP for each
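A hedged sketch of the block-based MLP compression idea follows; it uses scikit-learn's MLPRegressor (trained by back-propagation) as a bottleneck autoencoder on 8x8 blocks of a 256x256 image, with the hidden-layer size standing in for the compression ratio. The block size, hidden size, and use of scikit-learn are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch: compress 8x8 image blocks with a bottleneck MLP trained
# by back-propagation (via scikit-learn). Not the paper's exact configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
image = rng.random((256, 256))           # stand-in for a 256x256 TIFF image

# Split the image into 8x8 blocks and flatten each block into a 64-vector.
blocks = image.reshape(32, 8, 32, 8).swapaxes(1, 2).reshape(-1, 64)

# Autoencoder-style MLP 64 -> 16 -> 64: the 16-unit hidden layer is the code,
# so each block is represented by 16 values instead of 64 (4:1 compression).
mlp = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                   max_iter=2000, random_state=0)
mlp.fit(blocks, blocks)                  # target = input (reconstruction)

reconstructed = mlp.predict(blocks).reshape(32, 32, 8, 8).swapaxes(1, 2)
reconstructed = reconstructed.reshape(256, 256)

mse = np.mean((image - reconstructed) ** 2)
psnr = 10 * np.log10(1.0 / mse)          # pixel values are in [0, 1]
print(f"reconstruction MSE={mse:.5f}, PSNR={psnr:.2f} dB")
```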
Abstract
Financing is one of the important pillars for stimulating and activating the agricultural sector, and through it an agricultural project can be realized on the ground. However, supplying the agricultural sector with financial resources requires a credit policy that is capable of making the right financing decision, because financial resources are limited. The credit policy and the financing decision must make the best use of them, not only to provide the necessary money but also to provide everything that would develop and activate the agricultural sector.
The transformation of the Agricultural Cooperative Bank of Iraq from specialized banking to comprehensive banking would lead to a decrease in the volume
Biaxial hollow slab is a reinforced concrete slab system with a grid of internal spherical voids included to reduce the self-weight. This paper presents an experimental study of the behavior of one-way prestressed concrete bubbled slabs. Twelve full-scale one-way concrete slabs of 3000 mm length with a rectangular cross-section of 460 mm width and 150 mm depth were tested. Different parameters such as type of specimen (solid or bubbled slab), type of reinforcement (normal or prestressed), range of PPR, and diameter of the plastic spheres (100 or 120 mm) are considered. Due to the use of prestressing force in the bubbled slabs (with a ratio of plastic sphere diameter D to slab thickness H of D/H = 0.67), the specimens showed an increase in ultimate
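For readers unfamiliar with the two ratios mentioned, the short sketch below computes the sphere-diameter-to-thickness ratio quoted in the abstract (D/H = 100/150, about 0.67) and a common force-based approximation of the partial prestressing ratio (PPR); the reinforcement areas and strengths used are hypothetical values, not the test specimens' data.

```python
# Hypothetical numbers; only the D/H = 0.67 ratio comes from the abstract.
D, H = 100.0, 150.0                 # plastic sphere diameter and slab depth (mm)
print(f"D/H = {D / H:.2f}")         # 0.67, as stated for the bubbled slabs

# Force-based approximation of the partial prestressing ratio (PPR):
# PPR = Aps*fpy / (Aps*fpy + As*fy)
Aps, fpy = 197.0, 1674.0            # prestressing steel area (mm^2) and yield (MPa), assumed
As,  fy  = 226.0, 420.0             # ordinary steel area (mm^2) and yield (MPa), assumed
ppr = Aps * fpy / (Aps * fpy + As * fy)
print(f"PPR ~ {ppr:.2f}")
```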
The research focuses on determining the best location of an elevated tank, using the required pump head as a measure for this purpose. Five types of network were used to find the effect of variation in discharge and node elevation on the best location. The weakest point was determined for each network. Preliminary tank locations were chosen for testing along the primary pipe at equal intervals. For each location, the water elevation in the tank and the pump head were calculated at each hour, based on the pump head required to achieve the minimum pressure at the weakest point. Then, the sum of the pump heads through the day was determined. The results proved that there is a most economical location
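The selection procedure described above can be summarized as a small search: for every candidate tank location, the hourly pump head needed to keep the minimum pressure at the weakest node is summed over the day, and the location with the smallest total is taken. The sketch below is a heavily simplified, hypothetical version of that loop; the demand pattern, head-loss model, and elevations are invented for illustration, and a real study would obtain the hourly heads from a hydraulic network solver.

```python
# Skeleton of the location-selection procedure; the hydraulic model is a toy
# placeholder, since in practice it comes from a network solver run per hour.
def required_pump_head(tank_location_m, hour):
    """Pump head (m) needed this hour so the weakest node keeps its minimum
    pressure, given the tank at tank_location_m. Entirely hypothetical model."""
    base_lift = 45.0                                    # static lift + minimum pressure head, assumed
    demand_factor = 0.6 + 0.4 * abs(12 - hour) / 12.0   # invented daily demand pattern
    friction = 0.004 * tank_location_m * demand_factor ** 2
    return base_lift + friction

def daily_pump_head(tank_location_m):
    return sum(required_pump_head(tank_location_m, h) for h in range(24))

candidates = range(0, 2001, 250)        # trial positions along the primary pipe (m)
best = min(candidates, key=daily_pump_head)
for loc in candidates:
    print(f"location {loc:5d} m -> daily sum of pump head = {daily_pump_head(loc):7.1f} m")
print("most economical location:", best, "m")
```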
A load flow program is developed using MATLAB, based on the Newton–Raphson method, which shows a very fast and efficient rate of convergence. The proposed method is also computationally efficient and requires less computer memory through the use of sparsity techniques and other programming methods that accelerate the run speed to near real time.
The designed program computes the voltage magnitudes and phase angles at each bus of the network under steady-state operating conditions. It also computes the power flow and power losses for all equipment, including transformers and transmission lines, taking into consideration the effects of off-nominal tap and phase-shift transformers, generators, shunt capacitors, sh
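The abstract describes a MATLAB implementation; as a language-neutral illustration of the Newton–Raphson power-flow iteration it relies on, the sketch below solves a tiny, hypothetical 3-bus case in Python with a finite-difference Jacobian. A real program, like the one described, would build the analytic Jacobian and store and factorize it as a sparse matrix for speed and memory.

```python
import numpy as np

# Minimal 3-bus Newton-Raphson load-flow sketch, for illustration only.
# Bus 0 is the slack bus; buses 1 and 2 are PQ buses. Line data are hypothetical.
Y = np.zeros((3, 3), dtype=complex)
lines = [(0, 1, 0.02 + 0.06j), (0, 2, 0.02 + 0.06j), (1, 2, 0.05 + 0.10j)]
for i, j, z in lines:
    y = 1 / z
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y

P_sched = np.array([-0.8, -0.6])     # scheduled active injections at PQ buses (p.u., assumed loads)
Q_sched = np.array([-0.3, -0.2])     # scheduled reactive injections (p.u., assumed loads)

def mismatch(x):
    """Power mismatches for state x = [theta1, theta2, V1, V2]; slack fixed at 1.0 p.u., 0 rad."""
    th = np.concatenate(([0.0], x[:2]))
    v = np.concatenate(([1.0], x[2:]))
    Vc = v * np.exp(1j * th)
    S = Vc * np.conj(Y @ Vc)         # complex power injected at every bus
    return np.concatenate((S.real[1:] - P_sched, S.imag[1:] - Q_sched))

x = np.array([0.0, 0.0, 1.0, 1.0])   # flat start
for it in range(20):
    F = mismatch(x)
    if np.max(np.abs(F)) < 1e-8:
        break
    # Jacobian by forward differences; a production code uses the analytic sparse Jacobian.
    J = np.empty((4, 4))
    for k in range(4):
        dx = np.zeros(4); dx[k] = 1e-6
        J[:, k] = (mismatch(x + dx) - F) / 1e-6
    x -= np.linalg.solve(J, F)

print("iterations:", it, "angles (rad):", x[:2], "voltages (p.u.):", x[2:])
```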
The conventional procedures of clustering algorithms are incapable of overcoming the difficulty of managing and analyzing the rapid growth of data generated from different sources. Using the concept of parallel clustering is one of the robust solutions to this problem. The Apache Hadoop architecture is one of the ecosystems that provide the capability to store and process data in a distributed and parallel fashion. In this paper, a parallel model is designed to process the k-means clustering algorithm in the Apache Hadoop ecosystem by connecting three nodes: one serves as the server (name) node and the other two serve as client (data) nodes. The aim is to speed up the time of managing the massive sc
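The parallel k-means step can be pictured as one MapReduce round: mappers assign each point to its nearest current centroid, and reducers average the points per centroid to produce updated centroids. The sketch below is a hypothetical Hadoop Streaming mapper/reducer pair in Python; the file names, centroid format, and invocation are assumptions, not the paper's implementation.

```python
#!/usr/bin/env python3
# Hypothetical Hadoop Streaming script for one k-means iteration: run once as
# the mapper ("map" argument) and once as the reducer ("reduce" argument),
# with centroids.txt shipped alongside via the streaming job's distributed cache.
import sys

def load_centroids(path="centroids.txt"):
    # one centroid per line, comma-separated coordinates (assumed format)
    with open(path) as f:
        return [[float(v) for v in line.split(",")] for line in f if line.strip()]

def mapper():
    centroids = load_centroids()
    for line in sys.stdin:
        point = [float(v) for v in line.split(",")]
        # index of the nearest centroid (squared Euclidean distance)
        nearest = min(range(len(centroids)),
                      key=lambda i: sum((p - c) ** 2 for p, c in zip(point, centroids[i])))
        print(f"{nearest}\t{line.strip()}")

def reducer():
    current_key, sums, count = None, None, 0
    def emit():
        if current_key is not None:
            print(current_key, ",".join(str(s / count) for s in sums), sep="\t")
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        point = [float(v) for v in value.split(",")]
        if key != current_key:
            emit()                                   # flush the previous centroid
            current_key, sums, count = key, [0.0] * len(point), 0
        sums = [s + p for s, p in zip(sums, point)]
        count += 1
    emit()                                           # flush the last centroid

if __name__ == "__main__":
    mapper() if sys.argv[1:2] == ["map"] else reducer()
```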
Investigating gender differences based on emotional changes is essential to understand various human behaviors in daily life. Ten students from the University of Vienna were recruited, and an electroencephalogram (EEG) dataset was recorded while they watched four short emotional video clips (anger, happiness, sadness, and neutral) as audiovisual stimuli. In this study, conventional filter and wavelet (WT) denoising techniques were applied as a preprocessing stage, and the Hurst exponent
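To make the preprocessing pipeline concrete, the sketch below shows one common realisation of the three steps named in the abstract on a synthetic signal: a Butterworth band-pass filter, soft-threshold wavelet denoising with PyWavelets, and a rescaled-range (R/S) estimate of the Hurst exponent. The filter band, wavelet, and threshold are illustrative choices, not the study's exact parameters.

```python
# Illustrative EEG preprocessing chain on a synthetic signal; parameter values
# (band edges, wavelet, threshold) are assumptions, not the study's settings.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

fs = 256.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # stand-in EEG

# 1) conventional band-pass filtering (e.g. 0.5-45 Hz)
b, a = butter(4, [0.5 / (fs / 2), 45 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, eeg)

# 2) wavelet (WT) denoising: soft-threshold the detail coefficients and reconstruct
coeffs = pywt.wavedec(filtered, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from the finest level
thr = sigma * np.sqrt(2 * np.log(filtered.size))     # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, "db4")[: filtered.size]

# 3) Hurst exponent via a simple rescaled-range (R/S) estimate
def hurst_rs(x, window_sizes=(64, 128, 256, 512)):
    rs = []
    for w in window_sizes:
        ratios = []
        for i in range(0, len(x) - w + 1, w):
            chunk = x[i:i + w]
            dev = np.cumsum(chunk - chunk.mean())
            r, s = dev.max() - dev.min(), chunk.std()
            if s > 0:
                ratios.append(r / s)
        rs.append(np.mean(ratios))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)   # H = log-log slope
    return slope

print("estimated Hurst exponent:", round(hurst_rs(denoised), 3))
```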