The Stochastic Network Calculus Methodology
Deah J. Kadhim, Saba Q. Jobbar, Wei Liu & Wenqing Cheng
Computer and Information Science 2009. Part of the Studies in Computational Intelligence book series (SCI, volume 208).
Abstract: The stochastic network calculus is an evolving new methodology for backlog and delay analysis of networks that can account for statistical multiplexing gain. This paper advances the stochastic network calculus by deriving a network service curve, which expresses the service given to a flow by the network as a whole in terms of a probabilistic bound. The presented network service curve permits the calculation of statistical end-to-end delay and backlog bounds for broad classes of arrival and service distributions. The benefits of the derived service curve are illustrated for the Exponentially Bounded Burstiness (EBB) traffic model. It is shown that end-to-end performance measures computed with a network service curve are bounded by O(H log H), where H is the number of nodes traversed by a flow. Using currently available techniques that compute end-to-end bounds by adding single-node results, the corresponding performance measures are bounded by O(H³).
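As a hedged illustration of the relation that underlies a network service curve (generic deterministic network-calculus notation, not the chapter's own probabilistic formulation), the end-to-end service curve is the min-plus convolution of the per-node service curves, and the delay bound is the horizontal deviation between the arrival envelope and that curve; the stochastic version derived in the chapter adds probabilistic slack terms to these expressions.

```latex
% Sketch under generic assumptions: S_h are per-node service curves and \alpha is the
% arrival envelope of the flow; the symbols are illustrative, not the chapter's notation.
\[
  S_{\mathrm{net}}(t) \;=\; (S_1 \otimes S_2 \otimes \cdots \otimes S_H)(t)
  \;=\; \inf_{t_1 + \cdots + t_H = t}\; \sum_{h=1}^{H} S_h(t_h),
\]
\[
  d_{\max} \;\le\; h(\alpha, S_{\mathrm{net}})
  \;=\; \sup_{s \ge 0}\, \inf\{\, \tau \ge 0 \;:\; \alpha(s) \le S_{\mathrm{net}}(s+\tau) \,\}.
\]
```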
In this paper, two solutions of two-dimensional stochastic Fredholm integral equations containing two gamma processes, which differ in their parameters in two cases and are equal in the third, are found by the Adomian decomposition method. From these solutions, the probability density functions and their variances at time t are derived, based on the maximum variance of each probability density function for the three cases. The autocovariance and the power spectral density functions are also derived. To indicate which of the three cases is the best, the autocorrelation coefficients are calculated.
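As a minimal, hedged sketch of the Adomian decomposition idea referred to above (written for a generic one-dimensional linear Fredholm equation of the second kind rather than the paper's two-dimensional stochastic setting), the unknown is expanded into a series whose terms are generated recursively:

```latex
% Generic Adomian decomposition for a linear Fredholm equation of the second kind;
% the kernel K, forcing f and parameter \lambda are illustrative assumptions.
\[
  u(x) \;=\; f(x) \;+\; \lambda \int_a^b K(x,t)\, u(t)\, dt,
  \qquad
  u(x) \;=\; \sum_{n=0}^{\infty} u_n(x),
\]
\[
  u_0(x) \;=\; f(x),
  \qquad
  u_{n+1}(x) \;=\; \lambda \int_a^b K(x,t)\, u_n(t)\, dt, \qquad n \ge 0 .
\]
```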
Recently, the theory of Complex Networks has given modern insight into a variety of applications in our life. Complex Networks turn complex phenomena into graph-based models that consist of nodes and the edges connecting them. This representation can be analyzed using network metrics such as node degree, clustering coefficient, path length, closeness, betweenness, density, and diameter, to mention a few. The topology of the complex interconnections of power grids is considered one of the challenges faced in understanding and analyzing them. Therefore, some countries use Complex Networks concepts to model their power grid networks. In this work, the Iraqi Power Grid network (IPG) has been modeled, visualized …
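A minimal sketch of how the metrics listed above can be computed with NetworkX is given below; the toy graph is a hypothetical stand-in for the IPG topology, whose actual bus and line data is not reproduced here.

```python
# Hypothetical illustration: the metrics named above, computed on a toy graph
# standing in for a power-grid topology (nodes = buses, edges = transmission lines).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "D"),
    ("A", "C"), ("D", "E"),
])

print("degree:", dict(G.degree()))
print("clustering coefficient:", nx.clustering(G))
print("average path length:", nx.average_shortest_path_length(G))
print("closeness:", nx.closeness_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
print("density:", nx.density(G))
print("diameter:", nx.diameter(G))
```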
Most intrusion detection systems are signature based and work similarly to anti-virus software, but they are unable to detect zero-day attacks. The importance of the anomaly-based IDS has risen because of its ability to deal with unknown attacks. However, smart attacks have appeared that compromise the detection ability of the anomaly-based IDS. Considering these weak points, the proposed system was developed to overcome them. The proposed system is a development of the well-known payload anomaly detector (PAYL). By combining two stages with the PAYL detector, it gives good detection ability and an acceptable false-positive ratio. The proposed system improves the models' recognition ability in the PAYL detector for filtered unencrypt…
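As a hedged sketch of the byte-frequency modeling that PAYL-style detectors rely on (a simplified rendering for illustration, not the proposed two-stage system itself), each payload is reduced to its 1-byte frequency distribution and scored with a simplified Mahalanobis distance against a profile trained on normal traffic.

```python
# Simplified PAYL-style payload model: per-byte frequency mean/std learned from
# normal traffic; anomaly score = simplified Mahalanobis distance.
# The training payloads and the smoothing constant alpha are illustrative assumptions.
import numpy as np

def byte_freq(payload: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
    return counts / max(len(payload), 1)

def train(payloads):
    freqs = np.stack([byte_freq(p) for p in payloads])
    return freqs.mean(axis=0), freqs.std(axis=0)

def score(payload: bytes, mean: np.ndarray, std: np.ndarray, alpha: float = 1e-3) -> float:
    # Sum of |x_i - mean_i| / (std_i + alpha) over the 256 byte values.
    return float(np.sum(np.abs(byte_freq(payload) - mean) / (std + alpha)))

normal = [b"GET /index.html HTTP/1.1", b"GET /style.css HTTP/1.1"]
mean, std = train(normal)
print(score(b"GET /about.html HTTP/1.1", mean, std))        # low: close to the profile
print(score(b"\x90" * 24 + b"\xcc\xcc\xcc\xcc", mean, std))  # high: shellcode-like bytes
```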
In this paper, the bowtie method was utilized by a multidisciplinary team in the Federal Board of Supreme Audit (FBSA) for the purpose of managing corruption risks threatening the Iraqi construction sector. Corruption in Iraq is a widespread phenomenon that threatens to degrade society and halt the wheel of economic development, so it must be reduced through appropriate strategies. A total of eleven corruption risks were identified for the parties involved in corruption, analyzed using a probability and impact matrix, and ranked by priority. Bowtie analysis was conducted on the four factors with the highest risk scores for causing corruption in the planning stage. The number and effectiveness of the existing proactive measures …
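A minimal, hedged sketch of the probability-and-impact scoring step described above is given below; the risk names, the 1-5 scales, and the threshold are hypothetical placeholders, not figures from the FBSA study.

```python
# Hypothetical probability x impact ranking of corruption risks.
# The risk entries, 1-5 scales and the threshold of 12 are illustrative placeholders.
risks = {
    "bid rigging":                {"probability": 4, "impact": 5},
    "inflated cost estimates":    {"probability": 3, "impact": 4},
    "falsified progress reports": {"probability": 2, "impact": 3},
}

scored = sorted(
    ((name, r["probability"] * r["impact"]) for name, r in risks.items()),
    key=lambda item: item[1],
    reverse=True,
)

for rank, (name, score) in enumerate(scored, start=1):
    action = "high, candidate for bowtie analysis" if score >= 12 else "monitor"
    print(f"{rank}. {name}: score {score} ({action})")
```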
Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, the SVM is widely used by selecting an optimal hyperplane that separates two classes. The SVM has very good accuracy and is extremely robust compared with some other classification methods such as logistic regression, random forest, k-nearest neighbors, and the naïve model. However, working with large datasets can cause many problems, such as long computation times and inefficient results. In this paper, the SVM has been modified by using a stochastic gradient descent process. The modified method, stochastic gradient descent SVM (SGD-SVM), was checked by using two simulation datasets. Since the classification of different ca…
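A minimal sketch of an SGD-trained linear SVM is shown below, under the assumption that the modification amounts to optimizing the hinge loss with stochastic gradient descent; the synthetic data merely stands in for the paper's two simulation datasets.

```python
# Hedged illustration: a linear SVM trained with stochastic gradient descent via
# scikit-learn's SGDClassifier (hinge loss + L2 penalty = linear SVM objective).
# The synthetic dataset is an assumption, not the paper's simulation data.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

sgd_svm = SGDClassifier(loss="hinge", penalty="l2", alpha=1e-4, max_iter=1000, random_state=0)
sgd_svm.fit(X_train, y_train)          # weights updated one sample at a time
print("test accuracy:", sgd_svm.score(X_test, y_test))
```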
The research aims to design an electronic program that allows users to assess the application of different professional project management practices according to the PMBOK methodology, using the requirements data mentioned in the "knowledge and experience in project management evaluation guide" issued by the professional institute of project management. According to the results of this electronic program, project management can be classified, in terms of both proficiency and task performance, as less than the desired level, within the average, or above the average with respect to best practices. Finally, a number of recommendations are given to overcome the possible shortcomings, the most important being the need to enrich the service…
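As a purely illustrative, hedged sketch of the classification rule described above (the score range, thresholds, and practice names are hypothetical placeholders, not the guide's actual scales), an assessment score could be mapped to the three levels like this:

```python
# Hypothetical mapping of an assessment score (0-100) to the three levels named above.
# The thresholds and example practices are illustrative placeholders.
def classify_practice(score: float) -> str:
    if score < 50:
        return "less than the desired level"
    if score < 75:
        return "within the average"
    return "above the average"

assessments = {"scope management": 42, "schedule management": 68, "risk management": 81}
for practice, score in assessments.items():
    print(f"{practice}: {score} -> {classify_practice(score)}")
```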
Specialized hardware implementations of Artificial Neural Networks (ANNs) can offer faster execution than general-purpose microprocessors by taking advantage of reusable modules, parallel processes, and specialized computational components. Modern high-density Field Programmable Gate Arrays (FPGAs) offer the required flexibility and fast design-to-implementation time, with the possibility of exploiting highly parallel computations like those required by ANNs in hardware. The bounded width of the data in FPGA ANNs adds an additional error to the result of the output. This paper derives the equations of the additional error value generated by the bounded data width and proposes a method to reduce the effect of the error to give…
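A minimal sketch of the effect being described is given below, under the assumption that the bounded data width corresponds to fixed-point quantization of weights and activations; the paper's own error equations and reduction method are not reproduced here.

```python
# Illustrative only: the output error introduced by a bounded (fixed-point) data width
# in a single neuron's weighted sum, compared with the full-precision result.
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with resolution 2**-frac_bits (width-bounded data)."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # hypothetical neuron weights
x = rng.normal(size=64)   # hypothetical input activations

exact = np.tanh(w @ x)
for frac_bits in (4, 8, 12):
    approx = np.tanh(quantize(w, frac_bits) @ quantize(x, frac_bits))
    print(f"{frac_bits} fractional bits -> |output error| = {abs(exact - approx):.2e}")
```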