Software-Defined Networking (SDN) has proven superior in addressing common network problems such as scalability, agility, and security. This advantage stems from SDN's separation of the control plane from the data plane. Although many papers and studies focus on SDN management, monitoring, control, and QoS improvement, few describe what they use to generate traffic and measure network performance, and the literature lacks comparisons between the tools and methods used in this context. This paper presents how to simulate, generate, and obtain traffic statistics from an SDN environment. In addition, it compares the methods used to collect software-defined network data in order to explore the capability of each method and thereby identify the environment best suited to each. The SDN testbed was simulated using Mininet with a tree topology and OpenFlow switches, with a RYU controller connected to handle control traffic. The well-known tools iperf3, ping, and Python scripts were used to collect network datasets from several devices in the network, while Wireshark, RYU applications, and the ovs-ofctl command were used to monitor the collected datasets. The results show success in producing several types of network metrics for future use in training machine learning or deep learning algorithms. It was concluded that iperf3 is the best tool when generating data for congestion control, whereas ping is useful when generating data for DDoS attack detection. RYU applications are more suitable for querying all the details of the network topology, given their ability to display the topology, switch features, and switch statistics. Several obstacles and errors were also explored and listed so that they can be avoided when researchers attempt to create such datasets in their subsequent scientific efforts.
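Datasets collected this way ultimately have to be parsed into feature rows before they can train a model. A minimal sketch of flattening one `iperf3 -J` JSON report into a dataset row (the field names follow iperf3's JSON output; the sample numbers below are invented for illustration):

```python
import json

# Illustrative iperf3 result: `iperf3 -J` prints a structure like this;
# the byte and throughput figures here are made up for the sketch.
sample = json.dumps({
    "end": {
        "sum_sent": {"bytes": 131072000, "seconds": 10.0,
                     "bits_per_second": 104857600.0},
        "sum_received": {"bytes": 130023424, "seconds": 10.0,
                         "bits_per_second": 104018739.2},
    }
})

def iperf3_to_row(raw: str) -> dict:
    """Flatten one iperf3 JSON report into a dataset row (Mbps, bytes)."""
    report = json.loads(raw)
    sent = report["end"]["sum_sent"]
    recv = report["end"]["sum_received"]
    return {
        "tx_mbps": sent["bits_per_second"] / 1e6,
        "rx_mbps": recv["bits_per_second"] / 1e6,
        "loss_bytes": sent["bytes"] - recv["bytes"],
    }

row = iperf3_to_row(sample)
print(row)
```

The same pattern extends to ping output or ovs-ofctl flow dumps: each tool's report becomes one row keyed by the metric names the learning algorithm will consume.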
A surface fitting model was developed based on calorimeter data for two well-known brands of household compressors. Correlation equations with ten-coefficient polynomials were found as functions of the refrigerant saturating and evaporating temperatures in the range of -35 °C to -10 °C, using MATLAB, for cooling capacity, power consumption, and refrigerant mass flow rate.
Additional correlation equations for these variables serve as a quick selection guide for choosing a proper compressor at ASHRAE standard conditions, covering a swept-volume range of 2.24-11.15 cm³.
The results indicated that these surface fitting models are accurate to within ±15% in cooling capacity for 72 compressor models.
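A ten-coefficient polynomial in two temperatures corresponds to a full bivariate cubic (1, Te, Tc, Te², TeTc, Tc², Te³, Te²Tc, TeTc², Tc³). A sketch of fitting such a surface by least squares follows; the temperature ranges, coefficient values, and variable names are invented for illustration, not taken from the paper's data:

```python
import numpy as np

def design(te, tc):
    """Ten-term bivariate cubic basis in evaporating (te) and
    saturating (tc) temperature, one row per observation."""
    te, tc = np.asarray(te, float), np.asarray(tc, float)
    return np.column_stack([np.ones_like(te), te, tc, te**2, te*tc, tc**2,
                            te**3, te**2 * tc, te * tc**2, tc**3])

rng = np.random.default_rng(0)
te = rng.uniform(-35.0, -10.0, 200)   # evaporating temperature, °C (paper's range)
tc = rng.uniform(30.0, 55.0, 200)     # saturating temperature, °C (assumed range)
true_c = np.array([250.0, 8.0, -3.0, 0.12, -0.05, 0.02,
                   1e-3, -5e-4, 2e-4, -1e-4])
q = design(te, tc) @ true_c           # synthetic "cooling capacity" surface

# Least-squares fit recovers the ten coefficients from the samples.
coef, *_ = np.linalg.lstsq(design(te, tc), q, rcond=None)
print(np.allclose(coef, true_c))
```

With real calorimeter data the fit residuals, rather than exact recovery, would give the ±15% accuracy band reported above.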
There is great operational risk in the day-to-day management of water treatment plants, so water companies are looking for solutions to predict how the treatment processes may be improved, given the increased pressure to remain competitive. This study focused on the mathematical modeling of water treatment processes, with the primary motivation of providing tools that can be used to predict treatment performance and enable better control of uncertainty and risk. The research included choosing the most important variables affecting quality standards using a correlation test. According to this test, it was found that the important parameters of raw water include Total Hardness
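Variable screening of this kind typically relies on the Pearson correlation coefficient between each raw-water parameter and the quality target. A self-contained sketch (the readings below are invented, not the study's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented raw-water readings: turbidity (NTU) vs. total hardness (mg/L).
turbidity = [5.1, 7.3, 6.8, 9.2, 4.5, 8.1]
hardness  = [310, 340, 332, 365, 298, 351]
print(round(pearson(turbidity, hardness), 3))
```

Parameters whose coefficient magnitude exceeds a chosen threshold would be kept as model inputs; the rest are dropped before fitting.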
This research presents a structural interpretation of the Yamama Formation (Lower Cretaceous) and the Naokelekan Formation (Jurassic) using 2D seismic reflection data from the Tuba oil field region, Basrah, southern Iraq. The two reflectors (Yamama and Naokelekan) were defined and picked as peak and trough in the 2D seismic reflection interpretation process, based on the synthetic seismogram and well log data. To obtain the structural setting, these horizons were tracked across the entire region. Two-way travel-time maps, depth maps, and velocity maps were produced for the tops of the Yamama and Naokelekan formations. The study concluded that certain longitudinal closures reflect anticlines in the east and west of the study area
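Depth maps are derived from the two-way travel-time maps through the standard relation depth = v·t/2, with v an average velocity from the well data. A small illustrative sketch (the picks and velocities below are invented, not the Tuba field values):

```python
def twt_to_depth(twt_ms, avg_velocity_mps):
    """Convert two-way travel time (ms) to depth (m): depth = v * t / 2."""
    return avg_velocity_mps * (twt_ms / 1000.0) / 2.0

# Invented horizon picks: (two-way time in ms, average velocity in m/s).
picks = [(2400, 3500.0), (2450, 3520.0), (2380, 3480.0)]
depths = [twt_to_depth(t, v) for t, v in picks]
print(depths)
```

Applying this conversion at every grid node of a time map, with a velocity map as the second input, yields the depth map directly.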
The purpose of this paper is to model and forecast white oil prices during the period 2012-2019 using GARCH-class volatility models. After showing that the squared returns of white oil exhibit significant long memory in volatility, fractional GARCH models of the return series are estimated, and the mean and volatility are forecasted by quasi maximum likelihood (QML) as a traditional method, while the competing approaches include machine learning using Support Vector Regression (SVR). Results identified the most appropriate model among many others for forecasting volatility, based on the lowest values of the Akaike and Schwarz information criteria and the requirement that the parameters be significant. In addition, the residuals
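The core of every GARCH-class model is the conditional-variance recursion; for plain GARCH(1,1) it is sigma²_t = omega + alpha·r²_{t-1} + beta·sigma²_{t-1}. A minimal sketch of that recursion (the returns and parameter values below are invented, and the fractional variants in the paper add long-memory terms this sketch omits):

```python
def garch_variance(returns, omega, alpha, beta):
    """One-step-ahead conditional variances under GARCH(1,1),
    seeded at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

returns = [0.01, -0.03, 0.02, -0.01, 0.04]   # invented daily returns
sig2 = garch_variance(returns, omega=1e-6, alpha=0.08, beta=0.90)
print(len(sig2), all(s > 0 for s in sig2))
```

QML estimation then searches over (omega, alpha, beta) to maximize the Gaussian likelihood implied by these variances.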
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayesian (NB) classifiers. This paper investigates the performance of these two classification methods on the Car Evaluation dataset. Models were built for both algorithms and their results were compared. Our experimental results indicated that the BNN classifier yields higher accuracy than the NB classifier but is less efficient, because it is time-consuming and difficult to analyze due to its black-box nature.
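The Naïve Bayes side of such a comparison is compact enough to sketch directly: multiply the class prior by the per-feature conditional probabilities and pick the highest-scoring class. The tiny training set below is invented in the spirit of the Car Evaluation data, not taken from it:

```python
from collections import Counter

# Toy categorical training set (invented): two features, two classes.
train = [
    ({"buying": "high", "safety": "low"},  "unacc"),
    ({"buying": "low",  "safety": "high"}, "acc"),
    ({"buying": "med",  "safety": "high"}, "acc"),
    ({"buying": "high", "safety": "med"},  "unacc"),
]

def nb_predict(sample):
    """Naive Bayes with Laplace smoothing over categorical features."""
    labels = Counter(y for _, y in train)
    best, best_score = None, float("-inf")
    for label, count in labels.items():
        score = count / len(train)                       # class prior
        for feat, value in sample.items():
            matches = sum(1 for x, y in train
                          if y == label and x[feat] == value)
            values = {x[feat] for x, _ in train}
            score *= (matches + 1) / (count + len(values))  # Laplace smoothing
        if score > best_score:
            best, best_score = label, score
    return best

print(nb_predict({"buying": "low", "safety": "high"}))
```

The BNN competitor has no comparably short closed form, which is exactly the black-box trade-off the abstract describes.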
This paper presents a fast and robust approach to English text encryption and decryption based on the Pascal matrix. The technique encrypts Arabic or English text, or both, and shows the result of applying the method to plain text (the original message) and how intelligible plaintext is transformed into unintelligible ciphertext in order to secure information from unauthorized access and information theft; the encryption scheme uses a pseudo-random encryption key generated by an algorithm. All of this is done using the Pascal matrix. Encryption and decryption are implemented in MATLAB as the programming language, with Notepad++ used to write the input text.
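The paper's exact construction is not spelled out here, but the key property such schemes rely on is that the lower-triangular Pascal matrix has an exact integer inverse, so a character-code vector can be scrambled and recovered without rounding error. A hedged sketch of that round trip (in Python rather than the paper's MATLAB):

```python
from math import comb

def pascal_lower(n):
    """Lower-triangular Pascal matrix: P[i][j] = C(i, j)."""
    return [[comb(i, j) if j <= i else 0 for j in range(n)] for i in range(n)]

def pascal_inverse(n):
    """Its exact integer inverse: entries (-1)**(i-j) * C(i, j)."""
    return [[(-1) ** (i - j) * comb(i, j) if j <= i else 0 for j in range(n)]
            for i in range(n)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

text = "SDN"
codes = [ord(ch) for ch in text]
cipher = matvec(pascal_lower(len(codes)), codes)     # encrypt
plain = matvec(pascal_inverse(len(codes)), cipher)   # decrypt
print("".join(chr(c) for c in plain))
```

A production scheme would additionally mix in the pseudo-random key the abstract mentions; this sketch shows only the invertible matrix core.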
In this study, we compared the LASSO and SCAD methods, two penalization methods for dealing with partial quantile regression models. The Nadaraya-Watson kernel was used to estimate the non-parametric part; in addition, the rule-of-thumb method was used to estimate the smoothing bandwidth (h). Penalty methods proved efficient in estimating the regression coefficients, but the SCAD method was the best according to the mean squared error (MSE) criterion, after estimating the missing data using the mean imputation method.
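The non-parametric part mentioned above is a locally weighted average: the Nadaraya-Watson estimator weights each observation by a kernel of its distance to the evaluation point. A minimal sketch with a Gaussian kernel (the sample points and bandwidth are invented for illustration):

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel and bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Invented sample from y = x^2.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [x * x for x in xs]
print(round(nadaraya_watson(1.0, xs, ys, h=0.3), 3))
```

The rule-of-thumb bandwidth would replace the fixed h=0.3 here; smaller h tracks the data more closely, larger h smooths more aggressively.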
Today, cloud computing plays a prominent role in our day-to-day lives. The cloud computing paradigm makes it possible to provide demand-based resources, and it has changed the way organizations manage resources due to its robustness, low cost, and pervasive nature. Data security is usually realized using methods such as encryption. However, data privacy is another important challenge that should be considered when transporting, storing, and analyzing data in the public cloud. In this paper, a new method is proposed to track malicious users who use their private keys to decrypt data in a system, share it with others, and cause system information leakage. Security policies are also considered to be int