Software-Defined Networking (SDN) is a new technology that separates the control plane from the data plane. SDN offers faster automation and programmability than traditional networks, and it supports Quality of Service (QoS) for video surveillance applications. One of the most significant issues in video surveillance is finding the best path for routing packets between the source (IP cameras) and the destination (monitoring center), since a video surveillance system requires fast transmission, reliable delivery, and high QoS. To improve the QoS and achieve the optimal path, the SDN architecture is used in this paper, and different routing algorithms are applied in successive steps. First, we evaluate video transmission over SDN with the Bellman-Ford algorithm. Then, because of the limitations of the Bellman-Ford algorithm, the Dijkstra algorithm is used to change the path when congestion occurs. Furthermore, the Dijkstra algorithm is run with two controllers to reduce the time consumed by the SDN controller: the POX controller is responsible for network monitoring, while the Pyretic controller is responsible for the routing algorithm and path selection. Finally, a modified Dijkstra algorithm is proposed and evaluated with the two controllers to enhance performance further. The results show that the modified Dijkstra algorithm outperforms the other approaches in terms of QoS parameters.
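As an illustrative sketch only (standard Dijkstra, not the paper's modified variant), shortest-path selection over a weighted topology, where link costs could reflect measured congestion, looks like this; the toy node names are hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra shortest-path search over a weighted graph.

    graph: dict mapping node -> list of (neighbor, link_cost) pairs;
    link costs could reflect measured congestion on each SDN link.
    Returns a dict of minimal costs from `source` to every reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy topology: camera -> switches -> monitoring center
topology = {
    "cam": [("s1", 1), ("s2", 4)],
    "s1":  [("s2", 1), ("mon", 5)],
    "s2":  [("mon", 1)],
    "mon": [],
}
print(dijkstra(topology, "cam"))  # cheapest cam->mon route costs 3 via s1, s2
```

In an SDN setting, the controller would recompute these costs from monitored link statistics and push the resulting path as flow rules.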
Document source identification in printer forensics involves determining the origin of a printed document based on characteristics such as the printer model, serial number, defects, or unique printing artifacts. This process is crucial in forensic investigations, particularly in cases involving counterfeit documents or unauthorized printing. However, consistent pattern identification across various printer types remains challenging, especially when efforts are made to alter printer-generated artifacts. Machine learning models are often used in these tasks, but selecting discriminative features while minimizing noise is essential. Traditional KNN classifiers require a careful selection of distance metrics to capture relevant printing
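To illustrate why the distance metric matters in such a classifier, here is a minimal KNN sketch with a selectable metric; the feature names and values are hypothetical, not taken from the paper:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3, metric="euclidean"):
    """Plain k-nearest-neighbour vote; the choice of `metric` changes which
    printing artifacts dominate the distance (illustrative only)."""
    def dist(a, b):
        if metric == "manhattan":
            return sum(abs(x - y) for x, y in zip(a, b))
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbours = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Hypothetical feature vectors (e.g. banding frequency, toner density)
train = [
    ([0.1, 0.9], "printerA"), ([0.2, 0.8], "printerA"),
    ([0.9, 0.1], "printerB"), ([0.8, 0.2], "printerB"),
]
print(knn_predict(train, [0.15, 0.85]))  # -> printerA
```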
Optical Character Recognition (OCR) is the process of converting an image of text into a machine-readable text format, and the classification of Arabic manuscripts is in general part of this field. In recent years, the processing of Arabic image databases by deep learning architectures has developed remarkably, yet this remains insufficient given the enormous wealth of Arabic manuscripts. In this research, a deep learning architecture is used to address the issue of classifying handwritten Arabic letters. The method is based on a convolutional neural network (CNN) architecture acting as both feature extractor and classifier. Considering the nature of the dataset images (binary images), the contours of the alphabet
Medicine is one of the fields where the advancement of computer science is making significant progress. Some diseases require an immediate diagnosis in order to improve patient outcomes, and the use of computers in medicine improves precision and accelerates data processing and diagnosis. In this research, hybrid machine learning, a combination of various deep learning approaches, was utilized to categorize biological images, and a meta-heuristic algorithm was provided. In addition, two different medical datasets were introduced, one covering magnetic resonance imaging (MRI) of brain tumors and the other dealing with chest X-rays (CXRs) of COVID-19. These datasets were introduced to the combination network that contained deep lea
The aim was to design a MATLAB program to calculate the phreatic surface of a multi-well system and to present graphically the water-table drawdown induced by water extraction. Dupuit's assumption is the basis for representing the dewatering curve. The program offers the volume of water to be extracted, the total number of wells, and the spacing between them, as well as the expected settlement of the soil surrounding the dewatered foundation pit. The dewatering well arrangement is required in execution works, and it needs careful attention because of the settlement produced by the increase in effective stress.
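Although the paper's program is in MATLAB, the underlying calculation can be sketched in a few lines. Under Dupuit's assumption for an unconfined aquifer with fully penetrating wells, the head h at a point follows from superposition, H^2 - h^2 = sum_i Q_i/(pi K) ln(R/r_i); all numeric values below are hypothetical, not from the paper:

```python
import math

# Hypothetical illustrative parameters (not the paper's data)
K = 1e-4   # hydraulic conductivity (m/s)
H = 20.0   # initial saturated thickness (m)
R = 300.0  # radius of influence (m)

def phreatic_head(point, wells):
    """Head at `point` for several fully penetrating wells in an unconfined
    aquifer, via Dupuit's assumption with superposition of drawdowns.
    wells: list of (x, y, Q) with Q the pumping rate (m^3/s)."""
    x, y = point
    s = 0.0
    for wx, wy, Q in wells:
        r = max(math.hypot(x - wx, y - wy), 0.1)  # avoid r = 0 at a well
        s += Q / (math.pi * K) * math.log(R / r)
    return math.sqrt(max(H * H - s, 0.0))

wells = [(0.0, 0.0, 2e-4), (10.0, 0.0, 2e-4)]
h = phreatic_head((5.0, 0.0), wells)
print(round(h, 2))  # lowered water table midway between the two wells
```

Evaluating this over a grid of points gives the dewatering surface the program plots.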
Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields related to text mining and natural language processing, especially with the large increase in the volume of textual data produced daily. Traditional approaches that calculate the degree of similarity between two texts from the words they share do not perform well on short texts, because two similar texts may be written in different terms using synonyms. As a result, short texts should be compared semantically. In this paper, a semantic similarity measurement method between texts is presented which combines knowledge-based and corpus-based semantic information to build a semantic network that repre
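A toy sketch shows the failure mode the abstract describes and why a knowledge-based step helps: mapping synonyms to shared concepts before comparing makes two differently worded texts score as similar. The tiny synonym lexicon below is purely illustrative; the paper builds a full semantic network instead:

```python
from collections import Counter
import math

# Toy knowledge-based step: map synonyms to a shared concept id.
# This lexicon is illustrative only.
SYNONYMS = {"car": "vehicle", "automobile": "vehicle", "quick": "fast"}

def concepts(text):
    return Counter(SYNONYMS.get(w, w) for w in text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

s1, s2 = "a quick car", "a fast automobile"
print(round(cosine(concepts(s1), concepts(s2)), 2))               # 1.0 after synonym mapping
print(round(cosine(Counter(s1.split()), Counter(s2.split())), 2)) # 0.33 on raw surface words
```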
Background subtraction is a leading technique for detecting moving objects in video surveillance systems, and various background subtraction models have been applied to tackle different challenges in many surveillance environments. In this paper, we propose a model of pixel-based color histograms and Fuzzy C-means (FCM) to obtain the background model, using cosine similarity (CS) to measure the closeness between the current pixel and the background model and eventually classify each pixel as background or foreground according to a tuned threshold. The performance of this model is benchmarked on the CDnet2014 dynamic-scenes dataset using statistical metrics. The results show a better performance against the state-of-the-art
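The per-pixel decision rule can be sketched as follows; the histogram values and the threshold of 0.9 are hypothetical (the paper tunes its own threshold), and the FCM step that builds the background model is omitted:

```python
import math

def cosine_similarity(h1, h2):
    """Cosine similarity between two color histograms."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

THRESHOLD = 0.9  # illustrative; tuned per scene in practice

def classify_pixel(current_hist, background_hist):
    """Label a pixel background if its histogram stays close to the model."""
    if cosine_similarity(current_hist, background_hist) >= THRESHOLD:
        return "background"
    return "foreground"

bg = [12, 30, 5, 0]                        # background-model histogram for one pixel
print(classify_pixel([11, 29, 6, 1], bg))  # similar histogram  -> "background"
print(classify_pixel([0, 2, 4, 40], bg))   # dissimilar histogram -> "foreground"
```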
Any software application can be divided into four distinct interconnected domains, namely the problem domain, usage domain, development domain, and system domain. A methodology for assistive technology software development is presented here that seeks to provide a framework for requirements elicitation studies, together with their subsequent mapping, implementing use-case-driven object-oriented analysis for component-based software architectures. Early feedback on the effectiveness of user interface components is obtained through process usability evaluation. A model is suggested that consists of three environments, or worlds: the problem, conceptual, and representational environments. This model aims to emphasize the relationship between the objects
Cloth simulation and animation has been a topic of research in computer graphics since the mid-1980s. Enforcing incompressibility is very important in real-time simulation; although there have been great achievements in this regard, certain steps common in real-time applications still suffer from unnecessary time consumption. This research develops a real-time cloth simulator for a virtual human character (VHC) with wearable clothing. It achieves cloth simulation on the VHC by enhancing the position-based dynamics (PBD) framework, computing a series of positional constraints which enforce constant densities. Also, the self-collision and collision wit
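The core PBD idea, projecting particle positions so that a constraint is satisfied directly rather than integrating forces, can be shown with the simplest constraint, a fixed distance between two particles; this is a generic PBD sketch, not the paper's density constraint:

```python
import math

def project_distance_constraint(p1, p2, rest_length, w1=1.0, w2=1.0):
    """One PBD projection step: move two particle positions so the distance
    between them returns to `rest_length` (weights w = 1/mass)."""
    dx = [p2[i] - p1[i] for i in range(3)]
    d = math.sqrt(sum(c * c for c in dx))
    if d == 0.0 or (w1 + w2) == 0.0:
        return p1, p2
    corr = (d - rest_length) / (d * (w1 + w2))
    p1 = [p1[i] + w1 * corr * dx[i] for i in range(3)]
    p2 = [p2[i] - w2 * corr * dx[i] for i in range(3)]
    return p1, p2

a, b = [0.0, 0.0, 0.0], [2.0, 0.0, 0.0]
a, b = project_distance_constraint(a, b, 1.0)
print(a, b)  # particles pulled symmetrically to 1 unit apart
```

Density constraints for incompressibility follow the same pattern, with a per-particle density expression replacing the distance term.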
With the increasing use of social media today, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and messy texts, which makes it difficult to find topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less efficient when applied to short-text content like Twitter. Fortunately, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve topics learned
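One common way to exploit hashtags, pooling tweets that share a hashtag into a single pseudo-document so a topic model sees longer texts, can be sketched as follows; the exact use of hashtags in the paper may differ, and the sample tweets are invented:

```python
from collections import defaultdict
import re

def pool_by_hashtag(tweets):
    """Group tweets sharing a hashtag into one pseudo-document, giving a
    topic model longer, more coherent texts than individual tweets."""
    pools = defaultdict(list)
    for t in tweets:
        tags = re.findall(r"#(\w+)", t.lower())
        for tag in tags or ["_untagged"]:
            pools[tag].append(t)
    return {tag: " ".join(ts) for tag, ts in pools.items()}

tweets = [
    "Great match tonight #football",
    "That goal! #football #sports",
    "New phone released #tech",
]
docs = pool_by_hashtag(tweets)
print(sorted(docs))  # ['football', 'sports', 'tech']
```

The resulting pseudo-documents can then be fed to LDA or LSA in place of raw tweets.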