High vehicular mobility causes frequent changes in vehicle density, discontinuity in inter-vehicle communication, and constraints for routing protocols in vehicular ad hoc networks (VANETs). Routing must avoid forwarding packets through segments with low network density and frequent disconnections, which can result in packet loss, delays, and increased communication overhead during route recovery. Therefore, both the traffic and the segment status must be considered. This paper presents real-time intersection-based segment aware routing (RTISAR), an intersection-based, segment-aware algorithm for geographic routing in VANETs. The algorithm selects an optimal route for forwarding data packets toward their destination by considering the traffic segment status when choosing the next intersection. RTISAR introduces a new formula for assessing segment status based on connectivity, density, segment load, and cumulative distance toward the destination. A validity period mechanism is also proposed to denote the projected time at which a network failure is likely to occur in a particular segment. This period is computed for each collector packet to reduce the frequency of RTISAR execution and to control the generation of collector packets, thereby minimizing the communication overhead generated during the segment status computation process. Simulations were performed to evaluate RTISAR, and the results were compared with those of intersection-based connectivity aware routing and traffic flow oriented routing. The evaluation results show that RTISAR outperforms both protocols in terms of packet delivery ratio, packet delivery delay, and communication overhead.
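The abstract names the four factors of the segment assessment formula (connectivity, density, segment load, cumulative distance) but not the formula itself. The sketch below is a minimal, hypothetical weighted-score version of such an assessment; the weights, field names, and the linear combination are assumptions for illustration, not the paper's actual formula.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    connectivity: float  # fraction of time the segment is connected, 0..1
    density: float       # normalized vehicle density, 0..1
    load: float          # normalized traffic load on the segment, 0..1
    progress: float      # normalized distance progress toward destination, 0..1

def segment_score(s: Segment, w=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Hypothetical weighted score: higher is better; load counts against."""
    wc, wd, wl, wp = w
    return wc * s.connectivity + wd * s.density - wl * s.load + wp * s.progress

def best_next_segment(segments):
    """Pick the candidate segment with the highest score at an intersection."""
    return max(segments, key=segment_score)
```

A score of this shape lets the protocol rank all segments leaving the current intersection and forward along the best one, which is the decision the abstract describes.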
Due to the large population of motorway users in Iraq, various approaches have been adopted to manage queues, such as the implementation of traffic lights and the avoidance of illegal parking, among others. However, defaulters are recorded daily, hence the need to develop a means of identifying these defaulters and bringing them to book. This article discusses the development of an approach for recognizing Iraqi licence plates so that defaulters of queue management systems can be identified. Multiple agencies worldwide have quickly and widely adopted vehicle license plate recognition technology to expand their investigative and security capabilities. License plate recognition helps detect a vehicle's information automatically …
Steganography is a technique of concealing secret data within other quotidian files of the same or a different type. Hiding data has become essential to digital information security. This work aims to design a stego method that can effectively hide a message inside the frames of a video file. A video steganography model is proposed in which a model is trained to hide a video (or images) within another video using convolutional neural networks (CNNs). Using a CNN in this approach serves two main goals of any steganographic method: it increases security (resistance to detection and breaking by steganalysis programs), which is achieved in this work because the network weights and architecture are randomized. Thus, …
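The abstract does not give the network architecture, so the following is only a minimal NumPy sketch of the core idea: a hiding transform with randomly initialized weights that maps a (cover, secret) frame pair to a stego frame of the cover's shape. The channel-concatenation input, the 1x1 convolutions, and all sizes are assumptions for illustration; a real model would also train a reveal network with reconstruction losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """Pointwise (1x1) convolution: x is (H, W, Cin), w is (Cin, Cout)."""
    return x @ w

def hide(cover, secret, w1, w2):
    """Hypothetical hiding network: concatenate cover and secret channel-wise,
    then apply two randomly initialized 1x1 conv layers with a ReLU between."""
    x = np.concatenate([cover, secret], axis=-1)  # (H, W, 6)
    h = np.maximum(conv1x1(x, w1), 0)             # hidden activation, ReLU
    return conv1x1(h, w2)                         # stego frame, (H, W, 3)

H, W = 4, 4
cover = rng.random((H, W, 3))
secret = rng.random((H, W, 3))
w1 = rng.normal(scale=0.1, size=(6, 16))  # randomized weights: the security
w2 = rng.normal(scale=0.1, size=(16, 3))  # claim in the abstract rests on this
stego = hide(cover, secret, w1, w2)
print(stego.shape)
```

Because the weights are drawn at random per model, an attacker without them cannot reproduce the embedding transform, which is the security argument the abstract makes.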
In this paper, we give definitions, properties, and examples of the notion of a type N topological space. Throughout this paper, N is a finite positive number, N ≥ 2. The task of this paper is to study and investigate some properties of such spaces and to establish a relation between this space and artificial neural networks (NNs); that is, we apply the definition of this space in the field of computing, and specifically in parallel processing.
Traditionally, wireless networks and optical fiber networks have been independent of each other. Wireless networks are designed to meet specific service requirements while dealing with weak physical transmission, and to maximize system resources to ensure cost effectiveness and end-user satisfaction. In optical fiber networks, on the other hand, research efforts have instead concentrated on simple, low-cost, future-proof designs and on delivering high-grade services and applications through optical transparency. The ultimate goal of providing access to information whenever it is needed, in whatever form it is required, not only increases the requirements but also drives the technology convergence of wireless and optical networks …
The aim of this research is to simulate residual chlorine decay through the potable water distribution network of Gukook city. EPANET software was used for estimating and predicting the chlorine concentration at different points in the water network. The data required as program inputs (pipe properties) were taken from the Baghdad Municipality, and the factors that affect residual chlorine concentration (pH, temperature, pressure, and flow rate) were measured. Twenty-five samples were tested from November 2016 to July 2017. The residual chlorine values varied between 0.2 and 2 mg/L, the pH values varied between 7.6 and 8.2, and the pressure was very weak in this region. Statistical analyses were used to evaluate errors. The concentrations calculated by the calib…
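EPANET's standard water-quality model treats bulk chlorine decay as a first-order reaction, C(t) = C0 · e^(−kt). The snippet below sketches that relation; the initial dose and decay coefficient are hypothetical example values, not the study's calibrated parameters.

```python
import math

def residual_chlorine(c0_mg_per_l: float, k_per_hour: float, t_hours: float) -> float:
    """First-order bulk decay: C(t) = C0 * exp(-k * t)."""
    return c0_mg_per_l * math.exp(-k_per_hour * t_hours)

# Example: a 1.2 mg/L dose with an assumed decay coefficient of 0.1 1/h,
# evaluated 6 hours of travel time into the network.
print(round(residual_chlorine(1.2, 0.1, 6.0), 3))
```

In a calibrated model, k is fitted so that predicted concentrations match field samples like the twenty-five measured in this study; temperature and pH shift the effective k.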
Technically, a mobile P2P network system architecture can be considered a distributed architecture (like a community), in which the nodes (users) can share all or some of their own software and hardware resources, such as application stores, processing time, storage, and network bandwidth, with the other nodes through the Internet, and these resources can be accessed directly by the nodes in the system without the need for a central coordination node. The main feature of our proposed network architecture is that all nodes are symmetric in their functions. In this work, the security issues of mobile P2P network system architecture, such as web threats, attacks, and encryption, are discussed in depth, and then we prop…
For several applications, it is very important to have an edge detection technique that matches human visual contour perception and is less sensitive to noise. The edge detection algorithm described in this paper is based on the results obtained by maximum a posteriori (MAP) and maximum entropy (ME) deblurring algorithms. The technique makes a trade-off between sharpening and smoothing the noisy image. One advantage of the described algorithm is that it is less sensitive to noise than the Marr and Geuen techniques, which are considered to be among the best edge detection algorithms in terms of matching human visual contour perception.