There has been a great deal of research into the considerable challenge of managing traffic at road junctions, and its application to vehicular ad hoc networks (VANETs) has proved to be of great interest in the developed world. Dynamic topology is one of the vital challenges facing VANETs; as a result, routing packets to their destinations successfully and efficiently is no simple undertaking. This paper presents MDORA, an efficient and uncomplicated algorithm enabling intelligent wireless vehicular communications. MDORA is a robust routing algorithm that facilitates reliable routing through communication between vehicles. As a position-based routing technique, MDORA uses vehicles' precise locations to establish the optimal route by which the vehicles may reach their desired destinations. By determining the route that covers the maximum distance with the minimum number of hops, MDORA minimizes the control overhead. Finally, the paper compares the gains of MDORA with those of existing protocols such as AODV, GPSR-L and HLAR in terms of throughput, packet delivery ratio and average delay. The analysis shows that MDORA performs far better than the other protocols.
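The forwarding rule the abstract describes, covering the maximum distance with the minimum number of hops, can be illustrated as a greedy position-based next-hop choice. The sketch below is a minimal illustration under simple assumptions (2-D coordinates, a fixed radio range); the function and variable names are ours, not from the paper:

```python
import math

def next_hop(current, destination, neighbors, radio_range):
    """Greedy position-based forwarding: among neighbors within radio
    range, pick the one making the most progress toward the destination,
    so each hop covers maximum distance and the route needs fewest hops."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_progress = None, 0.0
    for n in neighbors:
        if dist(current, n) > radio_range:
            continue  # neighbor is out of communication range
        progress = dist(current, destination) - dist(n, destination)
        if progress > best_progress:
            best, best_progress = n, progress
    return best  # None: no neighbor advances the packet (local maximum)
```

A position-based scheme like this needs no route discovery flooding, which is where the control-overhead saving over reactive protocols such as AODV comes from.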
The rapid evolution of wireless networking technologies has opened the door to the development of Wireless Sensor Networks (WSNs) and their applications in different fields. A WSN consists of small, energy-constrained sensor nodes deployed in harsh environments. The energy needed for communication between sensor nodes is one of the major challenges. It is essential to avoid massive packet loss, rapid energy depletion and uneven energy distribution across the network, which lead to lower node efficiency and higher packet delivery delays. For this purpose, it is very important to track the energy usage of nodes in order to improve overall network efficiency through intelligent methods that reduce energy consumption.
The main goal of this paper is to link the subjects of projective geometry, vector spaces and linear codes. The properties of codes and some examples are shown. Furthermore, we give some information about the geometrical structure of arcs. All these arcs give rise to an error-correcting code that corrects the maximum possible number of errors for its length.
Background: This in vitro study measured and compared the effect of light-curing tip distance on depth of cure, assessed by Vickers microhardness, for two recently launched bulk-fill resin-based composites, Tetric EvoCeram Bulk Fill and Surefil SDR Flow, at 4 mm thickness, in comparison to Filtek Z250 Universal Restorative at 2 mm thickness. In addition, the bottom-to-top microhardness ratio was measured and compared at different light-curing tip distances. Materials and Method: One hundred and fifty composite specimens were obtained from two cylindrical plastic molds: the first for the bulk-fill composites (Tetric EvoCeram Bulk Fill and Surefil SDR Flow), with 4 mm diameter and 4 mm depth, and the second for Filtek Z250 Universal Restorative…
Nowadays, the development of internet communication provides many facilities to users, which in turn leads to growing unauthorized access. As a result, intrusion detection systems (IDS) have become necessary to provide a high level of security for the huge amount of information transferred over the network and to protect it from threats. One of the main challenges for an IDS is the high dimensionality of the feature space and how to select the features that distinguish normal network traffic from attack traffic. In this paper, the multi-objective evolutionary algorithm with decomposition (MOEA/D), and MOEA/D with the injection of a proposed local search operator, are adopted to solve the multi-objective optimization (MOO) problem, followed by a Naïve Bayes…
In the present work, an image compression method has been modified by combining the Absolute Moment Block Truncation Coding (AMBTC) algorithm with VQ-based image coding. First, the AMBTC algorithm with a Weber's-law condition is used to distinguish low-detail from high-detail blocks in the original image. For a low-detail block (i.e., a uniform block such as background) the coder transmits only its mean over the channel, instead of transmitting the two reconstruction mean values and the bit map. A high-detail block is coded by the proposed fast encoding algorithm for vector quantization based on the Triangular Inequality Theorem (TIE); the coder then transmits the two reconstruction mean values (i.e., H and L)…
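The block classification step described above can be sketched as follows. This is only an illustration of standard AMBTC with a Weber-style uniformity test; the threshold value and function names are our assumptions, not the paper's actual parameters:

```python
def ambtc_encode(block, t=0.05):
    """Classify one image block: 'uniform' blocks are represented by
    their mean alone, 'detailed' blocks by a bitmap plus two AMBTC
    reconstruction levels H and L."""
    n = len(block)
    mean = sum(block) / n
    # first absolute central moment of the block
    alpha = sum(abs(p - mean) for p in block) / n
    if mean > 0 and alpha / mean <= t:  # Weber-like visibility test
        return ('uniform', round(mean))
    hi = [p for p in block if p >= mean]
    lo = [p for p in block if p < mean]
    H = sum(hi) / len(hi)
    L = sum(lo) / len(lo) if lo else H
    bitmap = [1 if p >= mean else 0 for p in block]
    return ('detailed', round(H), round(L), bitmap)
```

Sending a single mean for uniform blocks, rather than H, L and a bitmap, is exactly where the bit-rate saving for backgrounds comes from.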
In this paper, a modified derivation is introduced to analyze the construction of the configuration space (C-space). The benefit of using C-space is to make the path-planning process safer and easier. After obtaining the C-space construction and map for a two-link planar robot arm, which covers all possible collision situations between robot parts and obstacle(s), the A* algorithm, usually used to find a heuristic path in the Cartesian workspace (W-space), is used to find a heuristic path on the C-space map. Several modifications are needed to apply the methodology to a manipulator with more than two degrees of freedom. The results of the C-space map derived by the modified analysis prove the accuracy of the overall C-space mapping and cons…
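Running A* over a C-space map amounts to grid search where each cell is a joint configuration and occupied cells mark configurations in collision. A minimal sketch, assuming a binary grid and 4-connected moves with a Manhattan heuristic (the paper's map resolution and connectivity may differ):

```python
import heapq

def astar(grid, start, goal):
    """A* on a binary C-space map: 0 = free configuration, 1 = collision.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(c):  # admissible Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came, g = {}, {start: 0}
    while open_set:
        _, cost, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue  # already expanded via a cheaper path
        came[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

Because collisions are baked into the map, a path found here is collision-free by construction, which is the safety benefit of planning in C-space rather than W-space.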
One of the most difficult issues in the history of communication technology is the transmission of secure images. On the internet, photos are used and shared by millions of individuals for both private and business reasons. Utilizing encryption methods to transform the original image into an unintelligible or scrambled version is one way to achieve safe image transfer over the network. Cryptographic approaches based on chaotic logistic theory provide several new and promising options for developing secure image encryption methods. The main aim of this paper is to build a secure system for encrypting gray and color images. The proposed system consists of two stages; the first stage is the encryption process, in which the keys are genera…
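The chaotic-logistic idea behind such schemes can be shown in a few lines: iterate the logistic map to produce a pseudo-random keystream, then XOR it with the pixel values. This is only a generic sketch of the technique, not the paper's actual system; the seed and control parameter below are arbitrary placeholders:

```python
def logistic_keystream(x0, r, n):
    """Generate n key bytes by iterating the chaotic logistic map
    x_{k+1} = r * x_k * (1 - x_k) and quantizing each state to a byte."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(pixels, x0=0.3141592, r=3.99):
    """Encrypt (or decrypt) a flat list of 0-255 pixel values by XOR
    with the chaotic keystream; the same call inverts itself."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

Sensitivity to the initial value x0 and parameter r is what makes the keystream hard to reproduce without the key, though a real system would add confusion/permutation stages on top of this XOR step.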
Most human facial emotion recognition systems are assessed solely on accuracy, even though other performance criteria, such as sensitivity, precision, F-measure and G-mean, are also considered important in the evaluation process. Moreover, the most common problem to be resolved in face emotion recognition systems is the feature extraction method. Traditional manual feature extraction methods cannot extract features efficiently; in other words, they produce a redundant set of features, many of which are not significant, which degrades classification performance. In this work, a new system to recognize human facial emotions from images is proposed. The HOG (Histograms of Oriented Gradients)…
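The core of a HOG descriptor is an orientation histogram per image cell: gradients are computed, their unsigned angles binned over 0-180°, and each vote weighted by gradient magnitude. A minimal single-cell sketch (bin count and gradient scheme are generic HOG defaults, not necessarily the paper's settings):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Magnitude-weighted orientation histogram for one HOG cell,
    using central-difference gradients and unsigned angles in [0, 180)."""
    h = [0.0] * bins
    rows, cols = len(cell), len(cell[0])
    for y in range(1, rows - 1):          # skip the border pixels
        for x in range(1, cols - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            h[int(ang / (180.0 / bins)) % bins] += mag
    return h
```

Concatenating and block-normalizing such histograms yields a compact, discriminative feature vector, which is precisely what manual feature extraction struggles to provide.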