High vehicular mobility causes frequent changes in vehicle density, discontinuity in inter-vehicle communication, and constraints on routing protocols in vehicular ad hoc networks (VANETs). Routing must avoid forwarding packets through segments with low network density and a high rate of network disconnections, which may result in packet loss, delays, and increased communication overhead during route recovery. Therefore, both traffic and segment status must be considered. This paper presents real-time intersection-based segment aware routing (RTISAR), an intersection-based segment-aware algorithm for geographic routing in VANETs. The algorithm provides an optimal route for forwarding data packets toward their destination by considering the traffic segment status when choosing the next intersection. RTISAR introduces a new formula for assessing segment status based on connectivity, density, segment load, and cumulative distance toward the destination. A validity period mechanism is proposed to denote the projected period during which a network failure is likely to occur in a particular segment. This period is calculated for each collector packet to minimize the frequency of RTISAR execution and to control the generation of collector packets, thereby reducing the communication overhead generated during the segment status computation process. Simulations are performed to evaluate RTISAR, and the results are compared with those of intersection-based connectivity aware routing and traffic flow oriented routing. The evaluation results show that RTISAR outperforms both in terms of packet delivery ratio, packet delivery delay, and communication overhead.
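The exact segment-status formula is not reproduced in this abstract; the following is a minimal sketch, assuming a simple weighted combination of the four factors named above. The weights, normalization, and helper names (segment_score, choose_next_intersection) are illustrative assumptions, not RTISAR's actual formula.

```python
# Hypothetical sketch of a segment-scoring step similar in spirit to RTISAR.
# All weights and normalizations are illustrative assumptions.

def segment_score(connectivity, density, load, cum_distance_to_dest,
                  w_conn=0.4, w_dens=0.3, w_load=0.2, w_dist=0.1):
    """Return a score in [0, 1]; higher means a better segment to forward through.

    connectivity         -- fraction of the segment covered by connected vehicles (0..1)
    density              -- vehicle density on the segment, normalized to 0..1
    load                 -- channel/queue load on the segment, normalized to 0..1
    cum_distance_to_dest -- remaining distance via this segment, normalized to 0..1
    """
    # Good segments are well connected and dense, lightly loaded,
    # and make progress toward the destination (small remaining distance).
    return (w_conn * connectivity
            + w_dens * density
            + w_load * (1.0 - load)
            + w_dist * (1.0 - cum_distance_to_dest))

def choose_next_intersection(candidate_segments):
    """candidate_segments: list of (intersection_id, metrics_dict) pairs."""
    return max(candidate_segments,
               key=lambda s: segment_score(**s[1]))[0]
```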
In this research, a technique is proposed to enhance the performance of the frame difference technique for extracting moving objects from video files. One of the main factors that degrade performance is the presence of noise, which may cause moving objects to be identified incorrectly. Therefore, it was necessary to find a way to diminish this noise effect. Traditional average and median spatial filters can be used to handle such situations, but this work focuses on exploiting the spectral domain through Fourier and wavelet transformations to reduce the noise effect. Experiments and statistical features (entropy, standard deviation) showed that these transformations can overcome such problems in an elegant way.
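As a hedged illustration of the described pipeline, the sketch below applies wavelet-domain soft-thresholding before frame differencing. The tooling (PyWavelets, OpenCV), the Haar wavelet, the threshold, and the binarization level are assumptions for illustration, not the parameters used in this work.

```python
# Illustrative sketch: wavelet denoising followed by frame differencing.
import numpy as np
import pywt
import cv2

def wavelet_denoise(gray, wavelet="haar", thresh=10.0):
    """Suppress noise by soft-thresholding the wavelet detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), wavelet)
    cH, cV, cD = (pywt.threshold(c, thresh, mode="soft") for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

def moving_object_mask(prev_frame, curr_frame, diff_thresh=25):
    """Frame difference of two denoised grayscale frames -> binary motion mask."""
    prev = wavelet_denoise(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY))
    curr = wavelet_denoise(cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY))
    diff = np.abs(curr - prev)
    return (diff > diff_thresh).astype(np.uint8) * 255
```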
A Multiple System Biometric System Based on ECG Data
Password authentication is a popular approach to system security and an important procedure for gaining access to a user's resources. This paper describes a password authentication method using a Modified Bidirectional Associative Memory (MBAM) algorithm for both graphical and textual passwords, aiming for greater efficiency in speed and accuracy. Across 100 tests, the accuracy was 100% for authenticating users with both graphical and textual passwords.
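The paper's MBAM modification and its password encoding are not detailed in this abstract; as background only, the following minimal sketch shows classical (Kosko-style) BAM storage and recall, which MBAM builds upon. The bipolar pattern encoding is an assumption for illustration.

```python
# Minimal sketch of classical BAM storage/recall, shown only to illustrate the
# associative-memory idea behind MBAM; not the paper's modified algorithm.
import numpy as np

def train_bam(pairs):
    """pairs: list of (x, y) bipolar (+1/-1) vectors. Returns the weight matrix W."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)          # Hebbian accumulation of each pattern pair
    return W

def recall(W, x, iters=10):
    """Iterate x -> y -> x until the pair stabilizes; returns the recalled y."""
    for _ in range(iters):
        y = np.sign(x @ W); y[y == 0] = 1
        x_new = np.sign(W @ y); x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return y
```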
Nowadays, people's expression on the Internet is no longer limited to text; with the rise of the short video boom, a large amount of multi-modal data such as text, pictures, audio, and video has emerged. Compared with single-modal data, multi-modal data always contains far more information. Mining multi-modal information can help computers better understand human emotional characteristics. However, because multi-modal data show obvious dynamic time-series features, the fusion process must resolve the dynamic correlation both within a single mode and between different modes in the same application scene. To solve this problem, in this paper, a feature extraction framework of
High frequency (HF) communications play an important role in long-distance wireless communications. This frequency band is more important than VHF and UHF, as HF frequencies can cover longer distances with a single hop. It has a low operating cost because it offers over-the-horizon communication without repeaters, so it can serve as a backup for satellite communications in emergency conditions. One of the main problems in HF communications is predicting the propagation direction and the frequency of optimum transmission (FOT) to be used at a certain time. This paper introduces a new technique based on an Oblique Ionosonde Station (OIS) to overcome this problem at low cost and in an easier way. This technique uses the
In this paper, a wireless network is planned based on the IEEE 802.16e (WiMAX) standard. The objectives are maximizing coverage, providing service, and keeping operational fees low. The WiMAX network is planned through three approaches. In the first approach, coverage is maximized by extending cell coverage and selecting the best sites (with a bandwidth (BW) of 5 MHz, 20 MHz per sector and four sectors per cell). In the second approach, interference is analyzed in CNIR mode. In the third approach, quality of service (QoS) is tested and evaluated. ATDI ICS software (Interference Cancellation System) is used to perform the planning; the results show that the planned network covers 90.49% of Baghdad City and used 1000 mob
Research is a central component of neurosurgical training and practice and is increasingly viewed as a quintessential indicator of academic productivity. In this study, we focus on identifying the current status and challenges of neurosurgical research in Iraq.
An online PubMed Medline database search was conducted to identify all articles published by Iraq-based neurosurgeons between 2003 and 2020. Information was extracted in relation to the following parameters: authors, year of publication, author’s affiliation, author’s specialty, article type, article citation, journal name, journal
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Huffman coding is applied as a final stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method achieves promising performance.
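As a rough sketch of the first stages described above, the snippet below fits a low-order polynomial to each block row, keeps the integer residuals (so the original pixels remain exactly recoverable), and run-length encodes them. The block handling, row-wise first-order fit, and (value, run) encoding are assumptions for illustration; Huffman coding of the coefficients and runs would follow, as in the paper.

```python
# Illustrative sketch: per-block polynomial approximation of the pixel signal
# and run-length coding of the residual (the approximation error).
import numpy as np

def block_residual(block, degree=1):
    """Fit a low-order polynomial to each row and return (coefficients, residuals)."""
    x = np.arange(block.shape[1])
    residual = np.empty_like(block, dtype=np.int16)
    coeffs = []
    for i, row in enumerate(block):
        c = np.polyfit(x, row.astype(float), degree)
        approx = np.rint(np.polyval(c, x)).astype(np.int16)
        residual[i] = row.astype(np.int16) - approx   # lossless: row = approx + residual
        coeffs.append(c)
    return coeffs, residual

def run_length_encode(values):
    """Simple (value, run) encoding of a flattened residual array."""
    flat = values.ravel()
    runs, prev, count = [], flat[0], 1
    for v in flat[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((int(prev), count)); prev, count = v, 1
    runs.append((int(prev), count))
    return runs
```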