In this paper, we investigate and characterize the effects of multi-channel and rendezvous protocols on the connectivity of dynamic spectrum access networks using percolation theory. In particular, we focus on the scenario where secondary nodes have plenty of vacant channels to choose from, a phenomenon we define as channel abundance. To cope with the existence of multiple channels, we use two types of rendezvous protocols: naive ones, which do not guarantee a common channel, and advanced ones, which do. We show that, with more channel abundance, even with either type of rendezvous protocol, it becomes difficult for two nodes to agree on a common channel, so they may remain invisible to each other. We model this invisibility as a Poisson thinning process and show that invisibility becomes even more pronounced with channel abundance. Following the disk graph model, we represent the multiple channels as parallel edges in a graph and build a multi-layered graph (MLG) in R^2. To study connectivity, we show how percolation occurs in the MLG by coupling it with a typical discrete percolation. Using a Boolean model and the MLG, we study both the absence and the presence of primary users. For both cases, we define and characterize the connectivity of the secondary network in terms of the available number of channels, deployment densities, number of simultaneous transmissions per node, and communication range. When primary users are absent, we derive the critical number of channels that maintains supercriticality of the secondary network. When primary users are present, we characterize and analyze connectivity for all three regions: channel abundance, optimal, and channel deprivation. For each region, we show the requirements and the outcome of using either type of rendezvous technique. Moreover, we find the tradeoff between deployment density and rendezvous probability that results in a connected network. Our results can be used to judge the quality of any channel rendezvous algorithm by computing the expected resulting connectivity. They also provide a guideline for achieving connectivity using minimal resources.
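The Poisson thinning step above has a compact form: if secondary nodes form a Poisson process of intensity lambda and each node becomes visible independently with probability p (the rendezvous success probability), the visible nodes form a thinned Poisson process of intensity p * lambda. A minimal Python sketch of this property; the density and probability values are illustrative, not from the paper:

import numpy as np

rng = np.random.default_rng(0)

def thinned_density(lam, p):
    # Poisson thinning: keeping each point independently with
    # probability p yields a Poisson process of intensity p * lam.
    return p * lam

# Empirical check on a unit square.
lam, p = 200.0, 0.3
n = rng.poisson(lam)        # number of secondary nodes
keep = rng.random(n) < p    # each node rendezvouses successfully w.p. p
print(keep.sum(), "visible of", n, "expected ~", thinned_density(lam, p))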
There has been growing interest in the use of chaotic techniques for enabling secure communication in recent years. This interest has been motivated by the emergence of a number of wireless services that require the channel to provide very low bit error rates (BER) along with information security. As more and more information is transacted over wireless media, criminal activity directed against such systems has increased. This paper investigates the feasibility of using chaotic communications over Multiple-Input Multiple-Output (MIMO) channels. We study the performance of differential chaos shift keying (DCSK) with the 2×2 and 2×1 Alamouti schemes for different chaotic maps over additive white Gaussian noise (AWGN) channels.
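As a sketch of the modulation this abstract refers to, the snippet below implements textbook DCSK with the logistic map: each bit is sent as a chaotic reference segment followed by the same segment, unchanged or inverted, and the receiver correlates the two halves. The map, spreading factor, and noise level are illustrative assumptions, not the paper's settings:

import numpy as np

def logistic_map(x0, n, r=4.0):
    # Chaotic reference chips from the logistic map x_{k+1} = r x_k (1 - x_k).
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = r * x[k - 1] * (1 - x[k - 1])
    return x - x.mean()  # remove DC so the chips average to ~0

def dcsk_modulate(bits, spread=64, x0=0.37):
    # DCSK: each bit occupies 2*spread chips -- a reference half followed
    # by the same chips (+) for bit 1 or inverted (-) for bit 0.
    frames = []
    for b in bits:
        ref = logistic_map(x0, spread)
        frames.append(np.concatenate([ref, ref if b else -ref]))
        x0 = (x0 + 0.011) % 1.0  # new initial condition per bit
    return np.concatenate(frames)

def dcsk_demodulate(signal, spread=64):
    # Correlate each reference half with its data half; the sign decides the bit.
    bits = []
    for i in range(0, len(signal), 2 * spread):
        ref, data = signal[i:i + spread], signal[i + spread:i + 2 * spread]
        bits.append(int(ref @ data > 0))
    return bits

bits = [1, 0, 1, 1]
tx = dcsk_modulate(bits)
rx = tx + 0.5 * np.random.default_rng(1).normal(size=tx.shape)  # AWGN channel
print(dcsk_demodulate(rx) == bits)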
The aim of this paper is to approximate multidimensional functions f ∈ C(R^s) by developing a new type of feedforward neural network (FFNN), which we call greedy ridge function neural networks (GRGFNNs). We also introduce a modification to the greedy algorithm that is used to train the greedy ridge function neural networks. An error bound is derived in Sobolev space. Finally, a comparison is made between three algorithms: the modified greedy algorithm, the backpropagation algorithm, and the result in [1].
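To make the greedy idea concrete: a ridge function has the form g(a · x), and a greedy approximation adds one ridge term at a time, each fitted to the current residual. The sketch below is a generic greedy ridge approximation under assumed choices (random direction search, low-degree polynomial profiles), not the paper's modified algorithm:

import numpy as np

rng = np.random.default_rng(0)

def fit_ridge_term(X, resid, n_dirs=200, deg=3):
    # Greedy step: pick the direction a whose 1-D polynomial fit of the
    # residual along t = X @ a explains the most energy.
    best = None
    for _ in range(n_dirs):
        a = rng.normal(size=X.shape[1])
        a /= np.linalg.norm(a)
        t = X @ a
        coef = np.polyfit(t, resid, deg)
        err = np.sum((np.polyval(coef, t) - resid) ** 2)
        if best is None or err < best[0]:
            best = (err, a, coef)
    return best[1], best[2]

def greedy_ridge_approx(X, y, n_terms=5):
    # Build f(x) ~ sum_i g_i(a_i . x) one ridge term at a time.
    resid, terms = y.copy(), []
    for _ in range(n_terms):
        a, coef = fit_ridge_term(X, resid)
        resid = resid - np.polyval(coef, X @ a)  # update residual
        terms.append((a, coef))
    return terms, resid

X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(X @ np.array([1.0, -0.5, 2.0])) + 0.3 * (X @ np.array([0.2, 1.0, 0.1])) ** 2
terms, resid = greedy_ridge_approx(X, y)
print("relative residual norm:", np.linalg.norm(resid) / np.linalg.norm(y))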
In recent years, the number of applications utilizing mobile wireless sensor networks (WSNs) has increased, often requiring localization for monitoring and for obtaining data from hazardous areas. The location of an event is critical in a WSN, as sensed data is almost meaningless without location information. In this paper, two Monte Carlo based localization schemes, termed MCL and MSL*, are studied. MCL obtains its location through anchor nodes, whereas MSL* uses both anchor nodes and normal nodes. The use of normal nodes increases accuracy and reduces dependency on anchor nodes, but increases communication cost. For this reason, we introduce a new approach, called low communication cost schemes, to reduce communication cost.
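For orientation, classic Monte Carlo localization in a WSN keeps a set of candidate positions, moves each within the node's maximum speed (prediction), and discards candidates inconsistent with the anchors currently heard (filtering). A minimal sketch; the range, speed, and sample-count values are assumptions, not the paper's parameters:

import numpy as np

rng = np.random.default_rng(2)
R, VMAX, N = 10.0, 2.0, 50  # radio range, max speed, sample count (assumed)

def mcl_step(samples, anchors):
    # Prediction: each sample moves at most VMAX since the last step.
    ang = rng.uniform(0, 2 * np.pi, len(samples))
    d = rng.uniform(0, VMAX, len(samples))
    moved = samples + np.stack([d * np.cos(ang), d * np.sin(ang)], axis=1)
    # Filtering: keep samples within range R of every one-hop anchor heard.
    ok = np.all(np.linalg.norm(moved[:, None, :] - anchors[None], axis=2) <= R, axis=1)
    kept = moved[ok]
    if len(kept) == 0:
        return moved  # nothing survived; keep predictions and retry next round
    # Resample back up to N by duplicating surviving samples.
    return kept[rng.integers(0, len(kept), N)]

samples = rng.uniform(0, 20, size=(N, 2))
anchors = np.array([[5.0, 5.0], [7.0, 4.0]])  # anchors heard this round
for _ in range(5):
    samples = mcl_step(samples, anchors)
print("position estimate:", samples.mean(axis=0))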
Wireless networking is constantly improving and changing, though the basic principle is the same: instead of using standard cables to transmit information from one point to another (or more), it uses radio signals. This paper presents a case study considering real-time remote control using wireless UDP/IP-based networks. The aim of this work is to produce a real-time remote control system based upon a simulation model, which can operate via general communication networks that embody modern wireless technology. The first part includes a brief
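For illustration of the transport such a control loop rests on, here is a minimal Python UDP sender/receiver pair; the address, port, and message format are assumptions for the sketch, not details from the case study. UDP's connectionless, best-effort delivery is what makes it attractive for real-time control (no retransmission stalls) and what forces the loop to tolerate lost datagrams:

import socket
import time

ADDR = ("127.0.0.1", 9999)  # assumed address/port for illustration

def controller_send(setpoint):
    # UDP is connectionless: datagrams may be lost or reordered, which is
    # why a real-time control loop must tolerate missing packets.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(f"{time.time():.6f},{setpoint:.3f}".encode(), ADDR)
    s.close()

def plant_receive(timeout=1.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(ADDR)
    s.settimeout(timeout)  # never block the control loop indefinitely
    try:
        data, _ = s.recvfrom(1024)
        sent_at, setpoint = data.decode().split(",")
        return float(setpoint), time.time() - float(sent_at)  # value, latency
    except socket.timeout:
        return None, None  # treat a lost datagram as a missed sample
    finally:
        s.close()

In a real deployment the receiver would run continuously on the controlled plant, treating each timeout as a missed sample rather than an error.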
The approach of this research is to simulate residual chlorine decay through the potable water distribution network of Gukook city. EPANET software was used for estimating and predicting chlorine concentration at different points of the water network. Data required as program inputs (pipe properties) were taken from the Baghdad Municipality, and factors that affect residual chlorine concentration (pH, temperature, pressure, flow rate) were measured. Twenty-five samples were tested from November 2016 to July 2017. The residual chlorine values varied between 0.2 and 2 mg/L, pH values varied between 7.6 and 8.2, and the pressure was very weak in this region. Statistical analyses were used to evaluate errors. The calculated concentrations by the calib
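For context on what EPANET computes here: its default bulk reaction model treats residual chlorine as first-order decay, C(t) = C0 · e^(−kt). The sketch below uses an illustrative decay coefficient, not a value calibrated in this study:

import math

def residual_chlorine(c0, k_bulk, hours):
    # First-order bulk decay, the default reaction model in EPANET:
    # C(t) = C0 * exp(-k * t), with k_bulk in 1/day and t in hours.
    return c0 * math.exp(-k_bulk * hours / 24.0)

c0 = 2.0      # mg/L at the source (upper end of the measured range)
k_bulk = 0.5  # 1/day -- illustrative; calibrated per network in practice
for h in (0, 6, 12, 24):
    print(f"t = {h:2d} h  ->  {residual_chlorine(c0, k_bulk, h):.2f} mg/L")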
Tor (The Onion Routing) network was designed to enable users to browse the Internet anonymously. It is known for its anonymity and privacy protection against the many agents who wish to observe users' locations or track their browsing habits. This anonymity stems from the encryption and decryption of Tor traffic: the client's traffic is encrypted and decrypted before the sending and receiving process, which leads to delay and even interruption in data flow. The exchange of cryptographic keys between network devices plays a pivotal and critical role in facilitating secure communication and ensuring the integrity of cryptographic procedures. This essential process is time-consuming, which causes delay.
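The layered encryption responsible for that overhead can be sketched in a few lines. Below, the Python cryptography library's Fernet stands in for Tor's actual circuit ciphers (an illustrative substitution, not Tor's real protocol): the client wraps a cell once per relay, and each relay peels exactly one layer.

from cryptography.fernet import Fernet

# One symmetric key per relay on the circuit (entry, middle, exit).
hop_keys = [Fernet.generate_key() for _ in range(3)]

def onion_wrap(payload: bytes) -> bytes:
    # The client encrypts in reverse hop order so each relay can peel
    # exactly one layer -- this repeated work is where the latency comes from.
    for key in reversed(hop_keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def onion_peel(cell: bytes) -> bytes:
    for key in hop_keys:  # each relay removes its own layer in turn
        cell = Fernet(key).decrypt(cell)
    return cell

cell = onion_wrap(b"GET / HTTP/1.1")
print(onion_peel(cell))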
This work proposes a new video buffer framework (VBF) to achieve a favorable quality of experience (QoE) for video streaming in cellular networks. The proposed framework consists of three main parts: a client selection algorithm, a categorization method, and a distribution mechanism. The client selection algorithm, named the independent client selection algorithm (ICSA), is proposed to select the best clients, those with the least interfering effect on video quality, and to recognize clients' urgency based on buffer occupancy level. In the categorization method, each frame in the video buffer is given a specific number for better estimation of the playout outage probability, so it can efficiently handle many frames from different video streams.
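Playout outage, the quantity the categorization method estimates, is the probability that the buffer runs empty during playback. A Monte Carlo sketch with assumed arrival and playout rates (illustrative values only, not the framework's estimator):

import numpy as np

rng = np.random.default_rng(3)

def playout_outage(arrival_rate, play_rate, start_frames, steps, trials=2000):
    # Estimate the probability the buffer empties during playback:
    # arrivals are random per slot, playout drains at a fixed frame rate.
    outages = 0
    for _ in range(trials):
        buf = start_frames
        for _ in range(steps):
            buf += rng.poisson(arrival_rate)  # frames delivered this slot
            buf -= play_rate                  # frames consumed this slot
            if buf < 0:
                outages += 1
                break
    return outages / trials

# Illustrative numbers: 30 fps playout, delivery that only just keeps up.
print(playout_outage(arrival_rate=30.0, play_rate=30, start_frames=60, steps=300))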