Massive multiple-input multiple-output (massive MIMO) is considered a key technology for meeting the enormous data-rate demands of future wireless communication networks. However, for massive-MIMO systems to realize their maximum potential gain, sufficiently accurate downlink (DL) channel state information (CSI) must be acquired with low overhead to fit within the short coherence time (CT). Therefore, this article aims to overcome the technical challenge of DL CSI estimation in frequency-division-duplex (FDD) massive MIMO with short CT, considering five different physical correlation models. To this end, the statistical structure of the massive-MIMO channel, captured by the physical correlation, is exploited to obtain sufficiently accurate DL CSI estimates. Specifically, to reduce the DL CSI estimation overhead, the training sequence is designed from the eigenvectors of the transmit correlation matrix. The achievable sum rate (ASR) maximization and the mean square error (MSE) of CSI estimation under short CT are then investigated using the proposed training-sequence design. Furthermore, this article examines the channel-hardening effect in an FDD massive-MIMO system. The results demonstrate that high correlation causes a large loss in channel hardening, and that increasing the correlation level reduces the MSE but does not increase the ASR. Nevertheless, exploiting the spatial correlation structure remains essential for FDD massive-MIMO systems under limited CT. This finding holds for all the physical correlation models considered.
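The eigenvector-based training idea can be illustrated with a minimal numerical sketch, not the paper's exact scheme: the exponential transmit-correlation model, the antenna count, the training length, the SNR, and the LMMSE estimator below are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 64     # base-station antennas (assumed)
T = 8      # training length << M, forced by the short coherence time
rho = 0.9  # exponential correlation coefficient (assumed model)
snr = 100.0  # training SNR (assumed)

# Exponential transmit-correlation model: R[i, j] = rho^|i - j|
R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

# Eigendecomposition; train only along the T strongest eigen-directions
w, U = np.linalg.eigh(R)
idx = np.argsort(w)[::-1]
S = U[:, idx[:T]]  # training matrix, M x T, orthonormal columns

# One correlated channel realization: h = R^(1/2) g with g ~ CN(0, I)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h = U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T @ g

# Received training signal: y = S^H h + noise
n = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2 * snr)
y = S.conj().T @ h + n

# LMMSE estimate exploiting the correlation structure R
A = R @ S @ np.linalg.inv(S.conj().T @ R @ S + np.eye(T) / snr)
h_hat = A @ y
mse = np.linalg.norm(h - h_hat) ** 2 / M
```

With only T = 8 pilot symbols for 64 antennas, the estimator recovers the channel's dominant eigen-directions; the residual MSE is governed by the energy in the untrained eigen-directions, which shrinks as the correlation level rises.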
The origin of this technique lies in the analysis of François Quesnay (1694-1774), the leader of the Physiocratic school, presented in his Tableau Économique. The method was later developed by Karl Marx in his analysis of the relationships between the departments of production and the nature of these relations in his reproduction models. The modern form of this type of economic analysis is credited to the Russian-American economist Wassily Leontief. This analytical model is commonly used in preparing economic plans in developing countries (p. 1, p. 86). There are several types of input-output models, such as the static model, the dynamic model, regional models, and so on. However, this research is confined to the open model, which has found wide practical application.
Multiple sclerosis (MS) is a progressive neurological disease characterized by periods of quiescence and exacerbation. Epidemiological data support the notion that MS is an acquired autoimmune disease triggered by environmental factors, probably infectious, in genetically susceptible individuals. The present research attempted to study a possible viral (paramyxovirus) role in MS. The sera of 57 MS patients were assayed for anti-measles and anti-mumps IgG antibodies using the ELISA technique, and the results were compared in order to establish the presence or absence of a significant difference in both the number of positive cases and the antibody titer between the two groups. The results revealed no significant difference in the number of measles-positive cases.
Accuracy in multiple-object segmentation using geometric deformable models is sometimes not achieved, for reasons related to a number of parameters. In this research, we study the effect of changing the parameter values on the behavior of the geometric deformable model and determine their efficient values, as well as the relations that link these parameters to each other, based on different case studies involving multiple objects that differ in spacing, color, and illumination. For specific ranges of parameter values the segmentation results are good; the success of geometric deformable models is thus limited to certain ranges of these parameters.
In this study, a three-dimensional finite element analysis using the ANSYS 12.1 program was employed to simulate simply supported reinforced concrete (RC) T-beams with multiple circular web openings subjected to impact loading. Three design parameters were considered: the size, location, and number of the web openings. Twelve models of simply supported RC T-beams were subjected to a single point of transient (impact) loading at mid-span. The beams were simulated, and the analysis results were obtained in terms of mid-span deflection-time histories and compared with those of a solid reference beam. The maximum mid-span deflection is an important index for evaluating the damage level of RC beams subjected to impact loading.
Passive optical network (PON) is a point-to-multipoint, bidirectional, high-rate optical network for data communication. Different PON standards have been implemented: the first was the ATM PON (APON), which evolved into the Broadband PON (BPON); the two major types today are the Ethernet PON (EPON) and the Gigabit PON (GPON). PON with these different standards is called xPON. To obtain efficient performance for the last two PON standards, some important issues must be considered. In our work we integrate a network with different queuing models, such as the M/M/1 and M/M/m models. After analyzing IPACT as a dynamic bandwidth allocation (DBA) scheme for this integrated network, we evaluate the cycle time, traffic load, throughput, utilization, and overall delay.
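For reference, the steady-state M/M/1 quantities such a queuing analysis relies on have simple closed forms. The sketch below (the function name and the example rates are illustrative; λ is the Poisson arrival rate, μ the exponential service rate) computes them directly.

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics for a stable M/M/1 queue (requires lam < mu)."""
    assert lam < mu, "utilization must be below 1 for stability"
    rho = lam / mu              # server utilization
    L = rho / (1 - rho)         # mean number in system
    Lq = rho ** 2 / (1 - rho)   # mean number waiting in queue
    W = 1 / (mu - lam)          # mean time in system (by Little's law)
    Wq = rho / (mu - lam)       # mean waiting time in queue
    return {"utilization": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: 4 packets/s arriving at a server handling 5 packets/s
print(mm1_metrics(4.0, 5.0))
```

At λ = 4 and μ = 5 the utilization is 0.8 and the mean delay is 1 s; as λ approaches μ, the queue length and delay grow without bound, which is why cycle time and load must be kept in check in the DBA design.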
The δ-mixing of γ-transitions in ¹⁶⁸Er populated in the ¹⁶⁸Er(n,n′γ)¹⁶⁸Er reaction is calculated in the present work using the a2-ratio method. In previous studies [4, 5, 6, 7] this method was used only when the second transition is pure, or can be treated as pure, but in this work we apply it to two cases: first for pure transitions and second for non-pure transitions. We take into account the experimental a2 coefficients from previous works and the δ-values for one transition only [1]. The results obtained are, in general, in good agreement, within the associated errors, with those reported previously [1]; some discrepancies occur.
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflectors. Applying normal moveout (NMO) to flatten the primaries is the way multiples are eliminated after transforming the data to the frequency-wavenumber (f-k) domain: the flattened primaries align with the zero-wavenumber axis of the f-k domain, while all other events (multiples and random noise) are distributed elsewhere. A dip filter is then applied to pass the aligned data and reject the rest, separating primaries from multiples once the data are transformed back from the f-k domain to the time-distance domain. For that reason, a suggested name for this technique is the normal moveout-frequency-wavenumber (NMO-f-k) domain method.
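The NMO-f-k idea can be sketched on synthetic data: after NMO a primary is flat, so all of its energy maps to zero wavenumber, while a residual multiple still dips across offset; zeroing all but the near-zero wavenumbers therefore attenuates it. The grid sizes, event positions, and the one-sample-per-trace dip below are illustrative assumptions, not field parameters.

```python
import numpy as np

nt, nx = 256, 64          # time samples, traces (assumed)
dt, dx = 0.004, 10.0      # 4 ms sampling, 10 m trace spacing (assumed)
data = np.zeros((nt, nx))

# Flat (NMO-corrected primary) event at t = 0.4 s
data[100, :] += 1.0

# Dipping (residual multiple) event: moveout of 1 sample per trace
for ix in range(nx):
    it = 50 + ix
    if it < nt:
        data[it, ix] += 1.0

# Transform the gather to the f-k domain
fk = np.fft.fft2(data)
k = np.fft.fftfreq(nx, d=dx)

# Dip filter: pass only energy near zero wavenumber (flat events)
mask = (np.abs(k) <= np.abs(k[1]))[None, :]
filtered = np.real(np.fft.ifft2(fk * mask))
```

Only 3 of the 64 wavenumbers are kept, so the flat primary passes almost unchanged while the dipping event retains only a small fraction of its amplitude at any sample.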