Massive multiple-input multiple-output (massive-MIMO) is considered a key technology for meeting the enormous data-rate demands of future wireless communication networks. However, for massive-MIMO systems to realize their maximum potential gain, sufficiently accurate downlink (DL) channel state information (CSI) is required, with overhead low enough to fit within the short coherence time (CT). Therefore, this article aims to overcome the technical challenge of DL CSI estimation in frequency-division-duplex (FDD) massive-MIMO systems with short CT, considering five different physical correlation models. To this end, the statistical structure of the massive-MIMO channel, which is captured by the physical correlation, is exploited to obtain a sufficiently accurate DL CSI estimate. Specifically, to reduce the DL CSI estimation overhead, the training sequence is designed based on the eigenvectors of the transmit correlation matrix. Using the proposed training sequence design, the achievable sum rate (ASR) maximization and the mean square error (MSE) of CSI estimation under short CT are investigated. Furthermore, this article examines the effect of channel hardening in an FDD massive-MIMO system. The results demonstrate that, in high-correlation scenarios, channel hardening is largely lost. The results also reveal that increasing the correlation level reduces the MSE but does not increase the ASR. Nevertheless, exploiting the spatial correlation structure remains essential for FDD massive-MIMO systems under limited CT. This finding holds for all the physical correlation models considered.
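The eigenvector-based training design summarised above can be sketched as follows. This is a minimal illustration under an assumed exponential spatial correlation model; the model choice, array size `M`, correlation level `rho`, and training length `tau` are all assumptions for the sketch, not values from the article:

```python
import numpy as np

def exponential_correlation(m, rho):
    """Exponential spatial correlation model: R[i, j] = rho**|i - j|."""
    idx = np.arange(m)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def eigen_training_sequence(R, tau):
    """Build a training matrix from the tau dominant eigenvectors of the
    transmit correlation matrix R, so the short training budget is spent
    on the strongest spatial modes of the channel."""
    eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]      # strongest modes first
    return eigvecs[:, order[:tau]]         # M x tau training matrix

M, rho, tau = 8, 0.9, 4                    # assumed illustrative values
R = exponential_correlation(M, rho)
S = eigen_training_sequence(M * [None] and R, tau) if False else eigen_training_sequence(R, tau)
# The columns are orthonormal: S^H S = I_tau
print(np.allclose(S.conj().T @ S, np.eye(tau)))
```

Training only the `tau` dominant eigendirections is what allows the overhead to shrink below the full antenna count when the channel is strongly correlated.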
In this paper, some estimators for the reliability function R(t) of the Basic Gompertz (BG) distribution are obtained, including the maximum likelihood estimator and Bayesian estimators under the general entropy loss function, assuming a non-informative prior (the Jeffreys prior) and informative priors represented by the Gamma and inverted Levy priors. A Monte Carlo simulation is conducted to compare the performance of all estimates of R(t) based on the integrated mean squared error.
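As a hedged illustration of the maximum likelihood route described above, assuming the one-parameter Basic Gompertz form with pdf f(t) = λ eᵗ exp(−λ(eᵗ − 1)) for t > 0 (an assumption for this sketch, not necessarily the paper's exact parameterisation), setting d(log L)/dλ = 0 gives λ̂ = n / Σ(e^{tᵢ} − 1), and R(t) is estimated by plug-in:

```python
import numpy as np

def bg_mle_reliability(sample, t):
    """Plug-in MLE of R(t) = exp(-lam*(e^t - 1)) for the assumed
    one-parameter Basic Gompertz pdf f(t) = lam * e^t * exp(-lam*(e^t - 1)).
    The score equation gives lam_hat = n / sum(e^{t_i} - 1)."""
    sample = np.asarray(sample, dtype=float)
    lam_hat = len(sample) / np.sum(np.exp(sample) - 1.0)
    return np.exp(-lam_hat * (np.exp(t) - 1.0))

rng = np.random.default_rng(0)
lam_true = 0.5
# Inverse-CDF sampling: F(t) = 1 - exp(-lam*(e^t - 1)) inverts to
# T = log(1 - log(1 - U)/lam) for U ~ Uniform(0, 1)
u = rng.uniform(size=5000)
data = np.log(1.0 - np.log(1.0 - u) / lam_true)
est = bg_mle_reliability(data, 1.0)
print(round(est, 3))
```

With a large sample, `est` should sit close to the true value exp(−λ(e − 1)).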
The behavior and shear strength of full-scale (T-section) reinforced concrete deep beams with various large web openings, designed according to the strut-and-tie approach of the ACI 318-19 code specifications, were investigated in this paper. A total of seven deep beam specimens with identical shear span-to-depth ratios were tested under a mid-span concentrated load applied monotonically until beam failure. The main variables studied were the effects of the width and depth of the web openings on deep beam performance. The experimental results were compared against the strut-and-tie approach adopted by the ACI 318-19 code for the design of deep beams. The strut-and-tie design model provided in the ACI 318-19 code provision was assessed and found to be u
This paper delves into some significant performance measures (PMs) of a bulk arrival queueing system with constant batch size b, in which the arrival rates and service rates are fuzzy parameters. The bulk arrival queueing system treats arrivals into the queue as groups of constant size before individual customers enter service. This leads to a new tool, obtained with the aid of generating-function methods. The corresponding traditional bulk queueing system model is made more convenient under an uncertain environment. The α-cut approach is applied with the conventional Zadeh's extension principle (ZEP) to transform the triangular membership functions (Mem. Fs) of the fuzzy queues into a family of conventional b
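The α-cut step can be illustrated with a small sketch. For brevity it propagates only the traffic intensity ρ = λb/μ through interval arithmetic; the triangular rates and batch size below are hypothetical, and ρ stands in for the paper's fuller set of PMs:

```python
def alpha_cut(tri, alpha):
    """alpha-cut of a triangular fuzzy number tri = (a, m, c):
    the interval [a + alpha*(m - a), c - alpha*(c - m)]."""
    a, m, c = tri
    return (a + alpha * (m - a), c - alpha * (c - m))

def fuzzy_utilization(lam_tri, mu_tri, b, alpha):
    """Interval for rho = lam * b / mu at a given alpha level via Zadeh's
    extension principle: rho is increasing in lam and decreasing in mu,
    so the endpoints pair the extremes accordingly."""
    lam_lo, lam_hi = alpha_cut(lam_tri, alpha)
    mu_lo, mu_hi = alpha_cut(mu_tri, alpha)
    return (lam_lo * b / mu_hi, lam_hi * b / mu_lo)

lam = (2.0, 3.0, 4.0)    # hypothetical fuzzy batch-arrival rate
mu = (20.0, 25.0, 30.0)  # hypothetical fuzzy service rate
print(fuzzy_utilization(lam, mu, b=4, alpha=1.0))  # crisp point: (0.48, 0.48)
```

Sweeping α from 0 to 1 produces the nested family of intervals that reconstructs the membership function of the fuzzy PM.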
In the presence of the multicollinearity problem, parameter estimation based on the ordinary least squares procedure is unsatisfactory. In 1970, Hoerl and Kennard introduced an alternative method known as the ridge regression estimator.
In such an estimator, the ridge parameter plays an important role in estimation. Many statisticians have proposed various methods to select the biasing constant (ridge parameter). Another popular method used to deal with the multicollinearity problem is the principal component method. In this paper, we employ a simulation technique to compare the performance of the principal component estimator with some types of ordinary ridge regression estimators based on the value of t
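A minimal version of such a simulation comparison might look as follows; the collinear design, ridge constant `k`, and number of retained components are illustrative choices for the sketch, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
# Highly collinear design: the last column nearly duplicates the first
X = rng.standard_normal((n, p))
X[:, -1] = X[:, 0] + 0.01 * rng.standard_normal(n)
beta = np.array([1.0, 0.5, -0.5, 0.25, 1.0])
y = X @ beta + rng.standard_normal(n)

def ridge(X, y, k):
    """Ridge estimator: (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def pcr(X, y, r):
    """Principal component regression keeping the r largest components,
    then mapping the fitted coefficients back to the original space."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    gamma = (U[:, :r].T @ y) / s[:r]   # OLS in the component space
    return Vt[:r].T @ gamma

b_ridge = ridge(X, y, k=1.0)
b_pcr = pcr(X, y, r=4)
print(np.round(b_ridge, 2), np.round(b_pcr, 2))
```

Repeating this over many simulated datasets and averaging the squared estimation error is the usual way such estimators are ranked.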
In order to select the optimal method for tracking fast time-varying multipath Rayleigh fading channels, this paper focuses on the recursive least-squares (RLS) and extended recursive least-squares (E-RLS) algorithms. Based on the simulation output, which compares tracking performance and mean square error over five fast time-varying Rayleigh fading channels, with up to 100 transmit/receive runs to confirm the efficiency of these algorithms, it concludes that E-RLS is the more feasible.
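The standard RLS recursion at the heart of this comparison can be sketched as below; the channel taps, forgetting factor, and noise level are hypothetical, and the E-RLS variant is not shown:

```python
import numpy as np

def rls(x, d, order=4, lam=0.98, delta=100.0):
    """Recursive least-squares filter: tracks the weight vector w that
    minimises the exponentially weighted squared error between d and w'u."""
    w = np.zeros(order)
    P = delta * np.eye(order)                # inverse correlation estimate
    err = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]     # most recent samples first
        k = P @ u / (lam + u @ P @ u)        # gain vector
        err[n] = d[n] - w @ u                # a-priori error
        w = w + k * err[n]                   # weight update
        P = (P - np.outer(k, u @ P)) / lam   # inverse-correlation update
    return w, err

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
h = np.array([0.8, -0.4, 0.2, 0.1])          # hypothetical channel taps
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, err = rls(x, d)
print(np.round(w, 2))
```

With a forgetting factor λ < 1, older errors are discounted, which is what lets the recursion follow a channel whose taps drift over time.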
In order to evaluate the performance of introduced maize varieties under different levels of plant density, and to determine which of the introduced varieties give a high yield and at what plant density, a field experiment was carried out at Station A of the Department of Field Crops, College of Agricultural Engineering Sciences, University of Baghdad, Jadiriyah, during the fall season of 2021. A randomized complete block design (RCBD) was used with four replications in a split-plot arrangement; the three plant densities (50,000, 70,000, and 90,000 plants ha-1) were the main plots, while the varieties represented the secondary factor, namely six maize varieties, including variety 2 = 5783 DKC, variety 3 = 6315 DKC, and variety 4 = 6590 DKC, whic
The aim of the research is to examine multiple intelligence test item selection based on Howard Gardner's MI model using the generalized partial credit model. The researcher adopted Gardner's multiple intelligences scale, which consists of (102) items with eight sub-scales. The sample consisted of (550) students from the University of Baghdad, the University of Technology, Al-Mustansiriyah University, and the Iraqi University for the academic year (2019/2020). The assumptions of item response theory (unidimensionality, local independence, item characteristic curves, speed factor, and application) were verified, the data were analysed according to the generalized partial credit model, and limits
In this paper, two parameters of the Exponential distribution were estimated using the Bayesian estimation method under three different loss functions: the squared error loss function, the precautionary loss function, and the entropy loss function. The Exponential and Gamma distributions were assumed as the priors of the scale parameter γ and the location parameter δ, respectively. In the Bayesian estimation, maximum likelihood estimators were used as the initial estimators, and the Tierney-Kadane approximation was used effectively. Based on the Monte Carlo simulation method, these estimators were compared in terms of their mean squared errors (MSEs). The results showed that the Bayesian esti
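The Monte Carlo comparison methodology can be illustrated in a deliberately simplified one-parameter setting: an exponential rate with a conjugate Gamma prior under squared error loss, where the Bayes estimator is the posterior mean in closed form. The paper's two-parameter model and its Tierney-Kadane step are not reproduced here, and the prior hyperparameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true, n, reps = 2.0, 20, 2000
a, b = 2.0, 1.0                       # hypothetical Gamma(a, b) prior

mse_mle, mse_bayes = 0.0, 0.0
for _ in range(reps):
    x = rng.exponential(scale=1.0 / theta_true, size=n)
    mle = n / x.sum()                 # MLE of the rate theta
    bayes = (a + n) / (b + x.sum())   # posterior mean = Bayes estimator
                                      # under squared error loss
    mse_mle += (mle - theta_true) ** 2
    mse_bayes += (bayes - theta_true) ** 2

mse_mle /= reps
mse_bayes /= reps
print(mse_bayes < mse_mle)
```

Averaging squared errors over many replications is exactly the MSE criterion used to rank the estimators in the simulation study.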