This book is intended as a textbook for an undergraduate course in multivariate analysis, designed for use in a semester system. To achieve its goals, the book is divided into the following chapters. Chapter One introduces matrix algebra. Chapter Two is devoted to the solution of systems of linear equations, together with quadratic forms and characteristic roots and vectors. Chapter Three discusses partitioned matrices and how to obtain the inverse, Jacobian, and Hessian matrices. Chapter Four deals with the multivariate normal distribution (MVN). Chapter Five concerns joint, marginal, and conditional normal distributions, independence, and correlations. Many solved examples are included in this book, in addition to a variety of unsolved related problems…
In this research, we studied multiple linear regression models for two variables in the presence of autocorrelation among the error-term observations, when the error follows the general logistic distribution. The autoregression model is used in studying and analyzing the relationship between the variables, and through this relationship forecasts of the variables' values are made. A simulation technique is used to compare the estimation methods (generalized least squares, robust M, and Laplace) according to the mean square error criterion, for different sample sizes (20, 40, 60, 80, 100, 120). The robust M method proved to be the best method…
Statistics plays an important role in studying the characteristics of diverse populations. Using statistical methods, the researcher can make appropriate decisions to reject or accept statistical hypotheses. In this paper, a statistical analysis of data on variables related to patients infected with the Coronavirus was conducted using multivariate analysis of variance (MANOVA), showing the effect of these variables.
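As a minimal, hypothetical illustration of the MANOVA machinery the abstract refers to (not the paper's Coronavirus data), the sketch below computes Wilks' lambda for a one-way layout with numpy; the group sizes and mean shift are invented for the example.

```python
import numpy as np

def wilks_lambda(groups):
    """Wilks' lambda for one-way MANOVA: det(W) / det(T), where W is the
    pooled within-group scatter matrix and T the total scatter matrix.
    Values near 0 indicate strong group separation; near 1, none."""
    all_x = np.vstack(groups)
    grand_mean = all_x.mean(axis=0)
    W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    T = (all_x - grand_mean).T @ (all_x - grand_mean)
    return np.linalg.det(W) / np.linalg.det(T)

rng = np.random.default_rng(0)
g1 = rng.normal(0.0, 1.0, size=(30, 3))   # 30 cases, 3 response variables
g2 = rng.normal(1.5, 1.0, size=(30, 3))   # shifted group mean -> small lambda
lam = wilks_lambda([g1, g2])
```

The statistic is then referred to an F approximation (Rao's transformation) to test the null hypothesis of equal group mean vectors.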
The Pareto distribution is used in many economic, financial, and social applications. It is used to study income and wealth, settlement in cities and villages, and the sizes of oil wells, as well as in the field of communication, through the speed of downloading files from the Internet according to their sizes. It is also used in mechanical engineering as one of the model distributions for failure, stress, and durability. Given the practical importance of this distribution on the one hand, and the scarcity of sources and statistical research that deal with it on the other, this research addresses some of its statistical characteristics, such as the derivation of its mathematical function and probability density…
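The distribution in question is easy to sketch directly: the Pareto density is f(x) = αx_m^α / x^{α+1} for x ≥ x_m, and inverse-CDF sampling gives X = x_m / U^{1/α}. The parameter values below are illustrative only, not taken from the paper.

```python
import numpy as np

def pareto_pdf(x, alpha, xm):
    """Pareto density: f(x) = alpha * xm^alpha / x^(alpha+1) for x >= xm."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= xm, alpha * xm**alpha / x**(alpha + 1), 0.0)

def pareto_rvs(alpha, xm, size, rng):
    """Inverse-CDF sampling: X = xm / U^(1/alpha) with U ~ Uniform(0, 1)."""
    u = rng.uniform(size=size)
    return xm / u**(1.0 / alpha)

rng = np.random.default_rng(1)
samples = pareto_rvs(alpha=3.0, xm=2.0, size=200_000, rng=rng)
mean_est = samples.mean()   # theoretical mean = alpha*xm/(alpha-1) = 3.0
```

The heavy right tail (polynomial decay) is exactly what makes the distribution suit income, well-size, and file-download-time data.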
This research aims to suggest formulas to estimate carry-over effects in the two-period change-over design, and then all other effects in the analysis of variance of this design, and to find the efficiency of the two-period change-over design relative to another design (say, the completely randomized design).
The goal of the research is to find the optimal choice in testing the appropriate cross-over design for the experiment the researcher is carrying out, under the assumption that the treatments have carry-over effects in the periods after the application period (which is often assumed to be the first period). The comparison is between the double cross-over design and the cross-over design with an extra period. The similarities and differences between the two designs were studied by measuring the relative efficiency (RE) of the experiment.
This research deals with a shrinkage method for principal components similar to the one used in multiple regression, the Least Absolute Shrinkage and Selection Operator (LASSO). The goal here is to build uncorrelated linear combinations from only a subset of the explanatory variables that may suffer from multicollinearity, instead of taking the whole number of them, say K. This shrinkage forces some coefficients to equal zero, after restricting them with a "tuning parameter", say t, which balances the amounts of bias and variance on one side while not letting the percentage of variance explained by these components fall below an acceptable level. This was shown by the MSE criterion in the regression case and the percentage of explained…
This study is dedicated to solving the multicollinearity problem in the general linear model by using the ridge regression method. The basic formulation of this method, with suggested forms of the ridge parameter, is applied to Gross Domestic Product data for Iraq; these data follow a normal distribution. The best linear regression model is obtained after solving the multicollinearity problem with ten suggested values of the ridge parameter k.
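The ridge estimator the study applies has the closed form β̂(k) = (X′X + kI)⁻¹X′y. The sketch below demonstrates it on simulated nearly collinear data; the GDP data and the paper's suggested k values are not reproduced, so the numbers here are purely illustrative.

```python
import numpy as np

def ridge(X, y, k):
    """Closed-form ridge estimator: beta = (X'X + k I)^{-1} X'y.
    k = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)       # nearly collinear with x1
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.1, size=n)

b_ols = ridge(X, y, 0.0)     # unstable under multicollinearity
b_ridge = ridge(X, y, 1.0)   # shrunk, stabilized estimate
```

Adding k to the diagonal inflates the near-zero eigenvalue of X′X caused by collinearity, trading a little bias for a large reduction in variance.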
In this paper, we derived an estimator of the reliability function for the Laplace distribution with two parameters using the Bayes method with a squared-error loss function, Jeffreys' formula, and the conditional probability of the random variable of observation. The main objective of this study is to find the efficiency of the derived Bayesian estimator compared with the maximum likelihood and moment estimators of this function, using a Monte Carlo simulation under different Laplace distribution parameters and sample sizes. The results have shown that the Bayes estimator is more efficient than the maximum likelihood and moment estimators for all sample sizes.
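The maximum likelihood side of the comparison is straightforward to sketch: for Laplace(μ, b), the reliability function is R(t) = P(X > t), and the MLEs are μ̂ = sample median and b̂ = mean absolute deviation about the median. The Bayes estimator depends on the paper's derivation and is not reproduced; the parameter values below are illustrative.

```python
import numpy as np

def laplace_reliability(t, mu, b):
    """R(t) = P(X > t) for the Laplace(mu, b) distribution."""
    z = (t - mu) / b
    return 1.0 - 0.5 * np.exp(z) if t < mu else 0.5 * np.exp(-z)

def mle_laplace(x):
    """MLEs: location = median, scale = mean absolute deviation."""
    mu_hat = np.median(x)
    b_hat = np.mean(np.abs(x - mu_hat))
    return mu_hat, b_hat

rng = np.random.default_rng(3)
x = rng.laplace(loc=0.0, scale=2.0, size=50_000)
mu_hat, b_hat = mle_laplace(x)
r_hat = laplace_reliability(3.0, mu_hat, b_hat)          # plug-in MLE of R(3)
r_true = laplace_reliability(3.0, 0.0, 2.0)              # 0.5 * exp(-1.5)
```

A Monte Carlo efficiency study like the paper's would repeat this over many samples and compare mean squared errors of the competing estimators of R(t).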
Average per capita GDP is an important economic indicator. Economists use this term to determine the amount of progress or decline in a country's economy; it is also used to rank countries and compare them with each other. Average per capita GDP was studied first using time series (the Box-Jenkins method) and second using linear and non-linear regression; these are among the most important and most commonly used statistical methods for forecasting because they are flexible and accurate in practice. A comparison is made to determine the better of the two methods using specific statistical criteria. The research found that the best approach is to build a model for predicting…
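As a schematic stand-in for the linear versus non-linear regression comparison (not the Box-Jenkins model, and not the actual GDP series), the sketch below fits linear and quadratic trends to a simulated curved series and compares their out-of-sample mean squared errors, the kind of criterion the abstract mentions.

```python
import numpy as np

def fit_poly_mse(t_train, y_train, degree, t_test, y_test):
    """Fit a polynomial trend of the given degree, return holdout MSE."""
    coeffs = np.polyfit(t_train, y_train, degree)
    pred = np.polyval(coeffs, t_test)
    return np.mean((y_test - pred) ** 2)

rng = np.random.default_rng(4)
t = np.arange(40, dtype=float)
# simulated income series with accelerating (non-linear) growth
y = 100 + 2.0 * t + 0.05 * t**2 + rng.normal(scale=1.0, size=40)

mse_lin = fit_poly_mse(t[:30], y[:30], 1, t[30:], y[30:])    # linear trend
mse_quad = fit_poly_mse(t[:30], y[:30], 2, t[30:], y[30:])   # non-linear trend
```

Holding out the last observations mimics forecast evaluation: the model with the smaller holdout MSE is preferred.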
In this paper, we designed a new efficient stream cipher cryptosystem that depends on a chaotic map to encrypt (decrypt) different types of digital images. The designed encryption system passed all the basic efficiency criteria (such as randomness, MSE, PSNR, histogram analysis, and key space) that were applied to the key extracted from the random generator, as well as to the digital images after completing the encryption process.
Artificial intelligence algorithms have been used in recent years in many scientific fields. We suggest employing the TABU search algorithm to find the best estimate of the semi-parametric regression function with measurement errors in the explanatory variables and the dependent variable. Measurement errors, rather than exact measurements, appear frequently in fields such as sport, chemistry, the biological sciences, medicine, and epidemiological studies.
The main contribution of this research is to describe how to analyze the complex service systems with queueing characteristics found in Baghdad General Teaching Hospital using network techniques, namely the Q-GERT technique, an abbreviation of Queuing theory - Graphical Evaluation and Review Technique. The flow of patients within the system is traced; after applying this approach, the system is represented as a probabilistic network diagram and solved…
In this research we study the multilevel (partial pooling) model, one of the most important and most widely applied multilevel models in data analysis. This model is characterized by the fact that the treatments take a hierarchical or structural form. Full maximum likelihood (FML) was used to estimate the parameters of the partial pooling model (fixed and random), and the models were compared for preference. The application was to suspended dust data for Iraq covering four and a half years; eight stations were selected at random from the stations in Iraq. We use Akaike's Information Criterion…
Canonical correlation analysis is one of the common methods for analyzing data and understanding the relationship between two sets of variables under study, as it depends on analyzing the variance matrix or the correlation matrix. Researchers resort to many methods to estimate the canonical correlation (CC); some are biased by outliers, while others are resistant to them. In addition, there are criteria that check the efficiency of the estimation methods.
In our research, we dealt with robust estimation methods that depend on the correlation matrix in the analysis process to obtain a robust canonical correlation coefficient, namely the biweight method…
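For reference, the classical (non-robust) canonical correlations can be computed as the singular values of the whitened cross-covariance, S_xx^{-1/2} S_xy S_yy^{-1/2}; a robust variant such as the biweight method would substitute a robust correlation matrix into the same pipeline. The data below are simulated for illustration, with one shared latent factor linking the two sets.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations as singular values of the whitened
    cross-covariance Sxx^{-1/2} Sxy Syy^{-1/2} (classical estimator)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx, Syy, Sxy = Xc.T @ Xc, Yc.T @ Yc, Xc.T @ Yc
    ex, Ux = np.linalg.eigh(Sxx)
    ey, Uy = np.linalg.eigh(Syy)
    Wx = Ux @ np.diag(ex**-0.5) @ Ux.T      # Sxx^{-1/2}
    Wy = Uy @ np.diag(ey**-0.5) @ Uy.T      # Syy^{-1/2}
    return np.linalg.svd(Wx @ Sxy @ Wy, compute_uv=False)

rng = np.random.default_rng(8)
z = rng.normal(size=(500, 1))               # shared latent factor
X = np.hstack([z + 0.3 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
Y = np.hstack([z + 0.3 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
rho = canonical_correlations(X, Y)          # rho[0] large, rho[1] near zero
```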
Examination of skewness makes researchers more aware of the importance of accurate statistical analysis. Undoubtedly, most phenomena contain a certain percentage of skewness, which results in what is called "asymmetry" and, consequently, in the importance of the skew-normal family. The epsilon skew normal distribution ESN(μ, σ, ε) is one of the probability distributions that provide a more flexible model, because the skewness parameter allows fluctuation from a normal to a skewed distribution. Theoretically, estimating the parameters of a linear regression model with a non-zero mean error term is considered a major challenge, as there is no explicit formula to calculate…
Some experiments need to know the extent of their usefulness in order to decide whether to continue providing them. This is done through the fuzzy regression discontinuity model, where the Epanechnikov and triangular kernels were used to estimate the model, with data generated in a Monte Carlo experiment and the results compared. It was found that the Epanechnikov kernel has the smaller mean squared error.
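The two kernels mentioned above, and the way a kernel turns into a regression estimate, can be sketched with a generic Nadaraya-Watson smoother on simulated data. This is an illustration of kernel weighting only, not the paper's fuzzy regression discontinuity design; the bandwidth and target function are invented.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def triangular(u):
    """Triangular kernel: 1 - |u| on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1, 1 - np.abs(u), 0.0)

def nw_smooth(x, y, grid, h, kernel):
    """Nadaraya-Watson estimate at each grid point with bandwidth h:
    a kernel-weighted local average of the responses."""
    w = kernel((grid[:, None] - x[None, :]) / h)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(6)
x = rng.uniform(0, 1, 300)
y = x**2 + rng.normal(scale=0.05, size=300)
grid = np.array([0.3, 0.5, 0.7])
est_epa = nw_smooth(x, y, grid, h=0.1, kernel=epanechnikov)
est_tri = nw_smooth(x, y, grid, h=0.1, kernel=triangular)
```

In a regression discontinuity setting, the same local averaging is applied separately on each side of the cutoff, and the jump between the two local fits estimates the treatment effect.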
In general, researchers, and statisticians in particular, usually use non-parametric regression models when parametric methods fail to analyze the models precisely. In such cases the parametric methods are of no use, so researchers turn to non-parametric methods, which are also valued for their ease of programming. Non-parametric methods can also be used to suggest a parametric regression model for subsequent use. Moreover, an advantage of non-parametric methods is that they address the problem of multicollinearity between explanatory variables combined with nonlinear data. This problem can be solved by using kernel ridge regression, which depends on…
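A minimal numpy sketch of kernel ridge regression follows, using an RBF kernel (a common choice, assumed here). The dual coefficients solve (K + λI)α = y, and predictions are kernel-weighted sums of α; the ridge term λ plays the same stabilizing role as in linear ridge regression.

```python
import numpy as np

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y with an RBF
    kernel K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return alpha, X, gamma

def kernel_ridge_predict(model, Xnew):
    """Predict as kernel-weighted sums of the dual coefficients."""
    alpha, Xtr, gamma = model
    sq = np.sum((Xnew[:, None, :] - Xtr[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq) @ alpha

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(120, 1))
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.1, size=120)  # nonlinear signal
model = kernel_ridge_fit(X, y, lam=0.05, gamma=2.0)
pred = kernel_ridge_predict(model, X)
mse = np.mean((pred - y) ** 2)
```

Because the fit depends on X only through the kernel matrix, collinear columns do not destabilize the solution the way they do in ordinary least squares.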
In this research, a 4×4 factorial experiment applied in a randomized complete block design was studied. The design of experiments is used to study the effect of treatments on experimental units and thus to obtain data representing the experiment's observations. Applying these treatments under different environmental and experimental conditions causes noise that affects the observed values and thus increases the mean square error of the experiment. To reduce this noise, multiple wavelet shrinkage was used as a filter for the observations, by suggesting an improved threshold that takes into account the different transformation levels based on the logarithm of the…
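Wavelet shrinkage of noisy observations can be sketched with a one-level Haar transform and soft thresholding. This is a generic scheme with the standard universal threshold, not the paper's improved level-dependent threshold, and the simulated signal is invented for the example.

```python
import numpy as np

def soft_threshold(w, t):
    """Soft-threshold detail coefficients: shrink toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def haar_denoise(y, t):
    """One-level Haar transform, threshold the details, invert."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)          # approximation coefficients
    d = soft_threshold((y[0::2] - y[1::2]) / np.sqrt(2), t)  # shrunk details
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(10)
signal = np.repeat([0.0, 5.0, 2.0, 7.0], 64)      # piecewise-constant truth
noisy = signal + rng.normal(scale=1.0, size=256)
t = np.sqrt(2 * np.log(noisy.size))               # universal threshold
den = haar_denoise(noisy, t)
mse_noisy = np.mean((noisy - signal) ** 2)
mse_den = np.mean((den - signal) ** 2)
```

Multi-level schemes like the paper's repeat this on the approximation coefficients, with a threshold that varies by level.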
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data that we might come across in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters. Observation membership was also inferred and assessed for these methods. The results showed that the flexible mixture model outperformed the others…
Methods of estimating a statistical distribution have attracted many researchers when it comes to fitting a specific distribution to data. However, when the data belong to more than one component, a single popular distribution cannot be fitted to such data. To tackle this issue, mixture models are fitted by choosing the correct number of components that represent the data. This is evident in lifetime processes that are involved in a wide range of engineering applications as well as biological systems. In this paper, we introduce an application of estimating a finite mixture of inverse Rayleigh distributions within the Bayesian framework, treating the model with Markov chain Monte Carlo (MCMC). We employed the Gibbs sampler and…
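The inverse Rayleigh component itself is easy to sketch (the full Bayesian mixture with Gibbs sampling is beyond a short example): under the parameterization F(x) = exp(-θ/x²), inverse-CDF sampling and the single-component MLE θ̂ = n / Σ xᵢ⁻² follow directly. The parameterization and the θ value are assumptions for illustration.

```python
import numpy as np

def inv_rayleigh_rvs(theta, size, rng):
    """Inverse-CDF sampling from F(x) = exp(-theta / x^2):
    X = sqrt(-theta / log(U)) with U ~ Uniform(0, 1)."""
    u = rng.uniform(size=size)
    return np.sqrt(-theta / np.log(u))

def inv_rayleigh_mle(x):
    """MLE of theta: setting d(logL)/d(theta) = 0 gives n / sum(1/x^2)."""
    return len(x) / np.sum(x**-2.0)

rng = np.random.default_rng(9)
x = inv_rayleigh_rvs(theta=2.0, size=100_000, rng=rng)
theta_hat = inv_rayleigh_mle(x)
```

In the mixture setting, a Gibbs sampler alternates between drawing latent component labels for each observation and drawing each component's θ from its conditional posterior.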
Abstract:
Cluster data can be observed in the social, health, and behavioral sciences; in this type of data there is a link between the observations, and these clusters can be expressed through the relationship between measurements on units within the same group.
In this research, I estimate the reliability function of cluster data by using the seemingly unrelated…
... Show Moreلغرض تقصي وضع الطفولة في العراق كان لابد من إستعمال أدوات وأساليب إحصائية تعنى بتفسير العلاقات السببية وإتجاه تأثيراتها مع إستعمال أسلوب تصنيف للمؤثرات (المتغيرات) المهمة لرسم صورة أوضح للظاهر قيد الدراسة بحيث تكون مفيدة من خلال إستثمارها وتحديثها وتطويرها في الدراسات السكانية المستقبلية. ولذا تم استعمال أسلوبين من الأدوات الإحصائية في مجال تحليل البيانات متعدد المتغيرات وهو التحليل العنقودي والتحلي
Software-defined networking (SDN) is an innovative network paradigm offering substantial control of network operation through the network's architecture. Owing to its programmability, SDN is an ideal platform for implementing projects involving distributed applications, security solutions, and decentralized network administration in a multitenant data center environment. As its usage rapidly expands, network security threats are becoming more frequent, making SDN security a significant concern. Machine-learning (ML) techniques for intrusion detection of DDoS attacks in SDN networks utilize standard datasets and fail to cover all classification aspects, resulting in under-coverage of attack diversity. This paper proposes a hybrid…
This book is intended as a textbook for an undergraduate course in multivariate analysis, designed for use in a semester system. To achieve its goals, the book is divided into the following chapters (as in the first edition, 2019). Chapter One introduces matrix algebra. Chapter Two is devoted to the solution of systems of linear equations, together with quadratic forms and characteristic roots and vectors. Chapter Three discusses partitioned matrices and how to obtain the inverse, Jacobian, and Hessian matrices. Chapter Four deals with the multivariate normal distribution (MVN). Chapter Five concerns joint, marginal, and conditional normal distributions, independence, and correlations. Revised new chapters have been added (as the current…
In this paper, we propose a method using continuous wavelets to study multivariate fractional Brownian motion through the deviations of the transformed random process, in order to find an efficient estimate of the Hurst exponent using eigenvalue regression of the covariance matrix. The results of the simulation experiments showed that the performance of the proposed estimator was efficient in terms of bias, but the variance increases as the signal changes from short to long memory, and the MASE increases correspondingly. The estimation was carried out by calculating the eigenvalues of the variance-covariance matrix of the detail coefficients of Meyer's continuous wavelet.
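The scaling idea underlying Hurst estimation can be sketched without the wavelet machinery: for fractional Brownian motion, Var(B(t+s) − B(t)) ∝ s^{2H}, so the slope of the log standard deviation of increments against the log scale estimates H. Standard Brownian motion (H = 0.5) is used below as a check; the paper's Meyer-wavelet eigenvalue-regression approach is not reproduced here.

```python
import numpy as np

def hurst_from_deviations(path, scales):
    """Estimate H from increment scaling: sd(B(t+s) - B(t)) ~ s^H,
    so the slope of log sd versus log s is an estimate of H."""
    logs, logsd = [], []
    for s in scales:
        inc = path[s:] - path[:-s]          # overlapping increments at lag s
        logs.append(np.log(s))
        logsd.append(np.log(inc.std()))
    slope, _ = np.polyfit(logs, logsd, 1)
    return slope

rng = np.random.default_rng(7)
bm = np.cumsum(rng.normal(size=100_000))    # standard Brownian motion, H = 0.5
h_est = hurst_from_deviations(bm, scales=[1, 2, 4, 8, 16, 32])
```

Wavelet-based estimators like the paper's replace the raw increments with detail coefficients across scales, which improves robustness for long-memory signals.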