This book is the second edition of a textbook intended for undergraduate and postgraduate courses in mathematical statistics. To achieve its goals, the book is divided into the following chapters. Chapter One reviews events and probability. Chapter Two is devoted to random variables of the two types, discrete and continuous, together with definitions of the probability mass function, probability density function and cumulative distribution function. Chapter Three discusses mathematical expectation and its special forms, such as moments, the moment generating function and other related topics. Chapter Four deals with special discrete distributions (Discrete Uniform, Bernoulli, Binomial, Poisson, Geometric, Negative Binomial and Hypergeometric) with their formulas for the p.m.f., C.D.F. and m.g.f. Chapter Five deals with special continuous distributions (Uniform, Normal, Exponential, Gamma and Beta) with their formulas for the p.d.f., C.D.F. and m.g.f. Many solved examples are included (for instance, obtaining the mean and variance of distributions via the m.g.f.). Chapter Six introduces univariate discrete and continuous transformations, i.e., one-dimensional variables and the probability distributions they yield. Chapter Seven is devoted to truncation of distributions from the left, right or both sides, as well as the probability distributions of order statistics. Chapter Eight discusses mathematical features of joint, marginal and conditional distributions, as well as independence via the covariance and correlation of bivariate distributions. Chapter Nine deals with special topics such as obtaining the distribution of transformations of multidimensional random variables by using the moment generating function (m.g.f.) and cumulative distribution function (C.D.F.). Many solved examples (about 100) are included in the book, in addition to a variety of unsolved problems (about 150) at the end of each chapter to enrich readers' statistical knowledge.
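As a brief illustration of the m.g.f. technique mentioned above (a standard textbook identity, not a quotation from this book): for a random variable X with m.g.f. M_X(t) = E[e^(tX)],

    E[X] = M_X'(0),      Var(X) = M_X''(0) − (M_X'(0))².

For example, for the Exponential distribution with scale β, M_X(t) = (1 − βt)^(−1) for t < 1/β, which gives E[X] = β and Var(X) = 2β² − β² = β².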
In this paper we present the theoretical foundation of forward error analysis of numerical algorithms under: • approximations in "built-in" functions; • rounding errors in floating-point arithmetic operations; • perturbations of data. The error analysis is based on the linearization method. The fundamental tools of forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations and the associated condition numbers, which constitute optimal bounds on the possible cumulative round-off errors. The condition numbers enable simple, general, quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination and have proved to be a very effective means of both a priori
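A minimal NumPy sketch (not taken from the paper) of the kind of a priori bound that a condition number provides when solving a linear system by Gaussian elimination; the matrix, right-hand side and perturbation below are made-up illustrations.

    import numpy as np

    # Illustrative only: forward error bound for solving A x = b under a small
    # perturbation of the data, using the condition number kappa(A).
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = np.linalg.solve(A, b)                    # unperturbed solution

    dA = 1e-8 * np.random.randn(2, 2)            # small perturbation of the data
    x_pert = np.linalg.solve(A + dA, b)          # solution of the perturbed system

    kappa = np.linalg.cond(A)                    # condition number in the 2-norm
    rel_err = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
    bound = kappa * np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)  # first-order a priori bound

    print(f"kappa(A) = {kappa:.2e}, relative error = {rel_err:.2e}, bound ~ {bound:.2e}")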
Blockchain is an innovative technology that has gained interest in all sectors in the era of digital transformation, managing transactions and saving them in a database. With increasing financial transactions and a rapidly developing society with growing businesses, many people looking for the dream of a better, financially independent life stray from large corporations and organizations to form startups and small businesses. Recently, the increasing demand on employees and institutes to prepare and manage contracts, papers and the verification process, in addition to human mistakes, has led to the emergence of the smart contract. The smart contract has been developed to save time and provide more confidence while dealing, as well a
Values permeate the lives of individuals and groups and are bound up with the very sense of life itself, because they are closely linked to the motives of behavior, hopes and goals; it can be said that values are everything. The concept of values is one of the concepts that have received the attention of researchers from different disciplines. This has resulted in a kind of confusion and variation in use from one specialization to another, and in multiple uses within a single specialization. Therefore, there is no uniform definition of values, because they relate to individuals, and individuals differ in many things, such as perception,
The use of parametric models and the subsequent estimation methods requires that many primary conditions be met for those models to represent the population under study adequately; this has prompted researchers to search for models more flexible than parametric models, namely nonparametric models.
In this manuscript, the so-called Nadaraya-Watson estimator is compared in two cases (fixed versus variable bandwidth) through simulation with different models and sample sizes. The simulation experiments showed that, for the first and second models, NW with fixed bandwidth fo
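A minimal sketch (not the authors' code) of the Nadaraya-Watson estimator with a fixed bandwidth, using a Gaussian kernel; the bandwidth value and test data below are illustrative assumptions.

    import numpy as np

    def nadaraya_watson(x_grid, x, y, h):
        """Nadaraya-Watson kernel regression with a fixed (global) bandwidth h,
        using a Gaussian kernel; returns the estimated regression curve on x_grid."""
        u = (x_grid[:, None] - x[None, :]) / h      # scaled distances to observations
        w = np.exp(-0.5 * u**2)                     # Gaussian kernel weights
        return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

    # Illustrative data (assumed, not from the manuscript)
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 2 * np.pi, 200)
    y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
    x_grid = np.linspace(0, 2 * np.pi, 100)
    m_hat = nadaraya_watson(x_grid, x, y, h=0.3)    # fixed bandwidth h = 0.3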
In this research, several estimators are introduced. These estimators are closely related to the hazard function and use one of the nonparametric methods, namely the kernel function, for censored data with varying bandwidth and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most of the cases, the results show that the local bandwidth is the best for all the
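A minimal sketch (assumptions: an Epanechnikov kernel, a single global bandwidth, no boundary correction, and made-up censored data) of one common kernel hazard estimator for right-censored data, obtained by smoothing the Nelson-Aalen increments; it is not the proposed function from this research.

    import numpy as np

    def kernel_hazard(t_grid, times, events, h):
        """Kernel-smoothed hazard estimate for right-censored data with one global
        bandwidth h: Nelson-Aalen increments d_i / Y(t_i) smoothed with an
        Epanechnikov kernel.  Boundary correction is omitted in this sketch."""
        order = np.argsort(times)
        t, d = times[order], events[order]
        n = len(t)
        at_risk = n - np.arange(n)                 # number still at risk just before t_i
        increments = d / at_risk                   # Nelson-Aalen jump sizes
        u = (t_grid[:, None] - t[None, :]) / h
        k = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)   # Epanechnikov kernel
        return (k * increments[None, :]).sum(axis=1) / h

    # Illustrative censored sample (assumed): exponential lifetimes, uniform censoring
    rng = np.random.default_rng(1)
    life = rng.exponential(scale=2.0, size=300)
    cens = rng.uniform(0, 4, size=300)
    times = np.minimum(life, cens)
    events = (life <= cens).astype(float)
    grid = np.linspace(0.1, 3.0, 50)
    haz = kernel_hazard(grid, times, events, h=0.5)   # global bandwidth h = 0.5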
The issue of image captioning, which involves automatically generating text that describes an image's visual information, has become feasible with developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach uses pre-trained Convolutional Neural Network (CNN) models combined with Long Short-Term Memory (LSTM) networks to generate image captions. The process includes two stages. The first stage entails training the CNN-LSTM models using baseline hyper-parameters, and the second stage encompasses training CNN-LSTM models by optimizing and adjusting the hyper-parameters of
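A minimal PyTorch sketch of the general CNN encoder plus LSTM decoder pattern for captioning; it is not the authors' architecture, and the tiny stand-in CNN, layer sizes and vocabulary size are illustrative assumptions replacing a pre-trained model.

    import torch
    import torch.nn as nn

    class CaptionModel(nn.Module):
        """Toy CNN encoder + LSTM decoder for image captioning (illustrative only)."""
        def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256):
            super().__init__()
            # Small stand-in encoder; in practice a pre-trained CNN would be used.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, embed_dim),
            )
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, images, captions):
            feats = self.encoder(images).unsqueeze(1)       # (B, 1, embed_dim)
            tokens = self.embed(captions)                   # (B, T, embed_dim)
            inputs = torch.cat([feats, tokens], dim=1)      # image feature as first "token"
            hidden, _ = self.lstm(inputs)
            return self.out(hidden)                         # logits over the vocabulary

    # Illustrative forward pass with random images and token indices
    model = CaptionModel()
    logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))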
This paper deals with constructing a mixed probability distribution from the Exponential distribution with scale parameter (β) and the Gamma distribution with parameters (2, β), with mixing proportions ( . First, the probability density function (p.d.f.), the cumulative distribution function (c.d.f.) and the reliability function are obtained. The parameters of the mixed distribution, ( , β), are estimated by three different methods: maximum likelihood, the method of moments, and a proposed method, the Differential Least Square Method (DLSM). The comparison is carried out using a simulation procedure, and all the results are presented in tables.
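For reference, a standard form of such a mixture (the mixing proportion is written here as p, since the original value is elided) has density and reliability function

    f(x) = p·(1/β)·e^(−x/β) + (1 − p)·(x/β²)·e^(−x/β),      x > 0,  0 ≤ p ≤ 1,
    R(x) = p·e^(−x/β) + (1 − p)·(1 + x/β)·e^(−x/β),

where the first term comes from the Exponential(β) component and the second from the Gamma(2, β) component.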
Industrial characteristics calculations concentrating on the physical properties of the breakdown voltage in SF6 and CF4 gases and their mixtures at different concentrations are presented in our work. The calculations are carried out using an improved modern code implemented on Windows. Our results show good agreement with other published experimental data.
Advertising is one of the media's most efficient persuasive communicative activities, designed to market different ideas and products with the aim of influencing consumers' perception of goods and services. The present study sheds light on the most prominent rhetorical devices that constitute the persuasive structure of Hebrew advertisements published in various media outlets. The study is conducted by analyzing the linguistic structure of the advertising texts according to the analytic and descriptive approach, in order to identify the characteristics and functions of the oratorical devices used in the advertising industry. The research shows that most of the advertisements are written in slang, and this is due to th