In recent years, researchers have shown increasing interest in determining the optimal sample size needed to obtain sufficiently accurate, high-precision parameter estimates when evaluating a large number of diagnostic tests simultaneously. In this research, two methods are used to determine the optimal sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is estimated, at the sample size given by each method, using an artificial neural network (ANN), which provides a high-precision estimate appropriate to the data and to this type of medical study. The probability values produced by the network are then used to calculate the net reclassification index (NRI). A program was written for this purpose in the statistical programming language R. The mean maximum absolute error (MME) criterion of the NRI was used to compare the sample-size methods across different numbers of default parameters and values of the error margin (ε). Based on these comparison criteria, the main conclusion is that the Bennett inequality method is the best for determining the optimal sample size for every number of default parameters and every error-margin value.
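The Bennett-inequality criterion above can be sketched numerically. The sketch below is a minimal illustration, not the paper's implementation: it assumes observations bounded by b around their mean with variance sigma2, a two-sided tail bound, and a hypothetical function name.

```python
import math

def bennett_sample_size(sigma2, b, eps, alpha=0.05):
    """Smallest n such that Bennett's two-sided tail bound
    2 * exp(-(n * sigma2 / b**2) * h(b * eps / sigma2)) <= alpha,
    where h(u) = (1 + u) * log(1 + u) - u."""
    u = b * eps / sigma2
    h = (1 + u) * math.log(1 + u) - u
    # bound <= alpha  <=>  n >= b**2 / (sigma2 * h) * log(2 / alpha)
    return math.ceil(b * b / (sigma2 * h) * math.log(2 / alpha))
```

Shrinking the error margin ε (here `eps`) increases the required sample size, which is the trade-off the comparison in the paper explores.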
The research emphasizes the importance of preliminary sketches in the design of any product, and accordingly the use of simulation as a tool for visual thinking in developing drawing and design skills: practicing drawing by hand, giving shape to ideas at the first stage of visualization, and training continuously in its techniques.
Hence, the research problem concerns the role of the simulation method in developing the preliminary sketches of a sample of students from the Product Design Department at the College of Design and Art, PNU, since simulation is an important visual-thinking tool that helps the designer create and produce innovative artistic works.
From the research axes, a number of findings and recommendations were reached.
In this paper, we investigate the connection between hierarchical models and the power prior distribution in quantile regression (QReg). At a given quantile level, we develop an expression for the power parameter that calibrates the power prior distribution for quantile regression to a corresponding hierarchical model. In addition, we estimate the relation between the power parameter and the quantile level via the hierarchical model. Our proposed methodology is illustrated with a real-data example.
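One common way to realize a power prior in quantile regression, shown here only as a hedged sketch (the paper's hierarchical calibration of the power parameter is not reproduced), is to discount the historical-data check loss by the power parameter `a0`; the function names and the derivative-free optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(r, tau):
    # pinball loss rho_tau(r) = r * (tau - 1{r < 0})
    return np.sum(r * (tau - (r < 0)))

def power_prior_qreg(X, y, X0, y0, tau=0.5, a0=0.5):
    """Posterior-mode sketch under an asymmetric-Laplace likelihood:
    current-data check loss plus the historical-data (X0, y0) check
    loss discounted by the power parameter a0."""
    def objective(beta):
        return (check_loss(y - X @ beta, tau)
                + a0 * check_loss(y0 - X0 @ beta, tau))
    res = minimize(objective, np.ones(X.shape[1]),
                   method="Nelder-Mead", options={"maxiter": 2000})
    return res.x
```

Setting a0 = 0 ignores the historical data entirely, while a0 = 1 pools it with the current sample at full weight.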
Abstract
The research aims to identify tax exemptions, their objectives and types; to shed light on the concept of sustainable development, its objectives, dimensions, and indicators (economic, social, and environmental); to analyze the relationship between tax exemptions and economic development; and to measure and analyze the impact of tax exemptions on economic development in Iraq for the period 2015–2021 using the NARDL model. The research problem centers on the fact that the failure to employ fiscal policy tools correctly has weakened the achievement of economic justice, which in turn leads to a failure to improve social welfare.
The paper is concerned with the statement and proof of the existence theorem for a unique solution (state vector) of couple nonlinear hyperbolic equations (CNLHEQS) via the Galerkin method (GM) with the Aubin theorem. When the continuous classical boundary control vector (CCBCV) is known, the existence theorem for a continuous classical boundary optimal control vector (CCBOCV) with equality and inequality state vector constraints (EIESVC) is stated and proved, and the existence theorem for a unique solution of the adjoint couple equations (ADCEQS) associated with the state equations is studied. The Fréchet derivative of the Hamiltonian is derived. Finally, the necessary theorem (necessary conditions, NCs) and the sufficient theorem (sufficient conditions, SCs) for optimality of the state-constrained problem are stated and proved.
The assignment model is a mathematical model that expresses an important problem facing enterprises and companies in the public and private sectors: making the appropriate decision about the best allocation of tasks to machines, jobs, or workers in order to increase profits or reduce costs and time. This model is called a multi-objective assignment model because it takes the factors of time and cost into account together, so there are two objectives for the assignment problem. It therefore cannot be solved by the usual methods, and multiple-objective programming was used to solve the problem.
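A standard scalarization of the two objectives described above, sketched here under the assumption of a simple weighted sum (the paper's exact multi-objective programming formulation is not reproduced), solves the combined matrix with the Hungarian method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def weighted_assignment(cost, time, w=0.5):
    """Scalarize cost and time with weight w and solve the resulting
    single-objective assignment problem (Hungarian method)."""
    cost = np.asarray(cost, dtype=float)
    time = np.asarray(time, dtype=float)
    combined = w * cost + (1 - w) * time
    rows, cols = linear_sum_assignment(combined)
    # report each objective separately at the chosen assignment
    return cols, cost[rows, cols].sum(), time[rows, cols].sum()
```

Sweeping the weight w over (0, 1) traces out candidate trade-offs between total cost and total time.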
Obtaining a competitive advantage is a legitimate objective that all organizations pursue, because they operate today in rapidly changing and dynamic environments, facing the changing demands of customers as well as intense competition among organizations, which requires them to secure a competitive position in their markets. To do this, they must keep building and strengthening a competitive advantage, but this is not easy: it can only be achieved by identifying and adopting a successful competitive strategy and then managing it successfully. Hence the research problem of determining the sources of the differentiation strategy and its impact on the dimensions of competitive advantage.
In this research, we study the Non-Homogeneous Poisson process, one of the most important statistical topics with a role in scientific development, as it is related to accidents that occur in reality and are modeled as Poisson processes, because the occurrence of such events is related to time, whether time changes or remains stable. Our research clarifies the Non-Homogeneous Poisson process and uses one of its models, the exponentiated-Weibull model with three parameters (α, β, σ), as a function to estimate the time rate of occurrence of earthquakes in Erbil Governorate, as the governorate is adjacent to two countries.
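A non-homogeneous Poisson process with an exponentiated-Weibull-shaped rate can be simulated by the thinning algorithm; the sketch below is illustrative only (the scaling of the intensity and all function names are assumptions, and the paper's estimation procedure is not reproduced):

```python
import math
import random

def ew_intensity(t, alpha, beta, sigma, scale=1.0):
    """Intensity proportional to the exponentiated-Weibull density
    with shape parameters alpha, beta and scale parameter sigma."""
    if t <= 0:
        return 0.0
    z = (t / sigma) ** beta
    base = 1.0 - math.exp(-z)
    f = (alpha * beta / sigma) * (t / sigma) ** (beta - 1) \
        * math.exp(-z) * base ** (alpha - 1)
    return scale * f

def simulate_nhpp(intensity, lam_max, t_end, seed=42):
    """Thinning: propose homogeneous events at rate lam_max and
    keep each with probability intensity(t) / lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            return events
        if rng.random() <= intensity(t) / lam_max:
            events.append(t)
```

The bound `lam_max` must dominate the intensity on the whole interval for the thinning step to be valid.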
The major goal of this research was to use the Euler method to determine the best starting value for eccentricity. Various heights were chosen for satellites affected by atmospheric drag. The conversion of position and velocity components into orbital elements was explained, as was the Euler integration method. The results indicated that drag deviates the satellite trajectory from a Keplerian orbit; as a result, the Keplerian orbital elements alter over time. Additionally, the analysis showed that the Euler method can only be used for low Earth orbits between 100 and 500 km and very small eccentricity (e = 0.001).
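A single explicit-Euler step for two-body motion with a simple exponential-atmosphere drag term can be sketched as follows; the drag handling and the atmosphere constants are illustrative assumptions, not the paper's values:

```python
import math

MU = 398600.4418   # km^3 s^-2, Earth's gravitational parameter
RE = 6378.137      # km, Earth's equatorial radius

def euler_step(r, v, dt, bstar=0.0, rho0=3.614e-13, h0=700.0, H=88.667):
    """One explicit-Euler step: two-body gravity plus an optional
    exponential-atmosphere drag term (toy parameter values)."""
    rn = math.sqrt(sum(c * c for c in r))
    vn = math.sqrt(sum(c * c for c in v))
    # two-body gravitational acceleration, a = -mu * r / |r|^3
    a = [-MU * c / rn**3 for c in r]
    if bstar > 0.0:
        # crude exponential atmosphere; density decays with altitude
        rho = rho0 * math.exp(-((rn - RE) - h0) / H)
        a = [ai - 0.5 * rho * bstar * vn * vi for ai, vi in zip(a, v)]
    r_new = [ri + vi * dt for ri, vi in zip(r, v)]
    v_new = [vi + ai * dt for vi, ai in zip(v, a)]
    return r_new, v_new
```

With drag switched off, a circular orbit propagated over a short arc keeps its radius nearly constant, while Euler's first-order error slowly inflates the orbit over longer spans, which is consistent with the method's restriction to short arcs and small eccentricities.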
In this paper, we present a comparison of double informative priors assumed for the parameter of the inverted exponential distribution. To estimate this parameter by Bayes estimation, two different kinds of information are combined: two different priors are selected for the parameter, and the Chi-squared–Gamma, Chi-squared–Erlang, and Gamma–Erlang distributions are assumed as double priors. The results are the derivations of these estimators under the squared error loss function with the three different double priors. Additionally, the maximum likelihood estimation method is considered for comparison.
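For a flavor of the Bayesian machinery involved, the sketch below uses a single Gamma prior rather than the paper's double priors (which would multiply two prior densities); the Gamma(a, b) conjugate form and the function names are assumptions.

```python
def bayes_inverted_exp(data, a, b):
    """Squared-error-loss Bayes estimator (posterior mean) of theta for
    the inverted exponential density f(x|theta) = (theta/x^2) e^{-theta/x}
    under a Gamma(a, b) prior (shape a, rate b).  The posterior is
    Gamma(n + a, b + sum(1/x_i))."""
    n = len(data)
    t = sum(1.0 / x for x in data)   # sufficient statistic
    return (n + a) / (b + t)

def mle_inverted_exp(data):
    """Maximum-likelihood estimator: theta_hat = n / sum(1/x_i)."""
    return len(data) / sum(1.0 / x for x in data)
```

As the sample size grows, the prior terms a and b are dominated by n and the sufficient statistic, so the Bayes estimator converges to the MLE.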