In recent years, researchers have shown increasing interest in determining the optimal sample size needed to obtain sufficiently accurate, high-precision parameter estimates when evaluating a large number of tests in the field of diagnosis simultaneously. In this research, two methods were used to determine the optimal sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model was estimated, at the sample size given by each method, using an artificial neural network (ANN), which provides a high-precision estimate suited to the type of data and the type of medical study. The probability values obtained from the artificial neural network were used to calculate the net reclassification index (NRI). A program was written for this purpose in the statistical programming language R, and the mean maximum absolute error (MME) criterion of the NRI was used to compare the sample-size determination methods across different numbers of default parameters and in light of the value of a specified error margin (ε). The most important conclusion was that the Bennett inequality method is the best at determining the optimal sample size for every number of default parameters and every error margin value.
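The two-category NRI referred to above can be computed directly from an old and a new model's predicted probabilities. The following is a minimal sketch; the 0.5 threshold and the toy data are illustrative only and are not taken from the study, which obtains its probabilities from an ANN in R:

```python
def net_reclassification_index(p_old, p_new, outcome, threshold=0.5):
    """Two-category NRI: net proportion of events reclassified upward plus
    net proportion of non-events reclassified downward, when moving from an
    old model's predicted probabilities to a new model's."""
    up_e = down_e = up_ne = down_ne = n_e = n_ne = 0
    for po, pn, y in zip(p_old, p_new, outcome):
        old_pos = po >= threshold
        new_pos = pn >= threshold
        if y == 1:                       # event
            n_e += 1
            if new_pos and not old_pos: up_e += 1
            if old_pos and not new_pos: down_e += 1
        else:                            # non-event
            n_ne += 1
            if new_pos and not old_pos: up_ne += 1
            if old_pos and not new_pos: down_ne += 1
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```

An NRI of 0 means the new model reclassifies no better than the old one; positive values favour the new model.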
When scheduling rules become incapable of tackling the variety of unexpected disruptions that frequently occur in manufacturing systems, it is necessary to develop a reactive schedule that can absorb the effects of such disruptions. Such a response requires efficient strategies, policies, and methods for controlling production and maintaining high shop performance. This can be achieved through rescheduling, defined as an essential operating function for efficiently tackling and responding to uncertainties and unexpected events. The framework proposed in this study consists of rescheduling approaches, strategies, policies, and techniques, and represents a guideline for most manufacturing companies operating
The United Nations has identified many basic climatic factors that affect the Earth's crust, the most important of which is climate change with its environmental, economic, social, and political effects. The amount of rainfall, as one of the manifestations of climate change in Iraq, therefore deserves study. This research examines the factors that affect rainfall, its overall average, the variation in rainfall amounts, the amount of yearly rain, and the variation in both yearly and monthly rainfall, using the standard deviation and yearly fluctuation. It concludes that a greater number of rainy days does not imply an increase in the amount of rain, and that rainfall amounts vary considerably across all the study areas.
In order to select the optimal method for tracking the fast time variation of a multipath Rayleigh fading channel, this paper focuses on the recursive least-squares (RLS) and extended recursive least-squares (E-RLS) algorithms. Simulations compared tracking performance and mean square error over five fast time-varying Rayleigh fading channels, with up to 100 send/receive repetitions to verify the efficiency of the algorithms, and the results show that E-RLS is the more feasible of the two.
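The RLS recursion underlying both algorithms can be sketched as follows. This is a generic textbook RLS adaptive filter, not the paper's E-RLS variant, and the noise-free FIR channel used to exercise it is illustrative:

```python
import numpy as np

def rls_track(x, d, order=4, lam=0.98, delta=100.0):
    """Standard RLS adaptive filter: adapts weights w so that w . u[n]
    tracks the desired signal d[n]. The forgetting factor lam < 1 lets the
    filter follow a time-varying channel; delta sets the (large) initial
    inverse-correlation matrix for fast initial convergence."""
    w = np.zeros(order)
    P = np.eye(order) * delta            # inverse correlation estimate
    e = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]  # [x[n], x[n-1], ..., x[n-order+1]]
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e[n] = d[n] - w @ u               # a priori error
        w = w + k * e[n]
        P = (P - np.outer(k, u @ P)) / lam
    return w, e
```

With noise-free data from a fixed 4-tap channel, the weights converge to the channel taps and the a priori error decays toward zero; a smaller `lam` trades steady-state accuracy for faster tracking of channel variation.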
The current research, entitled "The Stylistic Change in Kandinsky and Mondrian Paintings: A Comparative Analytic Study", deals with the nature of the concept of change, its mechanisms, and its constructive disciplines. The research has four chapters. The first chapter deals with the methodological framework, represented by the research problem, which concerns stylistic change and its role in activating formative disciplines. The research aims at identifying the stylistic change in Kandinsky's and Mondrian's paintings.
This study found that one of the most constructive, necessary, and cost-effective ways to meet the great challenge of rising energy prices is to develop and improve energy quality and efficiency. Such improvements have been carried out in many buildings around the world, and thermal insulation in buildings and educational facilities has become the primary tool for improving energy efficiency, enabling improvement of the indoor thermal environment quality recommended for users (students and teachers). An empirical study was conducted to calculate the fundamental values of the
Liquid-solid chromatography of Bovine Serum Albumin (BSA) on diethylaminoethyl-cellulose (DEAE-cellulose) adsorbent was carried out experimentally to study the effect of changing the influent concentration (0.125, 0.25, 0.5, and 1 mg/ml) at a constant volumetric flow rate of Q = 1 ml/min, and the effect of changing the volumetric flow rate (1, 3, 5, and 10 ml/min) at a constant influent concentration of Co = 0.125 mg/ml. A glass column of 1.5 cm I.D. and 50 cm length was packed with DEAE-cellulose adsorbent to a bed height of 7 cm. The influent was introduced into the column using a peristaltic pump, and the effluent concentration was measured using a UV spectrophotometer at 30 °C and a wavelength of 280 nm. A spread (steeper) breakthrough curve was obtained
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while discarding the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, the reflection coefficients, and the prediction error. The compressed files contain the LP coefficients and the previous sample, and are very small compared to the original signals. The compression ratio is calculated from the size of the
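The Levinson-Durbin step mentioned above can be sketched as follows; this is a generic implementation of the recursion, not the authors' program:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solves the Toeplitz normal equations for
    the LP coefficients a (a[0] = 1), the reflection coefficients k, and
    the final prediction error. r is the autocorrelation sequence
    r[0], ..., r[order] of the windowed frame."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]                            # zeroth-order prediction error
    for i in range(1, order + 1):
        # correlation of the current predictor with the next lag
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k_i = -acc / err                  # i-th reflection coefficient
        k[i - 1] = k_i
        a_prev = a.copy()
        for j in range(1, i):             # update interior coefficients
            a[j] = a_prev[j] + k_i * a_prev[i - j]
        a[i] = k_i
        err *= (1.0 - k_i ** 2)           # error shrinks at each order
    return a, k, err
```

For an AR(1)-like autocorrelation r = (1, ρ, ρ², ...), the recursion returns a[1] = −ρ, all higher coefficients zero, and a prediction error of 1 − ρ², which is a convenient sanity check.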
This research is concerned with the re-analysis of optical data (the imaginary part of the dielectric function as a function of photon energy E) of a-Si:H films prepared by Jackson et al. and Ferlauto et al. Using nonlinear regression fitting, we estimated the optical energy gap and the deviation from the Tauc model by treating the exponent p, which describes the photon-energy dependence of the momentum matrix element, as a free parameter, and by assuming a square-root density-of-states distribution. For the films prepared by Jackson et al., the value of the parameter p over the studied photon energy range is close to the value assumed by the Cody model, and the optical gap energy is also close to the value
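A nonlinear regression of this kind can be sketched with a brute-force least-squares fit. The band-edge form ε₂(E) = B (E − Eg)² E^(2p−2), which gives a Tauc-like shape for p = 0 and a Cody-like shape for p = 1, is an assumption for illustration and may differ from the paper's exact parameterization; the data below are synthetic:

```python
import numpy as np

def eps2_model(E, B, Eg, p):
    """Assumed band-edge model: eps2 = B * (E - Eg)^2 * E^(2p - 2),
    clipped to zero below the gap Eg (illustrative form only)."""
    return B * np.clip(E - Eg, 0.0, None) ** 2 * E ** (2.0 * p - 2.0)

def fit_band_edge(E, eps2, Eg_grid, p_grid):
    """Grid search over (Eg, p); for each pair the amplitude B that
    minimizes the squared residual follows in closed form, since the
    model is linear in B. Returns the best (B, Eg, p)."""
    best_resid, best = np.inf, None
    for Eg in Eg_grid:
        for p in p_grid:
            basis = np.clip(E - Eg, 0.0, None) ** 2 * E ** (2.0 * p - 2.0)
            denom = basis @ basis
            if denom == 0.0:
                continue
            B = (basis @ eps2) / denom        # linear least squares for B
            resid = np.sum((eps2 - B * basis) ** 2)
            if resid < best_resid:
                best_resid, best = resid, (B, Eg, p)
    return best
```

In practice a gradient-based nonlinear least-squares routine would replace the grid search; the grid version keeps the sketch self-contained and shows that only Eg and p require a nonlinear search.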
The efficiency of Nd:YAG laser radiation in removing debris and the smear layer from prepared root canal walls was studied. Fifty-seven extracted human single-rooted anterior teeth were divided into three groups. One group, which was not lased, served as the control. The remaining teeth were exposed to different laser parameters with respect to laser energy, repetition rate, and exposure time. For the 7 mJ laser energy setting, cleaning was maximal at a repetition rate of 3 p.p.s. with an exposure time of 3 seconds for the coronal, middle, and apical thirds. Above and below this energy level there was an overdose (melting) or an underdose (no effect). However, for the 10 mJ laser energy case, the cleaning effi