In recent years, researchers have shown increased interest in determining the optimum sample size needed to obtain sufficiently accurate estimation and high-precision parameters when evaluating a large number of diagnostic tests simultaneously. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is estimated, for the sample size given by each method, on high-dimensional data using an artificial neural network (ANN), which gives a high-precision estimate appropriate to the data type and the type of medical study. The probabilistic values obtained from the artificial neural network are used to calculate the net reclassification index (NRI). A program was written for this purpose in the statistical programming language R, where the mean maximum absolute error (MME) criterion of the net reclassification index (NRI) was used to compare the sample-size determination methods across different numbers of assumed parameters and a given error-margin value (ε). The main conclusion was that the Bennett inequality method is the best at determining the optimum sample size with respect to the number of assumed parameters and the error-margin value.
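As a rough illustration of the reclassification measure used here, the following is a minimal R sketch that computes the continuous net reclassification index (NRI) from two vectors of predicted event probabilities; the function name, simulated outcome, and probability vectors are illustrative assumptions, not the study's data or program.

```r
# Minimal sketch: continuous net reclassification index (NRI) from two sets
# of predicted event probabilities (hypothetical inputs, not the study data).
nri <- function(y, p_old, p_new) {
  up   <- p_new > p_old            # predictions that moved upward
  down <- p_new < p_old            # predictions that moved downward
  nri_events    <- mean(up[y == 1]) - mean(down[y == 1])
  nri_nonevents <- mean(down[y == 0]) - mean(up[y == 0])
  nri_events + nri_nonevents
}

set.seed(1)
y     <- rbinom(200, 1, 0.3)                 # simulated binary outcome
p_old <- runif(200)                          # baseline model probabilities
p_new <- plogis(qlogis(p_old) + 0.5 * y)     # probabilities from the new model
nri(y, p_old, p_new)
```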
The efficiency of Nd:YAG laser radiation in removing debris and the smear layer from prepared root canal walls was studied. Fifty-seven extracted human single-rooted anterior teeth were divided into three groups. A group that was not lased served as the control group. The remaining teeth were exposed to different laser parameters with respect to laser energy, repetition rate, and exposure time. For the parameter set with 7 mJ laser energy, cleaning was maximal at a repetition rate of 3 p.p.s. and an exposure time of 3 seconds for the coronal, middle, and apical thirds. Above and below this energy level, there was an overdose (melting) or an underdose (no effect). Nevertheless, for the 10 mJ laser energy case, the cleaning efficiency
This research is concerned with the re-analysis of optical data (the imaginary part of the dielectric function as a function of photon energy E) of a-Si:H films prepared by Jackson et al. and Ferlauto et al. Using nonlinear regression fitting, we estimated the optical energy gap and the deviation from the Tauc model by treating the exponent p, which describes the photon-energy dependence of the momentum matrix element, as a free parameter, while assuming the density-of-states distribution to be a square-root function. It is observed for the films prepared by Jackson et al. that the value of the parameter p over the photon energy range is close to the value assumed by the Cody model, and the optical gap energy is also close to the value
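As an illustration of the kind of nonlinear regression fit described here, the sketch below fits a generalized Tauc/Cody expression eps2(E) = B (E - Eg)^2 E^(p - 2) to simulated data with R's nls(); the functional form, variable names, and data are assumptions made for illustration rather than the paper's exact model or measurements (under this parametrization p = 0 recovers the Tauc form eps2 ∝ (E - Eg)^2 / E^2 and p = 2 the Cody form eps2 ∝ (E - Eg)^2).

```r
# Illustrative sketch (assumed parametrization, simulated data): fit
# eps2(E) = B * (E - Eg)^2 * E^(p - 2) above the gap by nonlinear regression.
set.seed(2)
E    <- seq(1.6, 3.2, by = 0.02)                   # photon energy (eV)
Eg0  <- 1.7; B0 <- 40; p0 <- 1.5                   # "true" values for the simulation
eps2 <- B0 * pmax(E - Eg0, 0)^2 * E^(p0 - 2) + rnorm(length(E), sd = 0.05)

dat <- data.frame(E = E, eps2 = eps2)[E > 1.75, ]  # fit only above the gap
fit <- nls(eps2 ~ B * (E - Eg)^2 * E^(p - 2),
           data  = dat,
           start = list(B = 30, Eg = 1.6, p = 1))
summary(fit)$coefficients                          # estimates of B, Eg and p
```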
In order to select the optimal method for tracking a fast time-varying multipath Rayleigh fading channel, this paper focuses on the recursive least-squares (RLS) and extended recursive least-squares (E-RLS) algorithms and concludes that E-RLS is more feasible, based on simulation comparisons of tracking performance and mean square error over five fast time-varying Rayleigh fading channels, with the send/receive experiment repeated up to 100 times to confirm the efficiency of these algorithms.
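For reference, the following is a minimal R sketch of a standard exponentially weighted RLS update tracking a single time-varying channel coefficient from known input and received samples; the signal model, forgetting factor, and variable names are illustrative assumptions rather than the paper's simulation setup.

```r
# Minimal RLS tracker for one time-varying channel coefficient h[n]
# in the model y[n] = h[n] * x[n] + noise (illustrative assumptions).
set.seed(3)
N      <- 500
x      <- rnorm(N)                                  # known transmitted samples
h_true <- cos(2 * pi * (1:N) / 200)                 # slowly varying "channel"
y      <- h_true * x + rnorm(N, sd = 0.1)           # received samples

lambda <- 0.95                                      # forgetting factor
P      <- 1000                                      # inverse correlation (scalar case)
h_hat  <- numeric(N)
h      <- 0
for (n in 1:N) {
  k <- P * x[n] / (lambda + x[n] * P * x[n])        # gain
  e <- y[n] - h * x[n]                              # a priori error
  h <- h + k * e                                    # coefficient update
  P <- (P - k * x[n] * P) / lambda                  # update inverse correlation
  h_hat[n] <- h
}
mean((h_hat - h_true)^2)                            # tracking mean square error
```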
This study found that one of the most effective and cost-efficient ways to meet the challenge of rising energy prices is to develop and improve energy quality and efficiency. Such improvements have been carried out in many buildings around the world, and thermal insulation of buildings and educational facilities has become the primary tool for improving energy efficiency, making it possible to improve the quality of the internal thermal environment recommended for users (students and teachers). An empirical study was conducted to calculate the fundamental values of the
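Since the abstract is truncated before the computed quantities are named, the following is only a generic R sketch of a calculation commonly used in building insulation studies: the overall heat-transfer coefficient (U-value) of a composite wall from layer thicknesses and thermal conductivities. The layer values and surface resistances are hypothetical.

```r
# Generic U-value calculation for a composite wall (hypothetical layers):
# U = 1 / (R_si + sum(thickness / conductivity) + R_se)   [W/m^2.K]
thickness    <- c(plaster = 0.02, brick = 0.24, insulation = 0.05)   # m
conductivity <- c(plaster = 0.70, brick = 0.80, insulation = 0.035)  # W/m.K
R_si <- 0.13; R_se <- 0.04                       # surface resistances, m^2.K/W

R_total <- R_si + sum(thickness / conductivity) + R_se
U <- 1 / R_total
U                                                # overall heat-transfer coefficient
```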
In this paper, we discuss the performance of Bayesian computational approaches for estimating the parameters of a logistic regression model. Markov chain Monte Carlo (MCMC) algorithms were the base estimation procedure. We present two algorithms, Random Walk Metropolis (RWM) and Hamiltonian Monte Carlo (HMC), and we apply these approaches to a real data set.
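As a concrete illustration of the first of these two samplers, below is a minimal R sketch of Random Walk Metropolis applied to the log-posterior of a logistic regression with a weak Gaussian prior; the simulated data, prior scale, and proposal standard deviation are illustrative assumptions, not the paper's data or tuning.

```r
# Minimal Random Walk Metropolis for logistic-regression coefficients
# (simulated data and tuning constants are illustrative assumptions).
set.seed(4)
n <- 200
X <- cbind(1, rnorm(n))                         # intercept + one covariate
beta_true <- c(-0.5, 1.2)
y <- rbinom(n, 1, plogis(X %*% beta_true))

log_post <- function(beta) {                    # log-likelihood + N(0, 10^2) prior
  eta <- X %*% beta
  sum(y * eta - log1p(exp(eta))) + sum(dnorm(beta, 0, 10, log = TRUE))
}

n_iter <- 5000
draws  <- matrix(NA, n_iter, 2)
beta   <- c(0, 0)
lp     <- log_post(beta)
for (i in 1:n_iter) {
  prop    <- beta + rnorm(2, sd = 0.2)          # symmetric random-walk proposal
  lp_prop <- log_post(prop)
  if (log(runif(1)) < lp_prop - lp) {           # Metropolis acceptance step
    beta <- prop; lp <- lp_prop
  }
  draws[i, ] <- beta
}
colMeans(draws[-(1:1000), ])                    # posterior means after burn-in
```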
This research deals with an unusual approach to analyzing simple linear regression via linear programming using the two-phase method known in operations research (O.R.). The estimate is found by solving an optimization problem after adding artificial variables Ri. Another method for analyzing simple linear regression is also introduced, in which the conditional median of (y) is considered by minimizing the sum of absolute residuals, instead of the conditional mean of (y), which depends on minimizing the sum of squared residuals; this is called median regression. Also, an iteratively reweighted least squares procedure based on the absolute residuals as weights is performed here as another method to
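To make the last of these methods concrete, here is a minimal R sketch of iteratively reweighted least squares for median (least absolute deviations) regression, where each observation's weight is the reciprocal of its current absolute residual; the simulated data, tolerance, and iteration cap are illustrative assumptions.

```r
# Minimal IRLS sketch for median (least absolute deviations) regression:
# weight_i = 1 / |residual_i|, refit weighted least squares until convergence.
set.seed(5)
x <- runif(100, 0, 10)
y <- 2 + 0.5 * x + rt(100, df = 2)               # heavy-tailed noise (illustrative)

beta <- coef(lm(y ~ x))                          # ordinary least squares start
for (it in 1:50) {
  r <- y - beta[1] - beta[2] * x
  w <- 1 / pmax(abs(r), 1e-6)                    # guard against zero residuals
  beta_new <- coef(lm(y ~ x, weights = w))
  if (max(abs(beta_new - beta)) < 1e-8) break    # stop when estimates stabilize
  beta <- beta_new
}
beta                                             # median-regression coefficients
```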
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Signal compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample. These files are very small compared to the size of the original signals. The compression ratio is calculated from the size of the
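To illustrate the prediction step named here, the following is a minimal R sketch of the Levinson-Durbin recursion, which turns the autocorrelation values of a frame into LP coefficients, reflection coefficients, and the final prediction error; the test signal and model order are illustrative assumptions, not the paper's coder.

```r
# Minimal Levinson-Durbin recursion (illustrative): from autocorrelations r[0..p]
# to LP coefficients a[1..p], reflection coefficients k[1..p] and prediction error.
levinson <- function(r, p) {
  a <- numeric(p); k <- numeric(p); E <- r[1]      # r[1] is the lag-0 autocorrelation
  for (m in 1:p) {
    acc  <- r[m + 1] - if (m > 1) sum(a[1:(m - 1)] * r[m:2]) else 0
    k[m] <- acc / E
    a_new <- a
    a_new[m] <- k[m]
    if (m > 1) a_new[1:(m - 1)] <- a[1:(m - 1)] - k[m] * a[(m - 1):1]
    a <- a_new
    E <- E * (1 - k[m]^2)                          # updated prediction error
  }
  list(lpc = a, reflection = k, error = E)
}

set.seed(6)
x <- as.numeric(filter(rnorm(400), c(0.7, -0.2), method = "recursive"))  # toy AR signal
r <- acf(x, lag.max = 8, type = "covariance", plot = FALSE)$acf[, 1, 1]
levinson(r, p = 8)
```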
The purpose of this research is to investigate the impact of a corrosive environment (ferric chloride at 1, 2, 5, and 6 wt.% at room temperature), immersion periods of 48, 72, 96, 120, and 144 hours, and surface roughness on pitting corrosion characteristics, and to use the data to build an artificial neural network and test its ability to predict the depth and intensity of pitting corrosion under a variety of conditions. Pit density and depth were measured using a pitting corrosion test on carbon steel (C-4130). The experimental pitting corrosion tests were used to develop artificial neural network (ANN) models for predicting pitting corrosion characteristics. It was found that the artificial neural network models were shown to be
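As a sketch of how such a predictive network might be set up, the following R fragment fits a small single-hidden-layer network (via the nnet package) mapping concentration, immersion time, and surface roughness to pit depth on simulated data; the architecture, variable names, and data are illustrative assumptions, not the paper's model.

```r
# Illustrative single-hidden-layer network for predicting pit depth from
# exposure conditions (simulated data, assumed architecture).
library(nnet)

set.seed(7)
n <- 150
dat <- data.frame(
  conc      = sample(c(1, 2, 5, 6), n, replace = TRUE),         # FeCl3 wt.%
  hours     = sample(c(48, 72, 96, 120, 144), n, replace = TRUE),
  roughness = runif(n, 0.5, 3.0)                                # Ra in um (hypothetical)
)
dat$depth <- with(dat, 0.02 * conc * sqrt(hours) + 0.05 * roughness) +
             rnorm(n, sd = 0.02)                                # synthetic pit depth

fit  <- nnet(depth ~ conc + hours + roughness, data = dat,
             size = 5, linout = TRUE, decay = 0.01, maxit = 500, trace = FALSE)
pred <- predict(fit, dat)
mean(abs(pred - dat$depth))                                     # mean absolute error
```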
Liquid-solid chromatography of bovine serum albumin (BSA) on a diethylaminoethyl-cellulose (DEAE-cellulose) adsorbent was studied experimentally to examine the effect of changing the influent concentration (0.125, 0.25, 0.5, and 1 mg/ml) at a constant volumetric flow rate of Q = 1 ml/min, and the effect of changing the volumetric flow rate (1, 3, 5, and 10 ml/min) at a constant influent concentration of Co = 0.125 mg/ml. A glass column of 1.5 cm inner diameter and 50 cm length was used, packed with DEAE-cellulose adsorbent to a height of 7 cm. The influent is introduced into the column using a peristaltic pump, and the effluent concentration is measured using a UV spectrophotometer at 30 °C and a wavelength of 280 nm. A spread (steeper) breakthrough curve is obtained