The purpose of this work is the simultaneous determination of piroxicam and mefenamic acid in binary combinations from their UV-visible spectra using a chemometric approach. To create the model, spectral data from 73 samples (with wavelengths between 200 and 400 nm) were employed. A two-layer artificial neural network model was created, with fourteen neurons in the hidden layer and two neurons in the output layer, and was trained to predict the concentrations of piroxicam and mefenamic acid from their spectra. The Levenberg-Marquardt algorithm with feed-forward back-propagation learning produced root mean square errors of prediction of 0.1679 μg/mL and 0.1154 μg/mL, with coefficients of determination of 0.99730 and 0.99942, for piroxicam and mefenamic acid, respectively. The suggested approach's ease of use, affordability, and environmental friendliness make it a suitable replacement for methods that rely on hazardous chemicals in the routine analysis of the selected drugs.
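As a rough illustration of the modelling step described above, the sketch below fits a feed-forward network with fourteen hidden neurons and two outputs and reports RMSEP and R² per analyte. Scikit-learn offers no Levenberg-Marquardt trainer, so L-BFGS is used here as a stand-in, and the `spectra`/`concentrations` arrays are placeholders rather than the authors' data.

```python
# Minimal sketch: two-layer feed-forward network (14 hidden neurons, 2 outputs)
# mapping UV-visible spectra to piroxicam / mefenamic acid concentrations.
# NOTE: scikit-learn has no Levenberg-Marquardt solver; L-BFGS is a stand-in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data: 73 samples, absorbances at 200-400 nm (1 nm step).
rng = np.random.default_rng(0)
spectra = rng.random((73, 201))          # stand-in for measured absorbance spectra
concentrations = rng.random((73, 2))     # stand-in for [piroxicam, mefenamic acid] in ug/mL

X_train, X_test, y_train, y_test = train_test_split(
    spectra, concentrations, test_size=0.25, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(14,), solver="lbfgs",
                     max_iter=5000, random_state=1)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
for j, name in enumerate(["piroxicam", "mefenamic acid"]):
    rmsep = np.sqrt(mean_squared_error(y_test[:, j], y_pred[:, j]))
    r2 = r2_score(y_test[:, j], y_pred[:, j])
    print(f"{name}: RMSEP = {rmsep:.4f} ug/mL, R^2 = {r2:.5f}")
```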
Two simple, rapid, and useful spectrophotometric methods were suggested for the determination of sulphadimidine sodium (SDMS), with and without a cloud point extraction technique, in pure form and in pharmaceutical preparation. The first method was based on diazotization of the sulphadimidine sodium drug with sodium nitrite at 5 ºC, followed by coupling with α-naphthol in basic medium to form an orange-colored product. The product was stabilized and its absorbance was measured at 473 nm. Beer's law was obeyed in the concentration range of (1-12) μg∙ml-1. Sandell's sensitivity was 0.03012 μg∙cm-2, the detection limit was 0.0277 μg∙ml-1, and the limit of quantitation was 0.03605 μg∙ml-1.
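For context, figures of merit of the kind quoted above (Beer's-law slope, Sandell's sensitivity, LOD, LOQ) can be derived from a calibration line as sketched below. The calibration values are invented placeholders, and the usual ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S are assumed, with σ the residual standard deviation and S the slope.

```python
# Minimal sketch: Beer's-law calibration and figures of merit for a colorimetric assay.
# The concentration/absorbance values are placeholders, not the reported data.
import numpy as np

conc = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)                   # ug/mL
absorbance = np.array([0.033, 0.066, 0.133, 0.199, 0.266, 0.332, 0.399])

slope, intercept = np.polyfit(conc, absorbance, 1)   # A = slope*C + intercept
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                        # residual standard deviation

lod = 3.3 * sigma / slope                            # limit of detection, ug/mL
loq = 10.0 * sigma / slope                           # limit of quantitation, ug/mL
sandell = 0.001 / slope                              # Sandell's sensitivity: ug/cm^2 giving A = 0.001 in a 1 cm cell

print(f"slope = {slope:.4f} mL/(ug*cm), LOD = {lod:.4f} ug/mL, "
      f"LOQ = {loq:.4f} ug/mL, Sandell = {sandell:.5f} ug/cm^2")
```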
In this research, several estimators of the hazard function are introduced using one of the nonparametric methods, namely the kernel method, for censored data with varying bandwidth and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic, and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most of the cases, the results show that the local bandwidth is the best for all the
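A bare-bones version of a kernel hazard estimator for right-censored data is sketched below, using the Epanechnikov kernel with a single global bandwidth. The boundary-corrected kernels and local-bandwidth variants studied in the paper are not reproduced, and the lifetime/censoring arrays are simulated placeholders.

```python
# Minimal sketch: kernel-smoothed hazard estimate for right-censored data,
# built from Nelson-Aalen increments and an Epanechnikov kernel (global bandwidth).
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75*(1 - u^2) on |u| <= 1."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kernel_hazard(t_grid, times, events, bandwidth):
    """Smooth the Nelson-Aalen increments d_i / n_i with a kernel of width `bandwidth`."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)                 # number still at risk at each ordered time
    increments = events / at_risk              # Nelson-Aalen jump at each observed time
    u = (t_grid[:, None] - times[None, :]) / bandwidth
    return (epanechnikov(u) * increments).sum(axis=1) / bandwidth

# Placeholder censored sample: exponential lifetimes with random censoring.
rng = np.random.default_rng(0)
lifetimes = rng.exponential(1.0, 200)
censoring = rng.exponential(1.5, 200)
times = np.minimum(lifetimes, censoring)
events = (lifetimes <= censoring).astype(float)   # 1 = observed event, 0 = censored

grid = np.linspace(0.1, 2.0, 50)
print(kernel_hazard(grid, times, events, bandwidth=0.3)[:5])
```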
The performance quality and search speed of a Block Matching (BM) algorithm are affected by the shapes and sizes of the search patterns used in the algorithm. In this paper, Kite Cross Hexagonal Search (KCHS) is proposed. This algorithm uses different search patterns (kite, cross, and hexagonal) to search for the best Motion Vector (MV). In the first step, KCHS uses a cross search pattern. In the second step, it uses one of the kite search patterns (up, down, left, or right, depending on the first step). In subsequent steps, it uses large/small Hexagonal Search (HS) patterns. This new algorithm is compared with several known fast block matching algorithms. Comparisons are based on search points and Peak Signal to Noise Ratio (PSNR). According to resul
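To make the search-pattern idea concrete, the sketch below evaluates one cross-shaped search step with a sum-of-absolute-differences cost and computes PSNR. It illustrates only a generic cross stage, not the full kite/hexagonal schedule of KCHS, and the frames are random placeholders.

```python
# Minimal sketch: one cross-pattern block-matching step (SAD cost) plus PSNR.
# Only a generic cross stage is shown; the kite and hexagonal stages of KCHS are omitted.
import numpy as np

def sad(block, ref, x, y):
    """Sum of absolute differences between `block` and the reference block at (x, y)."""
    h, w = block.shape
    return np.abs(block - ref[y:y + h, x:x + w]).sum()

def cross_search_step(block, ref, x, y, step):
    """Evaluate the centre and the four cross points at distance `step`; return the best (x, y)."""
    candidates = [(x, y), (x + step, y), (x - step, y), (x, y + step), (x, y - step)]
    h, w = block.shape
    valid = [(cx, cy) for cx, cy in candidates
             if 0 <= cx <= ref.shape[1] - w and 0 <= cy <= ref.shape[0] - h]
    return min(valid, key=lambda c: sad(block, ref, c[0], c[1]))

def psnr(a, b):
    """Peak signal-to-noise ratio for 8-bit frames."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0**2 / mse)

# Placeholder frames and a 16x16 block at (32, 32).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
cur = np.roll(ref, shift=(2, 1), axis=(0, 1))        # fake global motion
block = cur[32:48, 32:48]

best = cross_search_step(block, ref, 32, 32, step=2)
print("best candidate:", best, "PSNR of frames:", round(psnr(ref, cur), 2))
```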
Paper Type: Review article.
In this paper, previous studies on fuzzy regression are presented. Fuzzy regression is a generalization of the traditional regression model that formulates the relationship between independent and dependent variables in a fuzzy environment. It can be introduced through a non-parametric model as well as a semi-parametric model. Moreover, the results obtained from the previous studies and their conclusions are put forward in this context. We therefore suggest a novel method of estimation via new weights instead of the old weights and introduce another suggestion based on artificial neural networks.
Logistic regression is considered one of the statistical methods used to describe and estimate the relationship between a random response variable (Y) and explanatory variables (X), where the dependent variable is a binary response that takes two values (one when a specific event occurs and zero when it does not), such as (injured and uninjured, married and unmarried). A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate the binary-response logistic regression model by adopting the Jackknife
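As a hedged illustration of the estimation approaches mentioned, the sketch below fits a binary-response logistic regression on simulated collinear data by maximum likelihood and by a ridge (L2) penalty, then applies a simple leave-one-out jackknife to the penalized coefficients. The data, variable names, and penalty strength are placeholders, not the study's choices.

```python
# Minimal sketch: binary-response logistic regression on collinear data,
# fitted by maximum likelihood and by a ridge (L2) penalty, with a simple
# leave-one-out jackknife over the ridge coefficients. Placeholder data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)        # nearly collinear with x1
X = np.column_stack([x1, x2])
y = (rng.random(n) < 1 / (1 + np.exp(-(1.5 * x1 - 0.5)))).astype(int)

# penalty=None needs scikit-learn >= 1.2 (older versions use penalty='none').
mle = LogisticRegression(penalty=None, max_iter=1000).fit(X, y)            # maximum likelihood
ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)   # ridge-type fit

# Leave-one-out jackknife of the ridge coefficients.
loo_coefs = []
for i in range(n):
    mask = np.arange(n) != i
    fit = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X[mask], y[mask])
    loo_coefs.append(fit.coef_.ravel())
loo_coefs = np.array(loo_coefs)
jackknife_coef = n * ridge.coef_.ravel() - (n - 1) * loo_coefs.mean(axis=0)

print("MLE coefficients:   ", mle.coef_.ravel())
print("Ridge coefficients: ", ridge.coef_.ravel())
print("Jackknife estimate: ", jackknife_coef)
```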