Variable selection is an essential task in statistical modeling. Several studies have tried to develop and standardize the process of variable selection, but it is difficult to do so. The first question a researcher needs to ask is: which variables are most significant for describing a given dataset's response? In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined and the posterior distributions for all the parameters are derived. The new variable selection method is tested using four simulated datasets. The new approach is compared with some existing techniques: Ordinary Least Squares (OLS), the Least Absolute Shrinkage and Selection Operator (Lasso), and Tikhonov Regularization (Ridge). The simulation studies show that our method outperforms the others in terms of error and time complexity. These methods are then applied to a real dataset, the Rock Strength Dataset. The new approach implemented using the Gibbs sampler is more powerful and effective than the other approaches. All statistical computations for this paper were carried out using R version 4.0.3 on a single-processor computer.
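For readers who want to reproduce the baseline comparison only (not the proposed Gibbs-sampler method, whose posterior derivations are not given here), the sketch below fits OLS, Lasso, and Ridge to a simulated sparse regression problem. It is a minimal illustration in Python with scikit-learn; the data dimensions and penalty values are illustrative assumptions, and the paper's own computations were done in R 4.0.3.

```python
# Minimal sketch of the OLS / Lasso / Ridge baselines on simulated data.
# This is NOT the paper's Gibbs-sampler method; sizes and penalties are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]          # only the first three predictors are active
y = X @ beta + rng.normal(scale=1.0, size=n)

models = {
    "OLS": LinearRegression(),
    "Lasso": Lasso(alpha=0.1),
    "Ridge": Ridge(alpha=1.0),
}
for name, model in models.items():
    model.fit(X, y)
    mse = mean_squared_error(y, model.predict(X))
    n_selected = int(np.sum(np.abs(model.coef_) > 1e-6))
    print(f"{name:5s}  in-sample MSE = {mse:.3f}  nonzero coefficients = {n_selected}")
```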
The purpose of this paper is to apply robustness in linear programming (LP) to handle uncertainty in the constraint parameters and to find a robust optimal solution that maximizes the profits of the General Productive Company of Vegetable Oils for the year 2019. A mathematical linear programming model is modified when some of its parameters have uncertain values, and it is processed using the robust counterpart of linear programming so that the results remain robust to the random changes that occur in the uncertain values of the problem, assuming these values belong to the uncertainty set and selecting the values that cause the worst results and to depend buil
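As a toy illustration of the robust-counterpart idea (not the company's actual 2019 model, whose data are not given here), the sketch below solves a small profit-maximization LP with box uncertainty in the constraint coefficients. Because the decision variables are non-negative, the worst case over the box is obtained simply by inflating each coefficient to its upper bound; all numbers are made up for demonstration.

```python
# Toy robust counterpart of a profit-maximization LP with box uncertainty
# in the constraint coefficients (illustrative numbers, not the paper's data).
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 4.0])            # unit profits (maximize 5*x1 + 4*x2)
A_nom = np.array([[6.0, 4.0],       # nominal resource-usage coefficients
                  [1.0, 2.0]])
delta = np.array([[0.5, 0.3],       # maximum deviation of each coefficient
                  [0.1, 0.2]])
b = np.array([24.0, 6.0])           # resource capacities

# Worst case for x >= 0: every uncertain coefficient takes its largest value.
A_robust = A_nom + delta

nominal = linprog(-c, A_ub=A_nom, b_ub=b, bounds=[(0, None)] * 2, method="highs")
robust = linprog(-c, A_ub=A_robust, b_ub=b, bounds=[(0, None)] * 2, method="highs")
print("nominal profit:", -nominal.fun, "at", nominal.x)
print("robust  profit:", -robust.fun, "at", robust.x)
```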
The High Power Amplifiers (HPAs) used in wireless communication are distinctly characterized by nonlinear properties. HPA linearity can be achieved by backing the amplifier off into its linear region, at the expense of power performance. Meanwhile, the envelope of the Orthogonal Frequency Division Multiplexing (OFDM) signal fluctuates strongly, so a large back-off into the linear operating region would be required, leading to a severe loss in power efficiency; back-off is therefore not a satisfactory solution. Simplicial Canonical Piecewise-Linear (SCPWL) model-based digital predistorters are widely employed to compensate the nonlinear distortion introduced by the HPA component in OFDM technology. In this paper, the genetic al
Artificial intelligence algorithms have been used in many scientific fields in recent years. We suggest employing the flower pollination algorithm in the environmental field to find the best estimate of the semi-parametric regression function when there are measurement errors in both the explanatory variables and the dependent variable; such measurement errors, rather than exact measurements, appear frequently in fields such as chemistry, the biological sciences, medicine, and epidemiological studies. We estimate the regression function of the semi-parametric model by estimating its parametric and non-parametric components; the parametric component is estimated using an instrumental variables method (Wald method, Bartlett's method, and Durbin
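For orientation, the sketch below is a minimal, generic implementation of the flower pollination algorithm (global pollination via Lévy flights toward the current best, local pollination between random solutions). The objective optimized by the paper (a semi-parametric regression criterion with measurement errors and instrumental variables) is not reproduced; a sphere function stands in as a placeholder, and all parameter settings are assumptions.

```python
# Minimal sketch of the flower pollination algorithm (FPA) minimizing a generic
# objective; the placeholder sphere function stands in for the paper's criterion.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, lam=1.5, rng=None):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / lam)

def fpa(objective, dim, bounds=(-5.0, 5.0), n_flowers=20, n_iter=200, p_switch=0.8, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (n_flowers, dim))
    fitness = np.array([objective(x) for x in pop])
    best = pop[fitness.argmin()].copy()
    for _ in range(n_iter):
        for i in range(n_flowers):
            if rng.random() < p_switch:            # global pollination: Levy flight toward the best flower
                cand = pop[i] + levy_step(dim, rng=rng) * (best - pop[i])
            else:                                  # local pollination between two random flowers
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lo, hi)
            f_cand = objective(cand)
            if f_cand < fitness[i]:                # greedy replacement
                pop[i], fitness[i] = cand, f_cand
        best = pop[fitness.argmin()].copy()
    return best, fitness.min()

# Placeholder objective (stand-in for the paper's regression criterion).
best_x, best_f = fpa(lambda x: float(np.sum(x ** 2)), dim=5)
print("best value found:", best_f)
```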
Image segmentation using bi-level thresholds works well for straightforward scenarios; however, dealing with complex images that contain multiple objects or colors presents considerable computational difficulties. Multi-level thresholding is crucial for these situations, but it also introduces a challenging optimization problem. This paper presents an improved Reptile Search Algorithm (RSA) that includes a Gbest operator to enhance its performance. The proposed method determines optimal threshold values for both grayscale and color images, utilizing objective functions based on the Otsu (between-class variance) and Kapur (entropy) techniques. Experiments were carried out on 16 benchmark images, which inclu
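To make the optimization target concrete, the sketch below implements Kapur's entropy objective for a set of thresholds over a grayscale histogram. The improved RSA/Gbest search from the paper is not reproduced; a brute-force scan over two thresholds and a synthetic histogram are used only to show how any metaheuristic would query the objective.

```python
# Kapur's entropy objective for multi-level thresholding (sketch).
# Any optimizer (or the brute-force scan below) can maximize this function.
import numpy as np
from itertools import combinations

def kapur_entropy(hist, thresholds):
    """Sum of class entropies for thresholds t1 < t2 < ... over a 256-bin histogram."""
    p = hist / hist.sum()
    edges = [0, *sorted(thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -np.sum(q * np.log(q))
    return total

# Usage on a synthetic trimodal histogram (stand-in for a real image histogram).
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 40_000),
                         rng.normal(130, 15, 40_000),
                         rng.normal(200, 12, 20_000)]).clip(0, 255).astype(int)
hist = np.bincount(pixels, minlength=256).astype(float)

best = max(combinations(range(1, 256), 2), key=lambda t: kapur_entropy(hist, t))
print("best pair of thresholds:", best)
```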
The Stumpff functions are infinite series that depend on the value of z, which results from multiplying the reciprocal of the semi-major axis by the square of the universal anomaly. These functions are used to calculate the variation of the universal variable from Kepler's equation for different orbits. In this paper, the range of the reciprocal of the semi-major axis, of the universal anomaly, and of z is calculated in order to study the behavior of the Stumpff functions C(z) and S(z). The results showed that as z grew, the Stumpff functions for hyperbolic, parabolic, and elliptical orbits also grew; they intersected and tended towards zero for hyperbolic and parabolic orbits, but for elliptical orbits, Stumpff functions
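For reference, the first two Stumpff functions have well-known closed forms (trigonometric for z > 0, hyperbolic for z < 0, with the series limits at z = 0). The sketch below evaluates them with those closed forms rather than by summing the truncated series, which is a common and numerically convenient choice; the sample z values are arbitrary.

```python
# Closed-form evaluation of the Stumpff functions C(z) and S(z)
# (equivalent to summing their infinite series).
import numpy as np

def stumpff_C(z):
    if z > 0:
        return (1 - np.cos(np.sqrt(z))) / z
    if z < 0:
        return (np.cosh(np.sqrt(-z)) - 1) / (-z)
    return 1 / 2                      # series value at z = 0

def stumpff_S(z):
    if z > 0:
        sz = np.sqrt(z)
        return (sz - np.sin(sz)) / sz ** 3
    if z < 0:
        sz = np.sqrt(-z)
        return (np.sinh(sz) - sz) / sz ** 3
    return 1 / 6                      # series value at z = 0

for z in (-4.0, 0.0, 4.0):            # hyperbolic, parabolic, elliptic samples
    print(f"z = {z:5.1f}   C(z) = {stumpff_C(z):.6f}   S(z) = {stumpff_S(z):.6f}")
```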
Naproxen (NPX)-imprinted polymer liquid electrodes were built using precipitation polymerization. The molecularly imprinted (MIP) and non-imprinted (NIP) polymers were synthesized using NPX as a template. In the precipitation polymerization, styrene (STY) was used as the monomer, N,N-methylenediacrylamide (N,N-MDAM) as the cross-linker, and benzoyl peroxide (BPO) as the initiator. The molecularly imprinted and non-imprinted membranes were prepared using acetophenone (AOPH) and dioctyl phthalate (DOP) as plasticizers in a PVC matrix. The slopes and detection limits of the liquid electrodes ranged from (-18.1, -17.72) mV/decade and (4.0 × 10-
Three-dimensional (3D) reconstruction from images is a highly beneficial, photo-realistic method of object regeneration that can be used in many fields. In industrial fields, it can be used to visualize cracks within alloys or walls. In medical fields, it has been used as a 3D scanner to reconstruct human organs, such as the internal nose for plastic surgery or the ear canal for fabricating a hearing aid device, among others. These applications require highly accurate detail and measurement, which is the main issue to be taken into consideration; the other issues of cost, portability, and ease of use should also be considered. This work has presented an approach for design and construc
Deep learning algorithms have recently achieved a great deal of success, especially in the field of computer vision. This research aims to describe a classification method applied to a dataset of multiple image types (Synthetic Aperture Radar (SAR) images and non-SAR images). For this classification, transfer learning followed by fine-tuning was used, with architectures pre-trained on the well-known ImageNet database. The VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input data consisted of five classes: the SAR image class (houses) and the non-SAR image classes (cats, dogs, horses, and humans). The Conv
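A minimal Keras sketch of this kind of transfer-learning setup is shown below, assuming five classes and 224×224 RGB inputs; the classifier head, layer sizes, and training settings are illustrative guesses rather than the configuration used in the paper.

```python
# Sketch of VGG16 used as a frozen ImageNet feature extractor with a new
# 5-class classifier head (illustrative hyperparameters, not the paper's).
import tensorflow as tf

NUM_CLASSES = 5                       # SAR houses + cats, dogs, horses, humans
base = tf.keras.applications.VGG16(weights="imagenet",
                                    include_top=False,
                                    input_shape=(224, 224, 3))
base.trainable = False                # freeze the convolutional base for feature extraction

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # supply your own tf.data pipelines
```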