In this paper, a new variable selection method is presented for selecting essential variables from large datasets. The new model is a modified version of the Elastic Net model. The modified Elastic Net variable selection model is summarized in an algorithm and applied to the Leukemia dataset, which has 3051 variables (genes) and 72 samples. In practice, working with a dataset of this size is difficult. The modified model is compared with several standard variable selection methods; it shows the best performance and achieves perfect classification. All calculations for this paper were carried out in R using existing packages.
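The core idea behind Elastic-Net-based variable selection can be sketched briefly: an l1 penalty zeroes out irrelevant coefficients while an l2 penalty stabilizes correlated ones. The following coordinate-descent sketch is a minimal illustration of the standard Elastic Net, not the authors' modified model or their R code; all parameter values are illustrative.

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator arising from the l1 part of the penalty."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def elastic_net_cd(X, y, lam=0.1, alpha=0.5, n_sweeps=200):
    """Coordinate descent for the Elastic Net (Gaussian loss):
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2).
    Variables whose coefficients remain at zero are deselected."""
    n, p = X.shape
    b = np.zeros(p)
    z = (X ** 2).sum(axis=0) / n          # per-coordinate curvature
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho_j = X[:, j] @ r / n
            b[j] = soft_threshold(rho_j, lam * alpha) / (z[j] + lam * (1 - alpha))
    return b
```

With `alpha` near 1 the fit behaves like the Lasso (aggressive selection); with `alpha` near 0 it behaves like ridge regression (shrinkage without selection).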
In this research, an adaptive Canny algorithm using the fast Otsu multithresholding method is presented, in which the fast Otsu method computes the optimum maximum and minimum hysteresis values, which serve as automatic thresholds for the fourth stage of the Canny algorithm. The new adaptive Canny algorithm and the standard Canny algorithm (with manually chosen hysteresis values) were tested on a standard image (Lena) and a satellite image. The results confirmed the validity and accuracy of the new algorithm in finding image edges in both ordinary and satellite images as a pre-step for image segmentation.
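The automatic-hysteresis idea can be illustrated in plain NumPy: compute Otsu's threshold from the image histogram and derive both Canny hysteresis levels from it. This is a schematic single-threshold sketch, not the paper's fast multithresholding implementation, and the low = high/2 rule is an assumed common heuristic.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method for an 8-bit image: choose the gray level that
    maximizes the between-class variance of background vs. foreground."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # cumulative class probability
    mu = np.cumsum(prob * np.arange(256))    # cumulative class mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def hysteresis_levels(img):
    """Assumed heuristic: high threshold = Otsu level, low = half of it."""
    high = otsu_threshold(img)
    return high // 2, high
```

The two returned levels would then replace the manually chosen values in the hysteresis (fourth) stage of Canny edge detection.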
Lowpass spatial filters are adopted to match the noise statistics of the degradation, seeking good-quality smoothed images. This study employs smoothing windows of different sizes and shapes. It shows that a square-frame window shape gives good-quality smoothing while preserving a certain level of high-frequency components, in comparison with standard smoothing filters.
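A "square frame" window places averaging weight only on the border pixels of the k×k neighborhood and ignores the interior, which is what allows some high-frequency detail to survive the smoothing. A minimal NumPy sketch of the kernel and a naive convolution (sizes are illustrative, not from the study):

```python
import numpy as np

def frame_kernel(k):
    """k x k averaging kernel whose weights lie only on the outer frame
    (border pixels); the interior is zero."""
    w = np.zeros((k, k))
    w[0, :] = w[-1, :] = 1.0
    w[:, 0] = w[:, -1] = 1.0
    return w / w.sum()

def convolve2d(img, ker):
    """Naive same-size 2-D convolution with edge replication at borders."""
    kh, kw = ker.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * ker).sum()
    return out
```

A full k×k box filter would average all k² pixels; the frame kernel averages only the 4(k−1) border pixels, so fine structure at the window center is not directly blended away.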
Spatial- and frequency-domain techniques have been adopted in this research: the mean filter, the median filter, and the Gaussian filter, together with an adaptive technique that cascades two filters (median and Gaussian) to enhance the noisy image. Different filter block sizes as well as threshold values have been tried in the enhancement process.
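The cascaded median-plus-Gaussian idea can be sketched as follows: the median stage removes impulse (salt-and-pepper) noise, and the Gaussian stage smooths the residual grain. Fixed 3×3 blocks are assumed here for brevity; the study varied the block size and threshold.

```python
import numpy as np

def median3(img):
    """3x3 median filter via nine shifted views of the edge-padded image."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def gauss3(img, sigma=1.0):
    """Separable 3x3 Gaussian blur (horizontal pass, then vertical)."""
    x = np.arange(-1, 2)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    p = np.pad(img.astype(float), 1, mode="edge")
    h = g[0] * p[:, :-2] + g[1] * p[:, 1:-1] + g[2] * p[:, 2:]
    return g[0] * h[:-2, :] + g[1] * h[1:-1, :] + g[2] * h[2:, :]

def cascade_denoise(img):
    """Assumed order: median first (impulses), then Gaussian (residual noise)."""
    return gauss3(median3(img))
```

Running the median stage first matters: a Gaussian blur applied to an impulse spreads it into the neighborhood, whereas the median removes it outright.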
For sparse system identification, recently suggested algorithms are the l0-norm Least Mean Square (l0-LMS), Zero-Attracting LMS (ZA-LMS), Reweighted Zero-Attracting LMS (RZA-LMS), and p-norm LMS (p-LMS) algorithms, which modify the cost function of the conventional LMS algorithm by adding a coefficient-sparsity constraint. Accordingly, the proposed algorithms are named p-ZA-LMS,
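Of the listed algorithms, ZA-LMS has the simplest form: the conventional LMS update plus a sign-based "zero attractor" term. A schematic NumPy version follows; the step size and attractor strength are illustrative values, not taken from the paper.

```python
import numpy as np

def za_lms(x, d, order=8, mu=0.01, rho=1e-4):
    """Zero-Attracting LMS: the conventional LMS update plus a sign-based
    zero attractor, -rho*sign(w), which pulls small (inactive) taps toward
    zero and so favours sparse impulse-response estimates."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ u                   # a-priori estimation error
        w += mu * e * u - rho * np.sign(w)
    return w
```

RZA-LMS replaces `sign(w)` with `sign(w)/(1 + eps*|w|)` so that large (active) taps feel almost no attraction, which is the "reweighting" in its name.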
In this study, the performance of an adaptive optics (AO) system was analyzed through a numerical computer simulation implemented in MATLAB. A phase screen was generated by turning computer-generated random numbers into two-dimensional arrays of phase values on a grid of sample points with matching statistics. Von Karman turbulence was generated from its power spectral density. Several simulated point spread functions (PSFs) and modulation transfer functions (MTFs) for different values of the Fried coherence diameter (r0) were used to characterize the roughness of the atmosphere. To evaluate the effectiveness of the optical system (telescope), the Strehl ratio (S) was computed. The compensation procedure for an AO syst…
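The phase-screen step described above (shaping white noise by the square root of a von Karman power spectral density, then transforming to the spatial domain) can be sketched as follows. The PSD constant and the overall FFT normalization below are schematic assumptions, not the paper's MATLAB code; a smaller r0 (rougher atmosphere) yields a stronger screen.

```python
import numpy as np

def von_karman_phase_screen(N=128, dx=0.02, r0=0.1, L0=25.0, seed=0):
    """FFT-based phase screen: complex white noise shaped by the square
    root of an assumed von Karman phase PSD. r0 is the Fried parameter,
    L0 the outer scale, dx the grid spacing in metres."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(N, d=dx)                   # spatial frequencies
    fx, fy = np.meshgrid(f, f)
    f2 = fx ** 2 + fy ** 2
    psd = 0.023 * r0 ** (-5.0 / 3.0) * (f2 + 1.0 / L0 ** 2) ** (-11.0 / 6.0)
    psd[0, 0] = 0.0                               # remove the piston (mean) term
    noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * N / dx   # schematic scaling
    return screen.real
```

The simulated PSF is then obtained by Fourier-transforming the pupil function multiplied by exp(i·screen), and the MTF as the magnitude of the PSF's Fourier transform.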
Neutron differential elastic and inelastic scattering cross-sections of the Yttrium-89 isotope were calculated at energies of 8, 10, 12, 14, and 17 MeV, at angles distributed between 20° and 180° in the center-of-mass frame. The obtained results were interpreted using a spherical optical potential model and the Eikonal approximation, to examine the effect of the first-order Eikonal correction on the effective potential. The real and imaginary parts of the optical potential were calculated. It was found that the nominal imaginary potential increases monotonically, while the effective imaginary one has a pronounced minimum around r = 6 fm and then increases. The analysis of the relative energy of the projectile and reaction…
Research on the automated extraction of essential data from electrocardiography (ECG) recordings has long been a significant topic. The main focus of digital processing is to locate the fiducial points that mark the beginning and end of the P, QRS, and T waves, based on their waveform properties. Unavoidable noise during ECG data collection and inherent physiological differences among individuals make it challenging to identify these reference points accurately, resulting in suboptimal performance. This is done through several primary stages that rely on preliminary processing of the ECG electrical signal through a set of steps (preparing raw data and converting them into files tha…
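As a toy illustration of fiducial-point measurement, the sketch below locates R-peak candidates with a Pan-Tompkins-style pipeline (differentiate, square, moving-window integrate, threshold, refractory period). Every constant here is an assumption chosen for the synthetic signal, not a value from the work described above.

```python
import numpy as np

def detect_r_peaks(sig, fs, thresh_frac=0.5, refractory=0.2):
    """Toy R-peak detector: the squared derivative emphasizes the steep
    QRS slopes, a short moving average smooths it, and a threshold with a
    refractory period (seconds) yields one candidate per beat."""
    energy = np.diff(sig) ** 2                 # squared slope
    win = max(1, int(0.05 * fs))               # ~50 ms integration window
    mwi = np.convolve(energy, np.ones(win) / win, mode="same")
    thresh = thresh_frac * mwi.max()
    peaks, last = [], -np.inf
    for i in range(1, len(mwi) - 1):
        is_local_max = mwi[i] >= mwi[i - 1] and mwi[i] > mwi[i + 1]
        if mwi[i] >= thresh and is_local_max and i - last > refractory * fs:
            peaks.append(i)
            last = i
    return np.array(peaks)
```

Real detectors add bandpass filtering and adaptive thresholds; once the R peaks are fixed, the P- and T-wave boundaries are searched in windows placed relative to them.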