Denoising a natural image corrupted by Gaussian noise is a classical problem in signal and image processing. Much work has been done on wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for suppressing noise in images by fusing stationary wavelet denoising with an adaptive Wiener filter. The Wiener filter is applied to the image reconstructed from the approximation coefficients only, while thresholding is applied to the detail coefficients of the transform; the final denoised image is obtained by combining the two results. The proposed method was implemented in MATLAB R2010a on color images contaminated by white Gaussian noise. Compared with the stationary wavelet and Wiener filter algorithms alone, the experimental results show that the proposed method provides better subjective and objective quality, obtaining up to 3.5 dB of PSNR improvement.
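As a minimal sketch of the thresholding half of such a scheme, the snippet below soft-thresholds the detail coefficients of a one-level Haar transform of a noisy 1-D signal, using the Donoho-Johnstone universal threshold. This is an illustrative stand-in, not the paper's stationary-wavelet/Wiener pipeline, and the signal and noise level are invented for the example:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold detail coefficients: shrink magnitudes toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# One-level Haar transform of a noisy signal (illustrative stand-in
# for the stationary wavelet transform used in the paper).
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + rng.normal(0, 0.3, clean.shape)

approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)   # low-pass branch
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)   # high-pass branch

# Robust noise estimate and universal threshold: sigma * sqrt(2 * log N)
sigma = np.median(np.abs(detail)) / 0.6745
t = sigma * np.sqrt(2 * np.log(detail.size))
detail_dn = soft_threshold(detail, t)

# Inverse Haar step reassembles the denoised signal.
rec = np.empty_like(noisy)
rec[0::2] = (approx + detail_dn) / np.sqrt(2)
rec[1::2] = (approx - detail_dn) / np.sqrt(2)
```

In the paper's scheme, the approximation branch would additionally pass through an adaptive Wiener filter before recombination.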
Root-finding is one of the oldest classical problems and remains an important research topic because of its impact on computational algebra and geometry. In communication systems, when the channel impulse response is minimum phase, the state complexity of the equalization algorithm is reduced and the spectral efficiency is improved. To make the channel impulse response minimum phase, a prefilter called a minimum-phase filter is used, and adapting this filter requires a root-finding algorithm. This paper presents a VHDL implementation of the root-finding algorithm introduced by Clark and Hau. A VHDL program is used to find the roots of two channels and make them minimum phase; the obtained output results are
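The standard root-based route to a minimum-phase response, which this line of work hardware-accelerates, can be sketched in a few lines: find the zeros of the channel polynomial and reflect any zero outside the unit circle to its conjugate reciprocal. This is a generic textbook sketch, not the Clark-Hau algorithm or its VHDL realization, and the example channel is invented:

```python
import numpy as np

def minimum_phase(h):
    """Reflect zeros of the channel polynomial lying outside the unit
    circle to their conjugate reciprocals, yielding a minimum-phase
    response with the same magnitude spectrum (up to a gain factor)."""
    roots = np.roots(h)
    fixed = np.where(np.abs(roots) > 1.0, 1.0 / np.conj(roots), roots)
    return np.real_if_close(np.poly(fixed) * h[0])

h = np.array([1.0, -2.5, 1.0])   # hypothetical channel; zeros at 2 and 0.5
h_mp = minimum_phase(h)          # all zeros now inside the unit circle
```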
Future wireless communication systems must accommodate a large number of users while simultaneously providing high data rates at the required quality of service. In this paper a method is proposed to realize the N-Discrete Hartley Transform (N-DHT) mapper, which is equivalent in spectral efficiency to 4-Quadrature Amplitude Modulation (QAM), 16-QAM, 64-QAM, 256-QAM, etc. The N-DHT mapper is used in the Multi-Carrier Code Division Multiple Access (MC-CDMA) structure as a data mapper in place of conventional mapping techniques such as QPSK and QAM. The proposed system is simulated in MATLAB and compared with conventional MC-CDMA over Additive White Gaussian Noise, flat, and multi-path selective fading channels.
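The Discrete Hartley Transform underlying such a mapper is real-valued and, up to a 1/N factor, its own inverse, which is what makes it attractive as a mapping stage. A minimal sketch of the transform pair using the cas kernel (this is the standard DHT definition, not the paper's full mapper):

```python
import numpy as np

def dht(x):
    """N-point Discrete Hartley Transform with the cas kernel
    cas(t) = cos(t) + sin(t); real input, real output."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    cas = np.cos(2 * np.pi * k * n / N) + np.sin(2 * np.pi * k * n / N)
    return cas @ x

def idht(X):
    """The DHT is involutive: applying it twice returns N times the input."""
    return dht(X) / len(X)

x = np.array([1.0, 0.0, -1.0, 0.5])
assert np.allclose(idht(dht(x)), x)
```

The DHT also relates to the FFT as H[k] = Re(X[k]) - Im(X[k]), so fast FFT routines can compute it.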
Semantic segmentation is an exciting research topic in medical image analysis because it aims to detect objects in medical images. In recent years, deep-learning approaches have shown more reliable performance than traditional approaches in medical image segmentation. The U-Net network is one of the most successful end-to-end convolutional neural networks (CNNs) presented for medical image segmentation. This paper proposes a multiscale residual dilated convolutional neural network (MSRD-UNet) based on U-Net. MSRD-UNet replaces the traditional convolution block with a novel, deeper block that fuses multi-layer features using dilated and residual convolutions. In addition, the squeeze-and-excitation (SE) attention mechanism and the s
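The key idea behind the dilated blocks mentioned above is that spacing the filter taps apart enlarges the receptive field without adding parameters. A minimal 1-D numpy sketch of a dilated convolution plus a residual connection (illustrative only; the paper's blocks are 2-D and learned):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D dilated convolution with 'same' padding: taps are spaced
    `dilation` samples apart, widening the receptive field for free."""
    k = len(kernel)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i, w in enumerate(kernel):
        out += w * xp[i * dilation : i * dilation + len(x)]
    return out

x = np.arange(8, dtype=float)
# Residual fusion: block output is input plus the dilated-conv response.
y = x + dilated_conv1d(x, [0.25, 0.5, 0.25], dilation=2)
```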
In this paper we introduce generalizations of several definitions: closure convergence to a point, closure directed toward a set, almost ω-convergence to a set, almost condensation point, a set ωH-closed relative to a space, ω-continuous functions, weakly ω-continuous functions, ω-compact functions, ω-rigid sets, almost ω-closed functions, and ω-perfect functions, together with several results concerning them.
Abstract:
In this research we discuss parameter estimation and variable selection in the Tobit quantile regression model in the presence of multicollinearity. We use the elastic net technique as an important tool for dealing with both multicollinearity and variable selection. Depending on the data, we propose a Bayesian Tobit hierarchical model with four levels of prior distributions. We assume both tuning parameters are random variables and estimate them along with the other unknown parameters in the model. A simulation study is used to demonstrate the efficiency of the proposed method, and we then compare our approach with (Alhamzwi 2014 & standard QR). The results illustrate that our approach
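For reference, the frequentist objective that elastic-net-penalized quantile regression builds on can be written in its standard form; this is the textbook formulation, not necessarily the exact hierarchical model of the paper:

\[
\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right) + \lambda_1 \lVert\beta\rVert_1 + \lambda_2 \lVert\beta\rVert_2^2,
\qquad
\rho_\tau(u) = u\,\bigl(\tau - \mathbb{I}(u < 0)\bigr),
\]

where the Tobit structure censors the response as \(y_i = \max(0, y_i^{*})\). The Bayesian treatment places priors that mimic the \(\ell_1\) and \(\ell_2\) penalties and treats the tuning parameters \(\lambda_1, \lambda_2\) as random.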
The research presents a proposed method to compare or determine the linear equivalence of the key-stream from linear or nonlinear key-stream generators.
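The linear equivalence of a key-stream is classically computed with the Berlekamp-Massey algorithm, which finds the length of the shortest LFSR generating the sequence. A sketch over GF(2) is below; this is the standard algorithm, offered as context, not necessarily the paper's proposed method:

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): returns the length of the shortest
    LFSR (the linear equivalence) that generates the bit sequence."""
    n = len(bits)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between the sequence and the LFSR's prediction.
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```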
Fingerprint recognition is one of the oldest identification procedures. An important step in automatic fingerprint matching is to extract features automatically and reliably. The quality of the input fingerprint image has a major impact on the performance of a feature extraction algorithm. The goal of this paper is to present a fingerprint recognition technique that utilizes local features for fingerprint representation and matching. The adopted local features are: (i) the energy of the Haar wavelet subbands, and (ii) the normalized energy of the Haar wavelet subbands. Experiments have been made on three completely different feature sets, used when partitioning the fingerprint into overlapped blocks. Experiments are conducted on
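Subband-energy features of the kind described can be sketched in a few lines: one level of the 2-D Haar transform splits a block into LL, LH, HL, HH subbands, and the mean squared coefficient of each subband is the feature. This is a generic sketch, not the paper's exact normalization or block layout:

```python
import numpy as np

def haar2d(block):
    """One-level 2-D Haar transform of an even-sized block:
    returns LL, LH, HL, HH subbands."""
    a = (block[0::2, :] + block[1::2, :]) / 2.0   # row low-pass
    d = (block[0::2, :] - block[1::2, :]) / 2.0   # row high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def subband_energies(block):
    """Feature vector: mean squared coefficient per subband."""
    return np.array([np.mean(s ** 2) for s in haar2d(block)])

block = np.random.default_rng(1).random((16, 16))   # stand-in fingerprint block
feats = subband_energies(block)                     # 4 energy features
```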
Codes of red, green, and blue (RGB) data extracted from a lab-fabricated colorimeter device were used to build a proposed classifier with the objective of classifying object colors into defined categories of fundamental colors. Primary, secondary, and tertiary colors, namely red, green, orange, yellow, pink, purple, blue, brown, grey, white, and black, were employed in machine learning (ML) by applying an artificial neural network (ANN) algorithm using Python. The classifier, which was based on the ANN algorithm, required a definition of the eleven colors mentioned above in the form of RGB codes in order to acquire the capability of classification. The software's capacity to forecast the color of the code that belongs to an ob
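A tiny nearest-reference baseline illustrates the classification task: each class is anchored by an RGB code, and a measurement is assigned to the closest one. This is a deliberately simple stand-in for the trained ANN, and the reference codes below are illustrative values, not the paper's calibration data:

```python
import numpy as np

# Hypothetical reference RGB codes for a subset of the eleven classes
# (illustrative values only).
REFERENCE = {
    "red":   (255, 0, 0),
    "green": (0, 128, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def classify_rgb(code):
    """Nearest-reference baseline: pick the class whose RGB code is
    closest in Euclidean distance (a stand-in for the ANN classifier)."""
    code = np.asarray(code, dtype=float)
    return min(REFERENCE,
               key=lambda name: np.linalg.norm(code - np.array(REFERENCE[name], dtype=float)))

print(classify_rgb((250, 10, 5)))   # → red
```

An ANN earns its keep over this baseline when class boundaries in RGB space are not simple Voronoi cells, e.g. for broad categories like "brown" or "pink".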
The Dirichlet process is an important fundamental object in nonparametric Bayesian modelling, applied to a wide range of problems in machine learning, statistics, and bioinformatics, among other fields. This flexible stochastic process models rich data structures with an unknown or evolving number of clusters. It is a valuable tool for encoding the true complexity of real-world data in computer models. Our results show that the Dirichlet process improves, both in distribution density and in signal-to-noise ratio, with larger sample sizes; achieves a slow decay rate to its base distribution; has improved convergence and stability; and performs best with a Gaussian base distribution, which is much better than the Gamma distribution. The performance depen
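A common way to simulate a Dirichlet process is the truncated stick-breaking construction: break a unit stick into Beta(1, α)-distributed fractions to get the atom weights, then draw the atoms from the base distribution (Gaussian here, matching the abstract's preferred choice). This is the standard construction, sketched for illustration:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights for a Dirichlet process:
    beta_k ~ Beta(1, alpha), w_k = beta_k * prod_{j<k} (1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

rng = np.random.default_rng(0)
w = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)   # atom weights (sum < 1)
atoms = rng.normal(0.0, 1.0, size=50)                # Gaussian base draws
```

Smaller α concentrates mass on a few atoms (few clusters); larger α spreads it out, which is how the process adapts its effective number of clusters to the data.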