Copula modeling is widely used in modern statistics. Boundary bias is one of the problems faced in nonparametric estimation, since kernel estimators, the most common nonparametric tools, are biased near the boundaries of the support. In this paper, the copula density function is estimated using the probit-transformation nonparametric method in order to eliminate the boundary bias problem from which kernel estimators suffer. A simulation study compares three nonparametric methods for estimating the copula density function, including a newly proposed method, across five copula families with different sample sizes, different levels of correlation between the copula variables, and different parameter values for each function. The results show that the best method is the proposed combination of the probit transformation with the mirror-reflection kernel estimator (PTMRKE), followed by the IPE method, for all copula functions and all sample sizes when the correlation is strong (positive or negative). For weak and moderate correlations, the IPE method is best, followed by the proposed PTMRKE method, according to the RMSE, log-likelihood (LOGL), and Akaike criteria. The results also indicate that the mirror-reflection kernel method performs poorly across the five copulas.
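As a minimal sketch of the underlying probit-transformation idea (not the paper's PTMRKE estimator, which additionally applies mirror reflection), the Python snippet below maps pseudo-observations to the real plane with the inverse normal CDF, runs an ordinary bivariate kernel density estimate there, and transforms back; this removes the boundary bias a naive kernel estimator has on the unit square. All names are illustrative.

    import numpy as np
    from scipy.stats import norm, gaussian_kde

    def probit_copula_density(u, v, u_eval, v_eval):
        """Probit-transformation kernel estimate of a copula density.
        u, v: pseudo-observations in (0, 1); u_eval, v_eval: interior
        evaluation points in (0, 1)."""
        s, t = norm.ppf(u), norm.ppf(v)            # probit map to the full plane
        kde = gaussian_kde(np.vstack([s, t]))      # standard bivariate KDE on R^2
        se, te = norm.ppf(u_eval), norm.ppf(v_eval)
        # Back-transform: divide by the normal densities (Jacobian of the map).
        return kde(np.vstack([se, te])) / (norm.pdf(se) * norm.pdf(te))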
The present work reports a hydrothermal-growth approach to ZnO nanorods that simplifies the production of low-cost films with controlled morphology for H2S gas-sensor applications. The prepared ZnO nanorods exhibit a hexagonal wurtzite phase, as shown by X-ray diffraction (XRD) analysis. The FTIR spectra show that the band located between 465 and 570 cm^-1 corresponds to the Zn-O stretching bond, which confirms the formation of ZnO. PL spectroscopic studies showed that doping Ag NPs and f-MWCNT into the ZnO matrix tunes the bandgap. SEM analysis confirmed the nanorod morphology of the ZnO. The prepared Ag/ZnO and f-MWCNT/ZnO nanocomposites, sep
The Field Programmable Gate Array (FPGA) approach is the most recent category to take its place in the implementation of most Digital Signal Processing (DSP) applications. It has proved capable of handling such problems and supports all the necessary requirements, such as scalability, speed, size, cost, and efficiency.
In this paper, a new circuit design for evaluating the coefficients of the two-dimensional Wavelet Transform (WT) and Wavelet Packet Transform (WPT) on an FPGA is proposed and implemented.
In this implementation, the WT and WPT coefficients are evaluated by filter-tree decomposition using the 2-D discrete convolution algorithm. This implementation w
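A hedged software model of the filter-tree decomposition that such a circuit implements is sketched below, assuming a Haar filter pair for illustration (the paper's filter bank may differ): one decomposition level filters rows and then columns with low-pass and high-pass kernels, dyadically downsampling after each stage.

    import numpy as np
    from scipy.signal import convolve2d

    # Haar analysis filters (an assumed example; the paper's filters may differ).
    lo = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass
    hi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass

    def dwt2_level(x):
        """One level of the 2-D WT filter tree: row filtering, then column
        filtering, each followed by dyadic downsampling.
        Returns the (LL, LH, HL, HH) subbands."""
        def rows(img, f): return convolve2d(img, f[np.newaxis, :], mode="same")[:, ::2]
        def cols(img, f): return convolve2d(img, f[:, np.newaxis], mode="same")[::2, :]
        L, H = rows(x, lo), rows(x, hi)
        return cols(L, lo), cols(L, hi), cols(H, lo), cols(H, hi)

The wavelet packet transform (WPT) differs only in that all four subbands, not just LL, are decomposed again at the next level of the tree.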
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the watermark image is scrambled using the Arnold transform for higher security, and then the embedding process is applied in the transform domain of the host image. The experimental results show that the algorithm is imperceptible and has good robustness against common image processing operations.
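As a minimal sketch of the scrambling step only (the transform-domain embedding itself is not reproduced here), the classical Arnold cat map permutes the pixels of a square image; the function name and loop structure are illustrative.

    import numpy as np

    def arnold_scramble(img, iterations=1):
        """Arnold (cat-map) scrambling of a square N x N image. Each pass maps
        pixel (x, y) to ((x + y) mod N, (x + 2y) mod N); the secret iteration
        count acts as the scrambling key."""
        n = img.shape[0]
        assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
        out = img.copy()
        for _ in range(iterations):
            nxt = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = nxt
        return out

Because the map is periodic for a given N, descrambling amounts to applying the remaining iterations of the period, which is what makes it convenient for non-blind extraction.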
Image denoising is a technique for removing the unwanted signal, called noise, that couples with the original signal during transmission; many denoising methods are used to remove the noise from the original signal. In this paper, the Multiwavelet Transform (MWT) is used to denoise the corrupted image by choosing the HH coefficients for processing, based on two different filters, the Tri-State Median filter and the Switching Median filter. With each filter, various shrinkage rules are used, such as Normal Shrink, Sure Shrink, Visu Shrink, and Bivariate Shrink. The proposed algorithm is applied to salt-and-pepper noise at different levels on grayscale test images. The quality of the denoised image is evaluated by usi
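To illustrate one of the named shrinkage rules, the sketch below applies Visu Shrink (the universal threshold) with soft thresholding to an HH subband, estimating the noise level from the subband's median absolute deviation; this is a generic textbook version, not the paper's full MWT pipeline.

    import numpy as np

    def visushrink_soft(hh, n_pixels):
        """Soft-threshold a detail (HH) subband with the VisuShrink threshold.
        hh: high-frequency coefficients; n_pixels: number of image pixels."""
        sigma = np.median(np.abs(hh)) / 0.6745         # robust noise estimate (MAD)
        lam = sigma * np.sqrt(2.0 * np.log(n_pixels))  # universal threshold
        return np.sign(hh) * np.maximum(np.abs(hh) - lam, 0.0)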
Computer vision seeks to mimic the human visual system and plays an essential role in artificial intelligence. It is based on different signal preprocessing techniques; therefore, developing efficient techniques is essential to achieving fast and reliable processing. Various signal preprocessing operations have been used in computer vision, including smoothing, signal analysis, resizing, sharpening, and enhancement, to reduce unwanted distortions, perform segmentation, and improve image features. For example, to reduce the noise in a disturbed signal, smoothing kernels can be used effectively; this is achieved by convolving the disturbed signal with a smoothing kernel. In addition, orthogonal moments (OMs) are a cruc
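A minimal sketch of the smoothing-by-convolution example mentioned above, assuming a Gaussian kernel (the kernel choice and sizes are illustrative):

    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_kernel(size=5, sigma=1.0):
        """Build a normalized 2-D Gaussian smoothing kernel."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return k / k.sum()

    noisy = np.random.default_rng(0).normal(size=(64, 64))  # stand-in noisy image
    smoothed = convolve(noisy, gaussian_kernel(5, 1.0), mode="reflect")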
This paper considers and proposes new estimators that depend on the sample and on prior information, covering the cases where the two sources are, or are not, equally important in the model. The prior information is described as linear stochastic restrictions. We study the properties and performance of these estimators compared to other common estimators, using the mean squared error as a criterion for goodness of fit. A numerical example and a simulation study are provided to illustrate the performance of the estimators.
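The abstract does not specify the proposed estimators, so as a hedged point of reference the sketch below implements the classical Theil-Goldberger mixed estimator, the standard way of combining a sample y = Xb + u with linear stochastic restrictions r = Rb + e, Cov(e) = Omega; all argument names are illustrative.

    import numpy as np

    def mixed_estimator(X, y, R, r, sigma2=1.0, Omega=None):
        """Theil-Goldberger mixed estimator: weights the sample information and
        the stochastic restrictions by their respective (co)variances."""
        if Omega is None:
            Omega = np.eye(R.shape[0])
        Oi = np.linalg.inv(Omega)
        A = X.T @ X / sigma2 + R.T @ Oi @ R
        b = X.T @ y / sigma2 + R.T @ Oi @ r
        return np.linalg.solve(A, b)

Giving the restrictions a larger Omega down-weights the prior information relative to the sample, which is one natural way to model the "not equally important" case.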
The alternating direction implicit (ADI) method is a common classical numerical method that was first introduced to solve the heat equation in two or more spatial dimensions, and it can also be used to solve other parabolic and elliptic partial differential equations. In this paper, we introduce an improvement to the ADI method that yields a scheme equivalent to the Crank-Nicolson difference scheme in two dimensions while retaining the main feature of the ADI method: each half-step requires only tridiagonal solves along one coordinate direction. The new scheme can be solved by a similar ADI algorithm with some modifications. A numerical example is provided to support the theoretical results.
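For context, a minimal sketch of the classical Peaceman-Rachford ADI step for u_t = alpha (u_xx + u_yy) is given below (this is the baseline scheme, not the paper's modified one); it assumes a square grid with homogeneous Dirichlet boundaries held in the array's border entries, and uses dense solves for clarity where a real implementation would use the tridiagonal structure.

    import numpy as np

    def adi_step(u, alpha, dt, h):
        """One Peaceman-Rachford ADI step on an (n+2) x (n+2) grid with spacing h.
        Assumes zero Dirichlet values in u's border rows/columns."""
        n = u.shape[0] - 2                     # interior points per direction
        r = alpha * dt / (2.0 * h * h)
        D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
              + np.diag(np.ones(n - 1), -1))   # 1-D second-difference matrix
        A = np.eye(n) - r * D2                 # implicit operator (tridiagonal)
        v = u.copy()
        # Half-step 1: implicit in x, explicit in y (one solve per grid column).
        for j in range(1, n + 1):
            rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2.0 * u[1:-1, j] + u[1:-1, j + 1])
            v[1:-1, j] = np.linalg.solve(A, rhs)
        w = v.copy()
        # Half-step 2: implicit in y, explicit in x (one solve per grid row).
        for i in range(1, n + 1):
            rhs = v[i, 1:-1] + r * (v[i - 1, 1:-1] - 2.0 * v[i, 1:-1] + v[i + 1, 1:-1])
            w[i, 1:-1] = np.linalg.solve(A, rhs)
        return w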
This research presents reliability, defined as the probability that any part of the system accomplishes its function within a specified time and under the same operating conditions. On the theoretical side, the reliability function and the cumulative failure function are studied for the one-parameter Rayleigh distribution. This research aims to uncover the factors missed in the reliability evaluation that cause constant interruptions of the machines, in addition to problems with the data. The research problem is that there are many methods for estimating the reliability function, but most of these methods are not well suited to the data, such
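For the one-parameter Rayleigh distribution with scale theta, the reliability function is R(t) = exp(-t^2 / (2 theta^2)) and the cumulative failure function is F(t) = 1 - R(t). The abstract does not state which estimation method the paper favours, so the sketch below pairs these formulas with the standard maximum-likelihood estimate of theta as one illustrative choice.

    import numpy as np

    def rayleigh_reliability(t, theta):
        """Reliability R(t) = exp(-t^2 / (2 theta^2)); F(t) = 1 - R(t)."""
        return np.exp(-t**2 / (2.0 * theta**2))

    def rayleigh_mle(sample):
        """MLE of the Rayleigh scale: theta_hat = sqrt(sum(t_i^2) / (2n))."""
        t = np.asarray(sample, dtype=float)
        return np.sqrt(np.sum(t**2) / (2.0 * len(t)))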
The gravity method measures relatively small variations in the Earth's gravitational field caused by lateral variations in rock density. In the current research, a new technique is applied to the previous Bouguer map of gravity surveys (conducted during 1940-1950) of the last century, over selected areas of the south-western desert of Iraqi territory within the administrative boundaries of the Najaf and Anbar provinces. The approach depends on the theory of gravity inversion, in which gravity values can be related to density-contrast variations with depth; gravity data inversion is therefore used to calculate density and velocity models for four selected depth slices: 9.63 km, 1.1 km, 0.682 km, and 0.407 km.
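The paper's inversion is not reproduced here, but the first-order physical link it relies on, between a gravity anomaly and a density contrast at depth, is the infinite Bouguer slab relation dg = 2 pi G drho h; the sketch below states it in both directions, with the slab thickness standing in, very roughly, for a depth slice.

    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def slab_anomaly_mgal(delta_rho, thickness_m):
        """Bouguer-slab gravity anomaly in mGal for a layer of density contrast
        delta_rho (kg/m^3) and thickness (m); 1 m/s^2 = 1e5 mGal."""
        return 2.0 * np.pi * G * delta_rho * thickness_m * 1.0e5

    def slab_density_contrast(dg_mgal, thickness_m):
        """Invert the same relation: the density contrast implied by a measured
        anomaly at an assumed slab thickness."""
        return dg_mgal / (2.0 * np.pi * G * thickness_m * 1.0e5)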