Copula modeling is widely used in modern statistics. Boundary bias is one of the problems encountered in nonparametric estimation, and kernel estimators, the most common nonparametric tools, suffer from it. In this paper, the copula density function is estimated using the probit-transformation nonparametric method in order to remove the boundary bias that affects kernel estimators. A simulation study compares three nonparametric methods for estimating the copula density, and a new method is proposed that outperforms the others, using five types of copulas with different sample sizes, different levels of correlation between the copula variables, and different function parameters. The results show that the best method combines the probit transformation with the mirror-reflection kernel estimator (PTMRKE), followed by the IPE method, for all copula functions and all sample sizes when the correlation is strong (positive or negative). When the correlation is weak or moderate, the IPE method is best, followed by the proposed PTMRKE method, according to the RMSE, log-likelihood (LOGL), and Akaike criteria. The results also indicate that the mirror-reflection kernel method is weak for all five copulas.
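As an illustration of the probit-transformation idea, the following is a minimal sketch of a probit-transformed kernel copula density estimator (a simplification, not the paper's PTMRKE or IPE code; the pseudo-observation inputs, Gaussian KDE, and grid names are assumptions):

```python
# Minimal sketch: probit-transformation kernel copula density estimator.
# Assumes pseudo-observations u, v strictly inside (0, 1).
import numpy as np
from scipy.stats import norm, gaussian_kde

def probit_copula_density(u, v, grid_u, grid_v):
    # Map pseudo-observations to the real line via the probit (inverse normal CDF)
    s, t = norm.ppf(u), norm.ppf(v)
    kde = gaussian_kde(np.vstack([s, t]))          # unconstrained bivariate KDE
    gs, gt = norm.ppf(grid_u), norm.ppf(grid_v)
    f_hat = kde(np.vstack([gs, gt]))               # density on the transformed scale
    # Back-transform: divide by the normal densities (Jacobian of the probit map)
    return f_hat / (norm.pdf(gs) * norm.pdf(gt))

# Example: evaluate along the diagonal of the unit square
u = np.random.beta(2, 2, 500)
v = np.clip(u + 0.05 * np.random.randn(500), 0.01, 0.99)
grid = np.linspace(0.05, 0.95, 10)
print(probit_copula_density(u, v, grid, grid))
```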
The use of parametric models, and the estimation methods that follow from them, requires many preliminary conditions to be satisfied for those models to represent the population under study adequately. This has prompted researchers to look for models more flexible than parametric ones, namely nonparametric models.
In this manuscript, the so-called Nadaraya-Watson (NW) estimator was compared in two cases (fixed bandwidth versus variable bandwidth) through simulation with different models and sample sizes. The simulation experiments showed that, for the first and second models, NW with a fixed bandwidth was preferred …
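For reference, a bare-bones Nadaraya-Watson regression estimator with a Gaussian kernel and a fixed bandwidth h looks like the sketch below (illustrative only, not the paper's simulation code; the test function and sample sizes are assumptions):

```python
# Nadaraya-Watson estimator: kernel-weighted average of the responses.
import numpy as np

def nadaraya_watson(x_grid, X, Y, h):
    # Kernel weights K((x - X_i) / h) for every grid point / sample pair
    w = np.exp(-0.5 * ((x_grid[:, None] - X[None, :]) / h) ** 2)
    return (w @ Y) / w.sum(axis=1)        # weighted average of Y

X = np.sort(np.random.uniform(0, 1, 200))
Y = np.sin(2 * np.pi * X) + 0.2 * np.random.randn(200)
print(nadaraya_watson(np.linspace(0, 1, 5), X, Y, h=0.05))
```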
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage preprocesses the data, and the second stage extracts features based on the Discrete Wavelet Transform (DWT). The third stage performs classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved high classification accuracy rates of 99.1% on the MADBase database and 99.9% on the MNIST database.
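A small sketch of the DWT feature-extraction stage is given below (the preprocessing and SNN classifier stages are omitted; the wavelet family, decomposition level, and use of the approximation sub-band as the feature vector are assumptions, not the paper's exact settings):

```python
# DWT feature extraction for a digit image (sketch only).
import numpy as np
import pywt

def dwt_features(digit_image):
    # Single-level 2-D discrete wavelet transform: approximation + detail sub-bands
    cA, (cH, cV, cD) = pywt.dwt2(digit_image.astype(float), 'haar')
    # Flatten the approximation coefficients as a compact feature vector
    return cA.ravel()

img = np.random.rand(28, 28)      # stand-in for a preprocessed MNIST/MADBase digit
print(dwt_features(img).shape)    # -> (196,)
```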
In this paper, a method to determine whether an image is forged (spliced) or not is presented. The proposed method is based on a classification model that determines the authenticity of a tested image. Image splicing causes many sharp edges (high frequencies) and discontinuities to appear in the spliced image. Capturing these high frequencies in the wavelet domain rather than in the spatial domain is investigated in this paper. Correlation between the high-frequency sub-band coefficients of the Discrete Wavelet Transform (DWT) is also described using a co-occurrence matrix, which serves as the input feature vector to a classifier. The best accuracies achieved were 92.79% and 94.56% on the CASIA v1.0 and CASIA v2.0 datasets, respectively. …
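A rough sketch of the feature idea, co-occurrence statistics computed on the DWT high-frequency sub-bands, is shown below (the classifier and the paper's exact quantization and co-occurrence settings are not reproduced; those choices here are assumptions):

```python
# Co-occurrence features of DWT detail sub-bands for splicing detection (sketch).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def splicing_features(gray_image):
    _, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), 'haar')
    feats = []
    for band in (cH, cV, cD):
        # Quantize coefficients to 16 gray levels before building the co-occurrence matrix
        q = np.digitize(band, np.linspace(band.min(), band.max(), 16)) - 1
        glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0],
                            levels=16, symmetric=True, normed=True)
        feats += [graycoprops(glcm, p)[0, 0] for p in ('contrast', 'correlation', 'energy')]
    return np.array(feats)   # feature vector to feed a classifier (e.g. an SVM)

print(splicing_features(np.random.rand(128, 128)))
```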
Future wireless systems aim to provide higher transmission data rates, improved spectral efficiency, and greater capacity. In this paper, a spectrally efficient two-dimensional (2-D) parallel code division multiple access (CDMA) system is proposed for generating and transmitting 2-D CDMA symbols through a 2-D inter-symbol interference (ISI) channel to increase the transmission speed. A 3-D Hadamard matrix is used to generate the 2-D spreading codes required to spread each user's two-dimensional data row-wise and column-wise. Quadrature amplitude modulation (QAM) is used as the data mapping technique because of the increased spectral efficiency it offers. The new structure was simulated using MATLAB, and a comparison of performance for …
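The row-wise/column-wise spreading idea can be sketched as below (a simplification: the paper's 3-D Hadamard construction, QAM mapping, and 2-D ISI channel are not reproduced, and the code indices, spreading factor, and block size are assumptions):

```python
# Row/column Hadamard spreading and despreading of a 2-D data block (sketch).
import numpy as np
from scipy.linalg import hadamard

N = 8                                   # spreading factor (assumed)
H = hadamard(N)                         # +/-1 orthogonal Hadamard codes
C = np.outer(H[2], H[1])                # column code x row code for one user

data = np.sign(np.random.randn(4, 4))   # 2-D block of +/-1 symbols (stand-in for QAM)
spread = np.kron(data, C)               # each symbol expands to an N x N chip block

# Despreading: correlate every chip block with the same code pattern
blocks = spread.reshape(4, N, 4, N).transpose(0, 2, 1, 3)
recovered = np.einsum('abij,ij->ab', blocks, C) / N**2
print(np.allclose(recovered, data))     # True in the noiseless case
```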
In the current research work, a system for hiding text in a digital grayscale image is presented. The algorithm adopts two transforms: the Integer Wavelet Transform and the Discrete Cosine Transform. Huffman coding is used to encode the text before embedding it in the HL sub-band of the cover image. The Peak Signal-to-Noise Ratio (PSNR) was used to measure the effect of embedding the text in the watermarked image; the correlation coefficient was also used to measure how well the text is recovered after applying an attack to the watermarked image, and good results were obtained. The proposed algorithm was implemented using MATLAB version 2010a.
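The Huffman-coding step used to compress the secret text before embedding can be sketched as follows (the wavelet/DCT embedding itself is not shown, and the example message is purely illustrative):

```python
# Build a Huffman code table for the secret text and produce its bitstream (sketch).
import heapq
from collections import Counter

def huffman_codes(text):
    # Min-heap of [frequency, tiebreaker, {symbol: code}] entries
    heap = [[freq, i, {sym: ''}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Prefix '0' to one subtree's codes and '1' to the other's
        merged = {**{s: '0' + c for s, c in c1.items()},
                  **{s: '1' + c for s, c in c2.items()}}
        heapq.heappush(heap, [f1 + f2, counter, merged])
        counter += 1
    return heap[0][2]

codes = huffman_codes("hidden message")
bitstream = ''.join(codes[ch] for ch in "hidden message")
print(codes, len(bitstream), "bits")
```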
In this paper, we introduce a new complex integral transform, namely the "Complex Sadik Transform". The properties of this transformation are investigated. This complex integral transformation is used to reduce the core problem to a simple algebraic equation. The answer to the original problem can then be obtained by solving this algebraic equation and applying the inverse of the complex Sadik transformation. Finally, the complex Sadik integral transformation is applied to find the solution of linear higher-order ordinary differential equations. We also present and discuss some important real-life problems, such as a pharmacokinetics problem, a nuclear physics problem, and a beams problem.
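The definition and properties of the complex Sadik transform are those given in the paper; purely for intuition about the "reduce to an algebraic equation" step, the same workflow with the classical Laplace transform (used here only as a familiar stand-in) looks like:

\[
y' + a y = 0,\; y(0) = 1 \;\xrightarrow{\ \mathcal{L}\ }\; sY(s) - 1 + aY(s) = 0 \;\Longrightarrow\; Y(s) = \frac{1}{s + a} \;\xrightarrow{\ \mathcal{L}^{-1}\ }\; y(t) = e^{-at}.
\]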
Steganography is the art of preventing the detection of hidden messages. Several algorithms have been proposed for steganographic techniques, and a major portion of them target image steganography because images have a high level of redundancy. This paper proposes an image steganography technique that uses a dynamic threshold produced from discrete cosine coefficients. After dividing the green and blue channels of the cover image into 1x3-pixel blocks, the method checks whether any value in a green-channel block is less than or equal to the threshold and, if so, starts to store the secret bits in the corresponding blue-channel block; to increase security, not all bits in the chosen block are used to store the secret bits. Firstly, store in the cente…
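A loose sketch of the block-selection idea is given below: derive a dynamic threshold from the DCT coefficients of the green channel and pick 1x3 blocks whose green values fall at or below it. The exact threshold rule and embedding procedure of the paper are not reproduced; the mean-absolute-coefficient threshold here is an assumption.

```python
# Select candidate 1x3 blocks using a DCT-derived dynamic threshold (sketch).
import numpy as np
from scipy.fftpack import dct

def select_blocks(green):
    # Dynamic threshold: mean absolute 2-D DCT coefficient (an assumption)
    coeffs = dct(dct(green.astype(float), axis=0, norm='ortho'), axis=1, norm='ortho')
    threshold = np.mean(np.abs(coeffs))
    # Reshape rows into 1x3 blocks and keep those with any pixel <= threshold
    h, w = green.shape
    blocks = green[:, :w - w % 3].reshape(h, -1, 3)
    return np.argwhere((blocks <= threshold).any(axis=2))   # (row, block-index) pairs

green = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(len(select_blocks(green)), "candidate blocks")
```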
In computer vision, visual object tracking is a significant task for monitoring applications. Tracking an object is essentially a matching problem, and one main difficulty is selecting features and building models suitable for distinguishing and tracing the target. The suggested system for continuous feature description and matching in video has three steps. First, a wavelet transform is applied to the image using the Haar filter. Second, interest points are detected in the wavelet image using the Features from Accelerated Segment Test (FAST) corner detector. Third, those points are described using Speeded Up Robust Features (SURF). The Speeded Up Robust Features (SURF) algorithm has been employed and implemented …
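A minimal sketch of this detect-then-describe pipeline (Haar wavelet approximation, FAST keypoints, SURF descriptors) is shown below. SURF requires an OpenCV build with the non-free xfeatures2d module; the thresholds and the random stand-in frame are illustrative assumptions.

```python
# Haar wavelet image -> FAST keypoints -> SURF descriptors (sketch).
import cv2
import numpy as np
import pywt

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in video frame
cA, _ = pywt.dwt2(frame.astype(float), 'haar')                   # Haar approximation sub-band
wave_img = cv2.normalize(cA, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

fast = cv2.FastFeatureDetector_create(threshold=20)
keypoints = fast.detect(wave_img, None)                          # FAST corner detection
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)         # needs opencv-contrib (non-free)
keypoints, descriptors = surf.compute(wave_img, keypoints)       # SURF description of FAST points
print(len(keypoints), None if descriptors is None else descriptors.shape)
```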
Multiple applications use offline handwritten signatures for human verification, which increases the need for a computerized signature recognition and verification system that ensures the highest possible level of security against counterfeit signatures. This research is devoted to developing a system for offline signature verification based on a combination of local ridge features and features obtained from applying a two-level Haar wavelet transform. The proposed system involves many preprocessing steps that include a group of image processing techniques (several enhancement techniques, region-of-interest allocation, conversion to a binary image, and thinning). In feature extraction and …
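The two-level Haar wavelet feature step could be sketched as below (preprocessing such as binarization, ROI cropping, and thinning is assumed to be done upstream, and the local ridge features are not reproduced; the per-sub-band statistics used here are assumptions):

```python
# Two-level Haar wavelet features for a preprocessed signature image (sketch).
import numpy as np
import pywt

def wavelet_signature_features(binary_sig):
    # Two-level 2-D Haar decomposition: [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
    coeffs = pywt.wavedec2(binary_sig.astype(float), 'haar', level=2)
    stats = []
    for band in [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]:
        stats += [band.mean(), band.std(), np.abs(band).sum()]   # simple sub-band statistics
    return np.array(stats)

sig = (np.random.rand(128, 256) > 0.9).astype(float)   # stand-in thinned signature image
print(wavelet_signature_features(sig).shape)            # 7 sub-bands x 3 stats = (21,)
```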