Copula modeling is widely used in modern statistics. Boundary bias is one of the problems encountered in nonparametric estimation, since kernel estimators are the most common nonparametric tools. In this paper, the copula density function is estimated using the probit-transformation nonparametric method in order to avoid the boundary bias from which kernel estimators suffer. A simulation study compares three nonparametric methods for estimating the copula density, and a new method is proposed that outperforms the others across five copula families with different sample sizes, different levels of correlation between the copula variables, and different function parameters. The results show that the best method combines the probit transformation with the mirror-reflection kernel estimator (PTMRKE), followed by the (IPE) method, for all copula functions and all sample sizes when the correlation is strong (positive or negative). For weak and medium correlations, the (IPE) method is best, followed by the proposed (PTMRKE) method, according to the RMSE, log-likelihood, and Akaike criteria. The results also indicate that the mirror-reflection kernel method is weak for all five copulas.
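The probit-transformation idea described above can be sketched as follows. This is a minimal illustration of the general technique, not the paper's PTMRKE estimator; all function names are hypothetical. The sample is mapped from the unit square to the full plane with the inverse normal CDF, an ordinary kernel estimate is formed there (no boundary), and the density is mapped back via the Jacobian of the transform.

```python
# Minimal probit-transformation copula density sketch (illustrative only;
# function names are hypothetical, not the paper's PTMRKE estimator).
import numpy as np
from scipy.stats import norm, gaussian_kde

def probit_copula_density(u, v, grid_u, grid_v):
    """Estimate c(u, v) by kernel smoothing on the probit scale.

    u, v : pseudo-observations in (0, 1).
    grid_u, grid_v : evaluation points in (0, 1).
    """
    # 1. Map the unit square onto the whole plane, removing the boundary.
    s, t = norm.ppf(u), norm.ppf(v)
    # 2. Ordinary bivariate Gaussian KDE on the transformed sample.
    kde = gaussian_kde(np.vstack([s, t]))
    # 3. Back-transform: divide by the Jacobian of the probit map.
    gs, gt = norm.ppf(grid_u), norm.ppf(grid_v)
    jac = norm.pdf(gs) * norm.pdf(gt)
    return kde(np.vstack([gs, gt])) / jac

# Toy usage with a Gaussian-copula-like sample (correlation 0.7).
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
u, v = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])
dens = probit_copula_density(u, v, np.array([0.5]), np.array([0.5]))
```

Because the smoothing happens on the probit scale, no kernel mass ever falls outside the support, which is the mechanism that removes the boundary bias.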
In this paper, the theoretical cross section of a pre-equilibrium nuclear reaction has been studied at an energy of 22.4 MeV. Ericson's formula for the partial level density (PLD) and its corrections (Williams' correction and the spin correction) have been substituted into the theoretical cross section and compared with the experimental data for the nucleus. It has been found that the theoretical cross section with the one-component PLD from Ericson's formula does not agree with the experimental values; there is limited agreement only at the high end of the energy range. The theoretical cross section that depends on the one-component Williams' formula and the one-component formula corrected for spin
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms may take advantage of the visual sensitivity and statistical properties of image data to deliver superior results in comparison with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16x16, 32x32, or 64x64 pixels. The blocks are first converted into a string and then encoded using a lossless, statistics-based algorithm known as arithmetic coding. Pixel values that occur more often are coded in fewer bits compared with pixel values of lower occurrence
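The block-then-encode pipeline above can be sketched with a toy arithmetic coder. This is a minimal floating-point illustration of the principle, not the paper's implementation; production coders use integer renormalization to avoid precision loss on long inputs.

```python
# Toy arithmetic-coding sketch for one flattened pixel block
# (illustrative only; real coders use integer renormalization).
from collections import Counter

def build_model(symbols):
    """Cumulative probability ranges from symbol frequencies."""
    freq = Counter(symbols)
    total = len(symbols)
    ranges, low = {}, 0.0
    for sym, n in sorted(freq.items()):
        high = low + n / total
        ranges[sym] = (low, high)
        low = high
    return ranges

def arithmetic_encode(symbols, ranges):
    """Shrink [low, high) once per symbol; any point in the final
    interval identifies the whole sequence."""
    low, high = 0.0, 1.0
    for sym in symbols:
        span = high - low
        s_lo, s_hi = ranges[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2  # one real number encodes the block

# A flattened 4x4 "block": the frequent value 10 shrinks the interval
# least, which is exactly why frequent pixels cost fewer output bits.
block = [10, 10, 10, 10, 10, 10, 10, 200, 200, 200, 10, 10, 10, 10, 10, 55]
model = build_model(block)
code = arithmetic_encode(block, model)
```

The width of the final interval equals the product of the symbol probabilities, so its negative log (the number of bits needed to pinpoint it) approaches the block's entropy, which is the sense in which frequent pixel values get short codes.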
Catalytic reduction is considered an effective approach for removing toxic organic pollutants from the environment, but finding an active catalyst is still a big challenge. Herein, an Ag-decorated CeO2 catalyst was synthesized through the polyol reduction method and applied for the catalytic reduction (conversion) of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP). The Ag-decorated CeO2 catalyst displayed outstanding reduction activity, with 99% conversion of 4-NP in 5 min and a rate constant (k) of 0.61 min−1. A number of structural characterization techniques were executed to investigate the influence of Ag on CeO2 and its effect on the catalytic conversion of 4-NP. The outstanding catalytic performance of the Ag-CeO2 catalyst can be assigned
A mathematical method and a new algorithm, implemented with the aid of the Matlab language, are proposed to compute the linear equivalence (or recursion length) of pseudo-random key-stream periodic sequences using the Fourier transform. The proposed method enables the computation of the linear equivalence to determine the degree of complexity of any binary or real periodic sequence produced by linear or nonlinear key-stream generators. The procedure can be used with comparatively greater computational ease and efficiency. The results of this algorithm are compared with the Berlekamp-Massey (BM) method, and good results are obtained: the Fourier-transform results are more accurate than those of the BM method for computing the linear equivalence.
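For reference, the Berlekamp-Massey baseline that the Fourier-based method is compared against can be sketched as follows. This is the textbook algorithm over GF(2), not the paper's Matlab code; it returns the linear complexity (recursion length) of a binary sequence.

```python
# Classical Berlekamp-Massey over GF(2): the baseline the Fourier-based
# method is compared against. Returns the linear complexity of `bits`.
def berlekamp_massey(bits):
    n = len(bits)
    c = [0] * n          # current connection polynomial (c[0] = 1)
    b = [0] * n          # last polynomial before L changed
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the sequence and the current LFSR
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]   # c += x^(i-m) * b  (mod 2)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# One period of the m-sequence 1110100 (from x^3 + x + 1) has
# linear complexity 3.
print(berlekamp_massey([1, 1, 1, 0, 1, 0, 0]))  # → 3
```

The shortest LFSR reproducing the sequence is exactly what "linear equivalence" measures, so any alternative (such as the Fourier-transform approach) must agree with this value on exact inputs.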
The Internet provides vital communications between millions of individuals. It is also increasingly utilized as a commerce tool; thus, security is of high importance for securing communications and protecting vital information. Cryptography algorithms are essential in the field of security. Brute-force attacks are the major attacks on the Data Encryption Standard, which is the main reason an improved structure of the Data Encryption Standard algorithm is needed. This paper proposes a new, improved structure for the Data Encryption Standard to make it secure and immune to attacks. The improved structure was accomplished using the standard Data Encryption Standard with a new way of two-key generation
Recently, the internet has enabled users to transmit digital media in the easiest manner. In spite of this facility, it may lead to several threats concerning the confidentiality of transferred media contents, such as media authentication and integrity verification. For these reasons, data hiding methods and cryptography are used to protect the contents of digital media. In this paper, an enhanced method of image steganography combined with visual cryptography is proposed. A secret logo (binary image) of size 128x128 is encrypted by applying (2 out of 2 share) visual cryptography on it to generate two secret shares. During the embedding process, a cover red, green, and blue (RGB) image of size (512
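The (2 out of 2 share) step can be sketched with the classic pixel-expansion scheme. This is a general illustration of visual cryptography, not the paper's exact construction; all helper names are hypothetical. Each secret pixel becomes a 2x2 subpixel block: a white pixel gets identical blocks on both shares, a black pixel gets complementary blocks, so stacking (a pixelwise OR) reveals black pixels as fully black and each share alone is uniformly half-black noise.

```python
# Sketch of (2 out of 2) visual cryptography on a binary image
# (hypothetical helper names; classic 2x2 pixel-expansion scheme).
import random

PATTERNS = [(0, 1, 1, 0), (1, 0, 0, 1)]  # complementary 2x2 subpixel pairs

def make_shares(secret, rng=random):
    """secret: 2-D list of 0/1 pixels (1 = black). Returns two shares,
    each expanded 2x in both dimensions."""
    h, w = len(secret), len(secret[0])
    s1 = [[0] * (2 * w) for _ in range(2 * h)]
    s2 = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            p = rng.choice(PATTERNS)            # random half-black block
            # white pixel: same block on both shares; black: complement
            q = p if secret[y][x] == 0 else tuple(1 - b for b in p)
            for k in range(4):
                s1[2 * y + k // 2][2 * x + k % 2] = p[k]
                s2[2 * y + k // 2][2 * x + k % 2] = q[k]
    return s1, s2

def stack(s1, s2):
    """Physically overlaying transparencies is a pixelwise OR."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

Because each share's blocks are chosen uniformly at random, one share alone carries no information about the logo; only stacking both recovers it, with black at full density and white at half density.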
Quantitative analysis of the human voice has been a subject of interest, and the subject gained momentum when the human voice was identified as a modality for human authentication and identification. The main organ responsible for the production of sound is the larynx, and the structure of the larynx, along with its physical properties and modes of vibration, determines the nature and quality of the sound produced. There has been a lot of work on the fundamental frequency of sound and its characteristics. With the introduction of additional applications of the human voice, interest grew in other characteristics of sound and the possibility of extracting useful features from it. We conducted a study using the Fast Fourier Transform (FFT) technique to analyze
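The FFT-based analysis mentioned above can be sketched as follows. This is a minimal illustration with assumed parameters (frame length, sample rate), not the study's pipeline; simple peak picking stands in for real pitch trackers.

```python
# Sketch of FFT spectral analysis of a voice frame (assumed parameters;
# naive peak picking, not a production pitch tracker).
import numpy as np

def dominant_frequency(frame, sample_rate):
    """Return the frequency (Hz) of the strongest spectral component."""
    windowed = frame * np.hanning(len(frame))    # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

# Synthetic "voiced" frame: 220 Hz fundamental plus a weaker harmonic.
sr = 16000
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
f0 = dominant_frequency(frame, sr)
```

With a 2048-sample frame at 16 kHz the bin width is about 7.8 Hz, so the peak lands within one bin of the true 220 Hz fundamental; windowing before the FFT keeps nearby harmonics from smearing into the peak.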