Copula modeling is widely used in modern statistics. Boundary bias is one of the main problems encountered in nonparametric estimation, since kernel estimators, the most common nonparametric tools, suffer from it near the edges of the support. In this paper, the copula density function is estimated with a probit-transformation nonparametric method in order to eliminate the boundary bias from which kernel estimators suffer. A simulation study compares three nonparametric estimators of the copula density, including a newly proposed method, across five copula families with different sample sizes, different levels of correlation between the copula variables, and different copula parameters. The results show that the best method combines the probit transformation with the mirror-reflection kernel estimator (PTMRKE), followed by the (IPE) method, for all copula functions and all sample sizes when the correlation is strong (positive or negative). For weak and medium correlations, the (IPE) method is best, followed by the proposed (PTMRKE) method, according to the RMSE, log-likelihood (LOGL), and Akaike criteria. The results also indicate that the mirror-reflection kernel method performs poorly for all five copulas.
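The probit-transformation idea described in the abstract can be sketched briefly: map the pseudo-observations from the unit square to the real plane with the standard normal quantile function, apply an ordinary kernel density estimator there (no boundaries, hence no boundary bias), and divide by the normal densities to map the estimate back. This is a minimal illustration with hypothetical simulated data, not the paper's implementation; the Gaussian-copula sample, bandwidth choice, and correlation level are assumptions.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical sample: dependent uniforms from a Gaussian copula (rho = 0.7)
rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=2000)
u = norm.cdf(z)  # pseudo-observations in (0, 1)^2

# Probit transformation: send the data to R^2, where kernel estimation
# has no boundary bias
s = norm.ppf(u)

# Standard Gaussian KDE on the transformed sample (scipy's default bandwidth)
kde = gaussian_kde(s.T)

def copula_density(u1, u2):
    """Probit-transformation estimate of the copula density c(u1, u2)."""
    x, y = norm.ppf(u1), norm.ppf(u2)
    # Back-transform: divide by the normal densities (Jacobian of the map)
    return kde([x, y])[0] / (norm.pdf(x) * norm.pdf(y))

print(copula_density(0.5, 0.5))
```

Note that the estimate is automatically supported on the whole unit square; the mirror-reflection variant the paper studies would instead augment the sample with reflected copies before applying the kernel.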
Globally, over forty million people are living with Human Immunodeficiency Virus (HIV) infection. Highly Active Antiretroviral Therapy (HAART) consists of two or three antiretroviral (ARV) drugs and has been used for more than a decade to prolong the lives of AIDS-diagnosed patients. Persistent use of HAART is essential for effectively suppressing HIV replication. Frequent use of multiple medications at relatively high dosages is a major cause of patient noncompliance and an obstacle to achieving efficient pharmacological treatment. Despite strict compliance with the HAART regimen, the eradication of HIV from the host remains unattainable. Anatomical and intracellular viral reservoirs …
Abstract
Lightweight materials are used in the sheet metal hydroforming process because the process can form complex structural components into a single body with high structural stiffness. Sheet hydroforming has been successfully developed in industry, for example in the manufacturing of automotive components. The aim of this study is to compare experimental results (such as the pressure required for the hydroforming process and the stress and strain distributions) with finite element analysis (FEA) results (ANSYS 11) for aluminum alloy (AA5652) sheets of 1.2 mm thickness before heat treatment …
In this work, three different types of modified screen-printed carbon electrodes (SPCEs) were prepared by the drop-casting method. The carbon nanomaterials used were MWCNT, functionalized MWCNT (f-MWCNT), and a GOT/f-MWCNT nanocomposite; after several experiments to find an appropriate ratio, a 1:1 suspension mixture of GOT and f-MWCNT (f-MWCNT-GOT) was found to give a good nanocomposite. The electrical and physical properties were characterized by cyclic voltammetry, studying the maximum current response and the effect of pH; the active surface areas of MWCNT-SPCE, f-MWCNT-SPCE, and f-MWCNT-GOT/SPCE were determined as 0.04 cm², 0.119 cm², and 0.115 cm², respectively. The surface coverage concentration …
Dairy wastewater generally contains fats, lactose, whey proteins, and nutrients. Casein precipitation causes the effluent to decompose into a dark, strong-smelling sludge. The liquid waste contains soluble organic matter, suspended solids, and gaseous organic matter, which cause undesirable taste and smell, impart color and turbidity, and promote eutrophication, all of which play an essential role in increasing the biological oxygen demand (BOD) of the water. It also contains detergents and disinfecting agents from rinsing and washing processes, which increase the chemical oxygen demand (COD). Dairy effluents are characterized by relatively high temperature, high organic content, and a wide pH range, so the discharge of wastewater into …
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size; mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure that summarizes data with a large number of instances, as well as data generated from multiple sources. Data are aggregated at multiple resolutions, and each resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining and …
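The multi-resolution aggregation described above can be sketched as follows: keep per-bin sums and counts over the data range at several bin granularities, fold new batches in incrementally, and answer queries at whichever resolution the accuracy/efficiency trade-off calls for. This is a hypothetical one-dimensional sketch, not the structure from the abstract; the class name, power-of-two levels, and query interface are assumptions.

```python
import numpy as np

class MultiResolutionAggregate:
    """Sums and counts over 1-D bins at several resolutions: coarser
    levels are cheaper to query, finer levels are more accurate."""

    def __init__(self, lo, hi, levels=4):
        self.lo, self.hi = lo, hi
        # Level k has 2**(k+1) bins over [lo, hi)
        self.levels = [{"bins": 2 ** k,
                        "sum": np.zeros(2 ** k),
                        "cnt": np.zeros(2 ** k)}
                       for k in range(1, levels + 1)]

    def update(self, xs):
        """Incremental update: fold a new batch into every level."""
        xs = np.asarray(xs, dtype=float)
        for lv in self.levels:
            idx = np.clip(((xs - self.lo) / (self.hi - self.lo)
                           * lv["bins"]).astype(int), 0, lv["bins"] - 1)
            np.add.at(lv["sum"], idx, xs)   # unbuffered scatter-add
            np.add.at(lv["cnt"], idx, 1)

    def bin_means(self, level):
        """Approximate per-bin means at the chosen resolution (NaN if empty)."""
        lv = self.levels[level]
        with np.errstate(invalid="ignore"):
            return lv["sum"] / lv["cnt"]
```

A build-once, update-incrementally design like this lets many downstream algorithms read the same summary instead of re-scanning the raw instances.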
Abstract
This research aims to study and improve the damping properties of vibration-resistant rubber. In this paper, seven different rubber recipes were prepared based on mixtures of natural rubber (NR) as the essential component, in addition to synthetic rubbers (IIR, BRcis, SBR, CR) at different ratios. Mechanical tests such as tensile strength, hardness, friction, resistance to compression, fatigue, and creep were performed, in addition to a rheological test. Furthermore, scanning electron microscopy (SEM) was used to examine the morphology of the rubber structure. After studying and analyzing the results, we found that the recipe containing 40% (BRcis) of the …
This paper compares the performance of the traditional methods for estimating the parameter of the exponential distribution (the maximum likelihood estimator and the uniformly minimum variance unbiased estimator) with the Bayes estimator, both when the data meet the requirements of the exponential distribution and when they depart from it due to the presence of outliers (contaminated values). Simulation (the Monte Carlo method) is employed, with the mean square error (MSE) adopted as the criterion for statistical comparison of the three estimators, for sample sizes ranging from small to medium and large (n = 5, 10, 25, 50, 100) and different cases (with …
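The Monte Carlo comparison described above can be sketched for uncontaminated data: draw repeated exponential samples, compute each estimator of the rate, and average the squared errors. This is a minimal illustration, not the paper's study design; the true rate, sample size, replication count, and the Gamma prior for the Bayes estimator are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 2.0, 25, 5000   # assumed true rate, sample size, replications
a, b = 1.0, 1.0                # assumed Gamma(a, b) prior for the Bayes estimator

est = {"MLE": [], "UMVUE": [], "Bayes": []}
for _ in range(reps):
    x = rng.exponential(1.0 / lam, n)   # numpy parameterizes by scale = 1/rate
    s = x.sum()
    est["MLE"].append(n / s)            # maximum likelihood estimator of the rate
    est["UMVUE"].append((n - 1) / s)    # uniformly minimum variance unbiased
    est["Bayes"].append((n + a) / (s + b))  # posterior mean under the Gamma prior

# Mean square error of each estimator against the true rate
mse = {k: float(np.mean((np.asarray(v) - lam) ** 2)) for k, v in est.items()}
print(mse)
```

The contaminated-data cases in the paper would replace the clean exponential draws with samples containing outliers and repeat the same MSE comparison.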