Survival analysis is widely applied to data describing the lifetime of an item until the occurrence of an event of interest, such as death or another event under study. The purpose of this paper is to use a dynamic approach within the deep learning neural network method, in which a dynamic neural network is built to suit the nature of discrete survival data and time-varying effects. The network is trained with the Levenberg-Marquardt (L-M) algorithm, and the method is called the Proposed Dynamic Artificial Neural Network (PDANN). A comparison was then made with another method that relies entirely on the Bayesian methodology, called the Maximum A Posteriori (MAP) method. This method was carried out using numerical algorithms, namely the Iteratively Weighted Kalman Filter Smoothing (IWKFS) algorithm in combination with the Expectation Maximization (EM) algorithm. The Average Mean Square Error (AMSE) and the Cross Entropy Error (CEE) were used as comparison criteria. The methods and procedures were applied to data generated by simulation using different combinations of sample sizes and numbers of intervals.
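As a rough illustration of the two comparison criteria, the sketch below computes AMSE and CEE from predicted event probabilities and binary event indicators arranged as subjects by intervals; the function names and array layout are our own assumptions, not taken from the paper.

import numpy as np

def amse(y, p_hat):
    # Average Mean Square Error over all subject-interval cells (hypothetical layout)
    return np.mean((y - p_hat) ** 2)

def cee(y, p_hat, eps=1e-12):
    # Cross Entropy Error for binary event indicators
    p_hat = np.clip(p_hat, eps, 1 - eps)
    return -np.mean(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

# Toy usage: 5 subjects observed over 4 discrete intervals.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=(5, 4)).astype(float)
p_hat = rng.uniform(0.05, 0.95, size=(5, 4))
print(amse(y, p_hat), cee(y, p_hat))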
This paper studies two stratified quantile regression models, of the marginal and the conditional varieties. We estimate the quantile functions of these models using two nonparametric methods: smoothing splines (B-splines) and kernel regression (Nadaraya-Watson). The estimates are obtained by solving the nonparametric quantile regression problem, that is, by minimizing the quantile regression objective functions, using the approach of varying coefficient models. The main goal is to compare the estimators from the two nonparametric methods and to adopt the better of the two.
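For reference, the objective minimized in both variants is built from the standard check (pinball) function; in our notation (not necessarily the paper's), the kernel-weighted local version estimated by the Nadaraya-Watson approach is

\[
\rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right), \qquad
\hat{q}_\tau(x) = \arg\min_{\theta \in \mathbb{R}} \sum_{i=1}^{n} \rho_\tau(y_i - \theta)\, K\!\left(\frac{x_i - x}{h}\right),
\]

where \(K\) is a kernel with bandwidth \(h\); in the B-spline variant the scalar \(\theta\) is replaced by a spline expansion \(\sum_j \beta_j B_j(x)\) and the kernel weights are dropped.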
Mixed-effects conditional logistic regression is evidently more effective for studying qualitative differences in longitudinal pollution data, as well as their implications for heterogeneous subgroups. This study demonstrates that conditional logistic regression is a robust evaluation method for environmental studies, through the analysis of environmental pollution as a function of oil production and environmental factors. Consequently, it has been established theoretically that the primary objective of model selection in this research is to identify the candidate model that is optimal for the conditional design. The candidate model should achieve generalizability, goodness-of-fit, and parsimony, and establish an equilibrium between bias and variance.
With the continuous downscaling of semiconductor processes, the growing power density and thermal issues in multicore processors become more and more challenging, so reliable dynamic thermal management (DTM) is required to prevent severe losses in system performance. The accuracy of the thermal profile delivered to the DTM manager plays a critical role in the efficiency and reliability of DTM; different sources of noise and variation in deep submicron (DSM) technologies severely affect the thermal data and can lead to significant degradation of DTM performance. In this article, we propose a novel fault-tolerance scheme that exploits approximate computing to mitigate the DSM effects on DTM efficiency. Approximate computing in hardware …
Palm vein recognition is one of the most efficient biometric technologies: each individual can be identified through the unique characteristics of their veins. Palm vein acquisition techniques are either contact-based or contactless, depending on whether the individual's hand touches the peg of the palm imaging device. The need for contactless palm vein systems in modern applications raises two problems: pose variations (rotation, scaling, and translation transformations), since the imaging device cannot be aligned exactly with the surface of the palm, and the delay of the matching process, especially for large systems. To address these problems, this paper proposes a pose-invariant identification system for contactless palm veins, which includes three main …
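One common way to remove in-plane rotation and translation before matching (a generic sketch, not the paper's actual pipeline) is to estimate the palm's principal-axis angle from image moments and rotate the region of interest to a canonical orientation:

import numpy as np
from scipy import ndimage

def normalize_rotation(img):
    # Rotate a palm ROI so its principal axis is canonical (illustrative only).
    mask = img > img.mean()                 # crude foreground segmentation
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()           # centroid (fixes translation)
    mu20 = np.mean((xs - cx) ** 2)          # second-order central moments
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # principal-axis angle
    return ndimage.rotate(img, np.degrees(angle), reshape=False)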
Solving the Volterra integral equation numerically is a simple operation, but it requires a large amount of memory to compute and store the results. The importance of this equation has opened a new direction: solving it with new methods that avoid these obstacles. One of these methods employs a neural network to obtain the solution.
This paper presents a proposed method that uses a cascade-forward neural network to simulate the solutions of Volterra integral equations. The method depends on training the cascade-forward neural network on inputs that represent the mean of the Volterra integral equation solutions; the target of the cascade-forward neural network is to produce the desired output of this network. Cascade-forward neural …
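A minimal sketch of the overall idea, under our own assumptions: generate a reference solution of a Volterra equation of the second kind with the trapezoidal rule, then fit a network to it. Scikit-learn has no cascade-forward architecture, so MLPRegressor stands in for it here; the kernel K(x,t) = x - t and forcing f(x) = x are a textbook example (exact solution sinh x), not the paper's data.

import numpy as np
from sklearn.neural_network import MLPRegressor

# u(x) = f(x) + integral_0^x K(x,t) u(t) dt with K(x,t) = x - t, f(x) = x
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
K = lambda xi, tj: xi - tj
u = np.zeros(n)
u[0] = x[0]
for i in range(1, n):                       # trapezoidal rule, solved forward
    w = np.full(i + 1, h); w[0] = w[i] = h / 2
    s = np.dot(w[:i], K(x[i], x[:i]) * u[:i])
    u[i] = (x[i] + s) / (1.0 - w[i] * K(x[i], x[i]))   # K(x,x) = 0 here

net = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(x.reshape(-1, 1), u)                # network approximates the solution
print(np.max(np.abs(net.predict(x.reshape(-1, 1)) - np.sinh(x))))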
In this paper, the methods of weighted residuals, namely the Collocation Method (CM), the Least Squares Method (LSM), and the Galerkin Method (GM), are used to solve the thin film flow (TFF) equation. The weighted residual methods were implemented to obtain an approximate solution to the TFF equation. The accuracy of the obtained results is checked by calculating the maximum error remainder functions (MER). Moreover, the outcomes were compared with the 4th-order Runge-Kutta method (RK4), and good agreement was achieved. All the evaluations were successfully carried out using the computer system Mathematica® 10.
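All three methods are instances of the weighted residual framework: with a trial solution \(\tilde{u}(x) = \sum_j c_j \phi_j(x)\) and residual \(R(x; \mathbf{c})\) obtained by substituting \(\tilde{u}\) into the TFF equation, the coefficients are fixed by (notation ours, not the paper's)

\[
\int_{\Omega} w_i(x)\, R(x; \mathbf{c})\, dx = 0, \quad i = 1, \dots, m, \qquad
w_i(x) =
\begin{cases}
\delta(x - x_i) & \text{collocation (CM)} \\
\partial R / \partial c_i & \text{least squares (LSM)} \\
\phi_i(x) & \text{Galerkin (GM)}
\end{cases}
\]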
The removal of Congo red (CR) is a critical issue in contemporary textile industry wastewater treatment. The current study introduces a combined electrochemical process of electrocoagulation (EC) and electro-oxidation (EO) to eliminate this dye. Moreover, it discusses the formation of a triple composite of Co, Mn, and Ni oxides by depositing fixed salt ratios (1:1:1) of these oxides in an electrolysis cell at a constant current density of 25 mA/cm². The deposition was completed within 3 hours at room temperature. X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), atomic force microscopy (AFM), and energy-dispersive X-ray spectroscopy (EDX) were used to characterize the structure and surface morphology of the multi-oxide sediment …
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional algorithms in data mining and machine learning do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of some loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining and …
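A minimal sketch of what such a structure might look like, under our own assumptions (the class name and per-bin statistics are hypothetical, not the authors' design): each resolution level keeps incrementally updatable bin counts and sums, so coarse levels answer queries quickly and fine levels answer them more accurately.

import numpy as np

class MultiResAggregator:
    # Incremental multi-resolution summary of (x, y) pairs (illustrative sketch).
    def __init__(self, lo, hi, resolutions=(16, 64, 256)):
        self.lo, self.hi = lo, hi
        self.levels = {r: {"n": np.zeros(r), "sum": np.zeros(r)} for r in resolutions}

    def update(self, x, y):                 # one incremental pass over a batch
        x, y = np.asarray(x, float), np.asarray(y, float)
        for r, lv in self.levels.items():
            idx = np.clip(((x - self.lo) / (self.hi - self.lo) * r).astype(int), 0, r - 1)
            np.add.at(lv["n"], idx, 1.0)
            np.add.at(lv["sum"], idx, y)

    def mean_profile(self, r):              # coarser r = faster, less accurate
        lv = self.levels[r]
        return lv["sum"] / np.maximum(lv["n"], 1.0)

rng = np.random.default_rng(1)
agg = MultiResAggregator(0.0, 1.0)
xb = rng.uniform(0, 1, 10_000)
agg.update(xb, np.sin(2 * np.pi * xb))
print(agg.mean_profile(16)[:4])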
Grass trimming is widely performed in Malaysia to maintain highways. The large number of operators engaged in this work encounter high levels of noise generated by the backpack-type grass trimmers used for this purpose. High noise exposure has various ill effects on human operators, and the exact nature of the resulting deterioration in work performance is not known. For predicting the deterioration of work efficiency, a fuzzy tool has been used in the present research. It has been established that a fuzzy computing system helps in the identification and analysis of fuzzy models; a fuzzy system offers a convenient way of representing the relationships between the inputs and outputs of a system in the form of IF-THEN rules. The paper presents …
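To make the IF-THEN idea concrete, here is a generic zero-order Sugeno-style sketch with made-up memberships and rules (noise level in dB(A) in, predicted efficiency loss out); none of the numbers come from the paper.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with corners a, b, c.
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def efficiency_loss(noise_db):
    # Rule 1: IF noise is LOW  THEN loss is about  5 %
    # Rule 2: IF noise is HIGH THEN loss is about 30 %
    w = np.array([tri(noise_db, 60, 75, 90),    # firing strength of "LOW"
                  tri(noise_db, 80, 95, 110)])  # firing strength of "HIGH"
    z = np.array([5.0, 30.0])                   # rule consequents (percent)
    return float(np.dot(w, z) / (w.sum() + 1e-9))   # weighted-average defuzzification

print(efficiency_loss(85.0))   # 85 dB(A) is partly LOW, partly HIGH: blended estimate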
In this study, we focus on the random coefficient estimation of the general regression and Swamy models of panel data. Using this type of data gives a better chance of obtaining better estimates and better indicators. Entropy methods have been used to estimate the random coefficients of the general regression and Swamy panel data models in two ways: the first is the maximum dual entropy and the second is the general maximum entropy, and a comparison between them has been carried out using simulation to choose the optimal method.
The results were compared using the mean squared error and the mean absolute percentage error for different cases in terms of correlation values.
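For reference, the general maximum entropy approach reparameterizes each coefficient over a grid of support points and maximizes Shannon entropy subject to the data constraints; a schematic statement in our notation (illustrative, not necessarily the paper's exact setup), together with the two comparison criteria:

\[
\beta_j = \sum_k z_{jk} p_{jk}, \qquad
\max_{p,w}\; -\sum_{j,k} p_{jk}\ln p_{jk} - \sum_{i,k} w_{ik}\ln w_{ik}
\quad \text{s.t.} \quad y = X\beta(p) + e(w),
\]
\[
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (\hat{y}_i - y_i)^2, \qquad
\mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n} \left|\frac{\hat{y}_i - y_i}{y_i}\right|.
\]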