Survival analysis is widely applied to data describing the lifetime of an item until the occurrence of an event of interest, such as death or another event under study. The purpose of this paper is to use a dynamic approach within the deep learning neural network framework, in which a dynamic neural network is built to suit the nature of discrete survival data and time-varying effects. This neural network is trained using the Levenberg-Marquardt (L-M) algorithm, and the method is called the Proposed Dynamic Artificial Neural Network (PDANN). A comparison was then made with another method that relies entirely on the Bayesian methodology, called the Maximum A Posteriori (MAP) method. This method was carried out using numerical algorithms, namely the Iteratively Weighted Kalman Filter Smoothing (IWKFS) algorithm in combination with the Expectation Maximization (EM) algorithm. The Average Mean Square Error (AMSE) and Cross Entropy Error (CEE) were used as comparison criteria. The methods and procedures were applied to data generated by simulation using different combinations of sample sizes and numbers of intervals.
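The two comparison criteria can be illustrated with a short sketch. This is not the paper's implementation; it assumes the discrete survival data are arranged in person-period form, with a 0/1 event indicator and a predicted hazard per period (all variable names are illustrative):

```python
import math

def cross_entropy_error(d, h):
    """Cross Entropy Error (CEE) for discrete-time hazard predictions.

    d: 0/1 event indicators per person-period; h: predicted hazards in (0, 1).
    """
    return -sum(di * math.log(hi) + (1 - di) * math.log(1 - hi)
                for di, hi in zip(d, h)) / len(d)

def average_mse(d, h):
    """Average Mean Square Error (AMSE) between indicators and hazards."""
    return sum((di - hi) ** 2 for di, hi in zip(d, h)) / len(d)

# Hypothetical person-period data: event indicators and predicted hazards
d = [0, 0, 1, 0, 1]
h = [0.1, 0.2, 0.7, 0.1, 0.6]
cee = cross_entropy_error(d, h)
amse = average_mse(d, h)
```

A lower value of either criterion indicates predictions closer to the observed event indicators.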
Four simply supported reinforced concrete (RC) beams were tested experimentally and analyzed using the extended finite element method (XFEM). This method is used to treat the discontinuities resulting from the fracture process and crack propagation that occur in concrete. The Meso-Scale Approach (MSA) was used to model concrete as a heterogeneous material consisting of three phases (coarse aggregate, mortar, and air voids in the cement paste). The coarse aggregate used in casting these beams was of rounded and crushed shape with a maximum size of 20 mm. The concrete compressive strengths used in these beams were 17 MPa and 34 MPa, respectively. These RC beams are designed to fail in flexure when subjected to load.
A load flow program was developed using MATLAB based on the Newton-Raphson method, which shows a very fast and efficient rate of convergence. The proposed method is also computationally efficient and requires less computer memory through the use of sparsity techniques and other programming measures that accelerate the run speed to near real time.
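The iteration at the heart of such a load-flow solver can be sketched generically. The following is not the program described above (which is in MATLAB and exploits sparsity); it is a minimal Newton-Raphson loop, shown on a toy two-variable mismatch function standing in for the power-balance equations:

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-8, max_iter=20):
    """Generic Newton-Raphson: solve F(x) = 0 via x <- x - J(x)^-1 F(x).

    In a load-flow program, F is the vector of active/reactive power
    mismatches and J is the power-flow Jacobian (kept sparse in practice).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx, np.inf) < tol:   # converged: mismatches ~ 0
            break
        x = x - np.linalg.solve(J(x), fx)      # Newton step
    return x

# Toy mismatch equations (placeholders for the power-balance equations):
# f1 = x^2 + y^2 - 4,  f2 = x - y
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
sol = newton_raphson(F, J, [1.0, 0.5])
```

The quadratic convergence of this update is what gives the Newton-Raphson load flow its fast convergence rate.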
The designed program computes the voltage magnitudes and phase angles at each bus of the network under steady-state operating conditions. It also computes the power flow and power losses for all equipment, including transformers and transmission lines, taking into consideration the effects of off-nominal tap and phase-shift transformers, generators, and shunt capacitors.
In this paper, we established a mathematical model of an SI1I2R epidemic disease with a saturated incidence function and a general recovery function for the first disease I1. Considering the basic reproduction number, we obtained conditions for both the disease-free and co-existing cases. The local stability of the equilibrium points is verified using the Routh-Hurwitz criterion, while for global stability we used a suitable Lyapunov function to analyze the endemic spread of the positive equilibrium point. Moreover, we carried out the local bifurcation analysis around both equilibrium points (disease-free and co-existing) and found that the disease-free equilibrium point undergoes a transcritical bifurcation. We conducted numerical simulations that support the theoretical results.
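The Routh-Hurwitz step can be made concrete for the common case of a cubic characteristic polynomial p(x) = x^3 + a1 x^2 + a2 x + a3, where all roots lie in the open left half-plane iff a1 > 0, a3 > 0, and a1*a2 > a3. This is a generic sketch, not tied to the Jacobian of the specific SI1I2R model above:

```python
def routh_hurwitz_cubic(a1, a2, a3):
    """Routh-Hurwitz stability test for p(x) = x^3 + a1 x^2 + a2 x + a3.

    All roots have negative real parts iff a1 > 0, a3 > 0 and a1*a2 > a3.
    """
    return a1 > 0 and a3 > 0 and a1 * a2 > a3

# (x + 1)^3 = x^3 + 3x^2 + 3x + 1: all roots at -1, hence stable
stable = routh_hurwitz_cubic(3.0, 3.0, 1.0)
# x^3 + x^2 + x + 2 violates a1*a2 > a3, hence not stable
unstable = routh_hurwitz_cubic(1.0, 1.0, 2.0)
```

In practice the coefficients a1, a2, a3 come from the characteristic polynomial of the Jacobian evaluated at the equilibrium point under test.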
Background: This study was conducted to evaluate hard palate bone density and thickness during the third and fourth decades of life and their relationships with body mass index (BMI) and body composition, to allow more accurate mini-implant placement. Materials and methods: Computed tomographic (CT) images were obtained for 60 patients (30 males and 30 females) with an age range of 20-39 years. Hard palate bone density and thickness were measured at 20 sites at the intersections of five anteroposterior and four mediolateral reference lines, at 6 mm and 3 mm intervals from the incisive foramen and the mid-palatal suture, respectively. A diagnostic scale operating according to the bioelectrical impedance analysis principle was used to measure body weight and percentages of body fat.
Visual analytics has become an important approach for discovering patterns in big data. As visualization already struggles with the high dimensionality of data, issues like a concept hierarchy on each dimension add more difficulty and make visualization a prohibitive task. A data cube offers multi-perspective aggregated views of large data sets and has important applications in business and many other areas. It has high dimensionality, concept hierarchies, a vast number of cells, and comes with special exploration operations such as roll-up, drill-down, slicing, and dicing. All these issues make data cubes very difficult to explore visually. Most existing approaches visualize a data cube in 2D space and require preprocessing steps. In this paper, we propose a visualization approach for data cubes.
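The roll-up and slice operations mentioned above can be illustrated on a tiny dict-based cube. This is only a minimal sketch of the operations' semantics, not the approach proposed in the paper, and the dimensions and data are invented for illustration:

```python
from collections import defaultdict

# A tiny cube of sales measures keyed by (year, region, product)
cube = {
    (2023, "east", "pen"): 10, (2023, "west", "pen"): 7,
    (2024, "east", "pen"): 12, (2024, "east", "ink"): 5,
}

def roll_up(cube, keep_dims):
    """Roll-up: aggregate the measure over all dimensions not kept."""
    out = defaultdict(int)
    for key, value in cube.items():
        out[tuple(key[d] for d in keep_dims)] += value
    return dict(out)

def slice_cube(cube, dim, value):
    """Slice: fix one dimension at a single member value."""
    return {k: v for k, v in cube.items() if k[dim] == value}

by_year = roll_up(cube, keep_dims=(0,))          # climb to the year level
east_only = slice_cube(cube, dim=1, value="east")  # fix region = east
```

Drill-down is simply the inverse of roll-up: returning from the aggregated view to a finer-grained one.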
Identifying breast cancer using artificial intelligence technologies is valuable and has a great influence on the early detection of disease. It can also save lives by giving patients a better chance of being treated in the earlier stages of cancer. During the last decade, deep neural networks (DNN) and machine learning (ML) systems have been widely used in almost every segment of medical centers due to their accurate identification and recognition of diseases, especially when trained on many datasets/samples. In this paper, a DNN with two hidden layers is proposed, with a reduced number of additions and multiplications in each neuron. The number of bits and the binary-point position of the inputs and weights can be changed using the mask configuration.
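The configurable bits/binary-point idea can be sketched as signed fixed-point quantization. This is an assumption-laden illustration, not the paper's mask-configuration mechanism: a real value is scaled by the number of fractional bits, rounded, and saturated to the representable integer range.

```python
def to_fixed_point(w, total_bits, frac_bits):
    """Quantize w to signed fixed point with `total_bits` bits, of which
    `frac_bits` lie after the binary point, returning the real value."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))        # most negative representable code
    hi = (1 << (total_bits - 1)) - 1     # most positive representable code
    code = max(lo, min(hi, round(w * scale)))
    return code / scale

q = to_fixed_point(0.30, total_bits=8, frac_bits=6)  # quantization step 1/64
sat = to_fixed_point(10.0, total_bits=8, frac_bits=6)  # saturates at 127/64
```

Narrower formats shrink each multiply-accumulate in a neuron at the cost of quantization error, which is the trade-off such a configurable design explores.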
Abstract
The problem of missing data represents a major obstacle for researchers in the process of data analysis in different fields, since this problem recurs in all fields of study, including social, medical, astronomical, and clinical experiments.
The presence of such a problem within the data under study may negatively influence the analysis and may lead to misleading conclusions, since these conclusions result from the large bias caused by that problem. Despite the efficiency of wavelet methods, they too are affected by missing data, in addition to the resulting loss of estimation accuracy.
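The bias mentioned above is easy to demonstrate with a deterministic toy example: when missingness depends on the value itself, a complete-case estimate of the mean is systematically off. The numbers here are invented purely for illustration:

```python
# Toy population: values 1..100, so the true mean is 50.5
data = list(range(1, 101))
true_mean = sum(data) / len(data)

# Value-dependent missingness: suppose values above 70 are never recorded
observed = [x for x in data if x <= 70]
observed_mean = sum(observed) / len(observed)  # mean of 1..70, biased low
bias = observed_mean - true_mean
```

Any analysis (wavelet-based or otherwise) run only on the observed cases inherits this bias, which is why the missing-data mechanism matters as much as the estimator.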