Survival analysis describes the time until the occurrence of an event of interest, such as death or another event that determines what happens to the phenomenon under study. When there is more than one possible endpoint, the setting is called competing risks. The purpose of this research is to apply a dynamic approach to the analysis of discrete survival time in order to estimate the effect of covariates over time, and to model the nonlinear relationship between the covariates and the discrete hazard function using the multinomial logistic model and the multivariate Cox model. To estimate both the discrete hazard function and the time-dependent parameters, two Bayesian estimation methods based on dynamic modeling were used: the Maximum A Posteriori (MAP) method, implemented numerically through Iteratively Weighted Kalman Filter and Smoothing (IWKFS) combined with the Expectation-Maximization (EM) algorithm, and the Hybrid Markov Chain Monte Carlo (HMCMC) method using the Metropolis-Hastings (MH) algorithm and Gibbs sampling (GS). It was concluded that survival analysis based on discretizing the data into a set of intervals is more flexible, since it allows analyzing hazards and diagnosing effects that vary over time. The study was applied to the survival of patients on dialysis until either death due to kidney failure or the competing event, kidney transplantation, occurred. The most important variables affecting the patient's cessation of dialysis were also identified for both events in this research.
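As a hedged illustration of the discrete-time competing-risks setup described above (not the authors' dynamic Bayesian estimator), the sketch below fits cause-specific discrete hazards with a multinomial logit on person-period data; the covariate, event codes, and simulated values are assumptions made only for the example.

# Minimal sketch (assumption-laden): discrete-time competing-risks hazard via a
# multinomial logit on person-period data. Event codes: 0 = still at risk,
# 1 = death, 2 = kidney transplant (competing event).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
age = rng.normal(55, 10, n)                          # hypothetical covariate
time = rng.integers(1, 11, n)                        # observed discrete interval (1..10)
event = rng.choice([0, 1, 2], n, p=[0.3, 0.4, 0.3])  # 0 = censored

# Expand each subject into one row per interval at risk (person-period format).
rows = []
for i in range(n):
    for t in range(1, time[i] + 1):
        status = event[i] if t == time[i] else 0
        rows.append({"interval": t, "age": age[i], "status": status})
pp = pd.DataFrame(rows)

# Cause-specific discrete hazards: P(status = r | at risk, interval, age).
X = sm.add_constant(pp[["interval", "age"]])
fit = sm.MNLogit(pp["status"], X).fit(disp=False)
print(fit.params)   # log-odds of each event type versus remaining at risk

A time-varying effect, as in the dynamic approach, would correspond to letting these coefficients change across intervals rather than holding them fixed.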
In this paper, a nonclassical approach to dynamic programming for the optimal control problem governed by a strongly continuous semigroup is presented. The dual value function VD(·,·) of the problem is defined and characterized, and it is shown to satisfy the dual dynamic programming principle and the dual Hamilton-Jacobi-Bellman equation. Some properties of VD(·,·), such as various kinds of continuity and boundedness, are also studied; these properties are used to give a sufficient condition for optimality. A suitable verification theorem for finding a dual optimal feedback control is proved. Finally, an example is given that illustrates the verification theorem dealing with the sufficient condition for optimality.
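For orientation only, the classical finite-horizon Hamilton-Jacobi-Bellman equation that the dual formulation above parallels can be written as follows; the notation (state x, control u, dynamics f, running cost L, terminal cost g) is a generic textbook assumption and is not the paper's dual equation.

% Generic finite-horizon HJB equation for the value function V(t,x)
% (standard form, assumed here for reference only).
\begin{equation*}
-\,\partial_t V(t,x) \;=\; \inf_{u \in U}\Big\{ L(x,u) + \big\langle \nabla_x V(t,x),\, f(x,u) \big\rangle \Big\},
\qquad V(T,x) = g(x).
\end{equation*}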
This thesis aims to show the effects of developing the traditional manual tax accounting system into an electronic system through activation of the tax identification number (TIN) mechanism. The impact of this development is to facilitate the tax accounting process, reduce tax fraud, and thus increase the tax outcome. To prove the research hypothesis, an electronic system was designed based on the income tax report and the estimation note of individuals, in addition to using Adobe Dreamweaver to write the PHP, HTML, JavaScript, and CSS web languages in which the proposed system was implemented. The research reached a set of conclusions, the most important of which is the insufficiency of the communication methods between the Genera…
This study aimed to analyze the results of the unified national test administered by the Palestinian Ministry of Education in mathematics for eighth-grade students in government schools in Tulkarm Governorate, in order to determine students' performance on this test in light of the variables of gender, educational district, and school type, and to examine the relationship between achievement on this test and students' school achievement and overall grade average. To achieve this, the scores of 3,218 male and female students were analyzed; they …
In this research we attempt to shed light on one of the methods for estimating the structural parameters of linear simultaneous-equation models, a method that provides consistent estimates which sometimes differ from those obtained by the other conventional methods under the general formulation of the K-CLASS estimators. This method is known as Limited Information Maximum Likelihood (LIML), or the Least Variance Ratio (LVR…
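As a reference point (standard textbook notation, not reproduced from the paper), the K-class family for a single structural equation can be written as below; LIML corresponds to choosing k as the smallest variance ratio, which is where the LVR name comes from.

% Structural equation y = Y_1 \gamma + X_1 \beta + u, with W = [Y_1 \; X_1]
% collecting the right-hand-side variables, X the full instrument matrix,
% and M = I - X(X'X)^{-1}X' (standard notation, assumed here).
\begin{equation*}
\hat{\delta}(k) \;=\; \bigl[\,W'(I - kM)\,W\,\bigr]^{-1} W'(I - kM)\,y .
\end{equation*}
% k = 0 gives OLS, k = 1 gives 2SLS, and k = \hat{\lambda}, the minimum of the
% variance-ratio (LVR) criterion, gives the LIML estimator.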
With the increasing demand for remote sensing approaches, such as aerial photography, satellite imagery, and LiDAR, in archaeological applications, there is still a limited number of studies assessing the differences between remote sensing methods in extracting new archaeological finds. Therefore, this work aims to critically compare two types of fine-scale remotely sensed data: LiDAR and Unmanned Aerial Vehicle (UAV) derived Structure from Motion (SfM) photogrammetry. To achieve this, aerial imagery and airborne LiDAR datasets of Chun Castle were acquired, processed, analyzed, and interpreted. Chun Castle is one of the most remarkable ancient sites in Cornwall (southwest England) that had not been surveyed and explored …
Background: Chronic myelogenous leukemia (CML) is a malignant hematological disease of hematopoietic stem cells. It is difficult to adapt treatment to each patient's risk level because there are currently few clinical tests and no molecular diagnostics that can predict a patient's clock for the progression of CML at the time of chronic-phase diagnosis. Biomarkers that can differentiate patients based on outcome at diagnosis are needed for blast crisis prevention and response improvement. Objective: This study is an effort to exploit the SLC25A3 gene as a potential biomarker for CML. Methods: RT-qPCR was applied to assess the expression levels of the SLC25A3 gene. Results: In comparison to the mean ΔCt of the control group, which was found to b…
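As a minimal sketch of the ΔCt comparison mentioned in the results, the snippet below applies the standard 2^(-ΔΔCt) (Livak) relative-expression calculation; all Ct values are illustrative placeholders, not the study's data.

# Minimal sketch: relative SLC25A3 expression from RT-qPCR Ct values using the
# 2^(-ΔΔCt) (Livak) method. Every number below is a hypothetical placeholder.
import numpy as np

ct_target_patients = np.array([24.1, 23.8, 24.5])   # SLC25A3 Ct, patient samples
ct_ref_patients    = np.array([18.0, 17.9, 18.2])   # reference gene Ct, patients
ct_target_control  = np.array([26.0, 26.3, 25.8])   # SLC25A3 Ct, controls
ct_ref_control     = np.array([18.1, 18.0, 18.2])   # reference gene Ct, controls

delta_ct_patients = ct_target_patients - ct_ref_patients   # ΔCt per sample
delta_ct_control  = ct_target_control - ct_ref_control

# ΔΔCt relative to the mean ΔCt of the control group, then fold change.
ddct = delta_ct_patients.mean() - delta_ct_control.mean()
fold_change = 2 ** (-ddct)
print(f"ΔΔCt = {ddct:.2f}, fold change = {fold_change:.2f}")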
Within the framework of big data, energy issues are highly significant. Despite the significance of energy, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligent algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligent algorithms, since this is critical for exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligent algorithms in big data analytics. This work highlights that big data analytics using computational intelligent algorithms generates a very high amo…
Owing to the increasing interest in audit quality among writers, researchers, and regulators of the auditing profession, it has become necessary to examine the extent to which the auditor practices professional skepticism, because of its significant impact on discovering errors and material misstatements in the financial statements, in order to give the financial community confidence in them and to ensure the success of the audit process. The research aims to clarify the concept and importance of the practice of professional skepticism and its effect on the quality of the auditor's performance in Iraq. To achieve the research objectives, the two re…
Twelve species from the Brassicaceae family were studied using two different molecular techniques, RAPD and ISSR; both techniques were used to detect molecular markers associated with genotype identification. RAPD results, from five random primers, revealed 241 amplified fragments, 62 of which were polymorphic (26%).
ISSR results showed that, out of seven primers, three (ISSR3, UBC807, UBC811) could not amplify the genomic DNA; the other primers revealed 183 amplified fragments, 36 of which were polymorphic (20%). The similarity matrix and dendrogram of the genetic distances obtained by combining the two techniques showed that the highest similarity, 0.897, was between the va…
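A hedged sketch of how a similarity matrix and dendrogram of this kind are typically derived from binary band-scoring data (presence/absence per amplified fragment); the toy matrix and the choice of Dice (Nei and Li) similarity with UPGMA linkage are assumptions for illustration, not the study's exact procedure.

# Minimal sketch: genetic similarity and UPGMA dendrogram from binary
# RAPD/ISSR band scores (rows = genotypes, columns = fragments).
# The tiny 0/1 matrix below is a placeholder, not the study's data.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

bands = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
    [0, 1, 1, 1, 1, 0],
    [1, 0, 0, 1, 1, 1],
])
labels = ["sp1", "sp2", "sp3", "sp4"]          # hypothetical genotypes

# Dice distance = 1 - Dice (Nei & Li) similarity for binary band data.
dist = pdist(bands, metric="dice")
similarity = 1 - squareform(dist)
print(np.round(similarity, 3))

# UPGMA (average-linkage) clustering, drawn as a dendrogram.
tree = linkage(dist, method="average")
dendrogram(tree, labels=labels, no_plot=True)  # set no_plot=False to display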
The 2D resistivity imaging technique was applied in an engineering study to investigate subsurface weakness zones within the University of Anbar, western Iraq. The survey was carried out using a dipole-dipole array with an n-factor of 6 and a-spacing values of 2 m and 5 m. The inverse models of the 2D electrical imaging clearly show the resistivity contrast between the anomalous parts of the weakness zones and the background resistivity distribution. The thickness and shape of the subsurface weakness zones were well defined from the 2D imaging using the dipole-dipole array with 2 m a-spacing; the thickness of the weakness zone ranges between 9.5 m and 11.5 m. Whereas the dipole-dipole array with an a-spacing of 5 m and n-factor of 6 allocated …
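As a hedged aside on the dipole-dipole geometry mentioned above (standard relations, not values or results from the study), apparent resistivity for an a-spacing a and separation factor n can be computed as in the sketch below; the voltage and current readings are illustrative placeholders.

# Minimal sketch: geometric factor and apparent resistivity for a
# dipole-dipole array (standard relations; placeholder readings only).
import math

def dipole_dipole_apparent_resistivity(a, n, delta_v, current):
    """rho_a = K * (dV / I), with K = pi * n * (n + 1) * (n + 2) * a."""
    k = math.pi * n * (n + 1) * (n + 2) * a    # geometric factor, metres
    return k * delta_v / current               # apparent resistivity, ohm.m

# Example: a-spacing of 2 m, n-factor of 6, hypothetical reading.
print(dipole_dipole_apparent_resistivity(a=2.0, n=6, delta_v=0.012, current=0.5))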