The theory of probabilistic programming may be conceived in several different ways. As a method of programming, it analyses the implications of probabilistic variation in the parameter space of a linear or nonlinear programming model. Such probabilistic variation in economic models may arise from incomplete information about changes in demand, production, and technology; from specification errors in the econometric relations presumed for different economic agents; from uncertainty of various sorts; and from the consequences of imperfect aggregation or disaggregation of economic variables. In this research we discuss the probabilistic programming problem in which the coefficient b_i is a random variable with a given Laplace distribution.
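As an illustrative sketch (the paper's specific model is not reproduced here), a linear chance constraint with a Laplace-distributed right-hand side has a closed-form deterministic equivalent, because the Laplace quantile function is explicit:

```latex
% Chance constraint at confidence level \alpha > 1/2, with
% b_i \sim \mathrm{Laplace}(\mu_i, s_i):
%   P\Bigl( \textstyle\sum_j a_{ij} x_j \le b_i \Bigr) \ge \alpha
% The Laplace quantile is F^{-1}(p) = \mu_i + s_i \ln(2p) for p \le 1/2,
% so taking p = 1 - \alpha \le 1/2 gives the deterministic equivalent
\sum_j a_{ij} x_j \;\le\; \mu_i + s_i \ln\!\bigl(2(1-\alpha)\bigr).
```

Since ln(2(1−α)) is negative for α > 1/2, the probabilistic requirement tightens the ordinary deterministic constraint, as expected.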
Problem: Cancer is regarded as one of the world's deadliest diseases. Machine learning and its newer branch, deep learning, can facilitate the way cancer is dealt with, especially in the field of cancer prevention and detection. Traditional ways of analysing cancer data have their limits, and cancer data is growing quickly; this opens the door for deep learning, with its powerful ability to analyse and process such data. Aims: In the current study, a deep-learning medical support system for the prediction of lung cancer is presented. Methods: The study uses three different deep learning models (EfficientNetB3, ResNet50, and ResNet101) with the transfer learning concept. The three models are trained using a …
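The EfficientNet/ResNet pipelines themselves are not reproduced here; the following is a minimal NumPy sketch of the transfer-learning idea the abstract relies on: a pretrained feature extractor is frozen, and only a small classification head is trained on the new task. The data and the linear "backbone" are synthetic stand-ins, not the paper's dataset or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image data: 200 samples, 5 input features,
# binary labels (e.g. cancerous vs healthy) defined by a linear rule.
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)

# "Pretrained backbone": a frozen linear feature extractor, standing in
# for the convolutional stages of EfficientNetB3 / ResNet50 / ResNet101.
W_frozen = rng.normal(size=(5, 16))   # never updated during training
feats = X @ W_frozen

# Trainable classification head: logistic regression on the frozen features.
v, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ v + b)))
    grad = p - y                        # dL/dz for cross-entropy loss
    v -= 0.5 * feats.T @ grad / len(y)  # only the head's weights move
    b -= 0.5 * grad.mean()

acc = float(((feats @ v + b > 0) == (y == 1)).mean())
print(f"training accuracy of the transfer-learned head: {acc:.3f}")
```

In a real system the frozen features would come from a network pretrained on a large image corpus; the point of the sketch is only that the backbone's weights stay fixed while the head adapts to the new labels.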

This research aims to examine the effectiveness of a teaching strategy based on the cognitive model of Daniel in developing achievement and the motivation to learn school mathematics among third intermediate grade students, in the light of their study of "Systems of Linear Equations". The research was conducted in the first semester (1439/1440 AH) at Saeed Ibn Almosaieb Intermediate School in Arar, Saudi Arabia. A quasi-experimental design was used. A pre/post achievement test (20 questions) and a pre/post scale of motivation to learn school mathematics (25 items) were applied to two groups: a control group (31 students) and an experimental group (29 students). …
Abstract
The Rayleigh distribution is one of the important distributions used for analysing lifetime data, with applications in reliability studies and physical interpretations. This paper considers four different methods to estimate the scale parameter and the reliability function of the generalized Rayleigh distribution: the maximum likelihood, Bayes, modified Bayes, and minimax estimators under a squared error loss function. The comparison is done through a simulation procedure. …
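Of the four methods, only the maximum-likelihood step is simple enough to sketch here; for the plain one-parameter Rayleigh(σ) distribution (the generalized version and the Bayes/minimax estimators are not reproduced), the MLE of the scale and the plug-in reliability estimate are:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated lifetimes from a one-parameter Rayleigh(sigma) distribution.
sigma_true = 2.0
x = rng.rayleigh(scale=sigma_true, size=20_000)

# MLE of the scale: sigma_hat^2 = sum(x_i^2) / (2n)
sigma_hat = np.sqrt(np.sum(x**2) / (2 * len(x)))

# Plug-in estimate of the reliability function R(t) = exp(-t^2 / (2 sigma^2))
def reliability(t, sigma):
    return np.exp(-t**2 / (2 * sigma**2))

print(sigma_hat, reliability(sigma_true, sigma_hat))
```

At t = σ the true reliability is exp(−1/2) ≈ 0.607, which the plug-in estimate should reproduce closely at this sample size.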

The objective of the study is to determine which has the better predictive ability: the logistic regression model or the linear discriminant function. Both are applied first to the original data and then after reducing the dimensionality of the variables. The data come from the 2012 socio-economic household survey of Baghdad province and comprise a sample of 615 observations with 13 variables, 12 of which are explanatory; the dependent variable is the number of workers and unemployed. A comparison of the two methods showed that the logistic regression model outperformed the linear discriminant function. …
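The survey data itself is not available here, but the two classifiers being compared can be sketched on synthetic two-class data: Fisher's linear discriminant function in closed form, and logistic regression fitted by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic Gaussian classes (a stand-in for the survey data).
n = 200
X0 = rng.normal(loc=0.0, size=(n, 2))
X1 = rng.normal(loc=2.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# --- Fisher linear discriminant function (closed form) ---
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S = ((X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)) / (2 * n - 2)
w = np.linalg.solve(S, mu1 - mu0)          # discriminant direction
c = w @ (mu0 + mu1) / 2                    # midpoint threshold
acc_lda = float(((X @ w > c) == (y == 1)).mean())

# --- Logistic regression via gradient descent ---
Xb = np.hstack([X, np.ones((2 * n, 1))])   # add intercept column
beta = np.zeros(3)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ beta)))
    beta -= 0.1 * Xb.T @ (p - y) / len(y)
acc_log = float(((Xb @ beta > 0) == (y == 1)).mean())

print(f"LDA accuracy: {acc_lda:.3f}, logistic accuracy: {acc_log:.3f}")
```

On Gaussian classes with equal covariance the two methods perform almost identically; logistic regression's advantage, as the study found on the survey data, tends to appear when those distributional assumptions fail.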

The use of real-time machine learning to optimize passport control procedures at airports can greatly improve both the efficiency and the security of the process. To automate and optimize these procedures, AI techniques such as character recognition, facial recognition, predictive algorithms, and automatic data processing can be implemented. The proposed method uses the R-CNN object detection model to detect passport objects in real-time images collected by passport control cameras. This paper describes the step-by-step process of the proposed approach, which includes pre-processing, training and testing the R-CNN model, integrating it into the passport control system, and evaluating its accuracy and speed for efficient passenger flow …
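A full R-CNN cannot be shown in a few lines, but two building blocks that any R-CNN-style detector relies on, intersection-over-union (IoU) and greedy non-maximum suppression (NMS) over candidate passport boxes, can be sketched framework-free (the box values below are illustrative, not from the paper):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]       # highest score first
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        # drop every remaining box that overlaps the kept one too much
        order = order[1:][[iou(boxes[i], boxes[j]) <= thresh for j in order[1:]]]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # the overlapping lower-score box is suppressed
```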

Optical Mark Recognition (OMR) is an important technology for applications that require speedy, high-accuracy processing of a huge volume of hand-filled forms. The aim of this technology is to reduce manual work and human effort, to achieve high accuracy in assessment, and to minimize the time needed to evaluate answer sheets. This paper proposes OMR using a Modified Bidirectional Associative Memory (MBAM). MBAM has two phases, learning and analysis: it learns the answer sheets that contain the correct answers, giving each its own code representing the number of correct answers, and then detects the marks on answer sheets in the analysis phase. The proposal is able to detect no selection, or the selection of more than one choice. …
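The paper's modifications to BAM are not reproduced here, but the standard bidirectional associative memory that MBAM builds on is small enough to sketch: Hebbian learning stores pattern pairs as a sum of outer products, and recall runs the weight matrix in either direction. The bipolar patterns below are illustrative, not the paper's answer-sheet encodings.

```python
import numpy as np

# Two bipolar pattern pairs: x could encode a scanned answer-sheet row,
# y its associated code (values are illustrative only).
x1 = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
x2 = np.array([ 1,  1, -1, -1,  1,  1, -1, -1])
y1 = np.array([ 1, -1,  1, -1])
y2 = np.array([ 1,  1, -1, -1])

# Learning phase: W accumulates the outer products of the pairs.
W = np.outer(x1, y1) + np.outer(x2, y2)

# Analysis (recall) phase: forward pass x -> y, backward pass y -> x.
recall_y = lambda x: np.sign(x @ W)
recall_x = lambda y: np.sign(W @ y)

print(recall_y(x1))   # recovers y1
print(recall_x(y2))   # recovers x2
```

Because x1 and x2 (and y1 and y2) are orthogonal, recall here is exact; with noisy or correlated patterns BAM iterates the two passes until a stable pair is reached.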

A 3D geological model is an essential step in revealing reservoir heterogeneity and the distribution of reservoir properties. In the present study, a three-dimensional geological model of the Mishrif reservoir was built based on data obtained from seven wells together with core data. The methodology includes building a 3D grid and populating it with petrophysical properties (facies, porosity, water saturation, and net-to-gross ratio). The structural model was built from a base contour map obtained from 2D seismic interpretation along with well tops from the seven wells. A simple grid method was used to build the structural framework, with 234 x 278 x 91 grid cells in the X, Y, and Z directions, respectively, and cell lengths equal to 150 meters. …
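The grid dimensions quoted in the abstract translate directly into an array layout; as a small sketch (the property values below are synthetic, not the Mishrif data), the structural framework and one populated property look like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Structural framework: 234 x 278 x 91 cells, 150 m cell length in X and Y
# (dimensions from the abstract; the property values are synthetic).
nx, ny, nz = 234, 278, 91
cell_len = 150.0                      # metres, areal cell size
n_cells = nx * ny * nz                # total number of grid cells
extent_x, extent_y = nx * cell_len, ny * cell_len

# Populate one petrophysical property (porosity) on the grid.
porosity = rng.uniform(0.05, 0.30, size=(nx, ny, nz))

print(n_cells, extent_x, extent_y, round(float(porosity.mean()), 3))
```

The framework therefore holds roughly 5.9 million cells covering about 35 km by 42 km areally; in practice each property would be distributed geostatistically, conditioned on the well and core data, rather than drawn uniformly.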

The development of information systems in recent years has contributed various methods of gathering information to evaluate IS performance. The most common approach used to collect information is the survey. This method, however, suffers one major drawback: decision makers consume considerable time transforming data from survey sheets into analytical programs. This paper therefore proposes a method called 'survey algorithm based on the R programming language', or SABR, for transforming data from survey sheets inside R environments by treating the arrangement of the data as a relational format. R and the relational data format provide an excellent opportunity to manage and analyse the accumulated data. …

In this paper, we are mainly concerned with estimating the cascade reliability model (2+1) based on the inverted exponential distribution, and with comparing the estimation methods used. The maximum likelihood estimator and uniformly minimum variance unbiased estimators are used to estimate the strengths and the stress, k = 1, 2, 3, respectively; then, using the unbiased estimators, we propose a preliminary test single-stage shrinkage (PTSSS) estimator for the case where prior knowledge of the scale parameter is available as an initial value from past experience. The mean squared error (MSE) of the proposed estimator is derived to compare the methods. …
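The cascade and shrinkage machinery is beyond a short sketch, but the distribution at its core is simple: if Y is exponential with rate θ, then X = 1/Y is inverted exponential with density f(x) = (θ/x²)e^(−θ/x), and the MLE of θ has a closed form. A minimal single-sample sketch (not the paper's (2+1) cascade or PTSSS estimator):

```python
import numpy as np

rng = np.random.default_rng(4)

# Inverted exponential: X = 1/Y with Y ~ Exponential(rate theta), so
# f(x) = (theta / x^2) * exp(-theta / x).
theta_true = 2.0
x = 1.0 / rng.exponential(scale=1.0 / theta_true, size=20_000)

# MLE: theta_hat = n / sum(1/x_i)
theta_hat = len(x) / np.sum(1.0 / x)
print(theta_hat)
```

Since 1/X is exponential with mean 1/θ, the MLE is just the reciprocal of the sample mean of the reciprocals, which is why it converges quickly to the true rate.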

A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we might come across in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on the similarities amongst them. In this paper, several mixture regression-based methods were conducted under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters. Observation membership was also inferred and assessed for each method. The results showed that the flexible mixture model outperformed the …
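The two tasks the abstract evaluates, estimating component parameters and inferring observation membership, can be sketched with EM on a plain two-component 1-D Gaussian mixture (a simpler stand-in for the mixture-of-regressions models actually compared in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Data from two well-separated components.
z_true = rng.random(1000) < 0.5
x = np.where(z_true, rng.normal(-3.0, 1.0, 1000), rng.normal(3.0, 1.0, 1000))

# EM for a 2-component Gaussian mixture.
mu = np.percentile(x, [25, 75]).astype(float)   # crude initialisation
var = np.ones(2)
w = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities (posterior membership probabilities)
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, variances from the responsibilities
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

membership = r.argmax(axis=1)        # hard component assignment
print(np.sort(mu).round(2))
```

The responsibilities r give the soft membership that the paper assesses; taking their argmax yields the hard classification of each observation.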