Removing unwanted reflections from a photograph captured through glass improves the performance of visual computing systems. In such photographs, the reflection and transmission layers are linearly superimposed, and decomposing an image into these layers is often a difficult task. Numerous classical separation methods are available in the literature, working either on a single image or requiring multiple images. The major step in reflection removal is the detection of reflection and background edges, since separation of the background and reflection layers depends on the edge categorization results. In this paper, a wavelet transform is used as a prior estimate of the background edges to separate the reflection. Experimental results verify the effectiveness of the proposal in terms of speed and accuracy.
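The abstract does not give implementation details; a minimal sketch of how a wavelet decomposition might be used to flag strong background edges (assuming Python with PyWavelets, a Haar basis, and a hypothetical threshold tau, none of which are stated in the paper) could look like:

    import numpy as np
    import pywt

    def background_edge_prior(image: np.ndarray, tau: float = 0.2) -> np.ndarray:
        """Flag likely background edges from wavelet detail coefficients.

        Rough idea: in-focus background structure tends to produce stronger
        high-frequency detail than the blurred reflection layer, so large
        detail coefficients are treated as background edges. `tau` is a
        hypothetical threshold, not a value from the paper.
        """
        # Single-level 2-D discrete wavelet transform (Haar chosen arbitrarily).
        _, (horizontal, vertical, diagonal) = pywt.dwt2(image, "haar")
        # Combine the three detail sub-bands into one edge-strength map.
        strength = np.sqrt(horizontal**2 + vertical**2 + diagonal**2)
        strength /= strength.max() + 1e-12   # normalize to [0, 1]
        return strength > tau                # boolean mask of background edges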
A fault is an error that affects system behaviour. A software metric is a value that represents the degree to which software processes work properly and where faults are more likely to occur. In this research, we study the effects of removing redundancy and of a log transformation, based on threshold values, on identifying fault-prone classes of software. The study also compares the metric values of the original datasets with those obtained after removing redundancy and applying the log transformation. An e-learning dataset and a system dataset were taken as case studies. The fault ratio ranged from 1% to 31% and 0% to 10% for the original datasets, and from 1% to 10% and 0% to 4% after removing redundancy and log transformation, respectively. These results impacted direct…
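The abstract does not specify the preprocessing pipeline; a minimal sketch of the two steps it names (dropping duplicate rows, then log-transforming the metric columns), written against a hypothetical pandas DataFrame of class-level metrics, might be:

    import numpy as np
    import pandas as pd

    def preprocess_metrics(df: pd.DataFrame, metric_cols: list) -> pd.DataFrame:
        """Remove redundant rows, then log-transform the metric columns.

        `df` is a hypothetical table with one row per software class plus a
        binary fault label; the column layout is illustrative, not the
        paper's actual dataset schema.
        """
        deduped = df.drop_duplicates()      # remove redundancy
        transformed = deduped.copy()
        # log1p handles metric values of zero, where log(0) is undefined.
        transformed[metric_cols] = np.log1p(deduped[metric_cols])
        return transformed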
This research deals with an unusual approach to analyzing simple linear regression via linear programming with the two-phase method known in Operations Research ("O.R."). The estimate here is found by solving an optimization problem after adding artificial variables Ri. Another method of analyzing simple linear regression is also introduced in this research, in which the conditional median of (y) is considered by minimizing the sum of absolute residuals, instead of finding the conditional mean of (y), which depends on minimizing the sum of squared residuals; this is called "median regression". Also, an iteratively reweighted least squares based on the absolute residuals as weights is performed here as another method to…
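In standard notation (the abstract does not write them out), the two fitting criteria being contrasted are

    \hat{\beta}_{OLS} = \arg\min_{\beta_0,\beta_1} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2
    \hat{\beta}_{LAD} = \arg\min_{\beta_0,\beta_1} \sum_{i=1}^{n} |y_i - \beta_0 - \beta_1 x_i|

The iteratively reweighted least squares variant approximates the LAD (median regression) solution by repeatedly solving a weighted least-squares problem with weights w_i = 1 / max(|r_i|, eps), where r_i is the residual from the previous iteration and eps guards against division by zero.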
Estimating the initial oil in place is a crucial topic during the exploration, appraisal, and development of a reservoir. In the current work, two conventional methods were used to determine the initial oil in place: a volumetric method and a reservoir simulation method. Each method requires its own type of data: the volumetric method depends on geological, core, well-log, and petrophysical-property data, while the reservoir simulation method additionally needs capillary pressure versus water saturation, fluid production, and static pressure data for all active wells in the Mishrif reservoir. The petrophysical properties of the studied reservoir are calculated using a neural network technique…
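The abstract does not reproduce the volumetric equation; in its common oil-field form (units assumed here: area in acres, net pay in feet, N in stock-tank barrels), the initial oil in place is

    N = \frac{7758 \, A \, h \, \phi \, (1 - S_w)}{B_{oi}}

where 7758 converts acre-feet to barrels, A is the drainage area, h is the net pay thickness, \phi is porosity, S_w is water saturation, and B_{oi} is the initial oil formation volume factor (RB/STB).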
The physical and elastic characteristics of rocks determine rock strength in general. Rock strength is frequently assessed using porosity well logs such as neutron and sonic logs. The essential parameters for estimating rock mechanics in petroleum engineering research are the uniaxial compressive strength and the elastic modulus, and indirect estimation from well-log data is necessary to measure these variables. This study attempts to create a single regression model that can accurately forecast rock-mechanics characteristics for the Harth Carbonate Formation in the Fauqi oil field. According to the findings of this study, petrophysical parameters are reliable indexes for determining rock mechanical properties, having good performance p…
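The abstract does not state the regression form; a minimal sketch of fitting one linear model that predicts uniaxial compressive strength (UCS) from hypothetical log-derived inputs (sonic transit time dt and porosity phi; both the variable names and the linear form are assumptions) might be:

    import numpy as np

    def fit_ucs_model(dt: np.ndarray, phi: np.ndarray, ucs: np.ndarray):
        """Least-squares fit of UCS = b0 + b1*dt + b2*phi.

        dt, phi, and ucs are 1-D arrays of sonic transit time, porosity,
        and measured strength; this simple linear form is an assumption,
        not the paper's actual model.
        """
        X = np.column_stack([np.ones_like(dt), dt, phi])   # design matrix
        coeffs, *_ = np.linalg.lstsq(X, ucs, rcond=None)   # b0, b1, b2
        predictions = X @ coeffs
        ss_res = np.sum((ucs - predictions) ** 2)
        ss_tot = np.sum((ucs - ucs.mean()) ** 2)
        return coeffs, 1.0 - ss_res / ss_tot               # coefficients, R^2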
The exponential distribution is probably the most important distribution in reliability work. In this paper, estimation of the scale parameter of an exponential distribution is proposed by employing the maximum likelihood estimator and probability-plot methods for different sample sizes. The mean squared error was used as a performance indicator for several assumed values of the parameter, and a computer simulation was carried out to analyze the obtained results.
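For the maximum likelihood part, the scale parameter theta of an exponential distribution parameterized by its mean has the closed-form MLE theta-hat = sample mean; a small Monte Carlo estimate of its mean squared error, in the spirit of the simulation the abstract describes, might look like:

    import numpy as np

    def mle_mse(theta: float, n: int, trials: int = 10_000, seed: int = 0) -> float:
        """Monte Carlo MSE of the exponential-scale MLE (the sample mean).

        theta is the true scale (mean) and n the sample size; the trial
        count and seed are arbitrary illustrative choices.
        """
        rng = np.random.default_rng(seed)
        samples = rng.exponential(scale=theta, size=(trials, n))
        estimates = samples.mean(axis=1)    # MLE of the scale parameter
        return float(np.mean((estimates - theta) ** 2))

    # The MSE shrinks roughly like theta**2 / n as the sample size grows.
    for n in (10, 50, 200):
        print(n, mle_mse(theta=2.0, n=n))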
This study discusses a biased estimator of the negative binomial regression model known as the Liu estimator. This estimator is used to reduce the variance and overcome the problem of multicollinearity between the explanatory variables. Other estimators, such as the ridge regression and maximum likelihood estimators, were also considered. This research aims at theoretical comparisons between the new estimator (the Liu estimator) and the estimators…
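The abstract does not display the estimator; in the form commonly used for generalized linear models such as the negative binomial model (stated here as background, not quoted from the paper), the Liu estimator shrinks the maximum likelihood estimate as

    \hat{\beta}_{Liu} = (X^{\top} \hat{W} X + I)^{-1} (X^{\top} \hat{W} X + dI) \, \hat{\beta}_{ML}, \qquad 0 < d < 1,

where \hat{W} is the weight matrix from the iteratively reweighted likelihood fit and d is the Liu biasing parameter; d = 1 recovers the maximum likelihood estimator.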
Background: Machine learning relies on a mix of analytic techniques, including regression analysis. There have been no attempts to deploy a sinusoidal transformation of the data to enhance linear regression models.
Objectives: We aim to optimize linear models by implementing a sinusoidal transformation to minimize the sum of squared errors.
Methods: We implemented non-Bayesian statistics using SPSS and MATLAB. We used Excel to generate 30 trials of linear regression models, each with 1,000 observations. We utilized SPSS linear regression, the Wilcoxon signed-rank test, and Cronbach's alpha statistics to evaluate the performance of the optimization model.
Results: The sinusoidal…
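The abstract is cut off before the transformation is defined; one plausible reading, sketched purely as an illustration (the model form and the frequency omega are assumptions, not the paper's method), is to augment an ordinary linear fit with a sinusoidal feature and keep it only when it lowers the sum of squared errors:

    import numpy as np

    def fit_with_sinusoid(x: np.ndarray, y: np.ndarray, omega: float = 1.0) -> dict:
        """Compare plain and sinusoid-augmented linear least squares.

        Fits y ~ b0 + b1*x and y ~ b0 + b1*x + b2*sin(omega*x); both the
        augmented form and omega are guesses, since the abstract does not
        specify the paper's transformation.
        """
        designs = {
            "linear": np.column_stack([np.ones_like(x), x]),
            "linear+sin": np.column_stack([np.ones_like(x), x, np.sin(omega * x)]),
        }
        sse = {}
        for name, X in designs.items():
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse[name] = float(np.sum((y - X @ beta) ** 2))
        return sse  # keep the sinusoidal term only if it reduces the SSE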
The process of digital transformation is considered one of the most influential topics in circulation at present, as it seeks to integrate computer-based technologies into the public services provided by companies and institutions. To achieve digital transformation, foundations and milestones must be established, relying on a set of employee skills and involving customers in developing the process. Today, all governments are seeking electronic transformation by converting all public services to digital form, and the accompanying changes in cybersecurity must be taken into account, since cybersecurity constitutes a large part of the priorities of nations and companies. The vulnerability of cyberspace, the development of technologies and devices, and the use…
In modern times, face recognition is one of the vital areas of computer vision, for many reasons involving the availability and accessibility of technologies and commercial applications. Face recognition, stated briefly, is automatically recognizing a person from an image or video frame. In this paper, an efficient face recognition algorithm is proposed that exploits wavelet decomposition to extract the most important and discriminative features of the face, and the eigenface method to classify faces according to the minimum distance to the feature vectors. The Faces94 database is used to test the method. Excellent recognition with minimal computation time is obtained, with accuracy reaching 100% and recognition time decreasing…
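The abstract gives only the outline; a compact sketch of the pipeline it names (wavelet approximation coefficients as features, then eigenface-style PCA with minimum-distance matching; the wavelet basis, level, and number of components are assumptions) might be:

    import numpy as np
    import pywt

    def wavelet_features(image: np.ndarray) -> np.ndarray:
        """Low-frequency approximation sub-band as a compact face descriptor."""
        approx, _ = pywt.dwt2(image, "haar")    # basis and level assumed
        return approx.ravel()

    def train_eigenfaces(features: np.ndarray, k: int = 20):
        """PCA on the feature matrix (one row per training face)."""
        mean = features.mean(axis=0)
        centered = features - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:k]                           # top-k eigenfaces
        return mean, basis, centered @ basis.T   # projected training set

    def recognize(query: np.ndarray, mean, basis, projections) -> int:
        """Index of the training face at minimum Euclidean distance."""
        coords = (query - mean) @ basis.T
        return int(np.argmin(np.linalg.norm(projections - coords, axis=1)))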