Big data analysis has important applications in many areas, such as sensor networks and connected healthcare. The high volume and velocity of big data pose many challenges to data analysis. One possible solution is to summarize the data, providing a manageable structure that holds a scalable summarization for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms, such as decision trees and nearest neighbor search. The proposed method can handle streaming data efficiently and, for entropy discretization, provides the optimal split value.
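The abstract does not specify the algorithm's details, but the core of entropy discretization can be sketched independently of the summarization structure: scan the candidate cut points of a numeric attribute and keep the one that minimizes the weighted class entropy of the two resulting partitions. A minimal sketch (exhaustive scan over a plain list, not the paper's multi-resolution method):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_entropy_split(values, labels):
    """Return the split value minimizing the weighted entropy of the
    two halves; candidate cuts are midpoints between distinct values."""
    pairs = sorted(zip(values, labels))
    best_score, best_split = float("inf"), None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no cut between equal values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if score < best_score:
            best_score = score
            best_split = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_split
```

On perfectly separable data the scan recovers the midpoint between the two class clusters, e.g. `best_entropy_split([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])` yields `6.5`.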
Background: Implantology is a fast-growing area in dentistry. One of the most common issues encountered in dental implantation procedures is the lack of adequate preoperative planning. Conventional radiography may not be able to assess the true regional three-dimensional anatomical presentation. Multi-Slice Computed Tomography provides data in three-dimensional format, offering information on craniofacial anatomy for diagnosis; this technology enables the virtual placement of an implant in a three-dimensional model of the patient's jaw (dental planning). Patients, Material and Methods: The sample consisted of 72 Iraqi patients indicated for dental implants (34 male and 38 female), with ages ranging between 20 and 70 years. They were examined during a time p
In this paper, Bayes estimators of the parameter of the Maxwell distribution have been derived along with the maximum likelihood estimator. The non-informative priors, Jeffreys and the extension of the Jeffreys prior, have been considered under two different loss functions, the squared error loss function and the modified squared error loss function, for comparison purposes. A simulation study has been developed in order to gain insight into the performance on small, moderate, and large samples. The performance of these estimators has been explored numerically under different conditions. The efficiency of the estimators was compared according to the mean square error (MSE). The results of the comparison by MSE show that the efficiency of the Bayes estimators
In this paper, Bayes estimators for the shape and scale parameters of the Gamma distribution under the entropy loss function have been obtained, assuming Gamma and Exponential priors for the shape and scale parameters, respectively. Moment and maximum likelihood estimators and Lindley's approximation have been used effectively in the Bayesian estimation. Based on the Monte Carlo simulation method, those estimators are compared depending on the mean squared errors (MSEs). The results show that the performance of the Bayes estimator under the entropy loss function is better than the other estimates in all cases.
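The paper's two-parameter setting needs Lindley's approximation, but the entropy loss rule itself is simple: for loss L(δ, θ) ∝ δ/θ − ln(δ/θ) − 1, the Bayes estimator is [E(1/θ | data)]⁻¹, rather than the posterior mean used under squared error loss. A self-contained illustration in a conjugate case (an assumed exponential likelihood with a Gamma prior on the rate, not the paper's Gamma-likelihood model), where the Gamma(a, rate b) posterior gives E(1/θ) = b/(a−1):

```python
def posterior_params(a0, b0, data):
    """Conjugate update: exponential likelihood with a Gamma(a0, rate b0)
    prior on the rate theta -> Gamma(a0 + n, b0 + sum(data)) posterior."""
    return a0 + len(data), b0 + sum(data)

def bayes_self(a, b):
    """Bayes estimator under squared error loss: the posterior mean a/b."""
    return a / b

def bayes_entropy(a, b):
    """Bayes estimator under entropy loss: [E(1/theta)]^-1 = (a - 1)/b,
    valid for a > 1."""
    return (a - 1) / b
```

The entropy-loss estimate is always below the posterior mean by a factor (a−1)/a, reflecting that entropy loss penalizes overestimation of θ more heavily.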
Classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and have been used repeatedly in many fields. These algorithms suffer from the problem of imbalanced data, where some classes have far more instances than others. Imbalanced data result in poor performance and bias toward one class at the expense of the others. In this paper, we propose three techniques based on the Over-Sampling (O.S.) technique for processing an imbalanced dataset, redistributing it and converting it into a balanced dataset. These techniques are Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border
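The three proposed variants build on plain SMOTE, whose core idea is compact: each synthetic minority point is a random interpolation between a minority sample and one of its k nearest minority neighbors. A minimal sketch of that baseline (brute-force neighbor search, not the paper's improved variants):

```python
import math
import random

def smote(minority, k=3, n_new=10, seed=0):
    """Generate n_new synthetic minority points by interpolating each
    between a random minority sample and one of its k nearest minority
    neighbors (Euclidean distance, brute force)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # nearest neighbors of x among the minority class, excluding x itself
        neighbors = sorted(minority, key=lambda y: math.dist(x, y))[1:k + 1]
        y = rng.choice(neighbors)
        t = rng.random()  # interpolation weight in [0, 1)
        out.append(tuple(a + t * (b - a) for a, b in zip(x, y)))
    return out
```

Because each synthetic point lies on a segment between two existing minority points, the oversampled set stays inside the convex hull of the minority class rather than duplicating samples outright.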
A comparison of double informative and non-informative priors assumed for the parameter of the Rayleigh distribution is considered. Three different sets of double priors are included for the single unknown parameter of the Rayleigh distribution. We have assumed three double priors: the square-root inverted gamma (SRIG) with the natural conjugate family of priors, the square-root inverted gamma with the non-informative distribution, and the natural conjugate family of priors with the non-informative distribution. The data are generated from three cases of the Rayleigh distribution for different sample sizes (small, medium, and large), and Bayes estimators for the parameter are derived under a squared error
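The double-prior constructions are specific to the paper, but the simulation backbone it describes (generate Rayleigh data, then form a posterior-mean estimate) can be sketched in a single conjugate case. Assume the parameterization f(x; λ) = (x/λ) exp(−x²/(2λ)), under which an inverted-gamma IG(a, b) prior on λ is conjugate: with T = Σx², the posterior is IG(a + n, b + T/2). A minimal sketch under those assumptions:

```python
import math
import random

def rrayleigh(lam, n, rng):
    """Inverse-CDF draws from f(x; lam) = (x/lam) exp(-x^2 / (2 lam)):
    X = sqrt(-2 lam ln U) with U uniform on (0, 1]."""
    return [math.sqrt(-2.0 * lam * math.log(1.0 - rng.random()))
            for _ in range(n)]

def bayes_rayleigh(data, a, b):
    """Posterior mean of lam under an inverted-gamma IG(a, b) prior and
    squared error loss: (b + T/2) / (a + n - 1), with T = sum of squares."""
    t = sum(x * x for x in data)
    return (b + t / 2) / (a + len(data) - 1)
```

Repeating this over many simulated samples and prior choices, and averaging the squared deviation from the true λ, reproduces the kind of MSE comparison the abstract describes.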
In this paper, a Monte Carlo simulation technique is used to compare the performance of the standard Bayes estimators of the reliability function of the one-parameter exponential distribution. Three types of loss functions are adopted, namely, the squared error loss function (SELF), the precautionary error loss function (PELF), and the linear exponential error loss function (LINEX), with informative and non-informative priors. The criterion of integrated mean square error (IMSE) is employed to assess the performance of such estimators.
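For the exponential distribution the reliability function is R(t) = exp(−λt), and under squared error loss with a conjugate Gamma(a, rate b) prior on λ the Bayes estimator has a closed form: the posterior is Gamma(a + n, b + S) with S = Σx, so E[e^(−λt) | data] = ((b + S)/(b + S + t))^(a+n). A sketch comparing it with the MLE plug-in estimate, under the SELF case only (the paper also covers PELF and LINEX):

```python
import math

def reliability_estimates(data, t, a=1.0, b=1.0):
    """Exponential reliability R(t) = exp(-lam t): returns the MLE plug-in
    estimate and the Bayes estimate under squared error loss with a
    Gamma(a, rate b) prior on lam (a, b are illustrative defaults)."""
    n, s = len(data), sum(data)
    mle = math.exp(-t * n / s)              # plug-in with lam_hat = n / S
    bayes = ((b + s) / (b + s + t)) ** (a + n)  # posterior mean of exp(-lam t)
    return mle, bayes
```

Averaging the squared error of each estimate over a grid of t values and many simulated samples yields the IMSE criterion the abstract uses.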
In this work, we prove, by employing the mapping cone, that the sequence and the subsequence of characteristic zero are exact and a subcomplex, respectively, in the case of the partition (6, 6, 4).