A database is an organized collection of data, arranged and distributed so that users can access the stored information easily and conveniently. However, in the era of big data, traditional data-analytics methods may not be able to manage and process such large volumes of data. To develop an efficient way of handling big data, this work studies the use of the Map-Reduce technique to handle big data distributed on the cloud. The approach was evaluated on a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear improvement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG researchers and specialists with an easy and fast method of handling EEG big data.
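As a rough illustration of the kind of Map-Reduce job described above (not the paper's actual implementation), the following minimal Hadoop Streaming sketch in Python assumes a hypothetical record format of `subject_id <TAB> channel <TAB> sample_value` for the EEG data; the mapper emits per-channel values and the reducer averages them. The field layout and file names are assumptions for illustration only.

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming sketch: per-channel averaging of EEG samples.
# Assumed record format: subject_id <TAB> channel <TAB> sample_value
import sys

def mapper():
    """Emit (channel, sample_value) pairs, one per input record."""
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 3:
            continue                      # skip malformed records
        _, channel, value = parts
        try:
            print(f"{channel}\t{float(value)}")
        except ValueError:
            continue

def reducer():
    """Average values per channel; Hadoop delivers mapper output sorted by key."""
    current_key, total, count = None, 0.0, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if current_key is not None and key != current_key:
            print(f"{current_key}\t{total / count}")
            total, count = 0.0, 0
        current_key = key
        total += float(value)
        count += 1
    if current_key is not None:
        print(f"{current_key}\t{total / count}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    mapper() if role == "map" else reducer()
```

Such a script could be launched with Hadoop Streaming, e.g. `hadoop jar hadoop-streaming.jar -mapper "python3 job.py map" -reducer "python3 job.py reduce" -input eeg/ -output eeg_out/`, where the jar name and paths are placeholders.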
The researchers sought to determine the impact of customer contact on operations performance. Within customer contact there are two times: the first is the total time required to create a service, within which contact time occurs; the second is the client contact time itself, meaning the time that records the physical presence of the customer during the service process. The study concentrated on cost (labor productivity), quality (patient-to-doctor ratio), speed (cycle time), and flexibility (flexibility range), while excluding the innovation variable because it was impossible to measure at the Specialty Center for Dental in al-alwia, since the center lacks mechanisms t
The research started from the basic objective of tracking the reality of organizational excellence in educational organizations through practical application. Methodologically, the research was based on examining organizational excellence as a way of evaluating institutional performance. Tikrit University was selected as a case study to examine how the dimensions of organizational excellence are applied in practice. The analysis covered ten periods during the year, by month. For the accuracy of the test and its averages, it was preferable to use the t-test to determine the significance of the results compared to the basic criteria.
The research found that there is an o
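A minimal sketch of the kind of t-test comparison the abstract describes, testing period averages against a fixed benchmark criterion. The scores, benchmark value, and significance level below are invented placeholders, not the study's data.

```python
# Sketch: one-sample t-test of period averages against a benchmark criterion.
# The scores and the benchmark are placeholder values, not the study's data.
from scipy import stats

period_scores = [3.4, 3.7, 3.1, 3.8, 3.5, 3.6, 3.9, 3.3, 3.7, 3.6]  # ten periods
benchmark = 3.0                                   # assumed basic criterion

t_stat, p_value = stats.ttest_1samp(period_scores, popmean=benchmark)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:                                # conventional significance level
    print("Mean performance differs significantly from the benchmark.")
```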
Content-based image retrieval (CBIR) is a technique used to retrieve images from an image database. However, the CBIR process suffers from low accuracy when retrieving images from an extensive image database and from a lack of privacy protection for the images. This paper addresses the accuracy issue using deep learning techniques, namely a convolutional neural network (CNN), and provides the necessary privacy for images using the fully homomorphic encryption scheme of Cheon, Kim, Kim, and Song (CKKS). To achieve these aims, a system named RCNN_CKKS is proposed, consisting of two parts. The first part (offline processing) extracts automated high-level features from the flatten layer of a convolutional neural network (CNN) and then stores these features in a
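A minimal sketch of the two ingredients the abstract mentions: extracting a flattened high-level feature vector with a pretrained CNN and encrypting it under CKKS. MobileNetV2 from Keras and the TenSEAL library are used here as stand-ins; the actual RCNN_CKKS system, its feature dimensions, and its encryption parameters are not specified in the abstract, so everything below is an assumption for illustration only.

```python
# Sketch: CNN feature extraction + CKKS encryption of the feature vector.
# MobileNetV2 and TenSEAL are stand-ins; all parameters are illustrative.
import numpy as np
import tensorflow as tf
import tenseal as ts

# 1. Pretrained CNN used as a high-level feature extractor
#    (global average pooling plays the role of the flatten layer here).
cnn = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))

def extract_features(image_batch: np.ndarray) -> np.ndarray:
    """image_batch: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(image_batch)
    return cnn.predict(x)                 # shape (N, 1280)

# 2. CKKS context (illustrative security/precision parameters).
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

def encrypt_features(features: np.ndarray):
    """Encrypt each feature vector as a CKKS ciphertext vector."""
    return [ts.ckks_vector(context, row.tolist()) for row in features]

# Example: an encrypted query vector can be scored against a stored plain
# feature vector via enc_query.dot(stored_vector) without decryption.
```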
All businesses seek to improve their profits through various means, most notably their marketing channels, which ensure the safe and timely delivery of their products to customers while keeping losses to a minimum. Insurance companies pay great attention to marine insurance losses because these often amount to huge sums compared with other losses; hence the research problem, which is centred on the type and size of the impact exerted by the channel. The Iraqi insurance company was chosen for application using the intentional (purposive) sampling method because this company is closely related to the subject matter. The research has reached a set of conclusions, most notably that the choice of i
Spatial data observed on a group of areal units is common in scientific applications. The usual hierarchical approach for modeling this kind of dataset is to introduce a spatial random effect with an autoregressive prior. However, the usual Markov chain Monte Carlo scheme for this hierarchical framework requires the spatial effects to be sampled from their full conditional posteriors one by one, resulting in poor mixing. More importantly, it makes the model computationally inefficient for datasets with a large number of units. In this article, we propose a Bayesian approach that uses the spectral structure of the adjacency matrix to construct a low-rank expansion for modeling spatial dependence. We propose a pair of computationally efficient estimati
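As a rough illustration of a low-rank spectral expansion (not the authors' actual estimators), the sketch below builds a basis from the leading eigenvectors of an adjacency matrix and represents the spatial random effect as a linear combination of that basis. The lattice, rank, and coefficient values are assumptions made up for the example.

```python
# Sketch: low-rank spatial random effect from the spectral structure of an
# adjacency matrix. Grid size, rank q, and coefficients are illustrative.
import numpy as np

def grid_adjacency(nrow: int, ncol: int) -> np.ndarray:
    """Rook-neighbour adjacency matrix for an nrow x ncol lattice."""
    n = nrow * ncol
    A = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            if c + 1 < ncol:
                A[i, i + 1] = A[i + 1, i] = 1      # east neighbour
            if r + 1 < nrow:
                A[i, i + ncol] = A[i + ncol, i] = 1  # south neighbour
    return A

A = grid_adjacency(10, 10)                  # 100 areal units
eigvals, eigvecs = np.linalg.eigh(A)        # spectral decomposition (A symmetric)

q = 15                                      # rank of the expansion (assumed)
basis = eigvecs[:, np.argsort(eigvals)[::-1][:q]]   # leading q eigenvectors

# Spatial random effect phi = basis @ delta, where delta has only q entries,
# so estimation works in q dimensions instead of the number of units.
rng = np.random.default_rng(0)
delta = rng.normal(size=q)                  # placeholder coefficients
phi = basis @ delta                         # length-100 spatial effect
```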