A database is an organized collection of data, structured and distributed so that users can access the stored information easily and conveniently. In the era of big data, however, traditional data-analytics methods may be unable to manage and process such large volumes of data. To develop an efficient way of handling big data, this work studies the use of the MapReduce technique to handle big data distributed on the cloud. The approach was evaluated on a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear improvement in managing and processing the EEG big data, with an average 50% reduction in response time. The results provide EEG researchers and specialists with an easy and fast method of handling EEG big data.
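As a minimal sketch of the map-reduce pattern this abstract refers to (not the authors' actual Hadoop pipeline), the example below runs the map, shuffle, and reduce phases over hypothetical EEG-like (channel, amplitude) records; all names and values here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical EEG-like records: (channel, amplitude) samples.
records = [("Fp1", 42.0), ("Fp2", 40.5), ("Fp1", 43.1), ("Cz", 39.9)]

# Map phase: emit one (key, value) pair per input record.
def mapper(record):
    channel, amplitude = record
    return channel, amplitude

# Shuffle phase: group emitted values by key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: aggregate each group independently (here, mean amplitude per channel).
def reducer(key, values):
    return key, sum(values) / len(values)

grouped = shuffle(map(mapper, records))
print(dict(reducer(k, v) for k, v in grouped.items()))
# {'Fp1': 42.55, 'Fp2': 40.5, 'Cz': 39.9}
```

In a real Hadoop deployment these three phases run in parallel across cluster nodes, which is where the reported response-time reduction would come from.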
This research mainly addresses the role of performance-focused activity-based costing (PFABC) in reducing production cost and improving the competitive advantage of industrial economic units in a modern business environment dominated by rapid developments and changes, which must be kept pace with to ensure survival and continuity. The research problem is the inability of traditional cost methods to provide unit managements with useful information for many administrative decisions, particularly decisions related to the product and to calculating its costs soundly when needed, and the ability to replace such methods …
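As a hedged sketch of the basic activity-based costing arithmetic underlying PFABC (activity cost = cost-driver rate times actual driver quantity), with invented activity names, rates, and quantities:

```python
# Hypothetical activities: (cost-driver rate in $, actual driver units consumed).
activities = {
    "machining":  (12.5, 40),   # $/machine-hour, hours used
    "inspection": (8.0, 15),    # $/inspection-hour, hours used
    "setup":      (50.0, 3),    # $/setup, number of setups
}

# Product cost: sum over activities of rate * actual quantity consumed.
product_cost = sum(rate * qty for rate, qty in activities.values())
print(f"Total production cost: ${product_cost:,.2f}")  # $770.00
```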
The research aimed to achieve several objectives involving two variables: the influencing factors and the aggregate-planning alternatives for the workforce at Al-Yarmouk Educational Hospital. The research started from a problem focused on finding solutions to fluctuations in demand and limitations in capacity, while the importance of the study stems from diagnosing the suitable strategy and adopting the suitable alternatives, given their importance in meeting demand for the health service provided by the hospital. The study is based on hypothesized correlation and impact relationships among the mentioned variables in the surgery and internal-diseases departments. The research depends on …
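The hospital study's actual model is not shown here, but as an illustrative sketch of comparing two classic aggregate-planning alternatives (a level workforce versus a chase strategy) against fluctuating monthly demand for staff-hours, with all figures invented:

```python
# Hypothetical monthly demand for staff-hours in one hospital department.
demand = [1200, 1500, 1100, 1700, 1300, 1600]
regular_rate, overtime_rate = 10.0, 15.0  # assumed $/staff-hour

# Level strategy: staff for average demand, cover peaks with overtime.
level_capacity = sum(demand) / len(demand)
level_cost = sum(level_capacity * regular_rate
                 + max(0, d - level_capacity) * overtime_rate
                 for d in demand)

# Chase strategy: match capacity to demand each month
# (hiring/layoff costs are ignored in this simplified sketch).
chase_cost = sum(d * regular_rate for d in demand)

print(f"Level: ${level_cost:,.0f}  Chase: ${chase_cost:,.0f}")
```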
The current study aims to apply investment-decision evaluation methods to extract the highest value and reduce the economic and environmental costs of the health sector in line with its strategy. To achieve the objectives of the study, the researcher relied on the deductive approach on the theoretical side, collecting sources and previous studies, and on the applied practical approach, relying on the data and reports of Amir Almuminin Hospital for the period (2017-2031) for the purpose of evaluating investment decisions in the hospital. The study reached a set of conclusions, the most important of which is the failure to apply …
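The abstract does not name the specific evaluation methods used, but net present value (NPV) is a standard one; a minimal sketch with assumed cash flows and discount rate:

```python
# Hypothetical annual net cash flows ($); year 0 is the initial outlay.
cash_flows = [-500_000, 120_000, 140_000, 160_000, 180_000]
discount_rate = 0.08  # assumed cost of capital

# NPV = sum of cash_flow_t / (1 + r)**t; accept the investment if NPV > 0.
npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
print(f"NPV = ${npv:,.2f} -> {'accept' if npv > 0 else 'reject'}")
```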
Loanwords are words transferred from one language to another that become an essential part of the borrowing language. Loanwords pass from a source language into a recipient language for many reasons. Detecting these loanwords is a complicated task because there are no standard specifications for transferring words between languages, and detection accuracy is therefore low. This work tries to improve the accuracy of detecting loanwords, taking Turkish and Arabic as a case study. The proposed system contributes to finding all possible loanwords using any set of characters, arranged either alphabetically or randomly. It then processes distortion in pronunciation and solves the problem of missing letters …
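The paper's exact matching procedure is truncated here, but loanword candidates are commonly screened with a string-similarity measure; a minimal sketch using Python's standard-library difflib, with made-up transliterated word pairs:

```python
from difflib import SequenceMatcher

# Hypothetical transliterated (Turkish, Arabic) word pairs to screen.
pairs = [("kitap", "kitab"), ("defter", "daftar"), ("masa", "qalam")]

# Flag a pair as a loanword candidate when character-level similarity is high.
THRESHOLD = 0.7
for turkish, arabic in pairs:
    score = SequenceMatcher(None, turkish, arabic).ratio()
    if score >= THRESHOLD:
        print(f"candidate loanword: {turkish} ~ {arabic} (similarity {score:.2f})")
```

A real system would additionally normalize the two scripts and model systematic sound changes (the pronunciation distortion the abstract mentions) before scoring.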
This research tries to reveal how to manage and control a business's competitive edge by building managerial skills at various organizational levels. The research aims to identify the nature of the various technical, human, and intellectual skills of leaders whose superiority lies in their competitiveness in the applied field at the General Company for Constructional Industries, and to test the surveyed minor and major changes through a questionnaire used to collect information from officials. The sample comprised 45 directors. The data were analyzed using several statistical methods and programs, most prominently SPSS, which was used to compute the arithmetic mean, standard deviation, and correlation coefficient …
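The same descriptive statistics the study extracted with SPSS (mean, standard deviation, Pearson correlation) can be reproduced in a few lines; a sketch with invented Likert-scale questionnaire scores:

```python
from statistics import mean, stdev, correlation  # correlation needs Python 3.10+

# Hypothetical 1-5 Likert scores for two questionnaire variables.
skills = [4, 5, 3, 4, 2, 5, 4, 3]
advantage = [4, 4, 3, 5, 2, 5, 4, 2]

print(f"mean = {mean(skills):.2f}, std dev = {stdev(skills):.2f}")
print(f"Pearson r = {correlation(skills, advantage):.2f}")
```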
This research is a modest contribution to completing the picture of the role of managerial leaders in the state companies belonging to the Ministry of Transport and Communication / Iraq. The research adopted the descriptive-analytical methodology and the field-study technique, using a questionnaire as the data-collection tool; 58 forms were distributed to the research sample. The sample was deliberately selected (general manager, assistant general manager, and heads of departments). The questionnaires and the main hypotheses, represented by the existence of a significant correlation and impact between successful managerial leadership and crisis management, were analyzed using the SPSS software. The results were identical …
The Noor oil field is one of the smallest fields in Missan province. Twelve wells penetrate the Mishrif Formation in the Noor field, and eight of them were selected for this study. The Mishrif Formation is one of the most important reservoirs in the Noor field; it consists of one anticlinal dome and is bounded by the Khasib Formation at the top and the Rumaila Formation at the bottom. The reservoir was divided into eight units separated by isolating units, following the subdivision adopted in surrounding fields.
In this paper, frequency-distribution histograms of porosity, permeability, and water saturation were plotted for the MA unit of the Mishrif Formation in the Noor field and then transformed to a normal distribution by applying the Box-Cox transformation algorithm …
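A minimal sketch of the Box-Cox step described above, using scipy.stats.boxcox on synthetic, strictly positive porosity-like values (the formation data itself is not available here):

```python
import numpy as np
from scipy import stats

# Synthetic right-skewed, strictly positive values standing in for porosity data.
rng = np.random.default_rng(0)
porosity = rng.lognormal(mean=-2.0, sigma=0.5, size=500)

# Box-Cox: y = (x**lmbda - 1) / lmbda (log(x) when lmbda == 0); lmbda fitted by MLE.
transformed, lmbda = stats.boxcox(porosity)
print(f"fitted lambda = {lmbda:.3f}")
print(f"skewness before = {stats.skew(porosity):.3f}, after = {stats.skew(transformed):.3f}")
```

The skewness printed before and after the transform shows how close the transformed histogram is to the symmetric, normal-like shape the paper relies on.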
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, typically involving human annotators with extensive background knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically, and a larger amount of data generally yields a better DL model, although performance is also application-dependent. This issue is the main barrier for …