Wireless sensor applications are subject to tight energy constraints, and most of the energy is consumed in communication between wireless nodes. Clustering and data aggregation are the two most widely used strategies for reducing energy usage and extending the lifetime of wireless sensor networks. In target tracking applications, a large amount of redundant data is produced regularly, so deploying effective data aggregation schemes is vital for eliminating this redundancy. This work conducts a comparative study of research approaches that employ clustering techniques for efficiently aggregating data in target tracking applications, since the choice of clustering algorithm can directly affect the quality of the data aggregation process. In this paper, we highlight the gains of existing node-clustering-based data aggregation schemes, along with a detailed discussion of their advantages and of the issues that may degrade their performance. The boundary issues in each type of clustering technique are also analyzed. Simulation results reveal that the efficacy and validity of these clustering-based data aggregation algorithms are limited to specific sensing situations, and that they fail to exhibit adaptive behavior under other environmental conditions.
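As a rough illustration of the common idea behind the surveyed schemes (not any specific scheme from the literature), the sketch below fuses redundant target readings at a hypothetical cluster head before forwarding a single summary packet to the sink; the node IDs, readings, and the simple averaging rule are assumptions made for illustration only.

```python
from statistics import mean

# Hypothetical readings reported by member nodes of one cluster
# (node_id -> estimated target distance); values are made up.
cluster_readings = {"n1": 12.4, "n2": 12.9, "n3": 12.6, "n4": 31.0}

def aggregate_at_cluster_head(readings, outlier_threshold=5.0):
    """Fuse member readings into one summary packet.

    A plain average with a crude outlier filter stands in for the
    aggregation functions used by the surveyed schemes.
    """
    values = list(readings.values())
    center = mean(values)
    kept = [v for v in values if abs(v - center) <= outlier_threshold]
    return {"estimate": mean(kept), "members": len(readings), "used": len(kept)}

# The cluster head transmits one packet instead of four, saving radio energy.
print(aggregate_at_cluster_head(cluster_readings))
```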
Abstract Background: Coronavirus disease 2019 (COVID-19), caused by the novel coronavirus SARS-CoV-2, is primarily a pulmonary disease that can lead to cardiac, hematologic, and renal complications. Anticoagulants are used for COVID-19 patients because the infection increases the risk of thrombosis. The World Health Organization (WHO) recommends a prophylactic dose of an anticoagulant (enoxaparin or unfractionated heparin) for hospitalized patients with COVID-19. This has created an urgent need to identify effective medications for COVID-19 prevention and treatment. The value of COVID-19 treatments is assessed through cost-effectiveness analysis (CEA), which informs their relative value and how best to maximize social welfare through evidence-based pricing decisions.
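For context, CEA conventionally ranks a treatment against a comparator by the incremental cost-effectiveness ratio; the formula below is the standard textbook definition, not one taken from this abstract.

```latex
\[
\text{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
\]
```

Here \(C\) denotes cost and \(E\) a measure of effectiveness (for example, quality-adjusted life years); a treatment is considered cost-effective when its ICER falls below the decision maker's willingness-to-pay threshold.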
The research examined the role of target costing in reducing product costs at the General Company for Soft Drinks. Target costing is one of the modern approaches to cost reduction and thus to strengthening a company's ability to compete in the market and to keep doing so. The research problem lay in identifying the shortcomings of the traditional costing method used by the company under study, which led to weak cost control; the researcher relied on the company's own cost data. The research recommended that target costing be applied in the company studied, that its employees be trained and training courses prepared for them, and stressed the need to address the obstacles that prevent an effective cost system from being in place.
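The abstract does not state the formula, but target costing is conventionally defined by working backward from the market price rather than forward from incurred costs:

```latex
\[
\text{Target cost} = \text{Competitive market price} - \text{Desired profit margin}
\]
```

The firm then engineers the product and its processes to meet that cost, which is the reverse of the traditional cost-plus approach the research identifies as deficient.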
Advances in digital technology and the World Wide Web have led to an increase in digital documents used for various purposes such as publishing and digital libraries. This growth creates a need for effective techniques that can help with the search and retrieval of text. One of the most needed tasks is clustering, which automatically categorizes documents into meaningful groups. Clustering is an important task in data mining and machine learning, and its accuracy depends strongly on the choice of text representation method. Traditional methods represent documents as bags of words using term frequency-inverse document frequency (TF-IDF) weighting. This method ignores the relationship
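As a minimal sketch of the bag-of-words baseline the abstract refers to (not the paper's own pipeline), the snippet below builds TF-IDF vectors and clusters a few toy documents with k-means using scikit-learn; the documents and the cluster count are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus; real experiments would use a benchmark document collection.
docs = [
    "machine learning improves text clustering",
    "clustering groups similar documents automatically",
    "the stock market fell sharply today",
    "investors reacted to the market drop",
]

# TF-IDF ignores word order and term relationships, which is the
# limitation the abstract points out.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: tech documents vs. finance documents
```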
This research studies the linear regression model when the random errors are autocorrelated and normally distributed. Linear regression analysis models the relationship between variables, and through this relationship the value of one variable can be predicted from the values of the others. Four estimation methods (the least squares method, the unweighted average method, Theil's method, and the Laplace method) were compared using the mean square error (MSE) criterion and simulation, and the study covered four sample sizes (15, 30, 60, 100). The results showed that the least squares method is best. The four methods were then applied to data on buckwheat production and cultivated area for the provinces of Iraq for the years (2010), (2011), (2012),
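The snippet below is a small simulation in the spirit of the comparison described, not the study's actual code: it generates regression data with AR(1)-autocorrelated normal errors and compares the ordinary least squares slope with Theil's slope estimator by mean square error. The sample size, true coefficients, autocorrelation level, and replication count are assumptions, and the unweighted-average and Laplace methods are omitted.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)
n, reps, true_b0, true_b1, rho = 30, 500, 2.0, 1.5, 0.6

def ar1_errors(n, rho, sigma=1.0):
    """Normally distributed errors following an AR(1) process."""
    e = np.empty(n)
    e[0] = rng.normal(scale=sigma)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(scale=sigma)
    return e

sq_err = {"OLS": [], "Theil": []}
for _ in range(reps):
    x = np.linspace(0, 10, n)
    y = true_b0 + true_b1 * x + ar1_errors(n, rho)
    b1_ols = np.polyfit(x, y, 1)[0]      # least-squares slope
    b1_theil = theilslopes(y, x)[0]      # median of pairwise slopes
    sq_err["OLS"].append((b1_ols - true_b1) ** 2)
    sq_err["Theil"].append((b1_theil - true_b1) ** 2)

for name, errs in sq_err.items():
    print(f"{name}: MSE of slope = {np.mean(errs):.4f}")
```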
Spatial data observed on a group of areal units are common in scientific applications. The usual hierarchical approach for modeling this kind of dataset is to introduce a spatial random effect with an autoregressive prior. However, the usual Markov chain Monte Carlo scheme for this hierarchical framework requires the spatial effects to be sampled from their full conditional posteriors one by one, resulting in poor mixing. More importantly, it makes the model computationally inefficient for datasets with a large number of units. In this article, we propose a Bayesian approach that uses the spectral structure of the adjacency matrix to construct a low-rank expansion for modeling spatial dependence. We propose a pair of computationally efficient estimation
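A minimal sketch of the low-rank idea described (the article's actual estimation procedures are not reproduced here): take the leading eigenvectors of a symmetric adjacency matrix and use them as a basis for the spatial random effect, so only a small coefficient vector, rather than one effect per areal unit, would need to be sampled. The lattice, rank, and prior scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def grid_adjacency(nrow, ncol):
    """0/1 rook adjacency matrix for an nrow x ncol lattice of areal units."""
    n = nrow * ncol
    A = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            if c + 1 < ncol:
                A[i, i + 1] = A[i + 1, i] = 1
            if r + 1 < nrow:
                A[i, i + ncol] = A[i + ncol, i] = 1
    return A

A = grid_adjacency(10, 10)            # 100 areal units
eigvals, eigvecs = np.linalg.eigh(A)  # ascending eigenvalues for symmetric A

k = 10                                # low-rank truncation level (assumed)
basis = eigvecs[:, -k:]               # eigenvectors with the largest eigenvalues

# Spatial effect = basis @ coefficients: only k coefficients, not 100
# unit-level effects, would be sampled in the MCMC.
coef = rng.normal(scale=1.0, size=k)
spatial_effect = basis @ coef
print(spatial_effect.shape)           # (100,)
```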
In this research, a comparison was made between the robust M-estimators for the cubic smoothing splines technique, which are used to avoid the problem of non-normal data or contaminated errors, and the traditional estimation method for cubic smoothing splines, using two comparison criteria (MADE and WASE) over different sample sizes and levels of dispersion, in order to estimate the time-varying coefficient functions for balanced longitudinal data. Such data consist of observations obtained from (n) independent subjects, each of which is measured repeatedly at a set of (m) specific time points, so the repeated measurements within a subject are almost always correlated
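As an illustrative sketch only, not the estimators compared in the study, the code below fits a cubic smoothing spline with SciPy and then refits it with Huber-type weights, mimicking a single M-type robustification step against contaminated errors; the data, contamination scheme, smoothing parameter, tuning constant, and the MADE criterion shown are all assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
m = 50
t = np.linspace(0, 1, m)
truth = np.sin(2 * np.pi * t)
y = truth + rng.normal(scale=0.15, size=m)
y[::10] += 2.0                       # contaminate some observations

def made(fit, truth):
    """Mean absolute deviation error against the true curve (assumed form)."""
    return np.mean(np.abs(fit - truth))

# Traditional cubic smoothing spline (degree k=3).
classic = UnivariateSpline(t, y, k=3, s=m * 0.15 ** 2)(t)

# One M-type reweighting step: downweight large residuals (Huber weights).
resid = y - classic
c = 1.345 * np.median(np.abs(resid)) / 0.6745
w = np.minimum(1.0, c / np.maximum(np.abs(resid), 1e-12))
robust = UnivariateSpline(t, y, w=w, k=3, s=m * 0.15 ** 2)(t)

print("MADE classic:", round(made(classic, truth), 4))
print("MADE robust :", round(made(robust, truth), 4))
```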
In recent decades, drug modification has become commonplace in the pharmaceutical world, as living things evolve in response to environmental changes. A non-steroidal anti-inflammatory drug (NSAID) such as aspirin is a common over-the-counter drug that can be purchased without a medical prescription. Aspirin inhibits the synthesis of prostaglandins by blocking cyclooxygenase (COX), which underlies its anti-inflammatory, antipyretic, and antiplatelet properties, among others. It is also being considered as a chemopreventive agent due to its antithrombotic actions arising from COX inhibition. However, prolonged use of aspirin can cause heartburn, ulceration, and gastrotoxicity in children and adults. This review article highlights
A database is characterized as an arrangement of data organized and distributed in a way that allows the client to access the stored data simply and conveniently. However, in the era of big data, traditional data analytics methods may not be able to manage and process such large volumes of data. In order to develop an efficient way of handling big data, this work studies the use of the Map-Reduce technique to handle big data distributed on the cloud. The approach was evaluated on a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear improvement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG
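As a toy illustration of the Map-Reduce pattern applied to the kind of workload described, and not the Hadoop job used in the study, the snippet below maps hypothetical EEG segments to per-channel statistics and reduces them by channel; the segment format and the chosen statistic are assumptions.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical EEG segments: (channel, list of samples). Real data would be
# large files split across Hadoop workers rather than an in-memory list.
segments = [
    ("Fp1", [2.1, 2.3, 1.9]),
    ("Fp1", [2.0, 2.2]),
    ("O2",  [0.9, 1.1, 1.0]),
]

def map_phase(segment):
    """Emit (key, value) pairs: channel -> mean amplitude of the segment."""
    channel, samples = segment
    yield channel, mean(samples)

def reduce_phase(key, values):
    """Combine all values emitted for one key into a single result."""
    return key, mean(values)

# Shuffle: group intermediate values by key, as the framework would do
# across the cluster before the reduce phase.
grouped = defaultdict(list)
for seg in segments:
    for key, value in map_phase(seg):
        grouped[key].append(value)

results = [reduce_phase(k, v) for k, v in grouped.items()]
print(results)  # e.g. [('Fp1', 2.1), ('O2', 1.0)]
```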