Wireless sensor applications operate under tight energy constraints, and most of the energy is consumed in communication between wireless nodes. Clustering and data aggregation are the two most widely used strategies for reducing energy usage and extending the lifetime of wireless sensor networks. In target tracking applications, a large amount of redundant data is produced regularly, so effective data aggregation schemes are vital for eliminating this redundancy. This work conducts a comparative study of research approaches that employ clustering techniques for efficient data aggregation in target tracking applications, since the choice of clustering algorithm directly affects the quality of the data aggregation process. In this paper, we highlight the strengths of existing clustering-based data aggregation schemes, discuss in detail their advantages and the issues that may degrade their performance, and analyze the boundary issues in each type of clustering technique. Simulation results reveal that the efficacy and validity of these clustering-based data aggregation algorithms are limited to specific sensing situations, and that they fail to exhibit adaptive behavior under other environmental conditions.
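To make the idea concrete, the sketch below is illustrative only and does not reproduce any specific scheme from the surveyed literature: it shows how a cluster head might aggregate redundant target readings from its member nodes and forward a single summary packet instead of every raw reading. All names (`ClusterHead`, `report`, and so on) are hypothetical.

```python
from statistics import mean

class ClusterHead:
    """Hypothetical cluster head that aggregates redundant target readings.

    Member nodes sensing the same target report similar positions; the
    cluster head forwards one averaged summary per target instead of every
    raw reading, reducing the number of radio transmissions."""

    def __init__(self):
        self.readings = {}  # target_id -> list of reported (x, y) positions

    def report(self, target_id, position):
        """Called by a member node when it senses a target."""
        self.readings.setdefault(target_id, []).append(position)

    def aggregate_and_forward(self):
        """Collapse redundant readings into one packet per target."""
        packets = []
        for target_id, positions in self.readings.items():
            x = mean(p[0] for p in positions)
            y = mean(p[1] for p in positions)
            packets.append((target_id, (x, y), len(positions)))
        self.readings.clear()
        return packets  # in a real WSN these would be routed to the sink

# Three member nodes sense the same target: three readings, one packet.
head = ClusterHead()
head.report("t1", (10.1, 4.9))
head.report("t1", (9.8, 5.2))
head.report("t1", (10.0, 5.0))
print(head.aggregate_and_forward())
```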
In data mining, classification is a form of data analysis used to extract models that describe important data classes. Two well-known classification algorithms are the Backpropagation Neural Network (BNN) and Naïve Bayes (NB). This paper investigates the performance of these two classification methods on the Car Evaluation dataset. Two models were built, one for each algorithm, and the results were compared. Our experimental results indicate that the BNN classifier yields higher accuracy than the NB classifier but is less efficient, because it is time-consuming to train and difficult to analyze due to its black-box nature.
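A minimal sketch of such a comparison, assuming scikit-learn as the toolkit. The study used the UCI Car Evaluation dataset; here a small synthetic categorical dataset with a toy labeling rule stands in so that the example is self-contained and runnable offline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import CategoricalNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Six categorical attributes, in the spirit of Car Evaluation
# (buying, maint, doors, persons, lug_boot, safety).
X_raw = rng.choice(["low", "med", "high"], size=(1000, 6))
# Toy rule standing in for the real 'acceptability' label.
y = (X_raw == "high").sum(axis=1) >= 3

X = OrdinalEncoder().fit_transform(X_raw)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = CategoricalNB().fit(X_train, y_train)
bnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_train, y_train)  # backprop-trained

for name, model in [("NB", nb), ("BNN", bnn)]:
    print(name, accuracy_score(y_test, model.predict(X_test)))
```

Training the MLP dominates the runtime here, which mirrors the paper's efficiency observation: NB fits in a single counting pass, while the BNN iterates over the data many times.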
This paper defines the Burr-XII distribution and shows how to obtain its probability density function (p.d.f.) and cumulative distribution function (CDF). Burr-XII is a failure distribution that arises as a compound of two failure models, the Gamma model and the Weibull model. Some equipment has many critical parts whose probability distributions may be of different types, and the Burr family, through its various compound formulations, was found to be the best model to study; its parameters were estimated to compute the mean time to failure. Burr-XII is considered here rather than other models because it is used to model a wide variety of phenomena, including crop prices, household income, option market price distributions, risk, and travel time. It has two shape parameters.
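For reference, in the standard two-parameter parameterization (the paper may use a scaled variant), the Burr-XII p.d.f., CDF, hazard (failure) rate, and the resulting mean time to failure are:

```latex
% Burr-XII with shape parameters c > 0, k > 0 and support x > 0.
f(x) = c\,k\,x^{c-1}\left(1 + x^{c}\right)^{-(k+1)}, \qquad
F(x) = 1 - \left(1 + x^{c}\right)^{-k}

% Hazard (failure) rate and mean time to failure (the mean exists for ck > 1):
h(x) = \frac{f(x)}{1 - F(x)} = \frac{c\,k\,x^{c-1}}{1 + x^{c}}, \qquad
\mathrm{MTTF} = E[X] = \frac{\Gamma\!\left(k - \tfrac{1}{c}\right)\,
                             \Gamma\!\left(1 + \tfrac{1}{c}\right)}{\Gamma(k)}
```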
The objective of this study was to investigate and compare five different methods of contraception, namely combined oral contraceptive pills (COC), depot medroxyprogesterone acetate (DMPA), the copper intrauterine contraceptive device (IUCD), vaginal spermicides, and the male condom, as used in Hawler City, by estimating their effect, relative failure rate, percentage of use, adherence and compliance, and the adverse effects of each contraceptive method. To achieve these aims, a retrospective study was conducted at the Azadi Health Care Center in Hawler City over a period of 6 months, from 22nd November 2010 to 15th May 2011, during which data collection and a 3-month subject follow-up were carried out. A convenience sample was used.
In recent years, building Spatial Data Infrastructures for governments and companies has gained ample attention. Different categories of geospatial data, such as digital maps, coordinates, web maps, and aerial and satellite images, are required to realize the geospatial data components of Spatial Data Infrastructures. In general, two distinct types of geospatial data sources exist on the Internet: formal and informal data sources. Despite the growth of informal geospatial data sources, the integration of different free sources has not been achieved effectively; addressing this task is the main contribution of this research. This article addresses the research question of how different free geospatial data sources can be integrated effectively.
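As a toy illustration of the integration problem (not the method of this paper), the sketch below merges point features from a formal and an informal source, treating points closer than a distance tolerance as duplicates. The feature names, coordinates, and 50 m threshold are all hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def merge_sources(formal, informal, tol_m=50):
    """Keep all formal features; add informal ones that are not within
    tol_m metres of an already-accepted feature (naive deduplication)."""
    merged = list(formal)
    for feat in informal:
        if all(haversine_m(feat["coord"], f["coord"]) > tol_m for f in merged):
            merged.append(feat)
    return merged

formal = [{"name": "Station A", "coord": (36.19, 44.01)}]
informal = [{"name": "station a (OSM)", "coord": (36.1901, 44.0101)},  # duplicate
            {"name": "Kiosk B", "coord": (36.20, 44.05)}]              # new
print(merge_sources(formal, informal))  # keeps Station A, adds Kiosk B
```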
The work reported in this study focuses on the abrasive wear behavior of three types of pipes used in the oil industry (carbon steel, alloy steel, and stainless steel), using a wear apparatus for dry and wet tests manufactured according to ASTM G65. Silica sand with a hardness of 1000-1100 HV was used as the abrasive material. The abrasive wear of these pipes was measured experimentally by determining the wear rate for each case under different sliding speeds, applied loads, and sand conditions (dry or wet). All tests were conducted using sand with a particle size of 200-425 µm, at an ambient temperature of 34.5 °C and a humidity of 22% (laboratory conditions). The results show that the material loss due to abrasive wear increased monotonically with increasing applied load and sliding speed.
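For context, ASTM G65 results are conventionally reported as volume loss so that materials of different densities can be compared; a minimal sketch of converting a measured mass loss to volume loss and a distance-normalised wear rate follows, with all numeric inputs hypothetical.

```python
def volume_loss_mm3(mass_loss_g, density_g_cm3):
    """ASTM G65-style volume loss: mass loss divided by density,
    converted from cm^3 to mm^3."""
    return mass_loss_g / density_g_cm3 * 1000.0

def wear_rate_mm3_per_m(mass_loss_g, density_g_cm3, sliding_distance_m):
    """Volume loss normalised by total sliding distance."""
    return volume_loss_mm3(mass_loss_g, density_g_cm3) / sliding_distance_m

# Hypothetical reading for a carbon-steel specimen (density ~7.85 g/cm^3)
# after 1436 m of sliding under a 45 N load.
print(wear_rate_mm3_per_m(0.212, 7.85, 1436.0))  # mm^3 per metre slid
```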
In regression testing, test case prioritization (TCP) is a technique for ordering all the available test cases. TCP techniques can improve fault detection performance, which is measured by the average percentage of faults detected (APFD). History-based TCP is a family of TCP techniques that uses historical execution data to prioritize test cases. Assigning equal priority to multiple test cases is a common problem for most TCP techniques, but it has not been explored for history-based TCP; to break such ties, most researchers resort to randomly ordering the tied test cases. This study aims to investigate the equal-priority problem in history-based TCP techniques. The first objective is to implement a history-based TCP technique that resolves ties among equally prioritized test cases.
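For reference, APFD for an order of n tests detecting m faults, where TF_i is the position of the first test that exposes fault i, is the standard metric APFD = 1 − (TF_1 + … + TF_m)/(n·m) + 1/(2n). A minimal sketch with a hypothetical fault matrix:

```python
def apfd(order, faults_detected_by):
    """Average Percentage of Faults Detected for a prioritized test order.

    order:              list of test IDs in execution order
    faults_detected_by: dict mapping test ID -> set of fault IDs it exposes
    """
    n = len(order)
    all_faults = set().union(*faults_detected_by.values())
    m = len(all_faults)
    # Position (1-based) of the first test that exposes each fault.
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_detected_by.get(test, ()):
            first_pos.setdefault(fault, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

# Hypothetical 4-test suite exposing 3 faults: running the fault-revealing
# tests first scores higher than running them last.
detects = {"t1": {"f1", "f2"}, "t2": set(), "t3": {"f3"}, "t4": set()}
print(apfd(["t1", "t3", "t2", "t4"], detects))  # ~0.79
print(apfd(["t4", "t2", "t3", "t1"], detects))  # ~0.21, much lower
```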
The research aimed to measure the compatibility of big data with the dimensions of organizational ambidexterity at the Asiacell mobile telecommunications company in Iraq, in order to determine the possibility of adopting the big data triple as an approach to achieving organizational ambidexterity.
The study adopted a descriptive analytical approach to collect and analyze the data gathered with a Likert-scale questionnaire that was developed after a comprehensive review of the literature on the two basic study dimensions; the data were then subjected to a number of statistical treatments in accordance with the research objectives.
Within the framework of big data, energy issues are highly significant. Despite this significance, theoretical studies focusing primarily on energy within big data analytics in relation to computational intelligence algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligence algorithms, since this is critical for exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligence algorithms in big data analytics. This work highlights that big data analytics using computational intelligence algorithms consumes a very large amount of energy.