The transmitting and receiving of data consume the most resources in Wireless Sensor Networks (WSNs). The energy supplied by the battery is the sensor node's most important resource and the main factor determining a WSN's lifespan. Therefore, because sensor nodes run on limited battery power, energy saving is essential. Data aggregation can be defined as a procedure for eliminating redundant transmissions; it delivers fused information to the base stations, which in turn improves energy efficiency and increases the lifespan of energy-constrained WSNs. In this paper, a Perceptually Important Points Based Data Aggregation (PIP-DA) method for Wireless Sensor Networks is suggested to reduce redundant data before sending them to the sink. The efficiency of the proposed method was measured using the Intel Berkeley Research Lab (IBRL) dataset. The experimental findings illustrate the benefits of the proposed method: it reduces the overhead at the sensor-node level to as little as 1.25% in remaining data and reduces energy consumption by up to 93% compared to the prefix frequency filtering (PFF) and ATP protocols.
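A hedged sketch of the reduction step only, not the full PIP-DA protocol (whose details the abstract does not give): perceptually important points are classically selected by keeping the series endpoints and repeatedly adding the sample farthest from the line joining its neighbouring kept points. The names `select_pips` and `vertical_distance` are illustrative.

```python
# Sketch of Perceptually Important Points (PIP) selection for sensor readings.
# Helper names are hypothetical; the paper's PIP-DA protocol details are assumed.

def vertical_distance(x, y, x1, y1, x2, y2):
    """Vertical distance from point (x, y) to the line through (x1, y1)-(x2, y2)."""
    if x2 == x1:
        return abs(y - y1)
    slope = (y2 - y1) / (x2 - x1)
    return abs(y - (y1 + slope * (x - x1)))

def select_pips(series, k):
    """Keep the indices of the k most perceptually important points of a 1-D series."""
    n = len(series)
    if k >= n:
        return list(range(n))
    pips = [0, n - 1]                      # always keep the endpoints
    while len(pips) < k:
        best_idx, best_dist = None, -1.0
        for a, b in zip(pips, pips[1:]):   # scan each gap between chosen PIPs
            for i in range(a + 1, b):
                d = vertical_distance(i, series[i], a, series[a], b, series[b])
                if d > best_dist:
                    best_idx, best_dist = i, d
        pips.append(best_idx)
        pips.sort()
    return pips
```

A node would then transmit only the selected samples, which is where the reduction in remaining data comes from.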
In today's digital realm, images constitute a massive share of social-media content but suffer from two issues, size and transmission cost, for which compression is the ideal solution. Pixel-based techniques are modern, spatially optimized modeling techniques with deterministic and probabilistic bases that employ mean, index, and residual components. This paper introduces adaptive pixel-based coding techniques for the probabilistic part of a lossy scheme by incorporating the MMSA of the C321 base, while the deterministic part is utilized losslessly. The tested results achieved higher size-reduction performance than traditional pixel-based techniques and standard JPEG by about 40% and 50%, respectively.
The research involves preparing gold nanoparticles (AuNPs) and studying the factors that influence the shape, size, and distribution ratio of the particles prepared according to the Turkevich method. These factors include reaction temperature, initial heating, concentration of gold ions, concentration and quantity of added citrate, reaction time, and order of reactant addition. The prepared gold nanoparticles were characterized by UV-Visible spectroscopy, X-ray diffraction, and scanning electron microscopy. The average size of the gold nanoparticles formed was in the range of 20-35 nm. The amount of added citrate was varied and studied. In addition, the concentration of added gold ions was varied and the calibration cur
In this work, the fractional damped Burger's equation (FDBE) is considered,
The aim of this paper is to discuss several high-performance training algorithms that fall into two main categories. The first category uses heuristic techniques developed from an analysis of the performance of the standard gradient descent algorithm. The second category of fast algorithms uses standard numerical optimization techniques, such as quasi-Newton methods. A further aim is to address the drawbacks of these training algorithms and propose an efficient training algorithm for feed-forward neural networks (FFNNs).
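A minimal sketch of the contrast between the two categories, assuming a toy 1-D quadratic loss in place of an FFNN error surface; the function names and step sizes are illustrative, and a secant update stands in for the quasi-Newton family.

```python
# Toy contrast between the two training-algorithm families discussed:
# plain gradient descent (fixed heuristic step size) versus a secant-based
# quasi-Newton update. The loss f(w) = (w - 3)^2 is an illustrative stand-in.

def grad(w):
    return 2.0 * (w - 3.0)

def gradient_descent(w=0.0, lr=0.1, steps=50):
    for _ in range(steps):
        w -= lr * grad(w)             # fixed learning rate, slow geometric decay
    return w

def quasi_newton(w=0.0, steps=10):
    w_prev, g_prev = w, grad(w)
    w = w_prev - 0.1 * g_prev         # one bootstrap gradient step
    for _ in range(steps):
        g = grad(w)
        if abs(g) < 1e-12:            # converged
            break
        b = (g - g_prev) / (w - w_prev)  # secant approximation of the curvature
        w_prev, g_prev = w, g
        w = w - g / b                 # Newton-like step with approximate Hessian
    return w
```

On a quadratic, the secant step recovers the exact curvature and converges in very few iterations, which is the practical appeal of the second category.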
The aim of the thesis is to estimate hidden and hard-to-reach population groups, through a field study estimating the number of drug users among males aged 15-60 years in the Baghdad governorate.
Data approved by government institutions are lacking, and it is difficult to estimate the number of these people through a traditional survey, in which the respondent reports on himself or, in some cases, on family members. Given these challenges, the Network Scale-Up Method (NSUM) is adopted; it is mainly based on asking respondents about the number of drug users they know within their personal networks.
Based on this principle, a statistical questionnaire was designed to
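The scale-up principle above can be sketched as follows. The respondent counts, network sizes, and population figure are illustrative assumptions, and the basic estimator N·Σm_i/Σc_i stands in for whatever refinements the thesis applies.

```python
# Sketch of the basic Network Scale-Up estimator: scale the fraction of
# drug users observed inside respondents' networks up to the population.

def nsum_estimate(known_users, network_sizes, population_size):
    """Hidden-population estimate: N * (sum of m_i) / (sum of c_i)."""
    return population_size * sum(known_users) / sum(network_sizes)

# Example: 4 respondents, each reporting m_i (drug users known) and c_i
# (total acquaintances), in a general population of 1,000,000 (all invented).
m = [2, 0, 1, 3]
c = [150, 120, 200, 180]
estimate = nsum_estimate(m, c, 1_000_000)
```

The questionnaire described above supplies exactly these two quantities per respondent: how many drug users they know, and how large their personal network is.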
The objective of this work is to study the influence of end-milling cutting process parameters, tool material, and geometry on multi-response outputs for 4032 Al alloy. This is done by proposing an approach that combines the Taguchi method with grey relational analysis. Three cutting parameters were selected (spindle speed, feed rate, and depth of cut), with three levels for each parameter. Three tools with different materials and geometries were also used to design the experimental tests and runs based on an L9 matrix. The end-milling process with several output characteristics is solved using grey relational analysis. The results of analysis of variance (ANOVA) showed that the major influencing parameters on the multi-objective response w
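A hedged sketch of the grey relational step that folds several machining responses into a single grade per experimental run. It assumes larger-is-better normalization for every response and the conventional distinguishing coefficient of 0.5; the data in the usage example are illustrative, not the paper's measurements.

```python
# Grey relational analysis: normalize each response, measure deviation from
# the ideal sequence, and average the grey relational coefficients per run.

def grey_relational_grades(responses, zeta=0.5):
    """responses: list of experiments, each a list of response values."""
    cols = list(zip(*responses))
    # 1. Normalize each response column to [0, 1] (larger-is-better form).
    norm = [[(v - min(col)) / (max(col) - min(col)) for v in col] for col in cols]
    rows = list(zip(*norm))
    # 2. Deviation of each run from the ideal sequence (all ones).
    deltas = [[1.0 - v for v in row] for row in rows]
    dmax = max(max(row) for row in deltas)
    dmin = min(min(row) for row in deltas)
    # 3. Grey relational coefficient, then grade = mean coefficient per run.
    coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row] for row in deltas]
    return [sum(row) / len(row) for row in coeff]
```

The run with the highest grade is the best compromise across all responses, which is what the L9-based multi-response optimization ranks.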
Many-objective optimization (MaOO) algorithms, which aim to solve problems with many objectives (MaOPs, i.e., problems with more than three objectives), are widely used in various areas such as industrial manufacturing, transportation, sustainability, and even the medical sector. Various MaOO approaches are available and employed to handle different MaOP cases. The performance of MaOO algorithms is assessed based on the balance between the convergence and diversity of the non-dominated solutions, measured using different evaluation criteria of the quality performance indicators. Although many evaluation criteria are available, most of the evaluation and benchmarking of MaOO with state-of-the-art a
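As one concrete example of a convergence-oriented quality indicator of the kind discussed, here is a sketch of Generational Distance (GD): the averaged distance from each obtained non-dominated solution to the nearest point of a reference Pareto front. The point sets in the usage example are illustrative.

```python
# Generational Distance (GD): lower is better; zero means every obtained
# solution lies exactly on the reference front.
import math

def generational_distance(front, reference):
    """front, reference: iterables of objective-space points (tuples)."""
    dists = []
    for p in front:
        nearest = min(math.dist(p, r) for r in reference)  # closest ref point
        dists.append(nearest ** 2)
    return math.sqrt(sum(dists)) / len(front)
```

GD captures only convergence; a diversity indicator (e.g. spacing or spread) is needed alongside it to express the balance the passage describes.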
An indirect, simple, sensitive, and applicable spectrofluorometric method has been developed for the determination of cefotaxime sodium (CEF), ciprofloxacin hydrochloride (CIP), and famotidine (FAM) using a bromate-bromide reaction system and acriflavine (AF) as a fluorescent dye. The method is based on the oxidation of the drugs with a known excess of bromate-bromide mixture in acidic medium and subsequent determination of the unreacted oxidant by quenching the fluorescence of AF. The fluorescence intensity of residual AF was measured at 528 nm after excitation at 402 nm. The fluorescence-concentration plots were rectilinear over the ranges 0.1-3.0, 0.05-2.6, and 0.1-3.8 µg ml⁻¹, with lower detection limits of 0.013, 0.018, and 0.021 µg ml⁻¹ an
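A rectilinear fluorescence-concentration plot like the one described is read back with an ordinary least-squares line. A hedged sketch follows, with invented intensities and helper names (`fit_line`, `concentration_from_intensity`); the real method's calibration values are those reported above, not these.

```python
# Least-squares calibration line, then inversion of a measured intensity
# back to a concentration on that line.

def fit_line(x, y):
    """Return (slope, intercept) of the least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def concentration_from_intensity(intensity, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    return (intensity - intercept) / slope
```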
This research presents the concept of panel data models: doubly indexed data that capture the impact of change over time, obtained by observing the measured phenomenon repeatedly over different time periods. Panel data models of different types, fixed, random, and mixed effects, were defined and compared by studying and analyzing the mathematical relationship between the influence of time and a set of basic variables, the main axes on which the research is based: the monthly revenue of the working individual and the profits it generates, which represent the response variable, and its relationship to a set of explanatory variables represented by the
The 3-parameter Weibull distribution is used as a failure model, since this distribution is appropriate when the failure rate is somewhat high at the start of operation and decreases with increasing time.
On the practical side, a comparison was made between shrinkage and maximum-likelihood estimators for the parameters and the reliability function using simulation. We conclude that the shrinkage estimators are better for the parameters, but the maximum-likelihood estimator is better for the reliability function, according to the statistical measures MAPE and MSE for different sample sizes.
Note: ns = small sample; nm = median sample.
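A hedged sketch of the quantities being compared: the 3-parameter Weibull reliability function and the two accuracy measures (MSE, MAPE) named above. Parameter values in the usage example are illustrative, not the study's estimates.

```python
# 3-parameter Weibull reliability and the two comparison measures.
import math

def reliability(t, shape, scale, location):
    """R(t) = exp(-((t - location) / scale) ** shape), for t >= location."""
    return math.exp(-(((t - location) / scale) ** shape))

def mse(actual, predicted):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

In a simulation study like the one described, `reliability` would be evaluated under each estimator's fitted parameters and the resulting curves scored with `mse` and `mape` against the true one.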