Wireless Body Area Sensor Networks (WBASNs) have garnered significant attention due to advances in automation and modern sensing technologies. Within a healthcare WBASN, certain sensed data hold greater significance than others because of their critical nature. Such vital data must be delivered within a specified time frame; data loss and delay cannot be tolerated in systems of this type. Intelligent algorithms are distinguished by their superior ability to interact with various data systems. Machine learning methods can analyze the gathered data and uncover previously unknown patterns and information. These approaches can also detect and flag critical conditions in patients under monitoring. This study implements two supervised machine learning classification techniques, Learning Vector Quantization (LVQ) and Support Vector Machine (SVM) classifiers, to achieve better search performance and high classification accuracy in a heterogeneous WBASN. These classifiers categorize each incoming packet as normal, critical, or very critical, depending on the patient's condition, so that any problem affecting the patient can be addressed promptly. Comparative analysis reveals that LVQ outperforms SVM in terms of accuracy, at 91.45% and 80%, respectively.
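The abstract does not give the authors' training details, but the LVQ idea it describes can be sketched with the classic LVQ1 update rule: the prototype nearest to a training sample is pulled toward it when their classes match and pushed away otherwise. The vital-sign features, prototype values, and learning rate below are all hypothetical.

```python
# Minimal LVQ1 sketch for three-class packet triage
# (normal / critical / very critical). All feature values,
# prototypes, and hyperparameters are invented for illustration.

def dist(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_lvq1(samples, labels, prototypes, proto_labels,
               lr=0.1, epochs=20):
    """LVQ1: move the nearest prototype toward same-class samples
    and away from different-class samples."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            i = min(range(len(prototypes)),
                    key=lambda k: dist(prototypes[k], x))
            sign = 1.0 if proto_labels[i] == y else -1.0
            prototypes[i] = [p + sign * lr * (xi - p)
                             for p, xi in zip(prototypes[i], x)]
    return prototypes

def classify(x, prototypes, proto_labels):
    # Assign the class of the nearest prototype.
    i = min(range(len(prototypes)), key=lambda k: dist(prototypes[k], x))
    return proto_labels[i]

# Toy data: [heart_rate, SpO2] pairs (values are made up).
samples = [[72, 98], [75, 97], [110, 92], [115, 90], [140, 80], [150, 78]]
labels = ["normal", "normal", "critical", "critical",
          "very critical", "very critical"]
prototypes = [[70, 99], [112, 91], [145, 79]]
proto_labels = ["normal", "critical", "very critical"]

train_lvq1(samples, labels, prototypes, proto_labels)
print(classify([74, 98], prototypes, proto_labels))  # prints "normal"
```

In a deployed WBASN the same nearest-prototype lookup would run per incoming packet, which is why LVQ is attractive for low-latency triage.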
This study was conducted at the poultry farm of the Animal Production Department, Iraqi Ministry of Science and Technology, during the period from 3-9-2001 to 8-4-2002. The objectives of this study were to evaluate the effect of low-level chronic aflatoxicosis on performance (body weight, feed conversion efficiency, and mortality), serum biochemistry, and the activity of some enzymes (GOT, GPT, ALKP, LDH). A total of 300 male broiler breeder chicks (Faw-Bro) were used. From day 1 of age, chicks were fed diets contaminated with aflatoxin at levels of 0, 0.3, 0.6, 0.9, 1.2, and 1.5; the feeding period extended to 8 weeks. The data were subjected to analysis of variance using a completely randomized design. The results showed
The current study aimed at identifying the impact of full-time and part-time summer enrichment programs on the performance of gifted students, and at examining the difference between the two program types in this respect. The study sample consisted of (115) students from the full-time programs and (137) students from the part-time programs, randomly selected from the gifted students participating in the full-time and part-time summer enrichment programs. The researcher used a student performance scale. The results indicated statistically significant differences between the averages of the pre and post applications of the
Cryptography is a method used to mask text by means of an encryption scheme so that only an authorized user can decrypt and read the message. An intruder may attack the communication channel in many ways, such as impersonation, repudiation, denial of service, modification of data, threatening confidentiality, and breaking the availability of services. The growth of electronic communication between people makes it necessary to ensure that transactions remain confidential, and cryptographic methods give the best solution to this problem. This paper proposes a new cryptography method based on Arabic words, carried out in two steps. The first step is binary encoding generation used t
Today, with the increased use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and noisy text, which makes it difficult to find topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short texts like tweets. Fortunately, Twitter has many features that represent the interaction between users; in particular, tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve the topics learned
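The abstract is cut off before the method details, so the following is only a generic sketch of one common way hashtags are used to help topic models on short text: pooling tweets that share a hashtag into longer pseudo-documents before running LDA or LSA. The tweets and hashtags are invented.

```python
from collections import defaultdict

# Hashtag-pooling sketch: tweets sharing a hashtag are concatenated
# into one pseudo-document, giving the topic model longer inputs.
# This is a generic technique, not necessarily the paper's method.

def pool_by_hashtag(tweets):
    """Group tweets into pseudo-documents keyed by hashtag."""
    pools = defaultdict(list)
    for tweet in tweets:
        tags = [w.lower() for w in tweet.split() if w.startswith("#")]
        for tag in tags or ["#none"]:  # untagged tweets go to a catch-all
            pools[tag].append(tweet)
    return {tag: " ".join(ts) for tag, ts in pools.items()}

tweets = [
    "new phone battery lasts all day #tech",
    "great camera on this phone #tech",
    "match went to extra time #sports",
]
docs = pool_by_hashtag(tweets)
print(sorted(docs))  # ['#sports', '#tech']
```

Each pooled pseudo-document would then be fed to a standard topic model in place of the individual tweets.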
Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. As dependence upon computers and computer networks grows, the need for user authentication has increased. A user's claimed identity can be verified by one of several methods. One of the most popular is something the user knows, such as a password or Personal Identification Number (PIN). Biometrics is the science and technology of authentication by identifying a living individual's physiological or behavioral attributes. Keystroke authentication is a behavioral access control approach that identifies legitimate users via their typing behavior. The objective of this paper is to provide user
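Keystroke-dynamics systems typically build a typing profile from timing features such as dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next). As a small illustration, assuming hypothetical key-down/key-up timestamps in milliseconds:

```python
# Sketch of the two standard keystroke-dynamics features.
# The event timestamps below are invented.

def keystroke_features(events):
    """events: list of (key, down_ms, up_ms) in typing order.
    Returns (dwell times, flight times)."""
    dwell = [up - down for _, down, up in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell, flight

events = [("p", 0, 90), ("a", 150, 230), ("s", 300, 370)]
dwell, flight = keystroke_features(events)
print(dwell)   # [90, 80, 70]
print(flight)  # [60, 70]
```

An enrolled user's feature vectors would be compared against a stored template to accept or reject a login attempt.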
Iris research is focused on developing techniques for identifying and locating relevant biometric features with accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system when used as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting from converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the origin
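The first step the abstract mentions, bit-plane decomposition, can be sketched directly: for an 8-bit grayscale image, plane k holds bit k of every pixel, and the most significant planes carry most of the coarse structure. The tiny 2x2 "image" below is a made-up example.

```python
# Bit-plane decomposition sketch for an 8-bit grayscale image.
# The pixel values are invented for illustration.

def bit_plane(image, k):
    """Extract bit plane k (0 = least significant) as a 0/1 image."""
    return [[(pixel >> k) & 1 for pixel in row] for row in image]

image = [[200, 35],
         [128, 255]]

msb = bit_plane(image, 7)  # most significant plane
print(msb)                 # [[1, 0], [1, 1]]
```

Selecting only the upper planes, as the paper's pipeline does before parameterizing the iris location, discards fine-grained noise while keeping the dominant intensity boundaries.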
Speech is the essential way of interacting between humans or between human and machine. However, it is always contaminated with different types of environmental noise. Speech enhancement algorithms (SEAs) have therefore emerged as a significant approach in the speech processing field for suppressing background noise and recovering the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The clean signal is estimated by exploiting Laplacian modeling of the speech and noise distributions over the coefficients of an orthogonal transform (the Discrete Krawtchouk-Tchebichef transform). The Discrete Kra
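The paper's estimator relies on Laplacian priors in the Krawtchouk-Tchebichef domain, which the truncated abstract does not detail. As a much simpler stand-in for the general MMSE idea, the sketch below applies a per-coefficient Wiener gain snr/(1+snr) under Gaussian assumptions; the transform, priors, and all numeric values here are illustrative substitutes, not the authors' method.

```python
# Simplified transform-domain noise suppression: each noisy
# coefficient is scaled by a Wiener gain snr / (1 + snr).
# Gaussian assumptions and invented values; NOT the paper's
# Laplacian / Krawtchouk-Tchebichef estimator.

def wiener_gain(noisy_power, noise_var):
    """Gain = SNR / (1 + SNR), with the SNR crudely estimated
    by power subtraction."""
    snr = max(noisy_power - noise_var, 0.0) / noise_var
    return snr / (1.0 + snr)

def enhance(coeffs, noise_var):
    # Attenuate each coefficient according to its estimated SNR.
    return [c * wiener_gain(c * c, noise_var) for c in coeffs]

noisy = [4.0, 0.5, -3.0, 0.2]  # transform-domain coefficients
clean_est = enhance(noisy, noise_var=1.0)
print([round(c, 3) for c in clean_est])  # [3.75, 0.0, -2.667, 0.0]
```

Coefficients well above the noise floor pass nearly unchanged while low-energy ones are zeroed, which is the qualitative behavior any MMSE-style suppressor shares.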