Any software application can be divided into four distinct, interconnected domains: the problem, usage, development, and system domains. A methodology for assistive technology software development is presented that provides a framework for requirements elicitation studies and for their subsequent mapping, using use-case-driven object-oriented analysis for component-based software architectures. Early feedback on the effectiveness of user interface components is obtained through in-process usability evaluation. A model is proposed that consists of three environments, or worlds: the problem, conceptual, and representational worlds. This model emphasizes the relationship between the objects and classes in the representational model and the elements of the system under consideration. Applying this model to several practical examples resulted in promising improvements in software design and understanding.
Experimental measurements of the viscosity and thermal conductivity of a single-layer graphene-based DI-water nanofluid are performed as a function of concentration (0.1-1 wt%) and temperature (5 to 35 °C). The results reveal that the thermal conductivity of the GNP nanofluids increased with increasing nanoparticle weight fraction and temperature, with a maximum enhancement of about 22% for a concentration of 1 wt.% at 35 °C. These experimental results were compared with several theoretical models, and good agreement between Nan's model and the experimental results was observed. The viscosity of the graphene nanofluid displays Newtonian and non-Newtonian behaviors with respect to nanoparticle concentration.
The laser is a powerful device with a wide range of applications in fields ranging from materials science and manufacturing to medicine and fibre optic communications. One remarkable
The exponential growth of audio data shared over the internet and communication channels has raised significant concerns about the security and privacy of transmitted information. Due to their high processing requirements, traditional encryption algorithms demand considerable computational effort for real-time audio encryption. To address these challenges, this paper presents a permutation-based scheme for secure audio encryption using a combination of the Tent and 1D logistic maps. The audio data is first shuffled using the Tent map to perform a random permutation. A highly random secret key with a length equal to the size of the audio data is then generated using a 1D logistic map. Finally, the Exclusive OR (XOR) operation is applied between the generated key and the shuffled audio data.
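The abstract gives only the outline of the scheme; the sketch below is a minimal illustration of this kind of chaotic-map audio cipher in Python, assuming 8-bit audio samples and map parameters and seed values chosen purely for demonstration (the paper's actual key schedule and parameters are not stated here).

import numpy as np

def tent_map_sequence(x0, r, n):
    # Iterate the Tent map x_{k+1} = r * min(x_k, 1 - x_k).
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * min(x, 1.0 - x)
        seq[i] = x
    return seq

def logistic_map_sequence(x0, r, n):
    # Iterate the 1D logistic map x_{k+1} = r * x_k * (1 - x_k).
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(audio, tent_seed=0.37, tent_r=1.99, log_seed=0.61, log_r=3.99):
    # 1) Permutation: sort indices by a Tent-map sequence to obtain a pseudo-random shuffle.
    n = audio.size
    perm = np.argsort(tent_map_sequence(tent_seed, tent_r, n))
    shuffled = audio[perm]
    # 2) Key stream: quantise the logistic-map sequence to one byte per sample.
    key = (logistic_map_sequence(log_seed, log_r, n) * 255).astype(np.uint8)
    # 3) Diffusion: XOR the shuffled samples with the key stream.
    return shuffled ^ key, perm, key

def decrypt(cipher, perm, key):
    # Reverse the XOR, then undo the permutation.
    shuffled = cipher ^ key
    audio = np.empty_like(shuffled)
    audio[perm] = shuffled
    return audio

# Round-trip check on random 8-bit "audio" samples.
samples = np.random.randint(0, 256, size=1024, dtype=np.uint8)
cipher, perm, key = encrypt(samples)
assert np.array_equal(decrypt(cipher, perm, key), samples)

In a real implementation the seeds and map parameters would serve as the secret key, and the samples would come from an actual audio file rather than a random array.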
Objective: To measure the effect of the pharmacist-led medication reconciliation service before hospital discharge on preventing potential medication errors. Methods: This behavioral interventional study took place in a public teaching hospital in Iraq between December 2022 and January 2023. It included inpatients who were taking four or more medications upon discharge from the internal medicine ward and the cardiac care unit. The researcher provided the patients with a medication reconciliation form (including the medication regimen and pharmacist instructions) before discharging them home. Any discrepancies between the patients' understanding and the actual medication recommendations prescribed by the physicians
In this study, optical fibers were designed and implemented as a chemical sensor based on surface plasmon resonance (SPR) to estimate the age of the oil used in electrical transformers. The measurement depends on the refractive indices of the oil. The sensor was fabricated by embedding the central portion of the optical fiber in a resin block, followed by polishing and tapering. The tapering time was 50 min. The multi-mode optical fiber was coated with a 60 nm thick gold layer, and the deposition length was 4 cm. The sensor's resonance wavelength was 415 nm. The primary sensor parameters were calculated, including the sensitivity (6.25), signal-to-noise ratio (2.38), figure of merit (4.88), and accuracy (3.2).
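The abstract does not state how these figures of merit are defined; the expressions below are the definitions commonly used for spectrally interrogated SPR fiber sensors and are given only as a plausible reading of the quoted values, where Δλ_res is the shift of the resonance wavelength, Δn the change in refractive index of the sensing medium (the transformer oil), and δλ_0.5 the spectral width (FWHM) of the SPR dip:

S = \frac{\Delta\lambda_{\mathrm{res}}}{\Delta n}, \qquad
\mathrm{SNR} = \frac{\Delta\lambda_{\mathrm{res}}}{\delta\lambda_{0.5}}, \qquad
\mathrm{FOM} = \frac{S}{\delta\lambda_{0.5}}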
Confocal microscope imaging has become popular in biotechnology labs. Confocal imaging technology utilizes fluorescence optics, where laser light is focused onto a specific spot at a defined depth in the sample. A considerable number of images are produced regularly during the course of research, and these images require unbiased quantification methods to yield meaningful analyses. Increasing efforts to tie reimbursement to outcomes will likely increase the need for objective data in analyzing confocal microscope images in the coming years. Quantifying confocal images visually, with the naked human eye, is an essential but often underreported outcome measure due to the time required for manual counting and e
This work focuses on the significance of wireless body area networks (WBANs) as a cutting-edge and self-governing technology, which has garnered substantial attention from researchers. The central challenge faced by WBANs revolves around upholding quality of service (QoS) within rapidly evolving sectors like healthcare. The intricate task of managing diverse traffic types with limited resources further compounds this challenge. Particularly in medical WBANs, the prioritization of vital data is crucial to ensure prompt delivery of critical information. Given the stringent requirements of these systems, any data loss or delays are untenable, necessitating the implementation of intelligent algorithms. These algorithms play a pivota
During COVID-19, wearing a mask was globally mandated in various workplaces, departments, and offices. New deep learning convolutional neural network (CNN) based classifiers have been proposed to increase the validation accuracy of face mask detection. This work introduces a face mask model that is able to recognize whether a person is wearing a mask or not. The proposed model has two stages to detect and recognize the face mask: in the first stage, a Haar cascade detector is used to detect the face, while in the second stage, a CNN model built from scratch is used as the classifier. The experiment was applied to the masked faces (MAFA) dataset with RGB images of 160x160 pixels. The model achieve
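The abstract describes the pipeline only at a high level; the sketch below illustrates one way such a two-stage detector plus classifier can be wired together in Python with OpenCV and Keras. The layer sizes, optimizer, and training details are illustrative assumptions, not the architecture reported in the paper.

import cv2
import numpy as np
from tensorflow.keras import layers, models

# Stage 1: Haar cascade face detection (the cascade file ships with OpenCV).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_image):
    # Return face crops resized to the CNN input size (160x160, RGB, scaled to [0, 1]).
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    boxes = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    faces = []
    for (x, y, w, h) in boxes:
        crop = cv2.cvtColor(bgr_image[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        faces.append(cv2.resize(crop, (160, 160)) / 255.0)
    return np.array(faces), boxes

# Stage 2: a small CNN classifier built from scratch (illustrative architecture).
def build_mask_classifier():
    model = models.Sequential([
        layers.Input(shape=(160, 160, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 1 = mask, 0 = no mask
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage (hypothetical file name; the classifier must first be trained, e.g. on MAFA):
# frame = cv2.imread("example.jpg")
# faces, boxes = detect_faces(frame)
# if len(faces):
#     predictions = build_mask_classifier().predict(faces)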
Computer models are used in the study of electrocardiography to provide insight into physiological phenomena that are difficult to measure in the lab or in a clinical environment.
The electrocardiogram is an important tool for the clinician in that it changes characteristically in a number of pathological conditions; many illnesses can be detected by this measurement. By simulating the electrical activity of the heart, one obtains a quantitative relationship between the electrocardiogram and different anomalies.
Because of the inhomogeneous fibrous structure of the heart and the irregular geometries of the body, the finite element method is used for studying the electrical properties of the heart.
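The governing equations are not reproduced in this summary; for context, forward electrocardiography models of this kind typically solve a quasi-static volume-conductor problem of the form below (a standard formulation, not necessarily the exact one used in this work), where σ is the conductivity tensor, φ the electric potential, I_v the cardiac current source density, and n the outward normal on the body surface; the finite element method discretizes these equations over the heart and torso geometry:

\nabla \cdot (\sigma \nabla \phi) = -I_v \quad \text{in the torso volume}, \qquad (\sigma \nabla \phi) \cdot \mathbf{n} = 0 \quad \text{on the body surface}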
This work describes t