Background: Measurement of hemoglobin A1c (A1C) is a well-established method for assessing long-term glycemic control and has a major influence on the quality of care in diabetic patients. The concept of targets is open to criticism: targets may be unattainable, may limit what could otherwise be attained, and may be economically difficult to reach. However, without some form of targeted control of an asymptomatic condition, it becomes difficult to promote care at all. Objectives: The present article addresses the most recent evidence-based global guidelines on A1C targets for glycemic control in Type 2 Diabetes Mellitus (T2D). Key messages: The rationale for A1C treatment targets includes evidence of microvascular and macrovascular protection and changes in quality of life. More or less stringent A1C goals may be appropriate for individual patients, and goals should be individualized based on duration of diabetes, age/life expectancy, comorbid conditions, CVD or advanced microvascular complications, hypoglycemia unawareness, and individual patient considerations.
Patients infected with the COVID-19 virus can develop severe pneumonia, which may be fatal. Radiological data show that the disease involves interstitial lung involvement, lung opacities, bilateral ground-glass opacities, and patchy opacities. This study aimed to improve COVID-19 diagnosis via radiological chest X-ray (CXR) image analysis, making a substantial contribution to the development of a mobile application that efficiently identifies COVID-19, saving medical professionals time and resources and allowing timely preventive intervention. More than 18,000 CXR lung images and the MobileNetV2 convolutional neural network (CNN) architecture were used, and the performance of the MobileNetV2 deep-learning model was evaluated.
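The abstract does not specify which evaluation metrics were reported. A common choice when evaluating a diagnostic classifier such as this one is accuracy, sensitivity, specificity, and precision computed from confusion-matrix counts; the following minimal sketch (with hypothetical counts, not the study's data) shows how such metrics are derived:

```python
def evaluate(tp, fp, fn, tn):
    """Compute standard diagnostic metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # recall on positive (COVID) cases
    specificity = tn / (tn + fp)   # recall on negative cases
    precision = tp / (tp + fp)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision}

# Hypothetical counts for illustration only.
metrics = evaluate(tp=900, fp=50, fn=100, tn=950)
```

Sensitivity matters most in a screening setting, since a false negative sends an infected patient home untested.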
This work aims to develop a secure lightweight cipher algorithm for constrained devices. Secure communication among constrained devices is a critical issue during data transmission from client to server devices. Lightweight cipher algorithms are a security solution for constrained devices because they require only low-cost computational functions and small memory. However, most lightweight algorithms suffer from a trade-off between complexity and speed when producing a robust cipher. The PRESENT cipher has been successfully used as a lightweight cryptographic algorithm, and it surpasses other ciphers in computational processing, requiring only low-complexity operations.
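For reference, the round structure of PRESENT that makes it lightweight (a 4-bit S-box and a pure bit permutation, no heavy arithmetic) can be sketched as below. This follows the published PRESENT-80 design (64-bit block, 80-bit key, 31 rounds), not the modified model developed in this work:

```python
# Minimal sketch of PRESENT-80: each round is addRoundKey, a 4-bit S-box
# layer, and a bit-permutation layer; a final round key is added at the end.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def s_layer(state):
    """Apply the 4-bit S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    """Bit i of the state moves to position (16*i) mod 63; bit 63 is fixed."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def round_keys(key80):
    """PRESENT-80 key schedule: derive 32 round keys of 64 bits each."""
    keys, k = [], key80
    for r in range(1, 33):
        keys.append(k >> 16)                               # top 64 bits
        k = ((k << 61) | (k >> 19)) & ((1 << 80) - 1)      # rotate left by 61
        k = (SBOX[k >> 76] << 76) | (k & ((1 << 76) - 1))  # S-box on top nibble
        k ^= r << 15                                       # counter into bits 19..15
    return keys

def present_encrypt(plaintext, key80):
    keys = round_keys(key80)
    state = plaintext
    for r in range(31):
        state = p_layer(s_layer(state ^ keys[r]))
    return state ^ keys[31]
```

Every operation here is a shift, mask, or 16-entry table lookup, which is why the cipher maps well onto constrained hardware and microcontrollers.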
In this paper, an algorithm is introduced through which more data can be embedded than with regular spatial-domain methods. The secret data are first compressed using Huffman coding, and the compressed data are then embedded using the Laplacian sharpening method.
Laplace filters are used to determine the effective hiding places; based on a threshold value, the positions with the highest values acquired from these filters are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding, while at the same time increasing the security of the algorithm by hiding data in the places with the highest edge values, where changes are less noticeable.
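The edge-selection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it convolves a grayscale image with the standard 3x3 Laplacian kernel and keeps the positions whose absolute response exceeds a threshold, strongest first:

```python
# Candidate embedding positions = pixels with the strongest Laplacian
# (edge) response, where modifications are least noticeable.

LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def laplacian_response(img):
    """Apply the 3x3 Laplacian to the interior pixels of a 2D grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(LAPLACIAN[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def embedding_positions(img, threshold):
    """Return (y, x) positions whose |Laplacian| exceeds the threshold,
    sorted strongest-first -- candidate edge locations for hiding data."""
    resp = laplacian_response(img)
    pos = [(abs(resp[y][x]), y, x)
           for y in range(len(img)) for x in range(len(img[0]))
           if abs(resp[y][x]) > threshold]
    return [(y, x) for _, y, x in sorted(pos, reverse=True)]
```

On a flat region the Laplacian is zero, so smooth areas are never selected; only pixels near intensity transitions survive the threshold.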
Currently, with the huge increase in modern communication and network applications, the speed of transferring and storing data in compact forms is a pressing issue. An enormous number of images are stored and shared every moment, especially in the social media realm, but even with these marvelous applications the limited size of transmitted data remains the main restriction. Essentially all of these applications use the well-known Joint Photographic Experts Group (JPEG) standard techniques; in the same way, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different image compression techniques.
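At the heart of the JPEG standard mentioned above is the 2-D DCT-II applied to 8x8 pixel blocks; the transform concentrates a block's energy into a few low-frequency coefficients so the rest can be coarsely quantized. A small self-contained illustration (not taken from the review) of the forward transform:

```python
import math

def dct_8x8(block):
    """Forward 2-D DCT-II of an 8x8 block, using JPEG's normalization."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out
```

For a perfectly flat block, all the energy lands in the single DC coefficient `out[0][0]` and every AC coefficient is zero, which is exactly what makes smooth regions cheap to encode.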
Solar cells were assembled with electrolytes containing the I−/I3− redox couple, employing polyacrylonitrile (PAN), ethylene carbonate (EC), and propylene carbonate (PC) with the double iodide salts tetrabutylammonium iodide (TBAI) and lithium iodide (LiI), plus iodine (I2), with the aim of enhancing the efficiency of the solar cells. The performance of the solar cells was examined by varying the weight ratio of the salts in the electrolyte. The solar cell whose electrolyte comprised 60 wt.% TBAI / 40 wt.% LiI (+I2) displayed an elevated efficiency of 5.189% under 1000 W/m2 light intensity, while the solar cell whose electrolyte comprised 60 wt.% LiI / 40 wt.% TBAI (+I2) displayed a lower efficiency of 3.189%.
Realizing and understanding semantic segmentation is a demanding task, not only in computer vision but also in earth-science research. Semantic segmentation decomposes compound structures into single elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with the semantic meaning of each object. It is a method for labeling and clustering point clouds automatically. Classifying 3D natural scenes requires a point-cloud dataset as the input data representation, and many challenges arise when working with 3D data, such as the small number, limited resolution, and limited accuracy of 3D datasets.
A Multiple Biometric System Based on ECG Data
Password authentication is a popular approach to system security and a very important security procedure for gaining access to user resources. This paper describes a password authentication method that uses a Modified Bidirectional Associative Memory (MBAM) algorithm for both graphical and textual passwords, for greater efficiency in speed and accuracy. Across 100 tests, the accuracy was 100% for both graphical and textual password authentication.
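The paper's specific MBAM modification is not detailed in this abstract; the sketch below shows only the standard Bidirectional Associative Memory it builds on, which stores pattern pairs (e.g. a password encoding and its identity vector) in a correlation matrix and recalls one pattern from the other:

```python
# Standard BAM: W = sum over stored pairs of x * y^T on bipolar (+1/-1)
# vectors; recall of y from x is a thresholded matrix-vector product.

def train_bam(pairs):
    """Build the BAM weight matrix from bipolar (x, y) pattern pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def recall_y(W, x):
    """Forward pass: y_j = sign(sum_i x_i * W[i][j])."""
    m = len(W[0])
    return [1 if sum(W[i][j] * x[i] for i in range(len(x))) >= 0 else -1
            for j in range(m)]
```

With mutually orthogonal stored inputs, recall is exact, which is the property an authentication scheme relies on to map a presented password back to the correct stored identity.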
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions containing important information, the compression ratio is reduced to prevent loss of information, while in smooth regions containing no important information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
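The block-adaptive idea above can be sketched as follows. The paper's exact importance measure is not given in this abstract, so this illustration uses per-block variance as a stand-in importance score and assigns a higher quality factor (i.e. less compression) to detailed blocks:

```python
# Split an image into non-overlapping size x size blocks, score each block
# by its variance, and map high-variance ("important") blocks to a high
# quality factor and smooth blocks to a low one.

def block_variance(img, y0, x0, size):
    vals = [img[y][x] for y in range(y0, y0 + size)
                      for x in range(x0, x0 + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def assign_quality(img, size, var_threshold, q_high=90, q_low=30):
    """Return a per-block quality factor keyed by the block's top-left corner."""
    h, w = len(img), len(img[0])
    qmap = {}
    for y0 in range(0, h, size):
        for x0 in range(0, w, size):
            v = block_variance(img, y0, x0, size)
            qmap[(y0, x0)] = q_high if v > var_threshold else q_low
    return qmap
```

The quality map would then drive the actual encoder, e.g. by selecting a coarser quantization table for the low-quality blocks.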