Periodontitis is a multifactorial chronic inflammatory disease that affects the tooth-supporting soft and hard tissues of the dentition. The dental plaque biofilm is considered the primary etiological factor in susceptible patients; however, other factors, such as diabetes and smoking, contribute to progression. Current management uses mechanical biofilm removal as the gold standard of treatment. Antibacterial agents may be indicated in certain conditions as an adjunct to this mechanical approach. However, in view of the growing concern about bacterial resistance, alternative approaches have been investigated. A range of antimicrobial agents and protocols is currently used in clinical management, but these remain largely non-validated. This review aimed to evaluate the efficacy of adjunctive antibiotic use in periodontal management and to compare it with recently suggested alternatives. Evidence from in vitro, observational, and clinical trial studies suggests that adjunctive antimicrobials are effective in young patients with grade C periodontitis, or where the associated risk factors are inconsistent with the amount of bone loss present. Meanwhile, alternative approaches such as photodynamic therapy, bacteriophage therapy, and probiotics have limited supporting evidence, and more studies are warranted to validate their efficacy.
As a result of recent developments in highway research, as well as the increased use of vehicles, significant interest has been paid to the most current, effective, and precise Intelligent Transportation Systems (ITS). In the field of computer vision and digital image processing, the identification of specific objects in an image plays a crucial role in forming a comprehensive understanding of that image. Vehicle License Plate Recognition (VLPR) is challenging because of variations in viewpoint, multiple plate formats, non-uniform lighting conditions at the time of image acquisition, and differences in shape and color, in addition to difficulties such as poor image resolution, blurred images, poor lighting, and low contrast.
Recently, image enhancement techniques have become one of the most significant topics in the field of digital image processing. The basic problem in enhancement is how to remove noise and improve digital image details. In the current research, a method for digital image de-noising and detail sharpening/highlighting is proposed. The proposed approach uses a fuzzy logic technique to process each pixel of the entire image and then decides whether the pixel is noisy or needs further processing for highlighting. This decision is made by examining the degree of association with neighboring elements based on a fuzzy algorithm. The proposed de-noising approach was evaluated on standard images after corrupting them with impulse noise.
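The per-pixel fuzzy decision described in this abstract can be sketched roughly as follows. The piecewise-linear membership function with thresholds `a` and `b`, and the median-based correction, are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def fuzzy_denoise(img, a=10, b=50):
    # For each pixel, compute a fuzzy "noisiness" degree from its deviation
    # to the 3x3 neighbourhood median, then blend the pixel with the median
    # in proportion to that degree (0 = keep pixel, 1 = full median filter).
    p = np.pad(img.astype(float), 1, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = p[y:y + 3, x:x + 3]
            med = np.median(win)
            d = abs(float(img[y, x]) - med)
            # piecewise-linear fuzzy membership: 0 below a, 1 above b
            mu = np.clip((d - a) / (b - a), 0.0, 1.0)
            out[y, x] = mu * med + (1 - mu) * img[y, x]
    return np.rint(out).astype(img.dtype)
```

A pixel close to its neighbourhood median (small `d`) is judged clean and left untouched, while an impulse-corrupted pixel (large `d`) is fully replaced by the median; intermediate deviations are blended, which is what distinguishes the fuzzy rule from a hard-threshold median filter.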
This work aims to develop a secure lightweight cipher algorithm for constrained devices. Secure communication among constrained devices is a critical issue during data transmission from client to server devices. Lightweight cipher algorithms are a security solution for constrained devices that require low computational cost and small memory. In contrast, most lightweight algorithms suffer from a trade-off between complexity and speed in producing a robust cipher. The PRESENT cipher has been successfully tested as a lightweight cryptography algorithm, and it surpasses other ciphers in terms of computational processing, requiring only low-complexity operations. The mathematical model of
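For reference, the PRESENT-80 round structure mentioned above (31 rounds of round-key mixing, a 4-bit S-box layer, and a bit permutation, followed by a final whitening key) can be sketched as below. This follows the published PRESENT specification, not any modified variant this work may propose:

```python
# PRESENT 4-bit S-box from the specification
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def present80_encrypt(plaintext, key):
    """Encrypt a 64-bit integer plaintext under an 80-bit integer key."""
    mask80 = (1 << 80) - 1
    # --- key schedule: derive 32 round keys from the 80-bit register ---
    round_keys = []
    k = key & mask80
    for i in range(1, 33):
        round_keys.append(k >> 16)                       # leftmost 64 bits
        k = ((k << 61) | (k >> 19)) & mask80             # rotate left by 61
        k = (SBOX[k >> 76] << 76) | (k & ((1 << 76) - 1))  # S-box on top nibble
        k ^= i << 15                                     # round counter into bits 19..15
    # --- 31 rounds, then final key whitening ---
    state = plaintext
    for i in range(31):
        state ^= round_keys[i]
        # sBoxLayer: apply the S-box to each of the 16 nibbles
        state = sum(SBOX[(state >> (4 * j)) & 0xF] << (4 * j) for j in range(16))
        # pLayer: bit j moves to position (16*j) mod 63; bit 63 is fixed
        state = sum(((state >> j) & 1) << ((16 * j) % 63 if j < 63 else 63)
                    for j in range(64))
    return state ^ round_keys[31]
```

The all-zero test vector from the specification (plaintext 0, key 0 encrypting to `0x5579C1387B228445`) is a quick way to check such an implementation.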
In this paper, an algorithm is introduced through which more data can be embedded than with regular spatial-domain methods. The secret data are compressed using Huffman coding, and this compressed data is then embedded using a Laplacian sharpening method. Laplacian filters are used to determine the effective hiding places; based on a threshold value, the places with the highest values acquired from these filters are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in the places with the highest edge values, where changes are least noticeable. The perform
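The edge-guided embedding step described above can be sketched as follows. This is a simplified illustration: the Huffman compression stage is omitted, the threshold value is hypothetical, and the extractor recomputes the hiding places from the original image (a non-blind scheme), since flipping LSBs can perturb the stego image's own edge map:

```python
import numpy as np

# 3x3 Laplacian kernel used to measure local edge strength
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def laplacian_map(img):
    # absolute Laplacian response at every pixel (zero-padded borders)
    p = np.pad(img.astype(int), 1)
    out = np.zeros(img.shape, dtype=int)
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.abs(out)

def embed(img, bits, threshold):
    # hide one bit in the LSB of each pixel whose edge strength exceeds the threshold
    ys, xs = np.nonzero(laplacian_map(img) > threshold)
    assert len(bits) <= len(ys), "payload too large for this threshold"
    stego = img.copy()
    for bit, y, x in zip(bits, ys, xs):
        stego[y, x] = (stego[y, x] & 0xFE) | bit
    return stego

def extract(stego, n_bits, threshold, original):
    # recompute the hiding places from the ORIGINAL image's edge map
    ys, xs = np.nonzero(laplacian_map(original) > threshold)
    return [int(stego[y, x] & 1) for y, x in zip(ys[:n_bits], xs[:n_bits])]
```

Raising the threshold restricts embedding to the strongest edges (less noticeable, lower capacity); lowering it trades imperceptibility for payload size.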
Tremendous efforts have been exerted to understand first language acquisition in order to facilitate second language learning. The problem lies in the difficulty of mastering the English language and of adopting a theory that helps in overcoming the difficulties facing students. This study aims to apply Tomasello's usage-based theory of language mastery. It assumes that adults can learn faster than children and can learn the language independently, outside academic education. Tomasello (2003) studied the stages of language acquisition in children and developed his theory accordingly. Some studies (Ghalebi and Sadighi, 2015; Arvidsson, 2019; Munoz, 2019; Verspoor and Hong, 2013) used this theory when examining language acquisition. Thus,
Realizing and understanding semantic segmentation is a demanding task not just in computer vision but also in earth sciences research. Semantic segmentation decomposes compound structures into single elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object. It is a method for labeling and clustering point clouds automatically. Three-dimensional natural scene classification requires a point cloud dataset as its input data representation, and many challenges arise when working with 3D data, such as the small number, low resolution, and limited accuracy of three-dimensional datasets. Deep learning now is the po
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Finally, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can achieve promising performance.
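The predict-and-encode-the-residue idea behind this pipeline can be sketched for a single 1-D block. The degree-1 fit, the block length, and the (value, count) run-length format are illustrative assumptions, not the paper's exact scheme, and the Huffman stage is omitted:

```python
import numpy as np

def rle_encode(values):
    # run-length encode a 1-D integer sequence as [value, count] pairs
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([int(v), 1])
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

def compress_block(block):
    # fit a degree-1 polynomial to the block, round its prediction to integers,
    # and run-length encode the integer residue so reconstruction is lossless
    x = np.arange(len(block))
    a, b = np.polyfit(x, block, 1)
    pred = np.rint(a * x + b).astype(int)
    residue = np.asarray(block, dtype=int) - pred
    return (a, b), rle_encode(residue)

def decompress_block(coeffs, runs, n):
    # rebuild the same rounded prediction, then add the decoded residue back
    a, b = coeffs
    x = np.arange(n)
    pred = np.rint(a * x + b).astype(int)
    return (pred + np.array(rle_decode(runs))).tolist()
```

Because both sides round the same polynomial prediction identically, the residue added back at decode time restores the block exactly; the residue of a well-predicted block is small and repetitive, which is what makes the subsequent run-length (and Huffman) stages effective.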