This study compared, in vitro, the microleakage of a new low-shrink silorane-based posterior composite (Filtek™ P90) and two methacrylate-based composites, a packable posterior composite (Filtek™ P60) and a nanofill composite (Filtek™ Supreme XT), using a dye penetration test. Thirty sound human upper premolars were used. Standardized class V cavities were prepared on the buccal surface of each tooth. The teeth were then divided into three groups of ten teeth each: Group 1 restored with Filtek™ P90, Group 2 restored with Filtek™ P60, and Group 3 restored with Filtek™ Supreme XT. Each composite system was used with its corresponding adhesive system according to the manufacturer's instructions. The teeth were then thermocycled, immersed in 1% methylene blue dye for 24 hours at room temperature, embedded in auto-polymerizing acrylic resin, and sectioned longitudinally in a bucco-lingual direction. Microleakage was evaluated by assessing the linear dye penetration at the tooth/restoration interface occlusally and gingivally. The highest microleakage score, occlusal or gingival, was recorded, and the results were analyzed statistically using SPSS version 13. The results showed that the silorane-based posterior composite (Filtek™ P90) exhibited significantly less microleakage than the methacrylate-based packable composite (Filtek™ P60) and the nano-filled composite (Filtek™ Supreme XT) when the tooth-restoration interface was located in enamel.
Cryptography is a major concern in communication systems. IoE technology is a new trend of smart systems based on various constrained devices. Lightweight cryptographic algorithms address the main security concerns of constrained devices and IoE systems. On the other hand, most lightweight algorithms suffer from a trade-off between complexity and performance. Moreover, the strength of a cryptosystem depends on both the speed of the algorithm and the complexity of the system against cryptanalysis. A chaotic system is based on nonlinear dynamic equations that are sensitive to initial conditions and produce high randomness, which makes it a good choice for cryptosystems. In this work, we propose a new five-dimensional chaotic …
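To illustrate the general principle only (not the five-dimensional system proposed in this work), a minimal sketch of a chaos-based keystream cipher using the one-dimensional logistic map is shown below; the map parameter, initial condition, and byte-extraction step are illustrative assumptions.

```python
# Minimal sketch of a chaos-based keystream cipher using the logistic map.
# NOT the five-dimensional system proposed in the work: the parameter r,
# the initial condition x0, and the byte-extraction step are assumptions.

def logistic_keystream(x0: float, r: float, length: int) -> bytes:
    """Generate `length` pseudo-random bytes from the map x -> r*x*(1-x)."""
    x = x0
    out = bytearray()
    for _ in range(length):
        x = r * x * (1.0 - x)           # chaotic iteration, sensitive to x0
        out.append(int(x * 256) % 256)  # crude quantization of the orbit to a byte
    return bytes(out)

def xor_crypt(data: bytes, x0: float = 0.654321, r: float = 3.99) -> bytes:
    """Encrypt or decrypt by XOR-ing the data with the chaotic keystream."""
    keystream = logistic_keystream(x0, r, len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

if __name__ == "__main__":
    message = b"lightweight IoE message"
    ciphertext = xor_crypt(message)
    assert xor_crypt(ciphertext) == message  # the same (x0, r) recovers the plaintext
```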
Biometric-based key generation uses features extracted from human anatomical (physiological) traits, such as the fingerprint or retina, or behavioral traits, such as the signature. The retina biometric is inherently robust and is therefore capable of generating random keys with a higher security level than other biometric traits. In this paper, an effective system to generate secure, robust, and unique random keys based on retina features is proposed for cryptographic applications. The retina features are extracted using the glowworm swarm optimization (GSO) algorithm, which provides promising results in experiments on standard retina databases. Additionally, in order t…
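As a hedged sketch of only the final key-derivation step (the GSO-based feature extraction itself is not shown), extracted retina feature values might be quantized and hashed into a fixed-length key; the quantization step and the choice of SHA-256 are assumptions, not necessarily the method used in the paper.

```python
# Illustrative sketch: turning an extracted biometric feature vector into a
# fixed-length key. The GSO feature extraction is not shown; the quantization
# step and the use of SHA-256 are assumptions, not the paper's exact method.
import hashlib
from typing import Sequence

def features_to_key(features: Sequence[float], bins: int = 64) -> bytes:
    """Quantize each feature value and hash the result into a 256-bit key."""
    quantized = bytes(int(f * bins) % 256 for f in features)  # tolerate small noise
    return hashlib.sha256(quantized).digest()

if __name__ == "__main__":
    retina_features = [0.12, 0.87, 0.45, 0.33, 0.91]  # hypothetical GSO output
    key = features_to_key(retina_features)
    print(key.hex())
```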
Drug solubility and dissolution remain a significant challenge in pharmaceutical formulations. This study aimed to formulate and evaluate repaglinide (RPG) nanosuspension-based buccal fast-dissolving films (BDFs) for dissolution enhancement. The RPG nanosuspension was prepared by the antisolvent-precipitation method using multiple hydrophilic polymers, including Soluplus®, polyvinyl alcohol, polyvinyl pyrrolidone, poloxamers, and hydroxypropyl methylcellulose. The nanosuspension was then directly loaded into BDFs using the solvent casting technique. Twelve formulas were prepared, with a particle size range of 81.6-1389 nm and a PDI of 0.002-1 for the different polymers. Nanosuspensions prepared with Soluplus® showed a favored mean particle size o…
Pattern matching algorithms are usually used as the detection process in intrusion detection systems. The efficiency of these algorithms directly affects the performance of the intrusion detection system, which motivates new investigation in this field. Four matching algorithms, and a combination of two of them, are applied to an intrusion detection system based on a new DNA encoding in order to evaluate their performance. These algorithms are the Brute-force algorithm, the Boyer-Moore algorithm, the Horspool algorithm, the Knuth-Morris-Pratt algorithm, and the combination of the Boyer-Moore and Knuth-Morris-Pratt algorithms. The performance of the proposed approach is measured by execution time, where these algorithms are applied o…
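For context, a minimal Boyer-Moore-Horspool matcher over a DNA-encoded payload could look like the sketch below; the byte-to-nucleotide encoding is a placeholder, not the new DNA encoding proposed in the work.

```python
# Minimal Boyer-Moore-Horspool matcher over a DNA-encoded string.
# The byte-to-nucleotide encoding below is a placeholder, not the paper's
# specific DNA encoding scheme.

def encode_dna(data: bytes) -> str:
    """Map each 2-bit pair of every byte to a nucleotide (placeholder encoding)."""
    bases = "ACGT"
    return "".join(bases[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def horspool_search(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m else 0
    # Bad-character shift table: distance from last occurrence to pattern end.
    shift = {c: m for c in set(text)}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    i = m - 1
    while i < n:
        k = 0
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1
        i += shift.get(text[i], m)
    return -1

if __name__ == "__main__":
    traffic = encode_dna(b"...GET /admin HTTP/1.1...")
    signature = encode_dna(b"/admin")
    print(horspool_search(traffic, signature))  # index of the encoded signature
```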
Intrusion detection systems detect attacks on computers and networks, where attacks must be detected quickly and with a high detection rate. Various proposed methods have achieved a high detection rate, either by improving the algorithm or by hybridizing it with another algorithm. However, they suffer from long processing times, especially after the algorithm has been improved and when dealing with large traffic data. On the other hand, past research has successfully applied DNA-sequence detection approaches to intrusion detection systems; the achieved detection rates were very low, but the processing time was fast. Also, feature selection is used to reduce computation and complexity, which speeds up the system…
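As a hedged illustration of the feature-selection idea only (the specific selection method is not stated in this fragment), a simple variance filter over traffic features might look like the following; the feature names and the threshold value are illustrative assumptions.

```python
# Hedged illustration of feature selection for IDS traffic records: keep only
# features whose variance across records exceeds a threshold. The feature names
# and the threshold are illustrative assumptions, not the paper's exact method.
from statistics import pvariance

def select_features(records: list, threshold: float = 0.01) -> list:
    """Return names of features whose variance across records exceeds threshold."""
    names = records[0].keys()
    return [
        name for name in names
        if pvariance([r[name] for r in records]) > threshold
    ]

if __name__ == "__main__":
    traffic = [
        {"duration": 0.2, "src_bytes": 120, "flag": 1.0},
        {"duration": 3.1, "src_bytes": 4500, "flag": 1.0},
        {"duration": 0.9, "src_bytes": 300, "flag": 1.0},
    ]
    # "flag" is constant here, so it is dropped; the other features are kept.
    print(select_features(traffic))
```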
Today, many people suffer from health problems such as lung and cardiac dysfunction. These problems often require surveillance and follow-up to preserve the patient's health and to control diseases before they progress. To that end, this work proposes the design and development of a remote patient surveillance system that deals with four medical signs: temperature, SpO2, heart rate, and electrocardiogram (ECG). An adaptive filter is used to remove noise from the signal, and a simple and fast search algorithm has been designed to find the features of the ECG signal, such as the Q, R, S, and T waves. The system analyzes the medical signs to detect abnormal values. In addition, it sends data to the Base-Stati…
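As a minimal sketch of a threshold-based R-peak search (not necessarily the authors' exact search algorithm; the sampling rate, threshold factor, and refractory window below are assumptions), the idea can be expressed as follows.

```python
# Minimal threshold-based R-peak detector for an ECG signal.
# Not necessarily the authors' exact search algorithm: the sampling rate,
# threshold factor, and refractory window are illustrative assumptions.
from typing import List, Sequence

def detect_r_peaks(ecg: Sequence[float], fs: int = 250,
                   threshold_factor: float = 0.6,
                   refractory_s: float = 0.2) -> List[int]:
    """Return sample indices of R peaks found by simple amplitude thresholding."""
    threshold = threshold_factor * max(ecg)      # amplitude threshold
    refractory = int(refractory_s * fs)          # skip window after each peak
    peaks, i = [], 1
    while i < len(ecg) - 1:
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1]:
            peaks.append(i)
            i += refractory                      # ignore samples too close to this peak
        else:
            i += 1
    return peaks

def heart_rate_bpm(peaks: List[int], fs: int = 250) -> float:
    """Average heart rate from consecutive R-R intervals."""
    if len(peaks) < 2:
        return 0.0
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))
```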
Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields related to text mining and natural language processing, especially with the large increase in the volume of textual data produced daily. Traditional approaches for calculating the degree of similarity between two texts, based on the words they share, do not perform well with short texts because two similar texts may be written in different terms by employing synonyms. As a result, short texts should be compared semantically. In this paper, a method for measuring semantic similarity between texts is presented which combines knowledge-based and corpus-based semantic information to build a semantic network that repre…
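As a toy, hedged illustration of why purely lexical overlap fails on short texts and how even a small knowledge source helps (the synonym map below is hypothetical and is not the semantic network built in the paper):

```python
# Toy illustration: lexical cosine similarity misses synonyms on short texts,
# while mapping known synonym pairs to one form raises the score. The synonym
# map is hypothetical, not the semantic network proposed in the paper.
from math import sqrt

SYNONYMS = {"car": "automobile", "automobile": "car"}  # hypothetical knowledge base

def bag_of_words(text: str, normalize_synonyms: bool = False) -> dict:
    words = text.lower().split()
    if normalize_synonyms:
        words = [min(w, SYNONYMS.get(w, w)) for w in words]  # map synonym pairs to one form
    vec = {}
    for w in words:
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

t1, t2 = "the car is fast", "the automobile is fast"
print(cosine(bag_of_words(t1), bag_of_words(t2)))              # lower: 'car' != 'automobile'
print(cosine(bag_of_words(t1, True), bag_of_words(t2, True)))  # higher after synonym mapping
```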