Background: The main aim of the present study was to qualify and quantify void formation in root canals obturated with GuttaCore (GC) and an experimental hydroxyapatite/polyethylene (HA/PE) carrier-based root canal filling, using micro-computed tomography (micro-CT) scanning. Materials and methods: Eight straight, single-rooted human permanent premolar teeth were selected, disinfected, and stored in distilled water. The teeth were decoronated, leaving a root length of 12 mm each. The root canals were instrumented using a crown-down technique, and the apical diameter of each canal was prepared to size #30/0.04 to standardize the measurements. The smear layer was removed with 5 mL of 17% EDTA, followed by 5 mL of 2.5% NaOCl and a rinse with normal saline. The shaped root canals were then randomly divided into two groups of 4 teeth each according to the carrier-based obturation system used, GuttaCore or experimental HA/PE. The obturated roots were stored at 37°C and 100% humidity for 72 hours to allow complete setting of the sealer. The roots were then scanned with micro-CT to quantify the voids within the root canal space. The data were statistically analyzed by one-way ANOVA and post hoc comparison tests (α = 0.05). Results: Root canals obturated with both systems, GuttaCore and experimental HA/PE, showed void formation, particularly in the apical third of the root canal. GC obturation showed a lower void volume percentage (1.54%) than the experimental HA/PE obturation (2.3%); however, the difference between the two systems was not statistically significant (P > 0.05). Conclusions: Both GuttaCore and experimental HA/PE obturators exhibited void formation throughout the root canal space. The experimental HA/PE obturator is comparable to the GuttaCore obturator in terms of void formation.
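As a rough illustration of the quantification step described above, the sketch below computes a void volume percentage from segmented micro-CT data. It assumes the canal space and the voids have already been segmented into binary 3D masks; the mask shapes, voxel size, and dummy data are assumptions for illustration only and are not taken from the study.

```python
# Hedged sketch: void volume percentage from segmented micro-CT data.
# Assumes binary 3D masks (NumPy arrays) for the obturated canal space and
# the voids; the segmentation itself is not shown.
import numpy as np

def void_volume_percentage(void_mask, canal_mask, voxel_volume_mm3=1.0):
    """Percentage of the obturated canal volume occupied by voids."""
    void_volume = void_mask.sum() * voxel_volume_mm3
    canal_volume = canal_mask.sum() * voxel_volume_mm3
    return 100.0 * void_volume / canal_volume

# Example with dummy masks (200 x 64 x 64 voxels, purely illustrative)
canal = np.zeros((200, 64, 64), dtype=bool)
canal[:, 24:40, 24:40] = True                 # assumed canal segmentation
voids = np.zeros_like(canal)
voids[150:160, 30:34, 30:34] = True           # a small apical void
print(round(void_volume_percentage(voids, canal), 2), "%")
```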
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and messy text, which makes it difficult to find topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short texts such as tweets. Fortunately, Twitter offers many features that capture the interaction between users, and tweets carry rich user-generated hashtags that act as keywords. In this paper, we exploit the hashtag feature to improve the topics learned.
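As a rough illustration of the general idea, the sketch below trains an LDA model with gensim on tweet-like short texts and up-weights hashtag tokens by repeating them, so hashtags exert more influence on the learned topics. The sample tweets, the weighting factor, and the token handling are assumptions for illustration and do not reproduce the method proposed in the paper.

```python
# Minimal sketch: hashtag-aware LDA on tweet-like short texts (gensim).
# Sample tweets and the hashtag up-weighting factor are illustrative only.
from gensim import corpora, models

tweets = [
    "new phone camera is amazing #technology #gadgets",
    "great goal in the final minute #football #sports",
    "stock market drops again #economy #finance",
]

HASHTAG_WEIGHT = 3  # repeat hashtag tokens so they carry more weight

def tokenize(tweet):
    tokens = []
    for word in tweet.lower().split():
        if word.startswith("#"):
            tokens.extend([word] * HASHTAG_WEIGHT)  # boost hashtags
        else:
            tokens.append(word)
    return tokens

docs = [tokenize(t) for t in tweets]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(corpus, num_topics=3, id2word=dictionary, passes=10)
for topic_id, terms in lda.print_topics(num_words=5):
    print(topic_id, terms)
```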
This paper deals with the design and implementation of an ECG system. The proposed system offers a new approach to ECG signal acquisition, storage, and editing. It consists mainly of hardware circuits and the related software. The hardware includes the ECG signal capture circuits and the system interfaces. The software is written in Visual Basic and performs the identification of the ECG signal. The main advantage of the system is that it provides a reported ECG recording on a personal computer, so the recording can be stored and processed at any time as required. The system was tested on different ECG signals, some abnormal and some normal, and the results show that it provides a good quality of diagnosis.
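The abstract does not detail the identification step. As one hedged example of the kind of processing such software might perform, the sketch below detects R-peaks in a digitized ECG trace; the sampling rate, filter band, peak criteria, and synthetic signal are assumptions, not the procedure described in the paper.

```python
# Hedged sketch: R-peak detection on a digitized ECG trace.
# Sampling rate, band-pass limits, and peak criteria are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 360  # assumed sampling rate in Hz

def detect_r_peaks(ecg):
    # Band-pass filter (5-15 Hz) to emphasise the QRS complex
    b, a = butter(3, [5 / (FS / 2), 15 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # Peaks must be reasonably tall and at least 0.25 s apart
    peaks, _ = find_peaks(filtered, height=np.std(filtered),
                          distance=int(0.25 * FS))
    return peaks

# Example with a crude synthetic trace (replace with a recorded signal)
t = np.arange(0, 10, 1 / FS)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 20   # sharp periodic "beats"
print("Detected beats:", len(detect_r_peaks(ecg)))
```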
Gender classification is a critical task in computer vision, with substantial importance in domains such as surveillance, marketing, and human-computer interaction. The proposed face gender classification model consists of three main phases. The first phase applies the Viola-Jones algorithm to detect facial images, which involves four steps: 1) Haar-like features, 2) the integral image, 3) AdaBoost learning, and 4) a cascade classifier. The second phase applies four pre-processing operations: cropping, resizing, converting the image from the RGB color space to the LAB color space, and enhancing the images using histogram equalization (HE) and CLAHE. The final phase involves utilizing transfer learning.
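A hedged sketch of the first two phases is given below, using OpenCV's Haar-cascade implementation of Viola-Jones followed by cropping, resizing, RGB-to-LAB conversion, and CLAHE on the luminance channel. The image path, target size, and CLAHE parameters are assumptions; the downstream transfer-learning model is not shown.

```python
# Hedged sketch of the detection and pre-processing phases:
# Haar-cascade (Viola-Jones) face detection, then crop, resize,
# BGR->LAB conversion, and CLAHE enhancement of the L channel.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def preprocess_faces(image_path, size=(224, 224)):
    img = cv2.imread(image_path)                        # BGR image (assumed path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    processed = []
    for (x, y, w, h) in faces:
        face = cv2.resize(img[y:y + h, x:x + w], size)  # crop + resize
        lab = cv2.cvtColor(face, cv2.COLOR_BGR2LAB)     # to LAB color space
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        lab = cv2.merge((clahe.apply(l), a, b))         # enhance luminance only
        processed.append(cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))
    return processed  # these crops would feed a pre-trained CNN for gender labels
```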
Content-based image retrieval has been actively developed in numerous fields. It provides more effective management and retrieval of images than keyword-based methods, and has therefore become one of the liveliest research areas of the past few years. Given a set of objects, information retrieval seeks those that match a particular description; the objects may be documents, images, videos, or sounds. This paper proposes a method to retrieve a multi-view face from a large face database according to color and texture attributes. Some of the features used for retrieval are color attributes such as the mean, the variance, and the color image's bitmap. In addition, …
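As a rough illustration of the color attributes mentioned above, the sketch below computes a per-channel mean, a per-channel variance, and a simple "bitmap" (pixels above the channel mean), and compares two feature sets with a basic distance. The exact feature definitions and the distance measure are assumptions, not the descriptors used in the paper.

```python
# Hedged sketch of color-feature extraction for content-based retrieval:
# per-channel mean, per-channel variance, and a binary bitmap of pixels
# above the channel mean. Definitions and distance are illustrative only.
import numpy as np

def color_features(img):
    """img: H x W x 3 uint8 array (e.g. loaded with cv2.imread)."""
    img = img.astype(np.float64)
    means = img.mean(axis=(0, 1))              # one mean per channel
    variances = img.var(axis=(0, 1))           # one variance per channel
    bitmap = (img > means).astype(np.uint8)    # per-pixel binary map
    return means, variances, bitmap

def distance(f1, f2):
    """Smaller value = more similar images (assumes equal image sizes)."""
    m1, v1, b1 = f1
    m2, v2, b2 = f2
    return (np.linalg.norm(m1 - m2)
            + np.linalg.norm(v1 - v2)
            + np.mean(b1 != b2))               # fraction of differing bits
```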
Cryptography is a method used to mask text with an encryption method so that only the authorized user can decrypt and read the message. An intruder may attack in many ways to gain access to the communication channel, such as impersonation, repudiation, denial of service, modification of data, threatening confidentiality, and breaking the availability of services. The high volume of electronic communication between people requires that transactions remain confidential, and cryptographic methods give the best solution to this problem. This paper proposes a new cryptography method based on Arabic words; the method consists of two steps, the first of which is a binary encoding generation used to …
The quality of Global Navigation Satellite System (GNSS) networks is considerably influenced by the configuration of the observed baselines. This study aims to find an optimal configuration of GNSS baselines, in terms of the number and distribution of baselines, that improves the quality criteria of GNSS networks. The first-order design (FOD) problem was applied in this research to optimize the GNSS network baseline configuration, with the sequential adjustment method used to solve its objective functions.
FOD for optimum precision (FOD-p) was the proposed model, based on the design criteria of A-optimality and E-optimality. These design criteria were selected as objective functions of precision, which …
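For reference, the two precision criteria named here are conventionally defined on the cofactor (covariance) matrix of the estimated network coordinates. The formulation below follows the standard optimal-design definitions and is not quoted from the paper.

```latex
% Standard definitions (assumed, not quoted from the paper):
% Q is the cofactor matrix of the estimated coordinates, with eigenvalues
% \lambda_1 \ge \dots \ge \lambda_u.
\text{A-optimality:}\quad \min_{\text{design}} \operatorname{tr}(Q) = \min \sum_{i=1}^{u} \lambda_i ,
\qquad
\text{E-optimality:}\quad \min_{\text{design}} \lambda_{\max}(Q).
```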
Biomarkers to detect Alzheimer's disease (AD) would enable patients to gain access to appropriate services and may facilitate the development of new therapies. Given the large number of people affected by AD, there is a need for a low-cost, easy-to-use method to detect AD patients. The electroencephalogram (EEG) can potentially play a valuable role here, but at present no single EEG biomarker is robust enough for use in practice. This study aims to provide a methodological framework for the development of robust EEG biomarkers that detect AD with clinically acceptable performance by exploiting the combined strengths of key biomarkers. A large number of existing and novel EEG biomarkers associated with slowing of the EEG, reduction …
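As a hedged example of one candidate "EEG slowing" biomarker of the kind mentioned above, the sketch below computes a theta/alpha power ratio from a Welch power spectral density. The sampling rate, frequency bands, and the way individual biomarkers would later be combined are assumptions, not the framework developed in the study.

```python
# Hedged sketch: a theta/alpha power ratio as an "EEG slowing" biomarker.
# Sampling rate, band limits, and the dummy signal are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)

def band_power(freqs, psd, lo, hi):
    # Uniform frequency spacing, so the sum is proportional to band power
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]))

def theta_alpha_ratio(eeg_channel):
    freqs, psd = welch(eeg_channel, fs=FS, nperseg=FS * 2)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    return theta / alpha   # higher values suggest slowing of the EEG

# Dummy usage with random data standing in for a real recording
rng = np.random.default_rng(0)
print(theta_alpha_ratio(rng.standard_normal(FS * 30)))
```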