Background: Clinical applications of computed tomography have increased significantly in the oral and maxillofacial field, supplying dentists with sufficient data to play a major role in screening for osteoporosis; the Hounsfield units of mandibular computed tomography views can therefore be used as a primary indicator to predict generalized skeletal osteoporosis and fracture risk. Material and Methods: Thirty subjects (7 males and 23 females) with a mean age of 60.1 years underwent computed tomographic scanning for various diagnostic assessments of the head and neck region. Their mandibular bone quality was determined from the Hounsfield units of the CT images and correlated with the bone mineral density values obtained from lumbar-spine t-scores measured by dual X-ray absorptiometry (DEXA). Results: There was a highly significant positive correlation (p = 0.000) between bone mineral density measured by the DEXA t-score and the Hounsfield units, with a very strong relationship (r = 0.969); this close relationship makes it possible to predict osteoporosis and the likelihood of fracture using a statistical equation that classifies patients as osteoporotic. Conclusion: Hounsfield units obtained from computed tomography scans performed for any purpose can provide an alternative clinical parameter for predicting osteoporosis at no additional cost and no additional radiation to the patient.
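As a rough illustration of the kind of correlation analysis described above, the sketch below computes the Pearson correlation between mandibular Hounsfield units and DEXA t-scores and fits a simple linear equation to flag subjects as osteoporotic at the conventional t-score cutoff of -2.5; the numbers, the cutoff, and the regression form are illustrative assumptions, not the study's actual data or equation.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements: mandibular Hounsfield units (CT) and
# lumbar-spine t-scores (DEXA) for the same subjects -- not the study's data.
hounsfield = np.array([610.0, 540.0, 480.0, 350.0, 300.0, 220.0])
t_score = np.array([0.4, -0.2, -0.9, -2.1, -2.6, -3.2])

# Strength of the association (the study reports r = 0.969, p = 0.000).
r, p = stats.pearsonr(hounsfield, t_score)

# A simple linear equation predicting the t-score from Hounsfield units,
# analogous in spirit to a statistical classification equation.
fit = stats.linregress(hounsfield, t_score)
predicted_t = fit.slope * hounsfield + fit.intercept

# WHO convention: a t-score of -2.5 or below is classified as osteoporotic.
osteoporotic = predicted_t <= -2.5
print(f"r = {r:.3f}, p = {p:.4f}")
print(osteoporotic)
```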
Intrinsic plagiarism detection deals with cases in which no reference corpus is available; the task is based entirely on inconsistencies within a given document. Detection of internal plagiarism is treated here as a classification problem that can be estimated by considering only information drawn from the document itself.
The core contribution of the proposed work lies in the document representation: the document, as well as the disjoint segments generated from it, is represented as a weight vector describing its main content, where each element of the vector carries an average weight rather than a raw frequency.
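A minimal sketch of one way such average-weight vectors and a segment-versus-document comparison could be implemented is given below; the tokenization, segment size, and similarity threshold are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np
from collections import Counter

def average_weight_vector(tokens, vocab):
    """Weight each vocabulary term by its average contribution per token
    (term count divided by segment length) instead of its raw frequency."""
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    return np.array([counts[t] / n for t in vocab])

def suspicious_segments(document_tokens, segment_size=200, threshold=0.6):
    """Flag disjoint segments whose weight vector diverges from the
    whole-document vector (cosine similarity below a chosen threshold)."""
    vocab = sorted(set(document_tokens))
    doc_vec = average_weight_vector(document_tokens, vocab)
    flags = []
    for start in range(0, len(document_tokens), segment_size):
        seg = document_tokens[start:start + segment_size]
        seg_vec = average_weight_vector(seg, vocab)
        denom = np.linalg.norm(doc_vec) * np.linalg.norm(seg_vec)
        sim = float(doc_vec @ seg_vec / denom) if denom else 0.0
        flags.append((start, sim < threshold))
    return flags
```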
Credit card fraud has become an increasing problem due to the growing reliance on electronic payment systems and technological advances that have also improved fraud techniques. Numerous financial institutions are looking for the best ways to leverage technological advancements to provide better services to their end users, and researchers have applied various protection methods to provide security and privacy for credit cards. It is therefore necessary to identify the challenges and the solutions proposed to address them. This review provides an overview of the most recent research on detecting fraudulent credit card transactions to protect them from tampering or improper use, covering challenges such as class imbalance, …
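As one hedged example of handling the class-imbalance challenge mentioned above, the sketch below trains a class-weighted random forest on synthetic, heavily imbalanced transaction data; the generated data, features, and model choice are illustrative only and are not drawn from any study in the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical imbalanced transaction data: roughly 1% fraudulent.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (rng.random(5000) < 0.01).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0)

# class_weight='balanced' re-weights the minority (fraud) class so the
# classifier is not dominated by the legitimate transactions.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```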
Plagiarism is described as using someone else's ideas or work without their permission. Using lexical and semantic notions of text similarity, this paper presents a plagiarism detection system for examining suspicious texts against sources available on the Web. The user can upload suspicious files in PDF or DOCX format. The system searches three popular search engines (Google, Bing, and Yahoo) for the source text and tries to identify the top five results for each engine on the first retrieved page. The corpus is made up of the downloaded files and the scraped web-page text of the search engines' results. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system will …
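A minimal sketch of the lexical part of such a pipeline is shown below, assuming a TF-IDF vector encoding and cosine similarity; the paper's exact encoding and scoring may differ, and the example texts are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexical_similarity(suspicious_text, corpus_texts):
    """Encode the suspicious document and the downloaded/scraped corpus as
    TF-IDF vectors and score each candidate source by cosine similarity."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([suspicious_text] + corpus_texts)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(enumerate(scores), key=lambda s: s[1], reverse=True)

# Example: rank hypothetical source texts against a suspicious passage.
sources = ["Plagiarism is using someone else's work without permission.",
           "Completely unrelated text about weather patterns."]
print(lexical_similarity("Using someone else's work without permission "
                         "is plagiarism.", sources))
```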
Computer vision is an emerging area with a huge number of applications, and fingertip identification is one of its major components. Augmented reality and virtual reality are the most recent technological advancements that make use of fingertip identification, through which interaction between computers and humans can be performed easily. Virtual reality, robotics, and smart gaming are the main application domains of these fingertip detection techniques. Gesture recognition is one of the most fascinating areas of fingertip detection, since gestures are among the easiest and most productive methods of communicating with a computer. This analysis examines the different studies done in the field of …
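For illustration only, a simplified fingertip-detection sketch in the spirit of the contour-based approaches surveyed in this area is given below; the skin-color range, the convex-hull heuristic, and the thresholds are assumptions, not any particular study's method.

```python
import cv2

def detect_fingertips(frame):
    """Very simplified fingertip detection: segment skin-like pixels, take
    the largest contour, and treat convex-hull points near the top of the
    hand region as candidate fingertips (illustrative thresholds only)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))   # rough skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)
    top_y = hand[:, 0, 1].min()
    return [tuple(p[0]) for p in hull if p[0][1] < top_y + 40]
```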
Foreground object detection is one of the major tasks in computer vision; it attempts to discover important objects in still images or image sequences and to locate related targets in the scene. Foreground object detection is very important for several applications such as object recognition, surveillance, image annotation, and image retrieval. In this work, a method is proposed for detecting and separating the foreground object from an image or video for both moving and stationary targets. In comparisons with general foreground detectors such as background subtraction techniques, our approach is able to detect the important target whether or not it is moving and can separate the foreground object with high …
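As a baseline of the background-subtraction family that the proposed method is compared against (and which, unlike the proposed method, handles only moving targets), a minimal OpenCV sketch is shown below; the input file name and the subtractor parameters are assumptions for illustration.

```python
import cv2

# Baseline foreground detector: Gaussian-mixture background subtraction
# applied frame by frame to a video stream.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

capture = cv2.VideoCapture("scene.mp4")            # hypothetical input video
while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # mask of moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
    foreground = cv2.bitwise_and(frame, frame, mask=mask)  # separated object
capture.release()
```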
Image pattern classification is considered a significant step for image and video processing. Although various image pattern algorithms proposed so far have achieved adequate classification, achieving higher accuracy while reducing computation time remains challenging to date. A robust image pattern classification method is essential to obtain the desired accuracy; such a method should accurately classify image blocks into plain, edge, and texture (PET) classes using an efficient feature extraction mechanism. Moreover, most existing studies to date have focused on evaluating their methods with specific orthogonal moments, which limits the understanding of their potential application to various Discrete Orthogonal Moments (DOMs). …
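For orientation only, the sketch below classifies 8x8 grayscale blocks into plain, edge, and texture using simple gradient statistics as a stand-in for moment-based features; it is not the orthogonal-moment mechanism evaluated in the paper, and the thresholds are arbitrary.

```python
import numpy as np

def classify_block(block, plain_thresh=25.0, edge_ratio=4.0):
    """Label a grayscale block as 'plain', 'edge', or 'texture' from gradient
    statistics (a simplified stand-in for moment-based features)."""
    gy, gx = np.gradient(block.astype(float))
    energy = float(np.mean(gx ** 2 + gy ** 2))
    if energy < plain_thresh:
        return "plain"
    # A single dominant gradient direction suggests an edge; otherwise texture.
    cov = np.cov(np.vstack([gx.ravel(), gy.ravel()]))
    eig = np.sort(np.linalg.eigvalsh(cov))
    return "edge" if eig[1] > edge_ratio * max(eig[0], 1e-9) else "texture"

def classify_image_blocks(image, size=8):
    """Classify every non-overlapping size x size block of a 2-D image."""
    h, w = image.shape
    return {(r, c): classify_block(image[r:r + size, c:c + size])
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)}
```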
In this paper, the botnet detection problem is defined as a feature selection problem, and a genetic algorithm (GA) is used to search the entire feature space for the most significant combination of features. Furthermore, a Decision Tree (DT) classifier is used as the objective function directing the proposed GA toward a combination of features that correctly classifies activities as normal traffic or botnet attacks. Two datasets, UNSW-NB15 and the Canadian Institute for Cybersecurity Intrusion Detection System 2017 dataset (CICIDS2017), are used for evaluation. The results reveal that the proposed DT-aware GA can effectively find the relevant features from …
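A minimal sketch of how a DT-aware genetic algorithm for feature selection could look is given below; the population size, selection scheme, crossover, and mutation rate are generic assumptions rather than the paper's exact configuration, and the fitness is the cross-validated accuracy of a decision tree on the selected feature subset. In practice, X and y would be the preprocessed UNSW-NB15 or CICIDS2017 feature matrix and labels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """DT-aware objective: cross-validated accuracy of a decision tree
    trained only on the features selected by the binary mask."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_feature_selection(X, y, pop_size=20, generations=30, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5              # random binary masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]  # keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                    # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut              # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]                         # best feature mask
```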