<p><span>A botnet is one of many attacks that can execute malicious tasks and that evolves continuously. This research therefore introduces a comparison framework, called BotDetectorFW, with classification and complexity improvements for the detection of botnet attacks using the CICIDS2017 dataset, a free online dataset consisting of several attacks with high-dimensional features. Feature selection is a significant step toward obtaining the fewest features by eliminating irrelevant ones, and consequently reducing detection time. In BotDetectorFW this process is implemented in two steps: data clustering and five distance measures (cosine, Dice, Driver &amp; Kroeber, overlap, and Pearson correlation), implemented in C#, followed by selection of the best N features as input to four classifier algorithms evaluated in the machine-learning toolkit WEKA: MultilayerPerceptron, JRip, IBk, and random forest. In BotDetectorFW, careful cleaning of the dataset in the preprocessing stage, together with normalization, binary clustering of its features, feature selection based on suitable distance measures, and testing of the selected classification algorithms, all contributed to achieving high performance metrics with fewer features (as few as 8), outperforming other methods in the literature that use 10 or more features on the same dataset. Furthermore, the results and performance evaluation of BotDetectorFW show a competitive impact in terms of classification accuracy (ACC), precision (Pr), recall (Rc), and F-measure (F1).</span></p>
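The five distance measures named in the abstract are standard similarity coefficients. A minimal sketch of them, applied to binary (clustered) feature vectors, might look like the following; the vectors and variable names are illustrative assumptions, not the BotDetectorFW implementation (which the paper implements in C#):

```python
# Illustrative implementations of the five distance/similarity measures
# named in the abstract, for binary feature vectors. Inputs are assumed
# to be equal-length lists of 0/1 values.
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def dice(a, b):
    inter = sum(x * y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 0.0

def overlap(a, b):
    inter = sum(x * y for x, y in zip(a, b))
    m = min(sum(a), sum(b))
    return inter / m if m else 0.0

def driver_kroeber(a, b):
    # Driver & Kroeber (Ochiai) coefficient for binary vectors.
    inter = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(a) * sum(b))
    return inter / den if den else 0.0

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

feature = [1, 0, 1, 1, 0, 1]   # hypothetical binary cluster membership of one feature
label   = [1, 0, 1, 0, 0, 1]   # hypothetical binary class label per sample
print(cosine(feature, label), dice(feature, label), overlap(feature, label))
```

A feature-selection pass would score each feature's vector against the label vector with one of these measures and keep the best N.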
Background: This in vitro study measured and compared the effect of light-curing tip distance on the depth of cure, assessed by Vickers microhardness, of two recently launched bulk-fill resin-based composites, Tetric EvoCeram Bulk Fill and Surefil SDR Flow, at 4 mm thickness, in comparison to Filtek Z250 Universal Restorative at 2 mm thickness. In addition, it measured and compared the bottom-to-top microhardness ratio at different light-curing tip distances. Materials and Method: One hundred fifty composite specimens were obtained from two cylindrical plastic molds: the first for the bulk-fill composites (Tetric EvoCeram Bulk Fill and Surefil SDR Flow), with 4 mm diameter and 4 mm depth; the second for Filtek Z250 Universal Restorative …
This research describes a new model inspired by MobileNetV2 that was trained on a very diverse dataset. The goal is to enable fire detection in open areas via deep learning, replacing physical sensor-based fire detectors, reducing false fire alarms, and thereby minimizing losses. A diverse fire dataset was created by combining images and videos from several sources. In addition, another self-made dataset was collected from the farms of the holy shrine of Al-Hussainiya in the city of Karbala. The model was then trained on the collected dataset; its test accuracy on the fire dataset reached 98.87%.
The paper aims to propose the Teaching-Learning-Based Optimization (TLBO) algorithm to solve the 3-D packing problem in containers. The objective, which can be presented in a mathematical model, is to optimize space usage in a container. Besides the interaction between students and teacher, the algorithm also models the learning process among students in the classroom, and it needs no algorithm-specific control parameters. Thus, TLBO uses a teacher phase and a learner phase as its main update processes to find the best solution. More precisely, to validate the algorithm's effectiveness, it was implemented on three sample cases: small data, with 5 size-types of items (12 units); medium data, with 10 size-types of items with …
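The teacher and learner phases described above can be sketched on a toy continuous minimization problem; this is a generic TLBO skeleton under assumed population size, bounds, and a placeholder sphere objective, not the paper's 3-D packing model, which would supply its own objective and solution encoding:

```python
# Minimal TLBO sketch: teacher phase moves solutions toward the current
# best using the population mean; learner phase lets each solution learn
# from a random peer. No algorithm-specific control parameters are needed.
import random

def objective(x):
    return sum(v * v for v in x)          # placeholder objective: minimize ||x||^2

def tlbo(dim=3, pop=10, iters=50, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    clip = lambda v: max(lo, min(hi, v))
    for _ in range(iters):
        # Teacher phase.
        teacher = min(X, key=objective)
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            tf = rng.choice((1, 2))       # teaching factor
            cand = [clip(X[i][d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if objective(cand) < objective(X[i]):   # greedy acceptance
                X[i] = cand
        # Learner phase.
        for i in range(pop):
            j = rng.randrange(pop)
            if j == i:
                continue
            sign = 1 if objective(X[i]) < objective(X[j]) else -1
            cand = [clip(X[i][d] + sign * rng.random() * (X[i][d] - X[j][d]))
                    for d in range(dim)]
            if objective(cand) < objective(X[i]):
                X[i] = cand
    return min(X, key=objective)

best = tlbo()
print(objective(best))
```

For the packing problem, a solution would instead encode item placements and the objective would measure unused container space.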
Background: Scientific education aims to be inclusive and to improve students' learning achievements through appropriate teaching and learning. Problem-Based Learning (PBL), a student-centred method that began in the second half of the previous century and is expanding progressively, organizes learning around problems; students learn about a subject through the experience of solving them. Objectives: To assess the opinions of undergraduate medical students regarding the learning outcomes of PBL in small-group teaching, and to explore their views about the role of tutors and methods of evaluation. Type of study: A cross-sectional study. Methods: This study was conducted in Kerbala Medical Colleges among second-year students …
Non-additive measures and their corresponding integrals were originally introduced by Choquet in 1953 (1) and independently defined by Sugeno in 1974 (2) in order to extend the classical measure by replacing the additivity property with non-additivity. An important feature of non-additive measures and fuzzy integrals is that they can represent the importance of individual information sources and the interactions among them. Non-additive measures and fuzzy integrals have many applications, such as image processing, multi-criteria decision making, information fusion, classification, and pattern recognition. This paper presents a mathematical model discussing an application of non-additive measures and corresponding …
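The discrete Choquet integral mentioned above can be sketched directly; the fuzzy-measure values below are illustrative assumptions chosen to show an interaction effect (two sources together worth more than the sum of their individual weights), not values from the paper:

```python
# Sketch of a discrete Choquet integral with respect to a non-additive
# (fuzzy) measure. Sort sources by ascending score, then accumulate each
# score increment weighted by the measure of the sources still at or
# above that score.
def choquet(values, measure):
    """values: {source: score}; measure: {frozenset of sources: weight},
    with measure[frozenset()] == 0 and measure of the full set == 1."""
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(values)
    for src, v in items:
        total += (v - prev) * measure[frozenset(remaining)]
        prev = v
        remaining.discard(src)
    return total

mu = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.4, frozenset({"b"}): 0.4,
    frozenset({"a", "b"}): 1.0,   # interaction: jointly worth more than 0.4 + 0.4
}
print(choquet({"a": 0.2, "b": 0.8}, mu))   # → 0.44 (approximately)
```

With an additive measure this reduces to an ordinary weighted sum; the non-additive weights are what let the integral capture interactions between information sources.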
Texture is an important characteristic for the analysis of many types of images because it provides a rich source of information about the image; it also provides a key to understanding basic mechanisms that underlie human visual perception. In this paper, four statistical texture features (contrast, correlation, homogeneity, and energy) were calculated from the gray-level co-occurrence matrix (GLCM) of equal blocks (30×30) of both tumor tissue and normal tissue in three samples of CT-scan images of patients with lung cancer. It was found that the contrast feature is the best at differentiating between textures, while correlation is not suitable for comparison; the energy and homogeneity features for tumor tissue are always greater than its valu…
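A minimal sketch of a GLCM and the four features named above might look as follows; the tiny image, the single horizontal offset, and the gray-level count are illustrative assumptions (the paper works on 30×30 CT blocks):

```python
# Build a normalized GLCM for the horizontal-neighbor offset, then
# compute contrast, correlation, homogeneity, and energy from it.
import math

def glcm(img, levels):
    m = [[0.0] * levels for _ in range(levels)]
    n = 0
    for row in img:
        for a, b in zip(row, row[1:]):    # horizontal co-occurrences
            m[a][b] += 1
            n += 1
    return [[v / n for v in row] for row in m]  # normalize to probabilities

def features(p):
    L = len(p)
    pairs = [(i, j) for i in range(L) for j in range(L)]
    contrast = sum(p[i][j] * (i - j) ** 2 for i, j in pairs)
    energy = sum(p[i][j] ** 2 for i, j in pairs)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs)
    mu_i = sum(i * p[i][j] for i, j in pairs)
    mu_j = sum(j * p[i][j] for i, j in pairs)
    sd_i = math.sqrt(sum((i - mu_i) ** 2 * p[i][j] for i, j in pairs))
    sd_j = math.sqrt(sum((j - mu_j) ** 2 * p[i][j] for i, j in pairs))
    cov = sum((i - mu_i) * (j - mu_j) * p[i][j] for i, j in pairs)
    correlation = cov / (sd_i * sd_j) if sd_i and sd_j else 0.0
    return {"contrast": contrast, "correlation": correlation,
            "homogeneity": homogeneity, "energy": energy}

img = [[0, 0, 1, 1],          # toy 4-level gray image standing in for a CT block
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
print(features(glcm(img, levels=4)))
```

A perfectly uniform block yields zero contrast and maximal energy and homogeneity, which is why contrast separates textures well while a constant region gives no usable correlation.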
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method of concealing data within various carrier objects such as text, can be proposed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harnesses …
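To make the idea of text steganography concrete, here is a generic sketch that hides bits in a cover text using zero-width Unicode characters; this is a common illustrative technique, not the Arabic-script-specific method the paper proposes:

```python
# Hide a secret string after a cover text as a sequence of invisible
# zero-width characters, one character per bit.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / non-joiner encode 0 / 1

def hide(cover, secret):
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return cover + payload        # stego text renders identically to the cover

def reveal(stego):
    bits = "".join("1" if ch == ZW1 else "0" for ch in stego if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = hide("An innocuous sentence.", "key")
print(reveal(stego))              # prints "key"
```

Arabic-specific schemes instead exploit script features such as kashida (tatweel) elongation or diacritics, which offer more natural cover than appended invisible characters.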
Alzheimer's disease (AD) increasingly affects the elderly and is a major killer of those aged 65 and over. Different deep-learning methods are used for automatic diagnosis, yet they have limitations. Deep learning is one of the modern methods used to detect and classify medical images because of its ability to extract image features automatically. However, there are still limits to how accurately deep learning can classify medical images, since extracting the fine edges of medical images is sometimes difficult and some images contain distortion. Therefore, this research aims to develop a Computer-Aided Brain Diagnosis (CABD) system that can tell whether a brain scan exhibits indications of …
In education, exams are used to assess students' acquired knowledge; however, the manual assessment of exams consumes a great deal of teachers' time and effort. In addition, educational institutions have recently leaned toward distance education and e-learning due to the Coronavirus pandemic, so they needed to conduct exams electronically, which requires an automated assessment system. Although it is easy to develop an automated assessment system for objective questions, subjective questions require answers comprised of free text and are harder to assess automatically, since grading them requires semantically comparing the students' answers with the correct ones. In this paper, we present an automatic short-answer grading method …
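A common baseline for comparing a student's free-text answer with a reference answer is cosine similarity over bag-of-words vectors; the sketch below uses that baseline with assumed names and marks, whereas the paper's method presumably compares answers semantically rather than lexically:

```python
# Baseline short-answer grader: score = full marks scaled by the cosine
# similarity between word-count vectors of student and reference answers.
import math
from collections import Counter

def similarity(student, reference):
    a, b = Counter(student.lower().split()), Counter(reference.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(student, reference, full_marks=5.0):
    return round(full_marks * similarity(student, reference), 2)

print(grade("the heart pumps blood", "the heart pumps blood around the body"))
```

The weakness of this baseline motivates semantic approaches: a paraphrased but correct answer shares few exact words with the reference and would be under-graded.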
Shadow removal is crucial for robot and machine vision, as the accuracy of object detection is greatly influenced by the uncertainty and ambiguity of the visual scene. In this paper, we introduce a new algorithm for shadow detection and removal based on Gaussian functions of different shapes, orientations, and spatial extents. Here, the contrast information of the visual scene is utilized for shadow detection and removal through five consecutive processing stages. In the first stage, contrast filtering is performed to obtain the contrast information of the image. The second stage involves a normalization process that suppresses noise and generates a balanced intensity at a specific position relative to the neighboring intensities …
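The first two stages described above (contrast filtering, then normalization) can be illustrated with a center-surround difference-of-Gaussians filter followed by min-max rescaling; the kernel sigmas, image, and function names are assumptions for illustration, not the paper's specific Gaussian formulations:

```python
# Stage 1 sketch: contrast filtering via difference of a narrow and a
# wide separable Gaussian blur. Stage 2 sketch: min-max normalization.
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, sigma):
    r = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, r)
    h, w = len(img), len(img[0])
    # Separable blur: horizontal pass, then vertical pass, clamped borders.
    tmp = [[sum(k[d + r] * row[min(w - 1, max(0, x + d))] for d in range(-r, r + 1))
            for x in range(w)] for row in img]
    return [[sum(k[d + r] * tmp[min(h - 1, max(0, y + d))][x] for d in range(-r, r + 1))
             for x in range(w)] for y in range(h)]

def contrast_map(img, s_center=1.0, s_surround=3.0):
    c, s = blur(img, s_center), blur(img, s_surround)
    return [[ci - si for ci, si in zip(cr, sr)] for cr, sr in zip(c, s)]

def normalize(img):
    lo = min(min(r) for r in img)
    hi = max(max(r) for r in img)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in r] for r in img]

# Toy image with a dark (shadow-like) upper half and a bright lower half.
img = [[10.0] * 8 for _ in range(4)] + [[40.0] * 8 for _ in range(4)]
out = normalize(contrast_map(img))
```

The contrast map responds strongly at the shadow boundary, which is the cue later stages would use to classify and remove the shadowed region.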