Artificial intelligence techniques are reaching us in several forms; some are useful, but they can also be exploited in ways that harm us. One of these forms is the deepfake. Deepfake techniques are used to modify video (or image) content so that it displays something that was not in it originally. The danger of deepfake technology lies in its impact on society through the loss of confidence in everything that is published. In this paper, we therefore focus on deepfake detection technology from the view of two concepts: deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of i) the environment of deepfake creation and detection, ii) how deep learning and forensic tools have contributed to the detection of deepfakes, and iii) how, in the future, incorporating both deep learning technology and forensic tools can increase the efficiency of deepfake detection.
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that he never did or said, so developing an algorithm that discriminates real from fake media is very important. Convolutional neural networks (CNNs) are among the most powerful classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of the input frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the raw red-green-blue (RGB) frames.
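As a rough illustration of this kind of front end, the sketch below (assuming OpenCV and Keras) builds a 16-orientation Gabor filter bank, stacks the filter responses as input channels, and defines a small binary CNN. The kernel parameters, input resolution, and layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
import cv2
import numpy as np
from tensorflow import keras

def gabor_bank(num_orientations=16, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    """Build a bank of Gabor kernels spanning `num_orientations` directions."""
    thetas = np.arange(num_orientations) * np.pi / num_orientations
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            for theta in thetas]

def gabor_features(frame_gray, kernels):
    """Filter a grayscale frame with each kernel; stack responses as channels."""
    responses = [cv2.filter2D(frame_gray, cv2.CV_32F, k) for k in kernels]
    return np.stack(responses, axis=-1)  # shape: (H, W, 16)

def binary_cnn(input_shape=(128, 128, 16)):
    """Small real-vs-fake classifier over the stacked Gabor response maps."""
    return keras.Sequential([
        keras.layers.Input(shape=input_shape),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(1, activation="sigmoid"),  # real vs. fake
    ])
```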
Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research in the last five years based on the dataset, year, algorithms, and the accuracy achieved.
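The three phases map naturally onto a standard pipeline. Below is a minimal sketch using scikit-learn; the Arabic letter normalization shown is only an illustrative preprocessing step, not a method drawn from the surveyed studies.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def normalize_arabic(text):
    """Phase 1 (preprocessing): unify common Arabic letter variants (illustrative)."""
    text = re.sub(r"[إأآا]", "ا", text)  # normalize alef forms
    text = re.sub(r"ى", "ي", text)       # alef maqsura -> ya
    text = re.sub(r"ة", "ه", text)       # ta marbuta -> ha
    return text

# Phase 2 (feature extraction): TF-IDF; Phase 3 (classification): logistic regression.
classifier = Pipeline([
    ("tfidf", TfidfVectorizer(preprocessor=normalize_arabic)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# Usage: classifier.fit(train_texts, train_labels); classifier.predict(test_texts)
```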
Distributed Denial of Service (DDoS) attacks on Web-based services have grown in both number and sophistication with the rise of advanced wireless technology and modern computing paradigms, and detecting these attacks in the sea of communication packets is very important. Initially, most DDoS attacks were directed at the network and transport layers; over the past few years, attackers have shifted their strategies toward the application layer. Application-layer attacks can be more harmful and stealthier because attack traffic and normal traffic flows cannot be told apart. Distributed attacks are hard to fight because they can exhaust real computing resources as well as network bandwidth.
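Because individual application-layer attack requests look legitimate, detectors often fall back on behavioral statistics rather than packet contents. A minimal hypothetical sketch of one such heuristic, flagging sources whose request rate inside a sliding window exceeds a threshold; all names and thresholds are illustrative, not taken from the surveyed work.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # sliding-window length (illustrative)
MAX_REQUESTS = 200    # per-source threshold (illustrative)

request_log = defaultdict(deque)  # source IP -> timestamps of recent requests

def is_suspicious(src_ip, now=None):
    """Flag a source whose request rate in the window exceeds the threshold."""
    now = now or time.time()
    window = request_log[src_ip]
    window.append(now)
    # Evict timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS
```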
Realizing and understanding semantic segmentation is a demanding task not just for computer vision but also in the earth sciences. Semantic segmentation decomposes compound architectures into individual elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about every object. It is a method for automatically labeling and clustering point clouds. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data representation, and working with 3D data raises many challenges, such as the small number, limited resolution, and limited accuracy of available three-dimensional datasets. Deep learning is now a powerful approach for addressing these challenges.
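As a concrete example of deep learning on raw point clouds, the sketch below implements a minimal PointNet-style per-point labeler in PyTorch: a shared MLP computes per-point features, a global max-pool supplies scene context, and a small head predicts a class for every point. Layer widths and the class count are illustrative assumptions, not a specific published architecture.

```python
import torch
import torch.nn as nn

class TinyPointSeg(nn.Module):
    """Minimal PointNet-style per-point labeler: shared MLP + global max-pool context."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.local = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                   nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128 + 128, 64), nn.ReLU(),
                                  nn.Linear(64, num_classes))

    def forward(self, pts):                          # pts: (B, N, 3) xyz coordinates
        feats = self.local(pts)                      # (B, N, 128) per-point features
        ctx = feats.max(dim=1, keepdim=True).values  # (B, 1, 128) global scene context
        ctx = ctx.expand(-1, pts.shape[1], -1)       # broadcast context to every point
        return self.head(torch.cat([feats, ctx], dim=-1))  # (B, N, num_classes)

# Usage: logits = TinyPointSeg()(torch.rand(2, 1024, 3))  # -> (2, 1024, 8)
```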
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with a vast background of knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must usually be fed a significant amount of labeled data to learn representations automatically, and a larger amount of data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier to adopting DL in many applications.
Learning is the process of gaining knowledge and applying that knowledge to behavior. The concept of learning is not restricted to human beings; it has expanded to include machines as well, and machines can now behave based on knowledge learned from the environment. The learning process is evolving in both humans and machines to keep up with technology: human learning has evolved into micro-learning, and machine learning has evolved into deep learning. In this paper, the evolution of learning is discussed as a formal survey covering the foundation of machine learning, its evolved version of learning, which is deep learning, and how micro-learning as a new learning technology can be implemented.
Detection of early clinical keratoconus (KCN) is a challenging task, even for expert clinicians. In this study, we propose a deep learning (DL) model to address this challenge. We first used the Xception and InceptionResNetV2 DL architectures to extract features from three different corneal maps collected from 1371 eyes examined in an eye clinic in Egypt. We then fused the features from Xception and InceptionResNetV2 to detect subclinical forms of KCN more accurately and robustly. We obtained an area under the receiver operating characteristic curve (AUC) of 0.99 and an accuracy range of 97–100% in distinguishing normal eyes from eyes with subclinical and established KCN. We further validated the model on an independent dataset.
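A minimal sketch of this kind of two-backbone feature fusion in Keras, assuming ImageNet-pretrained weights, global average pooling, and a three-class head (normal, subclinical, established KCN); the input size and classifier head are assumptions, not the study's exact configuration.

```python
from tensorflow import keras

def fused_kcn_model(input_shape=(299, 299, 3), num_classes=3):
    """Concatenate pooled Xception and InceptionResNetV2 features, then classify."""
    inp = keras.layers.Input(shape=input_shape)
    xcep = keras.applications.Xception(
        include_top=False, weights="imagenet", pooling="avg")(inp)
    incep = keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", pooling="avg")(inp)
    fused = keras.layers.Concatenate()([xcep, incep])  # fuse the two feature vectors
    out = keras.layers.Dense(num_classes, activation="softmax")(fused)
    return keras.Model(inp, out)
```

In practice the pretrained backbones would typically be frozen for an initial training phase so that only the fusion head is fit to the corneal-map data.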
Problem: Cancer is regarded as one of the world's deadliest diseases. Machine learning and its newer branch, deep learning, can facilitate the way we deal with cancer, especially in the field of cancer prevention and detection. Traditional ways of analyzing cancer data have their limits, and cancer data are growing quickly, which makes deep learning, with its powerful ability to analyze and process such data, a promising way forward. Aims: In the current study, a deep-learning medical support system for the prediction of lung cancer is presented. Methods: The study uses three different deep learning models (EfficientNetB3, ResNet50, and ResNet101) with the transfer learning concept. The three models are trained using a lung cancer dataset.
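A minimal sketch of the transfer-learning setup for one of the three backbones (EfficientNetB3) in Keras; the frozen base, dropout rate, and binary head are illustrative assumptions rather than the study's exact configuration.

```python
from tensorflow import keras

# Pretrained ImageNet backbone used only as a feature extractor.
base = keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet",
    input_shape=(300, 300, 3), pooling="avg")
base.trainable = False  # transfer learning: freeze the pretrained weights

model = keras.Sequential([
    base,
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation="sigmoid"),  # cancer vs. normal (illustrative)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Usage: model.fit(train_ds, validation_data=val_ds, epochs=10)
```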
To date, comprehensive reviews and discussions of the strengths and limitations of standalone and combined Remote Sensing (RS) approaches, and of Deep Learning (DL)-based RS datasets, in archaeology have been limited. The objective of this paper is, therefore, to review and critically discuss existing studies that have applied these advanced approaches in archaeology, with a specific focus on digital preservation and object detection. Standalone RS approaches, including range-based and image-based modelling (e.g., laser scanning and SfM photogrammetry), have several disadvantages in terms of spatial resolution, penetration, texture, colour, and accuracy. These limitations have led some archaeological studies to fuse/integrate multiple RS approaches.