DeepFake is a concern for celebrities and the general public alike because such forgeries are simple to create. DeepFake images, especially high-quality ones, are difficult to detect by human observers, local descriptors, and current approaches. Video manipulation detection, by contrast, is more tractable than image manipulation detection and is addressed by many state-of-the-art systems; moreover, detecting manipulation in video ultimately depends on detecting it in individual images. Many studies have addressed DeepFake detection in images, but they involved complex mathematical calculations in their preprocessing steps and suffered from many limitations, including requiring the face to be frontal, the eyes to be open, and the mouth to be open with visible teeth. In addition, the counterfeit-detection accuracy reported in all previous studies was lower than what this paper achieves, especially on the benchmark Flickr-Faces-HQ dataset (FFHQ). This study proposes a new, simple, yet powerful method called image re-representation by combining the local binary pattern of multiple channels (IR-CLBP-MC) of a color space, an image re-representation technique that improves DeepFake detection accuracy. IR-CLBP-MC is produced using the fundamental concept of the multiple-channel local binary pattern (MCLBP), an extension of the original LBP. The primary distinction is that, in our method, the LBP decimal value is calculated for each channel of every local patch, and the channel values are then merged to re-represent the image, producing a new image with three color channels. A pretrained convolutional neural network (CNN) was utilized to extract deep textural features from twelve sets of IR-CLBP-MC images made from different color spaces: RGB, XYZ, HLS, HSV, YCbCr, and LAB. Furthermore, experiments with overlapping and non-overlapping patch techniques showed that the overlapping technique works better with IR-CLBP-MC, and the YCbCr color space is the most accurate when used with the model on both datasets. Extensive experimentation was carried out, and the highest accuracies obtained are 99.4% on FFHQ and 99.8% on the CelebFaces Attributes dataset (Celeb-A).
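The per-channel LBP re-representation described above can be illustrated with a short sketch. The snippet below is a minimal interpretation of the idea, assuming OpenCV and scikit-image are available; the function name ir_clbp_mc and its parameters are illustrative, not the paper's own code.

```python
# Minimal sketch of the IR-CLBP-MC idea: compute an LBP code map on each
# channel of a chosen color space and stack the maps back into a 3-channel image.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def ir_clbp_mc(bgr_image, conversion=cv2.COLOR_BGR2YCrCb, points=8, radius=1):
    """Re-represent an image by per-channel LBP maps (illustrative only)."""
    converted = cv2.cvtColor(bgr_image, conversion)       # e.g. YCbCr, HSV, LAB
    channels = cv2.split(converted)
    lbp_maps = [local_binary_pattern(ch, points, radius, method="default")
                for ch in channels]                        # one LBP code map per channel
    return np.stack(lbp_maps, axis=-1).astype(np.uint8)   # new 3-channel image

# The re-represented image would then be passed to a pretrained CNN feature extractor.
```

In the study's terms, the overlapping or non-overlapping patch handling and the choice of color space (YCbCr performing best) would be applied around this core per-channel step.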
Information-centric networking (ICN) is the next generation of internet architecture; its in-network caching enables users to retrieve their data efficiently regardless of its location. In ICN, security is applied to the data itself rather than to communication channels or devices. In-network caches are vulnerable to many types of attacks, such as cache poisoning attacks, cache privacy attacks, and cache pollution attacks (CPA). In a CPA, an attacker floods the network with non-popular content and makes the caches evict popular content. As a result, the cache hit ratio for legitimate users degrades and content retrieval latency increases. In this paper, a popularity variation me…
With the heavy use of computers and networks at the present time, the number of security threats has increased. The study of intrusion detection systems (IDS) has received much attention throughout the computer science field. The main objective of this study is to examine the existing literature on various approaches to intrusion detection. This paper presents an overview of different intrusion detection systems and a detailed analysis of multiple techniques for these systems, including their advantages and disadvantages. These techniques include artificial neural networks, bio-inspired computing, evolutionary techniques, machine learning, and pattern recognition.
As a result of the pandemic crisis and the shift to digitization, cyber-attacks are at an all-time high despite significant technological advancement. The use of wireless sensor networks (WSNs) is an indicator of technical progress in most industries. For the safe transfer of data, security objectives such as confidentiality, integrity, and availability must be maintained. The security features of a WSN are split into the node level and the network level. For the node level, a proactive strategy using deep learning/machine learning techniques is suggested. The primary benefit of this proactive approach is that it foresees a cyber-attack before it is launched, allowing for damage mitigation. A cryptography algorithm is put…
John Updike’s use of setting in his fiction has elicited different and even conflicting reactions from critics, varying from symbolic interpretations of setting to a sense of confusion at his use of time and place in his stories. The present study is an attempt to examine John Updike’s treatment of binary settings in Pigeon Feathers and Other Stories (1962) to reveal theme, characters’ motives, and conflicts. Analyzing Updike’s stories from a structuralist perspective reveals his employment of two different places and times in the individual stories as a means of reflecting the psychological state of the characters, as in “The Persistence of Desire”, or expressing conflicting views on social and political issues.
The corrosion behavior of titanium in a simulated saliva solution was improved by a nanotubular oxide layer formed via electrochemical anodizing treatment, using a three-electrode cell potentiostat at 37°C. The anodization treatment was carried out in a non-aqueous electrolyte with the following composition: 200 mL of ethylene glycol containing 0.6 g NH4F and 10 mL of deionized water, using different applied direct voltages at 10°C and a constant anodizing time (15 min). The anodized titanium layer was examined using SEM and AFM techniques.
The results showed that increasing the applied voltage resulted in the formation of titanium oxide nanotubes with higher corrosion resistance.
In recent years, the world has witnessed a rapid growth in attacks on the internet, which has resulted in degraded network performance. The growth was in both the quantity and the versatility of the attacks. To cope with this, new detection techniques are required, especially ones that use artificial intelligence techniques, such as machine learning based intrusion detection and prevention systems. Many machine learning models are used for intrusion detection, and each has its own pros and cons; this is where this paper fits in: a performance analysis of different machine learning models for intrusion detection systems based on supervised machine learning algorithms. Using the Python Scikit-Learn library, classifiers such as KNN and Support Vector Machine (SVM) are evaluated.
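As a rough illustration of how such a supervised comparison might be set up with Scikit-Learn (the feature matrix X and labels y stand in for any labelled network-traffic dataset and are not from the paper):

```python
# Hedged sketch: train and score two supervised classifiers on the same split.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def compare_models(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)
    scaler = StandardScaler().fit(X_train)                 # scale features for KNN/SVM
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
    models = {"KNN": KNeighborsClassifier(n_neighbors=5),
              "SVM": SVC(kernel="rbf")}
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, accuracy_score(y_test, model.predict(X_test)))
```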
Metaheuristics form one of the best-known fields of research for finding optimum solutions to non-deterministic polynomial-time hard (NP-hard) problems, for which it is difficult to find an optimal solution in polynomial time. This paper introduces metaheuristic-based algorithms, their classifications, and NP-hard problems. It also compares the performance of two metaheuristic-based algorithms, the Elephant Herding Optimization algorithm and Tabu Search, on the Traveling Salesman Problem (TSP), one of the best-known NP-hard problems and one that is widely used in performance evaluations of different metaheuristic-based optimization algorithms. The experimental results of the Elephant Herding Optimization algorithm and Tabu Search are then compared.
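For context, a compact Tabu Search for the TSP could look like the sketch below; it uses a swap neighbourhood and a fixed tabu tenure, and is only an illustration under those assumptions, not the implementation evaluated in the paper.

```python
# Illustrative Tabu Search for the TSP: swap moves with a short-term tabu list.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search_tsp(dist, iterations=500, tenure=10):
    n = len(dist)
    current = list(range(n))
    random.shuffle(current)
    best, best_len = current[:], tour_length(current, dist)
    tabu = {}                                      # (i, j) swap -> iteration until which it is tabu
    for it in range(iterations):
        best_move, best_move_len = None, float("inf")
        for i in range(n - 1):
            for j in range(i + 1, n):
                if tabu.get((i, j), 0) > it:       # skip moves that are still tabu
                    continue
                cand = current[:]
                cand[i], cand[j] = cand[j], cand[i]
                cand_len = tour_length(cand, dist)
                if cand_len < best_move_len:
                    best_move, best_move_len = (i, j), cand_len
        if best_move is None:
            break
        i, j = best_move
        current[i], current[j] = current[j], current[i]
        tabu[(i, j)] = it + tenure                 # forbid reversing this move for a while
        if best_move_len < best_len:
            best, best_len = current[:], best_move_len
    return best, best_len
```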
Non-uniform illumination in biological images often leads to diminished structures and inhomogeneous intensities. An algorithm has been proposed using morphological operations with different types of structuring elements, including disk, line, square, and ball, with the same parameter (15). To correct the non-uniform illumination and enhance the biological images, the non-uniform background illumination was removed from the image using contrast adjustment, histogram equalization, and adaptive histogram equalization. The basic approach used to extract statistical feature values from gray-level co-occurrence matrices (GLCM) can show the typical values for the feature content of biological images, which can be in the form of shape or sp…
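A hedged sketch of how such an illumination-correction and texture-description step could be written is shown below, assuming scikit-image, a disk structuring element with the quoted parameter 15, and adaptive histogram equalization; the GLCM call mirrors the abstract's description rather than the authors' exact pipeline.

```python
# Illustrative illumination correction + GLCM texture features (assumptions above).
import numpy as np
from skimage import exposure
from skimage.morphology import disk, opening
from skimage.feature import graycomatrix, graycoprops

def correct_and_describe(gray_image):
    background = opening(gray_image, disk(15))          # estimate slowly varying background
    corrected = np.clip(gray_image.astype(np.int16) - background, 0, 255).astype(np.uint8)
    enhanced = exposure.equalize_adapthist(corrected)   # adaptive histogram equalization
    enhanced_u8 = (enhanced * 255).astype(np.uint8)
    glcm = graycomatrix(enhanced_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    features = {prop: float(graycoprops(glcm, prop)[0, 0])
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    return enhanced_u8, features
```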
The aim of this paper is to compare classical and fuzzy filters for removing different types of noise in grayscale images. The processing consists of three steps. First, different types of noise are added to the original image to produce a noisy image (with different noise ratios). Second, classical and fuzzy filters are used to filter the noisy image. Finally, the resulting images are compared using a quantitative measure called the Peak Signal-to-Noise Ratio (PSNR) to determine the best filter in each case.
The image used in this paper is 512 × 512 pixels, and all filters use a square window of size 3 × 3. Results indicate that fuzzy filters achieve varying degrees of success in reducing image noise compared to classical filters.
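For reference, the PSNR measure can be computed with a few lines of NumPy; the helper below is a generic sketch assuming 8-bit grayscale arrays, not the paper's own code, and the commented median-filter call is one possible classical baseline.

```python
# Generic PSNR helper for scoring a filtered image against the clean original.
import numpy as np

def psnr(original, filtered, peak=255.0):
    mse = np.mean((original.astype(np.float64) - filtered.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example with a classical 3x3 median filter as the baseline:
# from scipy.ndimage import median_filter
# score = psnr(clean_image, median_filter(noisy_image, size=3))
```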