DeepFake imagery is a concern for celebrities and the public alike because it is simple to create. DeepFake images, especially high-quality ones, are difficult to detect by human observers, local descriptors, and current automated approaches. Detecting manipulation in video, by contrast, is more tractable than in a single image, and many state-of-the-art systems address it; moreover, video manipulation detection ultimately rests on detecting manipulation in the individual frames. Much prior work on DeepFake detection in images involved complex mathematical preprocessing and carried many limitations, including requiring a frontal face, open eyes, and an open mouth with visible teeth. Furthermore, the counterfeit-detection accuracy reported in all previous studies was below what this paper achieves, especially on the benchmark Flickr-Faces-HQ dataset (FFHQ). This study proposes a new, simple, but powerful method called image re-representation by combining the local binary pattern of multiple channels (IR-CLBP-MC) in a color space, an image re-representation technique that improves DeepFake detection accuracy. IR-CLBP-MC builds on the fundamental concept of the multiple-channel local binary pattern (MCLBP), an extension of the original LBP. The primary distinction is that in our method the LBP decimal value is computed in each channel of a local patch, and the channel codes are merged to re-represent the image, producing a new image with three color channels. A pretrained convolutional neural network (CNN) was used to extract deep textural features from twelve sets of IR-CLBP-MC images generated from different color spaces: RGB, XYZ, HLS, HSV, YCbCr, and LAB. Experiments with the overlapping and non-overlapping patch techniques showed that the overlapping technique works better with IR-CLBP-MC, and that the YCbCr color space is the most accurate when used with the model, on both datasets.
Extensive experiments were carried out, and the highest accuracies obtained are 99.4% on FFHQ and 99.8% on the CelebFaces Attributes dataset (Celeb-A).
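The core re-representation step described above, computing an LBP code per color channel and stacking the three code maps back into a three-channel image, can be sketched as follows. This is a minimal illustration only: patch overlap handling and the pretrained CNN feature extractor from the paper are omitted, and the function names are hypothetical.

```python
import numpy as np

def lbp_channel(ch):
    """Basic 8-neighbour LBP code for one channel (interior pixels only)."""
    c = ch[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # neighbour offsets and their bit positions, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = ch.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = ch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set the bit where the neighbour is >= the centre pixel
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

def ir_clbp_mc(img):
    """Re-represent a 3-channel image: one LBP code map per channel,
    stacked back into a new 3-channel 'texture image'."""
    return np.stack([lbp_channel(img[..., k]) for k in range(3)], axis=-1)
```

The re-represented image has the same three-channel layout as the input (minus a one-pixel border), so it can be fed directly to a standard pretrained CNN.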
In this paper, we investigate the use of two local search methods (LSMs), Simulated Annealing (SA) and Particle Swarm Optimization (PSO), to solve the problems ( ) and ( ). The results of the two LSMs are compared with the Branch and Bound method and good heuristic methods. This work shows the good performance of SA and PSO compared with the exact and heuristic methods in terms of best solutions and CPU time.
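As a concrete illustration of the first LSM, a generic simulated-annealing loop is sketched below on a toy continuous minimization, not the specific scheduling problems studied in the paper; the neighbour function, cooling schedule, and parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Minimize f starting from x0, using the given neighbour move."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = f(y)
        # always accept improvements; accept worse moves with
        # Boltzmann probability exp((fx - fy) / t)
        if fy <= fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

For example, minimizing `f(x) = (x - 3)**2` with a small uniform neighbour step converges close to the optimum at `x = 3`.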
This research deals with the processing and interpretation of the Bouguer anomaly gravity field, using two-dimensional filtering techniques to separate the residual gravity field from the Bouguer gravity map for a part of Najaf Ashraf province in Iraq. The residual anomaly was processed in order to reduce noise and give a more comprehensive view of subsurface linear structures. Results of the descriptive interpretation are presented as colored surfaces and contour maps in order to locate the directions and extensions of linear features that may be interpreted as faults. A comparison among the residual gravity field, the first derivative, and the horizontal gradient was made along a profile across the study area in order to assign the exact location of a major fault. Furthermore
In this paper, we present approximate analytical and numerical solutions for differential equations with multiple delays using the extended differential transform method (DTM). This method is used to solve many linear and nonlinear problems.
Background and Aim: Due to the rapid growth of data communication and multimedia system applications, security has become a critical issue in the communication and storage of images. This study aims to improve encryption and decryption for various types of images by decreasing time consumption and strengthening security. Methodology: An algorithm is proposed for encrypting images based on the Carlisle Adams and Stafford Tavares (CAST) block cipher algorithm with 3D and 2D logistic maps. A chaotic function that increases the randomness in the encrypted data and images, thereby breaking the relation sequence through the encryption procedure, is introduced. Time is decreased by using three secure and private S-Boxes rather than using si
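The chaotic ingredient of such schemes can be illustrated with a keystream generator. This sketch uses the classic 1D logistic map as a stand-in for the 2D/3D variants the study employs, and a plain XOR stream as a stand-in for the CAST block cipher; both function names are hypothetical.

```python
import numpy as np

def logistic_keystream(x0, r=3.99, n=1024, burn_in=100):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x).
    The seed x0 in (0, 1) and r near 4 act as the secret key; the burn-in
    discards the transient so the stream starts deep in the chaotic regime."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF  # quantize the orbit to one byte
    return out

def xor_encrypt(data, x0):
    """XOR the data with the chaotic keystream (applying it twice decrypts)."""
    ks = logistic_keystream(x0, n=len(data))
    return np.bitwise_xor(np.frombuffer(bytes(data), dtype=np.uint8), ks)
```

Because XOR is its own inverse, running `xor_encrypt` a second time with the same seed recovers the plaintext; in the actual scheme the chaotic sequence would instead perturb the CAST rounds and S-Box usage.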
Face recognition is one of the vital areas of modern computer vision, owing to the availability and accessibility of technologies and commercial applications. Face recognition, in brief, is automatically recognizing a person from an image or video frame. In this paper, an efficient face recognition algorithm is proposed that exploits wavelet decomposition to extract the most important and discriminative features of the face, and the eigenface method to classify faces according to the minimum distance to the feature vectors. The Faces94 database is used to test the method. Excellent recognition with minimum computation time is obtained, with accuracy reaching 100% and recognition time decrease
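The eigenface-plus-minimum-distance stage described above can be sketched as follows. The wavelet decomposition step is omitted, the data here is synthetic, and all function names are hypothetical; this is a sketch of the standard PCA-based eigenface pipeline, not the paper's exact implementation.

```python
import numpy as np

def fit_eigenfaces(X, k=20):
    """X: (n_images, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k eigenfaces via SVD (PCA)."""
    mu = X.mean(axis=0)
    # rows of Vt are the principal directions of the centred data
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(x, mu, W):
    """Project one flattened face into eigenface space."""
    return W @ (x - mu)

def classify(x, gallery_feats, labels, mu, W):
    """Nearest-neighbour match in eigenface space (minimum-distance rule)."""
    f = project(x, mu, W)
    d = np.linalg.norm(gallery_feats - f, axis=1)
    return labels[int(np.argmin(d))]
```

A probe face is assigned the label of the gallery image whose eigenface coefficients lie closest in Euclidean distance, which is the minimum-distance rule the abstract refers to.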
Optimizing Access Point (AP) deployment is of great importance in wireless applications, owing to the requirement to provide efficient and cost-effective communication. Highly targeted by many researchers and industries, Quality of Service (QoS) is a primary parameter and objective, alongside AP placement and overall deployment cost. This study proposes and investigates a multi-level optimization algorithm based on Binary Particle Swarm Optimization (BPSO). It aims at an optimal multi-floor AP placement with effective coverage that makes it more capable of supporting QoS and cost effectiveness. Five pairs (coverage, AP placement) of weights, signal threshold
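A minimal BPSO loop is sketched below on a toy bit-string maximization; the study's multi-level coverage/QoS/cost fitness is replaced by a stand-in objective, and the parameter values are illustrative assumptions.

```python
import numpy as np

def bpso(fitness, n_bits, n_particles=20, iters=50,
         w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Binary PSO: velocities are real-valued, and each position
    bit is resampled through a sigmoid of its velocity (maximization)."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_particles, n_bits))
    V = rng.normal(scale=0.1, size=(n_particles, n_bits))
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    g = pbest[np.argmax(pbest_f)].copy()
    g_f = pbest_f.max()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # pull velocities toward personal and global bests
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        # sample new bit positions via the sigmoid transfer function
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        if f.max() > g_f:
            g, g_f = X[np.argmax(f)].copy(), f.max()
    return g, g_f
```

In an AP-placement setting, each bit would indicate whether an AP is installed at a candidate location, and the fitness would weigh coverage against the number of APs deployed.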
The detection of diseases affecting wheat is very important as it relates to food security, threats to which pose a serious risk to human life. Recently, farmers have relied heavily on modern systems and techniques for the control of vast agricultural areas. Computer vision and data processing play a key role in detecting diseases that affect plants based on images of their leaves. In this article, Fuzzy-logic-based Histogram Equalization (FHE) is proposed to enhance the contrast of images. The fuzzy histogram is used to divide the histogram into two sub-histograms, based on the average value of the original image, which are then equalized independently to conserve the brightness of the image. The proposed
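The "split at the average, equalize each part independently" idea can be illustrated with classic mean-split bi-histogram equalization; the fuzzy-membership step of FHE is not reproduced here, and the function names are hypothetical.

```python
import numpy as np

def equalize_part(img, mask, lo, hi):
    """Histogram-equalize the pixels selected by mask into [lo, hi]."""
    vals = img[mask]
    hist, _ = np.histogram(vals, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1] if cdf[-1] else 1.0
    lut = (lo + cdf * (hi - lo)).astype(np.uint8)  # map CDF into [lo, hi]
    out = img.copy()
    out[mask] = lut[vals]
    return out

def bi_histogram_equalize(img):
    """Split the grey-level histogram at the mean intensity and equalize
    each half into its own range, which keeps the output brightness
    close to the original (the BBHE idea)."""
    m = int(img.mean())
    low = img <= m
    out = equalize_part(img, low, 0, m)
    out = equalize_part(out, ~low, m + 1, 255)
    return out
```

Because each sub-histogram is stretched only within its own side of the mean, the overall brightness stays near the input's mean, which is the brightness-conservation property the abstract describes.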
This study concerns the estimation of a simultaneous-equations system for the Tobit model, in which the dependent variables ( ) are limited; this affects the choice of a good estimator. We therefore use estimation methods different from the classical ones, which in such a case would produce biased and inconsistent estimators: the Nelson-Olson method and the Two-Stage Limited Dependent Variables (2SLDV) method, in order to obtain estimators that hold the characteristics of a good estimator.
That is, parameters will be estimated
Abstract
This research aims to study and improve the passivating specifications of vibration-resistant rubber. Seven different rubber recipes were prepared based on mixtures of natural rubber (NR) as the essential component, in addition to synthetic rubbers (IIR, BRcis, SBR, CR) at different ratios. Mechanical tests such as tensile strength, hardness, friction, resistance to compression, fatigue, and creep testing, in addition to the rheological test, were performed. Furthermore, scanning electron microscopy (SEM) was used to examine the structural morphology of the rubber. After studying and analyzing the results, we found that the recipe containing 40% (BRcis) from the