Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes; it concerns celebrities and ordinary people alike because such fakes are easy to manufacture. Deepfakes, especially high-quality ones, are hard for both people and current approaches to recognize. As a defense against Deepfake techniques, various methods for detecting Deepfakes in images have been suggested. Most of them have limitations, such as working only with a single face per image, or requiring the face to be forward-facing with both eyes and the mouth open, depending on which part of the face they operate on. Moreover, few of them examine the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect of the Deepfake detection task and proposes pre-processing steps that improve accuracy and close the gap between training and validation results using simple operations. Additionally, it differs from other work by handling faces oriented in various directions within the image, distinguishing the face of interest in an image containing multiple faces, and segmenting the face using facial landmark points. All of this is done using face detection, face bounding-box attributes, facial landmarks, and key points from the MediaPipe tool together with the pre-trained DenseNet121 model. Lastly, the proposed model was evaluated on the Deepfake Detection Challenge dataset and, after training for a few epochs, achieved an accuracy of 97% in detecting Deepfakes.
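A minimal sketch of the kind of pipeline the abstract describes: detect the face with MediaPipe, crop the face of interest, and feed the crop to a DenseNet121-based binary classifier. The helper names, the 224x224 input size, and the single-sigmoid head are assumptions, not details taken from the paper.

```python
# Hypothetical sketch: MediaPipe face detection + DenseNet121 real/fake classifier.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

def crop_largest_face(bgr_frame):
    """Return the largest detected face crop (RGB), or None if no face is found."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    with mp.solutions.face_detection.FaceDetection(model_selection=1,
                                                   min_detection_confidence=0.5) as fd:
        results = fd.process(rgb)
    if not results.detections:
        return None
    h, w, _ = rgb.shape
    boxes = [d.location_data.relative_bounding_box for d in results.detections]
    box = max(boxes, key=lambda b: b.width * b.height)  # pick the face of interest
    x0, y0 = max(int(box.xmin * w), 0), max(int(box.ymin * h), 0)
    x1 = min(int((box.xmin + box.width) * w), w)
    y1 = min(int((box.ymin + box.height) * h), h)
    return rgb[y0:y1, x0:x1]

def build_classifier():
    """DenseNet121 backbone with a single sigmoid output (real vs. fake)."""
    base = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3), pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out)

def predict_fake_probability(model, bgr_frame):
    """Pre-process one frame and return the predicted probability of being fake."""
    face = crop_largest_face(bgr_frame)
    if face is None:
        return None
    face = cv2.resize(face, (224, 224)).astype(np.float32)
    face = tf.keras.applications.densenet.preprocess_input(face)
    return float(model.predict(face[None, ...], verbose=0)[0, 0])
```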
Recently, a new secure steganography algorithm has been proposed, namely the secure Block Permutation Image Steganography (BPIS) algorithm. The algorithm consists of five main steps: convert the secret message to a binary sequence, divide the binary sequence into blocks, permute each block using a key-based randomly generated permutation, concatenate the permuted blocks into a permuted binary sequence, and then use a plane-based Least-Significant-Bit (LSB) approach to embed the permuted binary sequence into a BMP image file. The performance of the algorithm was given a preliminary evaluation by estimating the PSNR (Peak Signal-to-Noise Ratio) of the stego image for a limited number of experiments that comprised hiding …
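A minimal sketch of the five BPIS steps listed above, assuming an 8-bit block size, a key used only to seed the permutation generator, and a simple flat LSB layout; the paper's actual block size, key schedule, and plane-based embedding order may differ.

```python
# Hypothetical sketch of block-permutation LSB embedding (not the authors' code).
import random
import numpy as np

def bpis_embed(cover: np.ndarray, message: str, key: int, block_size: int = 8) -> np.ndarray:
    # 1. Convert the secret message to a binary sequence.
    bits = [int(b) for ch in message.encode("utf-8") for b in format(ch, "08b")]
    bits += [0] * (-len(bits) % block_size)          # pad to a whole number of blocks
    # 2. Divide the binary sequence into blocks.
    blocks = [bits[i:i + block_size] for i in range(0, len(bits), block_size)]
    # 3. Permute each block using a key-based randomly generated permutation.
    rng = random.Random(key)
    permuted_blocks = []
    for block in blocks:
        perm = list(range(block_size))
        rng.shuffle(perm)
        permuted_blocks.append([block[p] for p in perm])
    # 4. Concatenate the permuted blocks into one permuted binary sequence.
    permuted = [b for block in permuted_blocks for b in block]
    # 5. Embed the sequence into the least-significant bits of the cover pixels.
    stego = cover.copy().ravel()
    if len(permuted) > stego.size:
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(permuted):
        stego[i] = (stego[i] & ~1) | bit
    return stego.reshape(cover.shape)
```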
In this study, dynamic encryption techniques are explored as an image cipher method: S-boxes similar to AES S-boxes are generated with the help of a private key belonging to the user, and images can be encrypted or decrypted using these S-boxes. The study consists of two stages: the dynamic S-box generation method and the encryption-decryption method. S-boxes should have a non-linear structure, and for this reason the Knuth-Durstenfeld Shuffle Algorithm (K/DSA), one of the pseudo-random shuffling techniques, is used to generate S-boxes dynamically. The biggest advantage of this approach is the produ…
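A minimal sketch of key-dependent S-box generation with a Knuth-Durstenfeld (Fisher-Yates) shuffle, assuming the private key is simply hashed into a PRNG seed; the paper's actual key handling and substitution scheme may differ.

```python
# Hypothetical sketch: key-seeded Knuth-Durstenfeld shuffle producing a 256-entry S-box.
import hashlib
import random

def generate_sbox(private_key: str) -> list[int]:
    """Return a key-dependent permutation of 0..255 usable as an S-box."""
    seed = int.from_bytes(hashlib.sha256(private_key.encode()).digest(), "big")
    rng = random.Random(seed)
    sbox = list(range(256))
    # Knuth-Durstenfeld shuffle: swap each element with a randomly chosen earlier slot.
    for i in range(255, 0, -1):
        j = rng.randint(0, i)
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

def invert_sbox(sbox: list[int]) -> list[int]:
    """Inverse S-box for decryption: inverse[sbox[x]] == x."""
    inverse = [0] * 256
    for x, y in enumerate(sbox):
        inverse[y] = x
    return inverse

def substitute(data: bytes, sbox: list[int]) -> bytes:
    """Byte-wise substitution of image data with the generated S-box (illustrative only)."""
    return bytes(sbox[b] for b in data)
```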
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the embedded image is scrambled using the Arnold transform for higher security, and then the embedding process is applied in the transform domain of the host image. The experimental results show that the algorithm is imperceptible and has good robustness against common image processing operations.
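A minimal sketch of the Arnold-transform scrambling step mentioned above for a square watermark image; the number of iterations acts as an additional secret and is an assumption here, as is applying the map pixel-wise in the spatial domain.

```python
# Hypothetical sketch: Arnold cat-map scrambling and unscrambling of a square image.
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Apply the Arnold map (x, y) -> ((x + y) mod N, (x + 2y) mod N) repeatedly."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Invert the map using the inverse matrix [[2, -1], [-1, 1]] (mod N)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = restored
    return out
```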
In this paper, an efficient method for compressing color images is presented. It allows progressive transmission and zooming of the image without the need for extra storage. The proposed method is accomplished using a cubic Bezier surface (CBI) representation over wide areas of the image in order to prune the image component that shows large-scale variation. The produced cubic Bezier surface is then subtracted from the image signal to obtain the residue component. Next, a bi-orthogonal wavelet transform is applied to decompose the residue component. Both scalar quantization and quad-tree coding steps are applied to the produced wavelet sub-bands. Finally, adaptive shift coding is applied to handle the remaining statistical redundancy and attain e…
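A minimal sketch of the first two stages described above: evaluate a bicubic Bezier surface over an image block, subtract it to obtain the residue, and decompose the residue with a bi-orthogonal wavelet. The block-wise processing, the 'bior2.2' wavelet, and the way control points are supplied are assumptions, not the paper's exact choices.

```python
# Hypothetical sketch: Bezier-surface approximation, residue, and wavelet decomposition.
import numpy as np
import pywt
from math import comb

def bernstein(i: int, n: int, t: np.ndarray) -> np.ndarray:
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bezier_surface(control: np.ndarray, height: int, width: int) -> np.ndarray:
    """Evaluate a bicubic Bezier surface from a 4x4 grid of control points."""
    u = np.linspace(0.0, 1.0, height)
    v = np.linspace(0.0, 1.0, width)
    surface = np.zeros((height, width))
    for i in range(4):
        for j in range(4):
            surface += control[i, j] * np.outer(bernstein(i, 3, u), bernstein(j, 3, v))
    return surface

def residue_and_subbands(block: np.ndarray, control: np.ndarray):
    """Subtract the smooth Bezier approximation, then wavelet-decompose the residue."""
    smooth = bezier_surface(control, *block.shape)
    residue = block.astype(float) - smooth
    coeffs = pywt.wavedec2(residue, wavelet="bior2.2", level=2)
    return residue, coeffs
```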
Gypseous soil covers approximately 30% of Iraqi lands and is widely encountered in geotechnical and construction engineering. The demand for residential complexes has increased, so one of the significant challenges in studying gypseous soil, owing to its unique behavior, is understanding its interaction with foundations such as strip and square footings. This is because there is a lack of experiments that provide total displacement diagrams or failure envelopes, which are well established for non-problematic soils. The aim is to provide a comprehensive understanding of the micromechanical properties of dry, saturated, and treated gypseous sandy soils and to analyze the interaction of a strip footing with this type of soil using particle image …
Assessing the accuracy of classification algorithms is paramount, as it provides insights into their reliability and effectiveness in solving real-world problems. Accuracy examination is essential in any remote sensing-based classification practice, given that classification maps consistently include misclassified pixels and classification errors. In this study, two satellite images of Duhok province, Iraq, were captured at regular intervals, and the images were analyzed using spatial analysis tools to produce supervised classifications. Several operations, such as smoothing, were applied to enhance the classification. The classification results indicate that Duhok province is divided into four classes: vegetation cover, buildings, …
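A minimal sketch of the kind of accuracy assessment discussed above: comparing classified pixels against reference labels with a confusion matrix, overall accuracy, and Cohen's kappa. The function and array names are illustrative; the study's actual sampling design is not reproduced here.

```python
# Hypothetical sketch: accuracy assessment of a classified map against reference labels.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def assess(reference: np.ndarray, classified: np.ndarray) -> None:
    """Print the confusion matrix, overall accuracy and kappa for two label maps."""
    ref, cls = reference.ravel(), classified.ravel()
    labels = np.unique(np.concatenate([ref, cls]))
    cm = confusion_matrix(ref, cls, labels=labels)
    print("confusion matrix (rows = reference, cols = classified):")
    print(cm)
    print(f"overall accuracy: {accuracy_score(ref, cls):.3f}")
    print(f"Cohen's kappa:    {cohen_kappa_score(ref, cls):.3f}")
```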
It is commonly known that Euler-Bernoulli thin beam theory is not applicable wherever a nonlinear strain/stress distribution occurs, such as in deep beams, or where the stress distribution is discontinuous. To design members experiencing such distorted stress regions, the Strut-and-Tie Model (STM) can be utilized. In this paper, an experimental investigation of the STM technique on three identical small-scale deep beams was conducted. The beams were simply supported and loaded statically with a concentrated load at mid-span. These deep beams had two symmetrical openings near the load application point. Both the deep beam, where the stress distribution cannot be assumed linear, and the ex…
Background/Objectives: The purpose of this study was to classify Alzheimer's disease (AD) patients versus Normal Control (NC) subjects using Magnetic Resonance Imaging (MRI). Methods/Statistical analysis: The performance evaluation was carried out on 346 MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. A Deep Belief Network (DBN) classifier was used for classification. The network was trained using a sample training set, and the resulting weights were then used to check the system's recognition capability. Findings: As a result, this paper presented a novel automated classification system for AD determination. The suggested method offers good performance; the experiments carried out show that the …
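A much-simplified stand-in for the DBN classifier described above, using one RBM feature layer stacked with logistic regression in scikit-learn rather than a full multi-layer DBN; the feature count, learning rate, and train/test split are assumptions, and the paper's actual architecture is not specified here.

```python
# Hypothetical sketch: RBM + logistic regression as a simplified DBN-style classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

def train_dbn_like(features: np.ndarray, labels: np.ndarray):
    """features: (n_images, n_voxels) intensities scaled to [0, 1]; labels: 0 = NC, 1 = AD."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
    return model
```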
Ontology is a system for classifying human knowledge according to its objective characteristics and hierarchical relations by building clusters that share common characteristics. In digital environments, it is a mechanism that helps organize a vast amount of information by establishing complete links between sub-thematic concepts and their main assets. The purpose of this study is to survey previously conducted studies that use ontology to organize digital data on search engines such as Yahoo and Google and on social networks such as Facebook, and to review their findings. Results show that all these studies employ ontology for the purpose of organizing digital content data, especially on …
This study aims to examine the woman's image and to unveil how women are presented in TV series. The research is based on the survey method, using content analysis as its tool. The research sample comprised the TV series produced by the IMN and broadcast in 2014; the content of the series was analyzed using Margaret Gallagher's model in accordance with frame analysis theory.
The study found a decline in women's representation compared with men in Iraqi TV drama. It also found that the series presented women within personal, social, political, and economic frames in a stereotyped manner, focusing on characteristics habitually attributed to women, such as showing her obedience to the …