Deepfake is a form of artificial intelligence used to create convincing image, audio, and video hoaxes, and it concerns celebrities and the general public alike because such content is easy to manufacture. Deepfakes, especially high-quality ones, are hard for both people and current detection approaches to recognize. As a defense against Deepfake techniques, various methods for detecting Deepfakes in images have been suggested. Most of them have limitations, such as working only with a single face per image, or requiring the face to be forward-facing with both eyes and the mouth open, depending on which part of the face they operate on. In addition, few of them examine the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect of the Deepfake detection task and proposes pre-processing steps that improve accuracy and close the gap between training and validation results using simple operations. The framework also differs from others by handling faces oriented in various directions within the image, distinguishing the face of interest in an image containing multiple faces, and segmenting the face using facial landmark points. All of this is done using face detection, face box attributes, facial landmarks, and key points from the MediaPipe tool together with a pre-trained model (DenseNet121). Finally, the proposed model was evaluated on the Deepfake Detection Challenge dataset and, after training for a few epochs, achieved an accuracy of 97% in detecting Deepfakes.
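As a rough illustration of the kind of pre-processing pipeline described in this abstract, the following is a minimal sketch (not the authors' code): MediaPipe face detection locates the face box, the frame is cropped around it, and the crop is passed to a DenseNet121 backbone with a binary head. The crop margin, input size, and sigmoid output are illustrative assumptions.

# Hedged sketch: MediaPipe face crop -> DenseNet121 real/fake classifier.
# Margins, input size, and the binary sigmoid head are assumptions.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

mp_fd = mp.solutions.face_detection

def crop_face(bgr_frame, margin=0.1):
    """Return the largest detected face crop, or None if no face is found."""
    h, w = bgr_frame.shape[:2]
    with mp_fd.FaceDetection(model_selection=1, min_detection_confidence=0.5) as det:
        result = det.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not result.detections:
        return None
    # Pick the detection with the largest relative bounding box area.
    box = max(result.detections,
              key=lambda d: d.location_data.relative_bounding_box.width *
                            d.location_data.relative_bounding_box.height
              ).location_data.relative_bounding_box
    x0 = max(int((box.xmin - margin * box.width) * w), 0)
    y0 = max(int((box.ymin - margin * box.height) * h), 0)
    x1 = min(int((box.xmin + (1 + margin) * box.width) * w), w)
    y1 = min(int((box.ymin + (1 + margin) * box.height) * h), h)
    return bgr_frame[y0:y1, x0:x1]

def build_classifier():
    """DenseNet121 backbone with a binary (real/fake) sigmoid head."""
    base = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False,
                                             input_shape=(224, 224, 3), pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def predict_fake(model, bgr_frame):
    """Return the estimated probability that the face in the frame is fake."""
    face = crop_face(bgr_frame)
    if face is None:
        return None
    face = cv2.resize(face, (224, 224))
    x = tf.keras.applications.densenet.preprocess_input(face.astype(np.float32))
    return float(model.predict(x[None, ...], verbose=0)[0, 0])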
The accurate localization of the basic components of the human face (i.e., eyebrows, eyes, nose, mouth, etc.) in images is an important step in face processing techniques such as face tracking, facial expression recognition, and face recognition. However, it is a challenging task due to variations in scale, orientation, pose, facial expression, partial occlusion, and lighting conditions. In this paper, a three-stage hierarchical scheme for facial component extraction is presented; it works regardless of illumination variation. Adaptive linear contrast enhancement methods such as gamma correction and contrast stretching are used to simulate the variation in lighting conditions among images. As testing material …
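The two illumination adjustments named above can be sketched as follows (a minimal illustration, not the paper's implementation; the gamma value and output range are assumptions).

# Hedged sketch: gamma correction and min-max contrast stretching on a
# grayscale image, used to simulate lighting variation.
import numpy as np

def gamma_correct(img, gamma=0.6):
    """Apply power-law (gamma) correction; gamma < 1 brightens, > 1 darkens."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

def contrast_stretch(img, low=0, high=255):
    """Linearly stretch intensities so the darkest pixel maps to `low`
    and the brightest to `high`."""
    img = img.astype(np.float64)
    mn, mx = img.min(), img.max()
    if mx == mn:                      # flat image: nothing to stretch
        return np.full_like(img, low, dtype=np.uint8)
    out = (img - mn) / (mx - mn) * (high - low) + low
    return out.astype(np.uint8)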
Developing an efficient algorithm for automated Magnetic Resonance Imaging (MRI) segmentation that characterizes tumor abnormalities in an accurate and reproducible manner is in ever-growing demand. This paper presents an overview of recent developments and challenges of the energy-minimizing active contour segmentation model known as the snake for MRI. This model has been used successfully in contour detection for object recognition, computer vision and graphics, and biomedical image processing, including X-ray, MRI, and ultrasound images. Snakes are deformable, well-defined curves in the image domain that move under the influence of internal forces and external forces derived from the image data. We underscore a critical appraisal …
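A minimal sketch of the snake evolution described above, using scikit-image's active_contour; the elasticity (alpha), rigidity (beta), and circular initialization are illustrative assumptions, not values from the paper.

# Hedged sketch: evolve a circular initial contour toward image edges.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def snake_segment(gray_slice, center, radius, n_points=200):
    """Evolve a circular initial contour toward strong edges in one MRI slice.

    gray_slice : 2-D float array scaled to [0, 1]
    center     : (row, col) of the initial circle
    radius     : radius of the initial circle in pixels
    """
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])  # (row, col) points
    smoothed = gaussian(gray_slice, sigma=2, preserve_range=True)
    # alpha/beta control the internal forces (tension/rigidity); w_edge weights
    # the external image-gradient force pulling the snake onto boundaries.
    return active_contour(smoothed, init, alpha=0.015, beta=10.0,
                          gamma=0.001, w_edge=1.0, w_line=0.0)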
Background: The human face has its own special characteristics. It may be categorized into essentially three kinds in the horizontal and vertical directions: short or brachyfacial, medium or mesofacial, and long or dolichofacial. The aim of this study was to describe several orofacial indices and proportions of adults, according to gender, in Iraqi subjects using cone beam computed tomography. Materials and methods: This prospective study included 100 Iraqi patients (males and females) aged 20 to 40 years. All subjects attended the Oral and Maxillofacial Radiology Department of the Health Specialist Center for Dentistry in Al Sadr City in Baghdad for cone beam computed tomography scans taken for different diagnostic purposes from October 2016 to …
... Show Moreconventional FCM algorithm does not fully utilize the spatial information in the image. In this research, we use a FCM algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership functions in the neighborhood of each pixel under consideration. The advantages of the method are that it is less
sensitive to noise than other techniques, and it yields regions more homogeneous than those of other methods. This technique is a powerful method for noisy image segmentation.
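The spatial modification described above can be sketched roughly as follows: each cluster's membership map is summed over a local window (the spatial function) and combined with the original memberships before defuzzification. The window size, fuzzifier m, and the p/q exponents below are illustrative assumptions.

# Hedged sketch: FCM clustering of a grayscale image with a spatial
# membership term (sum of memberships in each pixel's neighborhood).
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(image, n_clusters=3, m=2.0, p=1, q=1, win=5, n_iter=30):
    """Cluster a 2-D grayscale image with FCM plus a spatial membership term."""
    x = image.astype(np.float64).ravel()
    h, w = image.shape
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per pixel

    for _ in range(n_iter):
        centers = (u ** m @ x) / (u ** m).sum(axis=1)    # cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9 # distances to centers
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)                               # standard FCM memberships

        # Spatial function: sum of memberships in each pixel's neighborhood
        # (a mean filter gives the same ranking as the windowed sum).
        h_spatial = np.stack([uniform_filter(ui.reshape(h, w), size=win).ravel()
                              for ui in u])
        u = (u ** p) * (h_spatial ** q)
        u /= u.sum(axis=0)                               # combined memberships

    return u.argmax(axis=0).reshape(h, w), centers       # label map and centers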
Most human facial emotion recognition systems are assessed solely on accuracy, even though other performance criteria, such as sensitivity, precision, F-measure, and G-mean, are also considered important in the evaluation process. Moreover, the most common problem that must be resolved in face emotion recognition systems is feature extraction; traditional manual feature extraction methods are not able to extract features efficiently. In other words, they produce a large number of redundant, insignificant features that degrade classification performance. In this work, a new system to recognize human facial emotions from images is proposed. The HOG (Histograms of Oriented Gradients) …
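A minimal sketch of HOG feature extraction for a face image, as named above, using scikit-image; the cell/block sizes and the downstream linear SVM are illustrative assumptions rather than the paper's configuration.

# Hedged sketch: HOG descriptors from face crops plus a linear SVM.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_features(gray_face, size=(128, 128)):
    """Resize a grayscale face crop and return its HOG descriptor vector."""
    face = resize(gray_face, size, anti_aliasing=True)
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_emotion_classifier(face_images, labels):
    """Fit a linear SVM on HOG descriptors of the training faces."""
    X = np.array([hog_features(f) for f in face_images])
    return LinearSVC(C=1.0).fit(X, labels)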
Skull image separation is one of the initial procedures used to detect brain abnormalities. In an MRI image of the brain, this process involves distinguishing the tissue that makes up the brain from the tissue that does not. Even for experienced radiologists, separating the brain from the skull is a difficult task, and the accuracy of the results can vary considerably from one individual to the next. Therefore, skull stripping in brain magnetic resonance volumes has become increasingly popular due to the requirement for a dependable, accurate, and thorough method for processing brain datasets. Furthermore, skull stripping must be performed accurately for neuroimaging diagnostic systems, since neither non-brain tissues nor …
A new approach is presented in this study to determine the optimal edge detection threshold value. The approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks, a small image with known edges is generated (the edges are the lines between adjacent blocks), so these simulated edges can be treated as true edges. The simulated true edges are then compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing the mean square error between the simulated edge image and the edge image produced by the edge detection method. The mean square error is computed for the total edge image (Er), for the edge regions …
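A minimal sketch of the threshold search described above: given a small block image and its known true-edge map, sweep candidate thresholds over the gradient magnitude and keep the one minimizing the mean square error against the simulated edges. The Sobel detector and the threshold grid are illustrative assumptions.

# Hedged sketch: pick the edge threshold that minimizes the MSE (Er) between
# detected edges and the simulated true-edge map.
import numpy as np
from scipy.ndimage import sobel

def best_edge_threshold(block_image, true_edges, thresholds=np.linspace(0.05, 0.95, 19)):
    """Return (threshold, mse) minimizing the MSE between detected and true edges."""
    img = block_image.astype(np.float64)
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    grad /= grad.max() if grad.max() > 0 else 1.0          # normalize to [0, 1]

    best = (None, np.inf)
    for t in thresholds:
        detected = (grad >= t).astype(np.float64)
        mse = np.mean((detected - true_edges.astype(np.float64)) ** 2)  # Er
        if mse < best[1]:
            best = (t, mse)
    return best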