Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes; it concerns celebrities and ordinary people alike because such fakes are easy to manufacture. Deepfakes are hard for both people and current approaches to recognize, especially high-quality ones. As a defense against Deepfake techniques, various methods for detecting Deepfakes in images have been suggested. Most of them have limitations, such as working only with a single face per image, or requiring the face to be forward-facing with both eyes and the mouth open, depending on which part of the face they analyze. Moreover, few of them examine the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect of the Deepfake detection task and proposes pre-processing steps that improve accuracy and close the gap between training and validation results using simple operations. Additionally, it differs from other approaches by handling faces oriented in various directions within the image, distinguishing the face of interest in an image containing multiple faces, and segmenting the face using facial landmark points. All of these were done using face detection, face-box attributes, facial landmarks, and key points from the MediaPipe tool together with a pre-trained model (DenseNet121). Finally, the proposed model was evaluated on the Deepfake Detection Challenge dataset and, after training for a few epochs, achieved an accuracy of 97% in detecting Deepfakes.
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are extracted using feature detection/extraction techniques and feature description. In computer vision, features define informative data. The human eye extracts information from a raw image effortlessly, but a computer cannot recognize image information directly. This is why various feature extraction techniques have been presented and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
The spread of the internet all over the world, together with the huge and growing number of users exchanging important information over it, highlights the need for new methods to protect this information from corruption or modification by intruders. This paper suggests a new method that ensures that the text of a given document cannot be modified by intruders. The method consists mainly of three steps. The first step borrows some concepts from the "Quran" security system to detect certain types of change(s) occurring in a given text: a key for each paragraph is extracted from the group of letters in that paragraph whose positions are multiples of a given prime number. This step cannot detect the ch
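The prime-position key idea in the first step can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the abstract only says the key comes from letters at positions that are multiples of a prime, so the hashing step and the choice of prime here are assumptions.

```python
import hashlib

def paragraph_key(paragraph: str, prime: int = 7) -> str:
    """Collect the letters whose 1-based positions (counting letters only)
    are multiples of the given prime, then hash them to form a key.
    The hash step and default prime are illustrative assumptions."""
    letters = [ch for ch in paragraph if ch.isalpha()]
    selected = [letters[i - 1] for i in range(prime, len(letters) + 1, prime)]
    return hashlib.sha256("".join(selected).encode()).hexdigest()

original = "Integrity of text can be checked with a lightweight key."
key = paragraph_key(original)
# A modification that alters any of the selected letters yields a new key,
# so comparing stored and recomputed keys flags the tampering.
tampered = original.replace("lightweight", "heavyweight")
print(key == paragraph_key(tampered))
```

As with any sampled-position scheme, edits that leave every selected letter untouched would go undetected, which is presumably why the paper combines this step with two others.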
The present study investigates the implementation of machine learning models on crop data to predict crop yield in Rajasthan state, India. The key objective of the study is to identify which machine learning model performs better and provides the most accurate predictions. For this purpose, two machine learning models (decision tree and random forest regression) were implemented, and gradient boosting regression was used as an optimization algorithm. The results clarify that using gradient boosting regression can reduce the yield-prediction mean square error to 6%. Additionally, for the present data set, random forest regression performed better than the other models. We reported the machine learning models' performance using Mea
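The core idea behind the gradient boosting step can be sketched with decision stumps on toy data. This is a from-scratch illustration of the technique, not the paper's setup; the synthetic "yield vs. rainfall" data, learning rate, and round count are illustrative assumptions.

```python
# Minimal gradient boosting regression with decision stumps: each round
# fits a stump to the current residuals (the negative gradient of squared
# loss) and adds a scaled copy to the ensemble.

def fit_stump(x, residuals):
    """Find the single split on x that minimizes squared error."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, threshold, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

# Synthetic single-feature data; boosting drives training MSE well below
# the variance of y.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
model = gradient_boost(x, y)
mse = sum((model(xi) - yi) ** 2 for xi, yi in zip(x, y)) / len(y)
print(round(mse, 4))
```

In practice one would use a library implementation (e.g. scikit-learn's `GradientBoostingRegressor`) on multi-feature crop data rather than hand-rolled stumps.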
The paper proposes a Teaching-Learning-Based Optimization (TLBO) algorithm to solve the 3-D packing problem in containers. The objective, which can be presented as a mathematical model, is to optimize the space usage in a container. Besides the interaction between students and teacher, the algorithm also models the learning process among students in the classroom, and it requires no algorithm-specific control parameters. Thus, TLBO uses a teacher phase and a student phase as its main updating processes to find the best solution. More precisely, to validate the algorithm's effectiveness, it was implemented on three sample cases. There was small data with 5 size-types of items and 12 units, medium data with 10 size-types of items w
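The teacher and learner phases can be sketched on a simple continuous test function. This is the generic TLBO update, not the paper's 3-D packing model (which needs a packing-specific encoding and objective); the population size, bounds, and iteration count here are illustrative assumptions.

```python
import random

def sphere(x):
    """Simple continuous test objective; minimum 0 at the origin."""
    return sum(v * v for v in x)

def tlbo(obj, dim=3, pop_size=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def clamp(x):
        return [min(hi, max(lo, v)) for v in x]

    for _ in range(iters):
        # Teacher phase: pull each student toward the best solution
        # (the teacher) and away from the class mean.
        teacher = min(pop, key=obj)
        mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
        for i, p in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            cand = clamp([p[d] + rng.random() * (teacher[d] - tf * mean[d])
                          for d in range(dim)])
            if obj(cand) < obj(p):
                pop[i] = cand
        # Learner phase: each student compares with a random classmate and
        # moves toward the better of the two.
        for i, p in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if obj(p) < obj(pop[j]) else -1.0
            cand = clamp([p[d] + sign * rng.random() * (p[d] - pop[j][d])
                          for d in range(dim)])
            if obj(cand) < obj(p):
                pop[i] = cand
    return min(pop, key=obj)

best = tlbo(sphere)
print(sphere(best))
```

Note that, apart from population size and iteration budget, no tuning knobs appear, which is the parameter-free property the abstract highlights.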
In education, exams are used to assess students' acquired knowledge; however, manual assessment of exams consumes a great deal of teachers' time and effort. In addition, educational institutions recently leaned toward distance education and e-learning due to the Coronavirus pandemic, so they needed to conduct exams electronically, which requires an automated assessment system. While it is easy to develop an automated assessment system for objective questions, subjective questions require answers comprised of free text and are harder to assess automatically, since grading them requires semantically comparing the students' answers with the correct ones. In this paper, we present an automatic short answer grading metho
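The comparison step can be illustrated, in a much-simplified form, as bag-of-words cosine similarity between a student's answer and a reference answer. Real short-answer graders use far richer semantic representations; the tokenizer and example sentences below are illustrative assumptions, not the paper's method.

```python
import math
from collections import Counter

def tokenize(text):
    """Naive whitespace tokenizer with basic punctuation stripping."""
    return [w.strip(".,;:!?").lower() for w in text.split()]

def cosine_similarity(a, b):
    """Cosine similarity between the word-count vectors of two texts."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

reference = "Photosynthesis converts light energy into chemical energy."
good = "Photosynthesis turns light energy into chemical energy."
bad = "Mitochondria are the powerhouse of the cell."
print(cosine_similarity(reference, good) > cosine_similarity(reference, bad))
```

A grader would map the similarity score onto a mark scale; the weakness of this word-overlap sketch (paraphrases with no shared words score zero) is exactly why semantic comparison is needed.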
A substantial portion of today's multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge in meeting users' information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. However, accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving that utilize FS, such as optimization techniques.
After the information revolution in the Western world, and the developments in all fields, especially education and e-learning, an integrated system emerged based on the effective employment of information and communication technology in the teaching and learning processes, through an environment rich in computer and Internet applications. The community and the learner became able to access information sources and learning at any time and place, in a way that achieves mutual interaction between the elements of the system and the surrounding environment. The Covid-19 pandemic then caused a major interruption in all educational systems that had never happened before, and the disrupt
In this paper, a simple, fast, lossless image compression method is introduced for compressing medical images. It integrates multiresolution coding with linear-based polynomial approximation to decompose the image signal, followed by efficient coding. The test results indicate that the suggested method can achieve promising performance due to its flexibility in overcoming the limitations on model-order length and the extra overhead information required by traditional predictive coding techniques.
Image quality has been estimated and predicted using the signal-to-noise ratio (SNR). The purpose of this study is to investigate the relationship between body mass index (BMI) and SNR measurements in PET imaging, using studies of patients with liver cancer. A total of 59 patients (24 males and 35 females) were divided into three groups according to BMI. After intravenous injection of 0.1 mCi of 18F-FDG per kilogram of body weight, PET emission scans were acquired for 1, 1.5, or 3 min/bed position according to the patient's weight. Because the liver is an organ of homogeneous metabolism, five regions of interest (ROI) were drawn at the same location on five successive slices of the PET/CT scans to determine the mean uptake (signal) values and its standard deviat
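The ROI-based measurement described above reduces to a simple ratio: in a homogeneous region, the mean uptake serves as the signal and its standard deviation as the noise, so SNR = mean / standard deviation. A minimal sketch, with synthetic uptake values standing in for patient data:

```python
import statistics

def roi_snr(uptake_values):
    """SNR of a homogeneous ROI: mean uptake divided by its standard
    deviation (sample std. dev. across the ROI measurements)."""
    mean = statistics.mean(uptake_values)
    std = statistics.stdev(uptake_values)
    return mean / std

# Mean uptake from five ROIs on successive liver slices (synthetic values,
# not from the study).
roi_means = [4.8, 5.1, 5.0, 4.9, 5.2]
print(round(roi_snr(roi_means), 2))
```

Tighter clustering of the ROI values (lower noise) raises the SNR, which is what makes the homogeneous liver a convenient reference organ for this measurement.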
In recent years, images have been used widely by online social network providers and by numerous organizations such as governments, police departments, colleges, universities, and private companies, and they are held in vast databases. Efficient storage of such images is therefore advantageous, and their compression is an appealing application. Image compression generally represents the significant image information compactly in a smaller number of bytes, while insignificant image information (redundancy) is removed; for this reason, image compression plays an important role in data transfer and storage, especially given the data explosion that is increasing significantly. It is a challenging task, since there are highly complex unknown correlat