Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes; it concerns celebrities and ordinary people alike because such forgeries are easy to manufacture. Deepfakes, especially high-quality ones, are hard to recognize both for people and for current detection approaches. As a defense against Deepfake techniques, various methods for detecting Deepfakes in images have been suggested. Most of them have limitations, such as working only with a single face per image, or requiring the face to be forward-facing with both eyes and the mouth open, depending on which part of the face they analyze. Moreover, few of them examine the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect of the Deepfake detection task and proposes pre-processing steps that improve accuracy and narrow the gap between training and validation results using simple operations. In addition, it differs from other approaches by handling faces oriented in various directions within the image, distinguishing the face of interest in an image containing multiple faces, and segmenting the face using facial landmark points. All of this was done using face detection, face-box attributes, facial landmarks, and key points from the MediaPipe tool together with a pre-trained model (DenseNet121). Finally, the proposed model was evaluated on the Deepfake Detection Challenge dataset and, after training for a few epochs, achieved an accuracy of 97% in detecting Deepfakes.
Construction of a photographed bullying scale for kindergarteners was the aim of this study. The study was conducted to answer the question: can bullying among kindergarteners be measured? A total of (200) boys and girls were selected from the city of Baghdad as the study sample. The scale consists of (27) items with colored pictures, and answering all of its items takes about (15) minutes. SPSS tools were used to process the collected data. The results showed that bullying among kindergarteners can be measured.
Web application protection lies on two levels: the first is the responsibility of the server administration, and the second is the responsibility of the site's programmer (the scope of this research). This research proposes developing a secure web application based on a three-tier architecture (client, server, and database). The security of this system is described as follows: multilevel access through authorization, meaning access to pages is allowed depending on the authorized level; passwords encrypted using Message Digest Five (MD5) with a salt; Secure Socket Layer (SSL) protocol authentication; and PHP code written according to a set of rules that hide the source code so it cannot be stolen, with verification of input before it is s
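The salted-hash scheme the abstract names can be sketched as below. This is an illustrative sketch in Python rather than the paper's PHP, and the function names are assumptions; note that MD5 is what the abstract specifies, though current guidance favors dedicated password hashes such as bcrypt or Argon2.

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password as MD5(salt + password); return (salt_hex, digest_hex)."""
    if salt is None:
        salt = os.urandom(16)  # random per-user salt, stored alongside the hash
    digest = hashlib.md5(salt + password.encode("utf-8")).hexdigest()
    return salt.hex(), digest

def verify_password(password, salt_hex, expected_digest):
    """Recompute the salted hash and compare it with the stored digest."""
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.md5(salt + password.encode("utf-8")).hexdigest()
    return digest == expected_digest

salt_hex, stored = hash_password("s3cret")
```

The salt ensures that identical passwords produce different stored digests, which defeats precomputed lookup tables.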
This paper aims to develop a technique for helping disabled and elderly people with physical disabilities, such as those who are unable to move their hands and cannot speak, by using computer vision: real-time video and human-computer interaction, a combination that provides a promising solution for assisting disabled people. The main objective of the work is to design a wheelchair containing two wheel drives. The project is based on real-time video for detecting and tracking the human face. The proposed design is multi-speed, based on the pulse width modulation (PWM) technique, and responds quickly in detecting and tracking the face direction with four movement operations (left, right, forward, and stop). These opera
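One way to picture the four-operation control described above is a mapping from the detected face direction to PWM duty cycles for the two wheel drives. The duty-cycle values, speed levels, and command names below are hypothetical illustrations; the abstract does not specify them.

```python
# Hypothetical mapping from face direction to PWM duty cycles for a
# two-wheel-drive chair. Duty values are illustrative, not from the paper.

DUTY = {"slow": 0.3, "medium": 0.6, "fast": 0.9}

def wheel_command(direction, speed="medium"):
    """Return (left_duty, right_duty) for the two wheel drives."""
    d = DUTY[speed]
    if direction == "forward":
        return (d, d)        # both wheels driven equally -> straight ahead
    if direction == "left":
        return (0.0, d)      # only the right wheel driven -> turn left
    if direction == "right":
        return (d, 0.0)      # only the left wheel driven -> turn right
    return (0.0, 0.0)        # "stop" or any unrecognized direction

cmd = wheel_command("left", "fast")
```

Varying the duty cycle is what makes the design multi-speed: a higher duty cycle delivers more average power to the motor without changing the supply voltage.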
The Gumbel distribution has been treated with great care by researchers and statisticians. There are traditional methods for estimating the two parameters of the Gumbel distribution, namely Maximum Likelihood, the Method of Moments, and, more recently, the re-sampling method known as the Jackknife. However, these methods suffer from some mathematical difficulties when solved analytically. Accordingly, there are other, non-traditional methods, such as the nearest-neighbors principle used in computer science, particularly in artificial intelligence algorithms, including the genetic algorithm, the artificial neural network algorithm, and others that may be classified as meta-heuristic methods. Moreover, this nearest-neighbors principle has useful statistical featu
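Of the traditional estimators mentioned, the Method of Moments has a simple closed form that can be sketched directly: for a Gumbel(mu, beta) sample, beta is estimated from the sample standard deviation as s*sqrt(6)/pi and mu from the sample mean minus the Euler-Mascheroni constant times beta. The synthetic sample below is an illustration, not data from the study.

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moments(sample):
    """Method-of-moments estimates (mu, beta) for a Gumbel sample."""
    mean = statistics.fmean(sample)
    std = statistics.stdev(sample)
    beta = std * math.sqrt(6) / math.pi   # from Var = (pi^2 / 6) * beta^2
    mu = mean - EULER_GAMMA * beta        # from E[X] = mu + gamma * beta
    return mu, beta

# Draw a synthetic Gumbel(mu=2, beta=1.5) sample by inverse-transform
# sampling (x = mu - beta * ln(-ln u)) and recover the parameters.
rng = random.Random(0)
sample = [2.0 - 1.5 * math.log(-math.log(rng.random())) for _ in range(20000)]
mu_hat, beta_hat = gumbel_moments(sample)
```

With a large sample the estimates land close to the true (2, 1.5), which is the baseline the non-traditional estimators are compared against.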
The use of deep learning.
Doses for most drugs are determined from population-level information, resulting in a standard 'one-size-fits-all' dose range for all individuals. This review explores how doses can be personalised through the use of the individual's pharmacokinetic (PK)-pharmacodynamic (PD) profile, its particular application in children, and the therapy areas where such approaches have made inroads.
The Bayesian forecasting approach, based on population PK/PD models that account for variability in exposure and response, is a potent method for personalising drug therapy. Its potential utility is eve
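To make the PK side of such models concrete, a minimal one-compartment intravenous-bolus model gives the plasma concentration as C(t) = (Dose / Vd) * exp(-ke * t). The sketch below is illustrative only; the dose, volume of distribution, and elimination rate constant are hypothetical values, not drawn from the review.

```python
import math

def concentration(dose_mg, vd_l, ke_per_h, t_h):
    """Plasma concentration (mg/L) for a one-compartment IV bolus model:
    C(t) = (Dose / Vd) * exp(-ke * t)."""
    return (dose_mg / vd_l) * math.exp(-ke_per_h * t_h)

# Example: 100 mg dose, 20 L volume of distribution, ke = 0.1 per hour.
c0 = concentration(100, 20, 0.1, 0.0)   # initial concentration, 5 mg/L
c_half = concentration(100, 20, 0.1, math.log(2) / 0.1)  # one half-life later
```

Bayesian forecasting layers population variability on top of such a structural model: measured concentrations from the individual patient are used to update the population parameter estimates toward that patient's own PK profile.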
Isthmus life and preparation for it
Electrocoagulation is an electrochemical process for treating polluted water in which a sacrificial anode corrodes to release an active coagulant (usually aluminum or iron cations) into the solution; the accompanying electrolytic reactions evolve gas (usually hydrogen bubbles). The present study investigates the removal of phenol from water by this method. A 1-liter glass tank and two electrodes were used to perform the experiments, with the electrodes connected to a D.C. power supply. The effects of various factors on phenol removal (initial phenol concentration, electrode size, electrode gap, current density, pH, and treatment time) were studied. The results indicated that the removal efficiency decreased as the initial phenol concentration
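The removal efficiency such studies report is conventionally computed as R = (C0 - Ct) / C0 * 100, where C0 is the initial and Ct the residual phenol concentration. The sketch below uses illustrative concentrations, not measurements from the study.

```python
def removal_efficiency(c_initial, c_final):
    """Percent removal efficiency: R = (C0 - Ct) / C0 * 100."""
    return (c_initial - c_final) / c_initial * 100.0

# Illustrative values: treating 100 mg/L down to 15 mg/L gives 85% removal.
r = removal_efficiency(100.0, 15.0)
```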