Estimating an individual's age from a photograph of their face is critical in many applications, including intelligence and defense, border security, human-machine interaction, and soft biometric recognition. Recent progress in this discipline has focused on deep learning: some solutions design and train deep neural networks solely for this task, while others take deep neural networks pre-trained for face recognition and fine-tune them to obtain accurate results. The purpose of this study was to offer a method for estimating human age from a frontal view of the face that is as accurate as possible and takes into account the majority of the challenges faced by existing age-estimation methods. The IMDB-WIKI dataset serves as the foundation of the face-based age estimation system in this work. By performing preprocessing steps to prepare the data for training and by using the CNN deep learning method, we reached our objective with an accuracy of 0.960.
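As an illustration of the kind of CNN pipeline this abstract describes (the paper's exact architecture, input size, and age binning are not given, so the values below are assumptions), a minimal Keras sketch might look like this:

```python
# Hypothetical sketch of a small CNN age-group classifier in the spirit of the
# abstract; architecture, input size, and the number of age groups are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_AGE_GROUPS = 8           # assumed age binning, not the paper's classes
INPUT_SHAPE = (128, 128, 3)  # assumed size for aligned frontal-face crops

def build_age_cnn():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_AGE_GROUPS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_age_cnn()
model.summary()
```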
Monaural source separation is a challenging problem because only a single channel is available, yet there is an unlimited range of possible solutions. In this paper, a monaural source separation model based on a hybrid deep learning architecture, which consists of a convolutional neural network (CNN), a dense neural network (DNN), and a recurrent neural network (RNN), will be presented. A trial-and-error method will be used to optimize the number of layers in the proposed model. Moreover, the effects of the learning rate, optimization algorithms, and the number of epochs on the separation performance will be explored. Our model was evaluated using the MIR-1K dataset for singing voice separation. Moreover, the proposed approach achi…
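A minimal sketch of one way such a CNN + dense + recurrent stack could be arranged for spectrogram mask estimation; the layer sizes, the 513-bin STFT, and the sigmoid mask output are assumptions rather than the paper's reported configuration:

```python
# Illustrative hybrid CNN/dense/recurrent mask-estimation network (sketch only).
import tensorflow as tf
from tensorflow.keras import layers, models

TIME_STEPS, FREQ_BINS = 64, 513   # assumed STFT frame count and frequency bins

inputs = layers.Input(shape=(TIME_STEPS, FREQ_BINS, 1))
x = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(inputs)   # CNN block
x = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(x)
x = layers.Reshape((TIME_STEPS, FREQ_BINS * 16))(x)
x = layers.TimeDistributed(layers.Dense(512, activation="relu"))(x)        # dense block
x = layers.LSTM(256, return_sequences=True)(x)                             # recurrent block
mask = layers.TimeDistributed(layers.Dense(FREQ_BINS, activation="sigmoid"))(x)  # soft mask for the singing voice

model = models.Model(inputs, mask)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.summary()
```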
The proliferation of many editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that they never did or said, so developing an algorithm for deepfake detection is very important to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of the input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue…
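The preprocessing idea can be illustrated with OpenCV: build a bank of 16 Gabor kernels at evenly spaced orientations and stack their responses as input channels for the binary classifier. The kernel size and the remaining Gabor parameters below are assumed values, not those reported in the paper:

```python
# Hedged sketch: 16-orientation Gabor filter bank applied to a grayscale frame.
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, psi=0.0, n_orient=16):
    # Evenly spaced orientations over [0, pi); parameter values are assumptions.
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma, psi,
                               ktype=cv2.CV_32F) for t in thetas]

def gabor_features(gray_frame, bank):
    # One response map per orientation -> (H, W, 16) tensor for the CNN.
    responses = [cv2.filter2D(gray_frame, cv2.CV_32F, k) for k in bank]
    return np.stack(responses, axis=-1)

bank = gabor_bank()
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical video frame path
features = gabor_features(frame.astype(np.float32), bank)
print(features.shape)  # (H, W, 16), fed to a binary real/fake CNN classifier
```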
With the rapid development of smart devices, people's lives have become easier, especially for visually disabled or special-needs people. New achievements in the fields of machine learning and deep learning let people identify and recognise the surrounding environment. In this study, the efficiency and high performance of deep learning architectures are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts with collecting two datasets (indoor and outdoor) drawn from separate existing datasets. In the second step, the collected dataset is split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are trained using the indoor and outdoor se…
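A hedged transfer-learning sketch along those lines, assuming Keras's MobileNetV2 with ImageNet weights, a frozen backbone, and a two-class indoor/outdoor head; the paper's exact fine-tuning settings and image size are not stated here:

```python
# Minimal transfer-learning sketch for indoor/outdoor classification (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                          include_top=False,
                                          weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(2, activation="softmax"),  # indoor vs. outdoor
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed, not provided
```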
The main aim of image compression is to reduce image size for transmission and storage; therefore, many methods have appeared to compress images, one of which is the multilayer perceptron. The multilayer perceptron (MLP) method is an artificial neural network based on the back-propagation algorithm for compressing the image. Because this algorithm depends on the number of neurons in the hidden layer, that alone will not be enough to reach the desired results, so we also have to take into consideration the criteria on which the compression process depends in order to get the best results. In our research we trained a group of TIFF images of size (256*256) and compressed them by using the MLP for each…
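One common way to realize such an MLP compressor, shown here only as a sketch, is to train the network on fixed-size image blocks with a hidden layer narrower than the block, so the hidden activations act as the compressed code; the 8x8 block size and 16 hidden neurons below are assumptions:

```python
# Sketch of block-based MLP image compression trained by back-propagation.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

BLOCK = 8      # assumed block size -> 64 inputs per block
HIDDEN = 16    # fewer hidden neurons than inputs gives roughly 4:1 compression

def image_to_blocks(img):
    # Split a (256, 256) grayscale image into non-overlapping 8x8 blocks.
    h, w = img.shape
    blocks = img.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).swapaxes(1, 2)
    return blocks.reshape(-1, BLOCK * BLOCK) / 255.0

mlp = models.Sequential([
    layers.Input(shape=(BLOCK * BLOCK,)),
    layers.Dense(HIDDEN, activation="sigmoid"),        # compressed representation
    layers.Dense(BLOCK * BLOCK, activation="sigmoid"), # reconstructed block
])
mlp.compile(optimizer="adam", loss="mse")

img = np.random.randint(0, 256, (256, 256)).astype(np.float32)  # stand-in for a TIFF image
x = image_to_blocks(img)
mlp.fit(x, x, epochs=5, batch_size=64, verbose=0)  # back-propagation on block reconstruction
```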
The investigation of signature validation is crucial to the field of personal authenticity. Biometrics-based systems have been developed to support some information security features. A person's signature, an essential biometric trait of a human being, can be used to verify their identity. In this study, a mechanism for automatically verifying signatures has been suggested. The offline properties of handwritten signatures are highlighted in this study, which aims to verify whether handwritten signatures are real or forged using computer-based machine learning techniques. The main goal of developing such systems is to verify people through the validity of their signatures. In this research, images of a group o…
In this research, an artificial neural network (ANN) technique was applied to study the filtration process in water treatment. Eight models have been developed and tested using data from a pilot filtration plant working under different process design criteria: influent turbidity, bed depth, grain size, filtration rate, and running time (length of the filtration run), recording effluent turbidity and head losses. The ANN models were constructed for the prediction of different performance criteria in the filtration process: effluent turbidity, head losses, and running time. The results indicate that it is quite possible to use artificial neural networks to predict effluent turbidity, head losses, and running time in the filtration process, wi…
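A minimal sketch of the kind of feed-forward ANN described, mapping the five process-design inputs to a predicted performance value; the layer sizes are assumptions and the training arrays are placeholders, not the pilot-plant data:

```python
# Hedged example: small feed-forward ANN predicting effluent turbidity from
# five filtration process inputs (placeholder data, assumed layer sizes).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Inputs: influent turbidity, bed depth, grain size, filtration rate, running time
X = np.random.rand(200, 5).astype(np.float32)   # placeholder samples, not plant data
y = np.random.rand(200, 1).astype(np.float32)   # placeholder effluent turbidity values

model = models.Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(10, activation="tanh"),
    layers.Dense(10, activation="tanh"),
    layers.Dense(1),                     # predicted effluent turbidity
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=50, batch_size=16, verbose=0)
```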
Classification of network traffic is an important topic for network management, traffic routing, safe traffic discrimination, and better service delivery. Traffic examination is the entire process of examining traffic data, from intercepting traffic data to discovering patterns, relationships, misconfigurations, and anomalies in a network. Among these, traffic classification is a sub-domain of this field whose purpose is to classify network traffic into predefined classes, such as normal or abnormal traffic and application type. Most Internet applications encrypt data in transit, and classifying encrypted traffic is not possible with traditional methods. Statistical and intelligence methods can find and model traff…
An oil spill is a leakage from pipelines, vessels, oil rigs, or tankers that releases petroleum products into the marine environment or onto land, whether through natural causes or human action, and results in severe damage and financial loss. Satellite imagery is one of the powerful tools currently utilized for capturing and extracting vital information from the Earth's surface, but the complexity and the vast amount of data make it challenging and time-consuming for humans to process. However, with the advancement of deep learning techniques, these processes are now computerized to extract vital information from real-time satellite images. This paper applied three deep-learning algorithms for satellite image classification…
Imitation learning is an effective method for training an autonomous agent to accomplish a task by imitating expert behaviors in their demonstrations. However, traditional imitation learning methods require a large number of expert demonstrations in order to learn a complex behavior. This disadvantage has limited the potential of imitation learning in complex tasks where expert demonstrations are not sufficient. To address this problem, we propose a Generative Adversarial Network-based model designed to learn optimal policies using only a single demonstration. The proposed model is evaluated on two simulated tasks in comparison with other methods. The results show that our proposed model is capable of completing co…