Satellite images become far more valuable when techniques such as the spectral signature are used to extract features and identify changes. This paper proposes using the spectral signature to extract information from satellite images and then classify them into four categories. The work is based on a dataset from the Kaggle satellite imagery website covering four classes: clouds, deserts, water, and green areas. After preprocessing, each image is transformed into a spectral signature using the Fast Fourier Transform (FFT) algorithm. The data of each image are then reduced by selecting the top 20 features and converting the two-dimensional matrix into a one-dimensional vector using the Vector Quantization (VQ) algorithm. The data are split into training and testing sets and fed into a 23-layer deep neural network (DNN) that classifies the satellite images. The resulting model has 2,145,020 parameters, and the performance evaluation gave accuracy = 100%, recall = 100%, and F1 = 100%.
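A minimal sketch of the kind of pipeline described above: an FFT spectral signature reduced to its top 20 components and fed to a small dense network. The layer sizes, normalization, and toy data are assumptions for illustration, not the authors' exact 23-layer architecture or dataset.

```python
# Sketch: FFT spectral signature -> top-20 features -> dense classifier (assumed sizes).
import numpy as np
import tensorflow as tf

def spectral_signature(img, k=20):
    """Return the k largest FFT magnitudes of a grayscale image as a 1-D vector."""
    spectrum = np.abs(np.fft.fft2(img))
    top_k = np.sort(spectrum.ravel())[-k:]     # keep the strongest spectral components
    return top_k / (top_k.max() + 1e-9)        # simple normalisation

# Toy stand-in for the Kaggle dataset: 200 random 64x64 "images", 4 classes.
rng = np.random.default_rng(0)
X = np.stack([spectral_signature(rng.random((64, 64))) for _ in range(200)])
y = rng.integers(0, 4, size=200)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
```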
Estimating an individual's age from a photograph of their face is critical in many applications, including intelligence and defense, border security, human-machine interaction, and soft biometric recognition. Recent progress in this discipline has focused on deep learning. Such solutions require designing and training deep neural networks dedicated to this task. In addition, pre-trained deep neural networks used for facial recognition are fine-tuned to obtain accurate results. The purpose of this study was to offer a method for estimating human age from the frontal view of the face in a manner that is as accurate as possible and takes
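A hedged sketch of fine-tuning a pre-trained network for age estimation. The backbone (MobileNetV2), input size, and regression head are illustrative assumptions; the study's exact pre-trained model and training data are not specified here.

```python
# Sketch: freeze a pre-trained backbone and attach an age-regression head.
import tensorflow as tf

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
backbone.trainable = False  # freeze first; selectively unfreeze for fine-tuning later

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1)            # predicted age as a single continuous value
])
model.compile(optimizer="adam", loss="mae")   # mean absolute error in years
# model.fit(face_images, ages, ...)           # face_images/ages are hypothetical arrays
```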
The transition of customers from one telecom operator to another has a direct impact on a company's growth and revenue. Traditional classification algorithms fail to predict churn effectively. This research introduces a deep learning model for predicting which customers plan to leave for another operator. The model works on a high-dimensional, large-scale dataset. Its performance was measured against other classification algorithms, such as Gaussian Naive Bayes (NB), Random Forest, and Decision Tree, in predicting churn. The evaluation was based on accuracy, precision, recall, F-measure, Area Under the Curve (AUC), and the Receiver Operating Characteristic (ROC) curve. The proposed deep learning model performs better than the other algorithms.
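An illustrative sketch of the comparison set-up named in the abstract: the baseline classifiers and the evaluation metrics (accuracy, precision, recall, F1, AUC). The churn data here is synthetic; the real dataset and the deep model are not reproduced.

```python
# Sketch: evaluate baseline classifiers on a synthetic churn-like dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("GaussianNB", GaussianNB()),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    pred = clf.predict(X_te)
    print(name,
          accuracy_score(y_te, pred), precision_score(y_te, pred),
          recall_score(y_te, pred), f1_score(y_te, pred),
          roc_auc_score(y_te, proba))
```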
This paper uses Artificial Intelligence (AI) based algorithm analysis to classify breast cancer Deoxyribonucleic acid (DNA). The main idea is to focus on the application of machine and deep learning techniques. Furthermore, a genetic algorithm is used to diagnose gene expression in order to reduce the number of misclassified cancers. After the patients' genetic data are entered, preprocessing operations that fill in the missing values using different techniques are applied. The best data for the classification process are chosen by combining each technique with the genetic algorithm and comparing them in terms of accuracy.
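A minimal sketch of genetic-algorithm feature selection over gene-expression-like data, under the assumption that fitness is the cross-validated accuracy of a simple classifier on the selected genes. Population size, mutation rate, and the synthetic data are illustrative choices, not the paper's settings.

```python
# Sketch: GA over binary feature masks, fitness = cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=500)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.2          # 20 random feature masks
for generation in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]       # keep the fittest half
    children = parents.copy()
    children ^= rng.random(children.shape) < 0.02 # mutation: flip a few genes
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected genes:", np.flatnonzero(best))
```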
Age is a predominant parameter for assessing an individual, particularly for the security and access control of data in cyberspace. Nowadays there is rapid growth in unethical practices by young as well as skilled cyber users. A facial image, when processed, yields a variety of information that can be used to ascertain the age of an individual. In this paper, local facial features are used to predict the age group: Local Binary Pattern (LBP) features are extracted from four regions of the facial image. The prominent areas where wrinkles develop naturally as humans age are taken for feature extraction. These feature vectors are then subjected to ensemble techniques that increase th
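A hedged sketch of LBP histograms extracted from a few facial regions and concatenated into one descriptor. The region coordinates and LBP parameters below are illustrative placeholders, not the paper's exact regions or settings.

```python
# Sketch: uniform-LBP histograms from assumed wrinkle-prone face regions.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, p=8, r=1):
    """Uniform LBP histogram of one grayscale image patch."""
    codes = local_binary_pattern(patch, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

face = np.random.rand(128, 128)               # stand-in for an aligned face image
regions = {"forehead": face[10:40, 30:100],   # assumed region coordinates
           "left_eye": face[45:70, 20:60],
           "right_eye": face[45:70, 68:108],
           "mouth": face[90:120, 40:90]}

feature_vector = np.concatenate([lbp_histogram(r) for r in regions.values()])
print(feature_vector.shape)                   # descriptor passed to an ensemble
```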
Lip reading is a study whose importance has grown significantly in recent years, particularly with the widespread use of deep learning techniques. Lip reading is essential for speech recognition in noisy environments or for people with hearing impairments. It refers to recognizing spoken sentences using visual information acquired from lip movements. The lip area, especially in males, also suffers from several problems, such as a mustache or beard covering part of the lips. This paper proposes an automatic lip-reading system that recognizes and classifies short English sentences spoken by speakers using deep learning networks. Frames are extracted from the input video, and each frame is passed to the Viola-Jones
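A sketch of the frame-extraction and Viola-Jones detection step using OpenCV's Haar-cascade implementation. The video path, the frontal-face cascade, and the lower-half-of-face crop are assumptions for illustration, not the system's exact localization step.

```python
# Sketch: extract frames, detect faces with a Haar cascade, keep a rough lip region.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("utterance.mp4")       # hypothetical input clip
rois = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # keep the lower half of the detected face as a rough mouth/lip region
        rois.append(gray[y + h // 2: y + h, x: x + w])
cap.release()
print(len(rois), "lip regions collected")
```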
This study focuses on the effect of the ICA transform on the classification accuracy of satellite images using the maximum likelihood classifier. The study area is an agricultural region north of the capital Baghdad, Iraq, captured by the Landsat 8 satellite on 12 January 2021 using the bands of the OLI sensor. A field visit was made to a variety of classes that represent the land cover of the study area, and the geographical locations of these classes were recorded. Gaussian, Kurtosis, and LogCosh kernels were used to perform the ICA transform of the OLI Landsat 8 image. Separate training sets were made for each of the ICA and Landsat 8 images, which were used in the classification phase and to calcula
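A hedged sketch of applying ICA to a multispectral image stack with scikit-learn's FastICA. Its 'logcosh', 'exp', and 'cube' contrast functions roughly correspond to the LogCosh, Gaussian, and Kurtosis kernels mentioned above; the random cube stands in for the Landsat 8 OLI bands.

```python
# Sketch: pixel-wise ICA of a multiband image with three contrast functions.
import numpy as np
from sklearn.decomposition import FastICA

rows, cols, bands = 200, 200, 7                    # assumed OLI subset size
cube = np.random.rand(rows, cols, bands)           # placeholder for the image
pixels = cube.reshape(-1, bands)                   # one row per pixel

for fun in ("logcosh", "exp", "cube"):
    ica = FastICA(n_components=bands, fun=fun, random_state=0)
    components = ica.fit_transform(pixels).reshape(rows, cols, bands)
    print(fun, components.shape)                   # ICA bands for classification
```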
The digital world has witnessed rapid technological progress, which has led to an enormous increase in the use of digital devices such as cell phones, laptops, and digital cameras. Photographs and videos therefore function as primary sources of legal proof in courtrooms concerning incidents and crimes, and it has become important to prove the trustworthiness of digital multimedia. Inter-frame video forgery is one of the common types of video manipulation performed in the temporal domain. This work deals with inter-frame video forgery detection, which involves frame deletion, insertion, duplication, and shuffling. Deep Learning (DL) techniques have proven effective in the analysis and processing of visual media. Dealing with video data needs to handle th
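An illustrative sketch of one common pre-processing step for inter-frame forgery analysis: extracting frames and computing frame-to-frame difference maps that could be fed to a deep model. The video path, frame size, and the difference cue itself are assumptions, not the paper's stated method.

```python
# Sketch: frame extraction and absolute frame differences as a temporal cue.
import cv2
import numpy as np

cap = cv2.VideoCapture("evidence.mp4")       # hypothetical input video
prev, diffs = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (224, 224))
    if prev is not None:
        diffs.append(cv2.absdiff(gray, prev))   # temporal discontinuity cue
    prev = gray
cap.release()
diffs = np.stack(diffs) if diffs else np.empty((0, 224, 224))
print(diffs.shape)
```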
Statistical learning theory serves as the foundational bedrock of machine learning (ML), which in turn represents the backbone of artificial intelligence, ushering in innovative solutions for real-world challenges. Its origins lie at the intersection of statistics and computing, from which it evolved into a distinct scientific discipline. Machine learning is distinguished by its fundamental branches: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Within this tapestry, supervised learning takes center stage, divided into two fundamental forms: classification and regression. Regression is tailored for continuous outcomes, while classification specializes in categorical outcomes.
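A minimal example of the classification/regression distinction described above: the same supervised workflow applied once to discrete labels and once to continuous targets. The datasets are scikit-learn toys used only to illustrate the contrast.

```python
# Sketch: classification (discrete labels) vs regression (continuous targets).
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: predict a categorical label (iris species).
Xc, yc = load_iris(return_X_y=True)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xc_tr, yc_tr)
print("classification accuracy:", clf.score(Xc_te, yc_te))

# Regression: predict a continuous outcome (disease progression score).
Xr, yr = load_diabetes(return_X_y=True)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("regression R^2:", reg.score(Xr_te, yr_te))
```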
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do things or say words that they never did or said. Developing an algorithm for deepfake detection is therefore very important for discriminating real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of the input frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue
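A hedged sketch of the Gabor pre-processing idea: a bank of 16 Gabor kernels at different orientations filters a grayscale frame, producing texture responses that could be stacked as CNN input. Kernel size, wavelength, and sigma are illustrative choices, not the paper's exact filter parameters.

```python
# Sketch: 16-orientation Gabor filter bank applied to one frame.
import cv2
import numpy as np

frame = np.random.rand(224, 224).astype(np.float32)   # stand-in for a video frame

responses = []
for i in range(16):
    theta = i * np.pi / 16                             # 16 evenly spaced orientations
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5)
    responses.append(cv2.filter2D(frame, cv2.CV_32F, kernel))

gabor_stack = np.stack(responses, axis=-1)             # shape (224, 224, 16)
print(gabor_stack.shape)                               # input tensor for the binary CNN
```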
Clinical keratoconus (KCN) detection is a challenging and time-consuming task. In the diagnosis process, ophthalmologists must review demographic data and clinical ophthalmic examinations. The latter include slit-lamp examination, corneal topographic maps, and Pentacam indices (PI). We propose an Ensemble of Deep Transfer Learning (EDTL) method based on corneal topographic maps. We consider four pretrained networks, SqueezeNet (SqN), AlexNet (AN), ShuffleNet (SfN), and MobileNet-v2 (MN), and fine-tune them on a dataset of KCN and normal cases, each including four topographic maps. We also consider a PI classifier. Then, our EDTL method combines the output probabilities of each of the five classifiers to obtain a decision b
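A minimal sketch of the ensemble step: combining the class probabilities produced by several independently trained classifiers (represented here by placeholder probability arrays) into a final KCN/normal decision. The fusion rule shown (simple averaging) is an assumption for illustration, not necessarily the paper's combination scheme.

```python
# Sketch: average the output probabilities of five classifiers, then take argmax.
import numpy as np

n_cases, n_classes = 10, 2                  # e.g. KCN vs normal (assumed encoding)
rng = np.random.default_rng(0)

def fake_probs():
    """Stand-in for one classifier's softmax outputs over the test cases."""
    p = rng.random((n_cases, n_classes))
    return p / p.sum(axis=1, keepdims=True)

# Five probability sources: four fine-tuned CNNs plus the PI classifier.
classifier_outputs = [fake_probs() for _ in range(5)]

ensemble_probs = np.mean(classifier_outputs, axis=0)   # combine probabilities
decision = ensemble_probs.argmax(axis=1)                # 0 = normal, 1 = KCN (assumed)
print(decision)
```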