Deep learning has recently achieved considerable success, especially in the field of computer vision. This research describes a classification method applied to a dataset containing multiple image types: Synthetic Aperture Radar (SAR) images and non-SAR images. Transfer learning was used, followed by fine-tuning, with architectures pre-trained on the well-known ImageNet database. The VGG16 model served as a feature extractor, and a new classifier was trained on the extracted features. The dataset consists of five classes: one SAR image class (Houses) and four non-SAR image classes (Cats, Dogs, Horses, and Humans). A Convolutional Neural Network (CNN) was chosen for the training process because it produces high accuracy. The final accuracy reached 91.18% across the five classes. The results are discussed in terms of the per-class classification accuracy: the Cats class reached 99.6%, the Houses class reached 100%, and the remaining classes averaged 90% or above.
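The transfer-learning pipeline described above can be sketched as follows. To keep the sketch self-contained, a frozen random projection stands in for VGG16's convolutional base (which would require a deep learning framework), and a new softmax classifier is trained on the extracted features only; all data, shapes, and class labels are illustrative, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" feature extractor: a frozen random projection.
# In the paper's setting this role is played by VGG16's convolutional
# base; the projection here is only a placeholder for the pipeline shape.
W_frozen = rng.normal(size=(64 * 64, 128))

def extract_features(images):
    # images: (n, 64, 64) grayscale; flatten, project, apply ReLU.
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W_frozen, 0.0)

# Toy five-class data (Cats, Dogs, Horses, Humans, Houses as ids 0-4).
X = rng.normal(size=(200, 64, 64))
y = rng.integers(0, 5, size=200)

# New classifier trained on extracted features only (extractor frozen):
# one-layer softmax regression fit by gradient descent.
F = extract_features(X)
F = (F - F.mean(0)) / (F.std(0) + 1e-8)   # standardize features
W = np.zeros((F.shape[1], 5))
Y = np.eye(5)[y]                           # one-hot targets
for _ in range(200):
    logits = F @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    W -= 0.1 * F.T @ (p - Y) / len(F)      # cross-entropy gradient step

train_acc = (np.argmax(F @ W, 1) == y).mean()
```

Only the classifier's weights `W` are updated; the extractor stays fixed, which is the essence of the feature-extraction variant of transfer learning.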
Abstract
Objectives: To find out the association between enhancing learning needs and the demographic characteristics of gender, education level, and age.
Methods: This study was conducted on a purposive sample, selected to obtain representative and accurate data, consisting of 90 patients recovering from myocardial infarction at the Missan Center for Cardiac Diseases and Surgery; 10 patients were excluded for the pilot study. Data were analyzed using a descriptive statistical approach of frequency and percentage, together with analysis of variance (ANOVA).
Results: The study findings show that there was a significant …
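The ANOVA step in the methods above can be illustrated with SciPy's `f_oneway`. The group scores below are synthetic stand-ins for the study's learning-needs measurements, grouped by a hypothetical three-level education factor; they only demonstrate the computation.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical learning-needs scores grouped by education level
# (illustrative data only, not the study's real measurements).
primary   = rng.normal(60, 8, 30)
secondary = rng.normal(63, 8, 30)
college   = rng.normal(70, 8, 30)

# One-way ANOVA: does mean learning need differ across the groups?
f_stat, p_value = f_oneway(primary, secondary, college)
```

A p-value below the chosen significance level (commonly 0.05) would indicate that at least one group mean differs from the others.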
Monaural source separation is a challenging problem because only a single channel is available, yet there is an unlimited range of possible solutions. In this paper, a monaural source separation model based on a hybrid deep learning model, consisting of a convolutional neural network (CNN), a dense neural network (DNN), and a recurrent neural network (RNN), is presented. A trial-and-error method is used to optimize the number of layers in the proposed model. Moreover, the effects of the learning rate, the optimization algorithm, and the number of epochs on separation performance are explored. The model was evaluated on the MIR-1K dataset for singing voice separation. Moreover, the proposed approach achieved …
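The output stage of such a separation model can be sketched with time-frequency masking, a standard formulation for singing voice separation: the trained CNN/DNN/RNN would predict a mask from the mixture spectrogram, so here an oracle ratio mask built from known synthetic sources stands in for the network's prediction. Both "sources" are simple tones, purely for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
voice  = np.sin(2 * np.pi * 440 * t)   # stand-in "singing voice"
accomp = np.sin(2 * np.pi * 110 * t)   # stand-in "accompaniment"
mix = voice + accomp                   # the single available channel

# STFT of the monaural mixture.
_, _, Zmix = stft(mix, fs=fs, nperseg=512)

# A trained model would predict a time-frequency mask from |Zmix|;
# an oracle ratio mask from the known sources stands in for it here.
_, _, Zv = stft(voice, fs=fs, nperseg=512)
_, _, Za = stft(accomp, fs=fs, nperseg=512)
mask = np.abs(Zv) / (np.abs(Zv) + np.abs(Za) + 1e-8)

# Apply the mask and invert back to the time domain.
_, voice_est = istft(mask * Zmix, fs=fs, nperseg=512)
voice_est = voice_est[:len(voice)]

# Sanity check: the estimate should track the target voice closely.
corr = np.corrcoef(voice, voice_est)[0, 1]
```

In the actual system, the hybrid network learns to produce `mask` from the mixture alone; the oracle mask only shows the surrounding signal-processing pipeline.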
The research aims to evaluate the illustration images in the content of the second intermediate stage computer textbook for the academic year (2019-2020) and to determine the availability of good-image standards in them, as seen by computer teachers. The sample was randomly selected: 30 teachers who actually teach the subject in schools within the geographical area of Baghdad province (Karkh III). To achieve this goal, ten standards were identified: scientific accuracy, suitability for the students' level, image clarity, image freshness, quality of coloring, suitability of the image's location to the subject, matching their content at a glance, appropriateness of the subject matter in terms of area, matching of its title …
Abstract
The study aims to examine the relationship between cognitive absorption and e-learning readiness in the preparatory stage. The study sample consisted of 190 randomly chosen students. The researcher developed the cognitive absorption and e-learning readiness scales, and a correlational descriptive approach was adopted. The research revealed a statistically significant positive relationship between cognitive absorption and e-learning readiness.
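The correlational analysis above can be sketched as a Pearson correlation with its significance t-statistic. The scores below are synthetic, with a positive link built in, and only illustrate the computation, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 190  # matches the study's sample size; scores are synthetic

# Hypothetical scale scores for the 190 students (illustrative only);
# readiness is generated with a built-in positive dependence.
absorption = rng.normal(50, 10, n)
readiness = 0.5 * absorption + rng.normal(0, 8, n)

# Pearson correlation coefficient between the two scales.
r = np.corrcoef(absorption, readiness)[0, 1]

# t-statistic for testing whether r differs from zero.
t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))
```

A t-statistic well beyond roughly 1.97 (the two-sided 5% critical value at 188 degrees of freedom) would indicate a statistically significant correlation.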
Artificial intelligence techniques reach us in several forms; some are useful, but they can also be exploited in ways that harm us. One of these forms is deepfakes. Deepfakes are used to modify video (or image) content completely so that it displays something that was not originally in it. The danger of deepfake technology is its impact on society through the loss of confidence in everything that is published. Therefore, in this paper we focus on deepfake detection technology from the view of two concepts: deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of (i) the environment of deepfake creation and detection, and (ii) how deep learning and forensic tools have contributed to the detection …
Classifying banana slices by sweetness level takes a lot of time using traditional methods. When assessing fruit quality, focus is placed on sweetness as well as color, since both affect taste. The reason for sorting banana slices by sweetness is to estimate the ripeness of bananas from the sweetness and color values of the slices; such a classification system helps establish the degree of ripeness needed for processing and consumption. The purpose of this article is to compare the efficiency of SVM-linear, SVM-polynomial, and LDA classifiers in classifying the sweetness of banana slices by their LRV level. The experiment showed that the highest accuracy of 96.66% was achieved by the …
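The three classifiers being compared can be sketched with scikit-learn. The two-feature banana-slice data below (a sweetness value and a mean color value per slice, three ripeness classes) is synthetic and only illustrates the comparison protocol, not the article's dataset or its reported accuracy.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic (sweetness, color) features for three ripeness classes,
# 40 slices each; the class means are spread apart for illustration.
X = np.vstack([rng.normal(m, 1.0, (40, 2)) for m in (2.0, 5.0, 8.0)])
y = np.repeat([0, 1, 2], 40)

# The three classifiers compared in the article.
models = {
    "SVM-linear":     SVC(kernel="linear"),
    "SVM-polynomial": SVC(kernel="poly", degree=3),
    "LDA":            LinearDiscriminantAnalysis(),
}

# 5-fold cross-validated accuracy for each model.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

Cross-validation gives each model the same train/test splits, so the resulting accuracies are directly comparable.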
Diagnosing heart disease has become a very important topic for researchers specializing in artificial intelligence, because intelligence is now involved in addressing most diseases, especially after the Corona pandemic, which pushed the world toward intelligent methods. The basic idea of this research is therefore to shed light on the diagnosis of heart diseases through deep learning with a pre-trained model (EfficientNet-B3), using the electrical signals of the electrocardiogram. The signal is resampled before being fed to the neural network, with only trimming operations applied, because it is an electrical signal whose parameters cannot be changed. The China Physiological Signal Challenge 2018 (CPSC2018) dataset was adopted …
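The resampling/trimming preprocessing can be sketched with SciPy. The 500 Hz input rate and 250 Hz target rate are assumptions chosen for illustration, and the trace itself is random noise standing in for a single ECG lead; only the fixed-length resampling step is demonstrated.

```python
import numpy as np
from scipy.signal import resample

rng = np.random.default_rng(4)

fs_in, fs_out = 500, 250        # assumed input/target sampling rates
seconds = 10

# Stand-in single-lead ECG trace (random noise, illustrative only).
ecg = rng.normal(0.0, 0.1, fs_in * seconds)

# Resample to a fixed length so every record matches the network's
# input size; no other transformation of the signal is applied.
target_len = fs_out * seconds
ecg_fixed = resample(ecg, target_len)
```

Fixing the length this way lets heterogeneous recordings share one network input shape without altering the waveform beyond band-limited resampling.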
Variable selection is an essential and necessary task in statistical modeling. Several studies have tried to develop and standardize the variable-selection process, but it is difficult to do so. The first question researchers need to ask themselves is: what are the most significant variables that should be used to describe a given dataset's response? In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined, and the posterior distributions of all the parameters are derived. The new variable selection method is then tested on four simulated datasets. The new approach is compared with some existing techniques: Ordinary Least Squares (OLS) and the Least Absolute Shrinkage and Selection Operator (Lasso) …
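The baselines the method is compared against can be sketched with scikit-learn: OLS keeps every variable, while the Lasso penalty zeroes out irrelevant coefficients, which is what makes it a variable-selection baseline. The data below is simulated with three truly active variables; the regularization strength is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(5)
n, p = 200, 10

# Simulated design matrix; only the first 3 variables affect y.
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(0.0, 1.0, n)

# OLS baseline: estimates a coefficient for every variable.
ols = LinearRegression().fit(X, y)

# Lasso baseline: the L1 penalty shrinks irrelevant coefficients
# to exactly zero, performing selection as a by-product of fitting.
lasso = Lasso(alpha=0.1).fit(X, y)

# Indices of the variables the Lasso keeps.
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
```

A Gibbs-sampler approach would instead place a prior over inclusion indicators and report posterior inclusion probabilities; the frequentist baselines above are what such posterior selections are typically benchmarked against.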