Medicine is one of the fields in which advances in computer science are making significant progress. Some diseases require an immediate diagnosis in order to improve patient outcomes. The use of computers in medicine improves precision and accelerates data processing and diagnosis. In this research, a hybrid machine learning approach, combining several deep learning techniques with a meta-heuristic algorithm, was used to classify medical images. Two medical datasets were introduced: one covering magnetic resonance imaging (MRI) of brain tumors and the other chest X-rays (CXRs) of COVID-19 patients. These datasets were fed into a combined network in which deep learning techniques based on a convolutional neural network (CNN) or an autoencoder extract features, and the particle swarm optimization (PSO) meta-heuristic algorithm then selects the optimal subset of those features. This combination seeks to reduce the dimensionality of the datasets while maintaining the original performance of the data. The method is innovative and yields highly accurate classification results across various medical datasets. Several classifiers were employed to predict the diseases. For the COVID-19 dataset, the highest accuracy was 99.76%, achieved with the CNN-PSO-SVM combination. In comparison, the brain tumor dataset reached its highest accuracy of 99.51% with the autoencoder-PSO-KNN combination.
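To give a concrete picture of the pipeline, the following is a minimal sketch of the CNN feature extraction, PSO feature selection, and SVM classification stages. It is an illustration under stated assumptions, not the paper's exact implementation: the pre-trained MobileNetV2 backbone, swarm size, iteration budget, and PSO coefficients are all assumed values.

```python
# Minimal sketch: CNN features -> binary PSO feature selection -> SVM classifier.
# Assumes images are already loaded as `X_img` (N, 224, 224, 3) with labels `y`.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(X_img):
    # Pre-trained CNN used only as a fixed feature extractor (no fine-tuning here).
    backbone = MobileNetV2(include_top=False, pooling="avg", input_shape=X_img.shape[1:])
    return backbone.predict(X_img, verbose=0)

def fitness(mask, X, y):
    # Cross-validated SVM accuracy on the selected feature subset.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

def binary_pso_select(X, y, n_particles=20, n_iter=30, w=0.7, c1=1.5, c2=1.5):
    # Binary PSO: each particle is a 0/1 mask over the extracted features.
    n_feat = X.shape[1]
    pos = (np.random.rand(n_particles, n_feat) > 0.5).astype(float)
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Sigmoid transfer function turns velocities into bit-flip probabilities.
        pos = (np.random.rand(*pos.shape) < 1 / (1 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)

# X_feat = extract_features(X_img)
# mask = binary_pso_select(X_feat, y)
# clf = SVC().fit(X_feat[:, mask], y)
```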
This article presents a general overview of deep learning-based audio-visual source separation (AVSS) systems. AVSS has achieved exceptional results in a number of areas, including noise reduction, improved speech recognition, and higher audio quality. The paper reviews various current experiments on AVSS and discusses the advantages and disadvantages of each deep learning model. The TCD TIMIT dataset (which contains high-quality audio and video recordings created especially for speech recognition tasks) and the Voxceleb dataset (a sizable collection of brief audio-visual clips of human speech) are just a couple of the useful datasets summarized in the paper that can be used to test A
Widespread COVID-19 infections have sparked global attempts to contain and eradicate the virus. Most researchers utilize machine learning (ML) algorithms to predict this virus. However, researchers face challenges, such as selecting the appropriate parameters and the best algorithm to achieve an accurate prediction, so an expert data scientist is needed. To overcome the need for data scientists, and because some researchers have limited expertise in data analysis, this study develops a COVID-19 detection system using automated ML (AutoML) tools to detect infected patients. A blood test dataset with 111 variables and 5644 cases was used. The model is built with three experiments using Python's Auto-
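The name of the AutoML tool is truncated in the excerpt above, so the sketch below only illustrates the general AutoML workflow on a tabular blood-test dataset. auto-sklearn is assumed purely for illustration, and the file name, target column, and time budgets are hypothetical.

```python
# Minimal sketch of an AutoML workflow on a tabular blood-test dataset.
# "blood_tests.csv" and the "covid_result" target column are hypothetical names.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import autosklearn.classification

df = pd.read_csv("blood_tests.csv")            # hypothetical file: 5644 cases, 111 variables
X, y = df.drop(columns=["covid_result"]), df["covid_result"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=600,               # total search budget in seconds
    per_run_time_limit=60,                     # per-candidate-model time limit
)
automl.fit(X_train, y_train)                   # AutoML searches models and hyperparameters
print("Test accuracy:", accuracy_score(y_test, automl.predict(X_test)))
```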
The hydrological process has a dynamic nature characterised by randomness and complex phenomena. The application of machine learning (ML) models in forecasting river flow has grown rapidly, owing to their capacity to simulate the complex phenomena associated with hydrological and environmental processes. Four different ML models were developed to forecast the flow of a river located in a semiarid region of Iraq. The influence of data division on the ML modelling process was investigated. Three data division scenarios were inspected: 70%–30%, 80%–20%, and 90%–10%. Several statistical indicators were computed to verify the performance of the models. The results revealed the potential of the hybridized s
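As an illustration of how the three data division scenarios can be compared, the sketch below splits a dataset at 70%–30%, 80%–20%, and 90%–10% and scores each split. The data file, target column, random-forest model, and metrics are illustrative assumptions; the excerpt does not name the paper's four ML models.

```python
# Minimal sketch of comparing train/test data-division scenarios for river-flow
# forecasting. "river_flow.csv" and the "flow" column are hypothetical, and the
# random forest stands in for whichever ML model is being evaluated.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

df = pd.read_csv("river_flow.csv")
X, y = df.drop(columns=["flow"]), df["flow"]

for test_size in (0.30, 0.20, 0.10):           # 70-30, 80-20 and 90-10 divisions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, random_state=1)
    model = RandomForestRegressor(random_state=1).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{round((1 - test_size) * 100)}%-{round(test_size * 100)}%: "
          f"RMSE={rmse:.3f}, R2={r2_score(y_te, pred):.3f}")
```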
This research aims to predict new COVID-19 cases in Bandung, Indonesia. The system implemented two types of deep learning methods for this purpose: the recurrent neural network (RNN) and long short-term memory (LSTM) algorithms. The data used in this study were the numbers of confirmed COVID-19 cases in Bandung from March 2020 to December 2020. Pre-processing of the data, namely data splitting and scaling, was carried out to obtain optimal results. During model training, hyperparameter tuning was carried out on the sequence length and the number of layers. The results showed that the RNN gave better performance. The test used the RMSE, MAE, and R2 evaluation methods, with the best numbers being 0.66975075, 0.470
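A minimal sketch of the kind of sequence-to-one setup described above is shown below, with the sequence length and number of layers exposed as the tuned hyperparameters. The window length, layer count, unit sizes, and the assumption that the case counts are already scaled to [0, 1] are all illustrative; this is not the study's exact configuration.

```python
# Minimal sketch of a sequence-to-one RNN/LSTM forecaster. `cases` is a
# hypothetical 1-D array of daily confirmed-case counts scaled to [0, 1].
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, LSTM, Dense

def make_windows(series, seq_len):
    # Turn a 1-D series into (samples, seq_len, 1) inputs and next-day targets.
    X = np.array([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    return X[..., None], series[seq_len:]

def build_model(cell, seq_len, n_layers=2, units=32):
    # `seq_len` and `n_layers` are the hyperparameters tuned in the study.
    model = Sequential()
    model.add(Input(shape=(seq_len, 1)))
    for i in range(n_layers):
        model.add(cell(units, return_sequences=i < n_layers - 1))
    model.add(Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

# X, y = make_windows(cases, seq_len=7)
# rnn = build_model(SimpleRNN, seq_len=7);  rnn.fit(X, y, epochs=100, verbose=0)
# lstm = build_model(LSTM, seq_len=7);      lstm.fit(X, y, epochs=100, verbose=0)
```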
Estimating an individual's age from a photograph of their face is critical in many applications, including intelligence and defense, border security, human-machine interaction, and soft biometric recognition. Recent progress in this discipline has focused on deep learning. These solutions require the creation and training of deep neural networks for the sole purpose of resolving this issue. In addition, pre-trained deep neural networks are utilized for facial recognition and fine-tuned for accurate outcomes. The purpose of this study was to offer a method for estimating human ages from the frontal view of the face in a manner that is as accurate as possible and takes
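As a rough illustration of the fine-tuning strategy mentioned above, the sketch below attaches a small regression head to a pre-trained backbone and trains it on face images labelled with ages. The ResNet50 backbone, input size, head layout, and two-stage training schedule are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch: fine-tune a pre-trained network for age estimation from faces.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, Model

backbone = ResNet50(include_top=False, weights="imagenet",
                    input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False                     # stage 1: train only the new head

x = layers.Dense(128, activation="relu")(backbone.output)
age = layers.Dense(1, activation="linear")(x)  # age predicted as a single number
model = Model(backbone.input, age)
model.compile(optimizer="adam", loss="mae")

# model.fit(face_images, ages, epochs=10)      # hypothetical (N, 224, 224, 3) faces
# backbone.trainable = True                    # stage 2: fine-tune the backbone
# model.compile(optimizer="adam", loss="mae"); model.fit(face_images, ages, epochs=5)
```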
The most common artifacts in ultrasound (US) imaging are reverberation and comet-tail artifacts. These are multiple reflections echoing from the interface that causes them, and they result in ghost echoes in the ultrasound image. A method is presented to reduce these unwanted artifacts using Otsu thresholding to find the region of interest (the reflection echoes), whose output is then passed to a median filter to remove noise. The developed method significantly reduced the magnitude of the reverberation and comet-tail artifacts. The support vector machine (SVM) algorithm is well suited to separating classes with a hyperplane. Accordingly, we use image enhancement, feature extraction, region-of-interest detection, Otsu thresholding, and finally classification of the image dataset into normal or abnormal images.
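The thresholding and filtering step can be pictured with the short OpenCV sketch below; the file names and the 5x5 median kernel are illustrative assumptions, and the SVM classification stage is omitted.

```python
# Minimal sketch of the Otsu-threshold + median-filter step on an ultrasound frame.
import cv2

img = cv2.imread("ultrasound.png", cv2.IMREAD_GRAYSCALE)   # hypothetical US frame

# Otsu's method picks the threshold automatically, separating the bright
# reflection echoes (the region of interest) from the background.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# A median filter on the masked image suppresses speckle-like noise while
# preserving edges better than a mean filter.
roi = cv2.bitwise_and(img, img, mask=mask)
denoised = cv2.medianBlur(roi, 5)

cv2.imwrite("ultrasound_clean.png", denoised)
```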
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to perform actions or say words that they never did or said. Developing an algorithm for deepfake detection is therefore very important to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of the input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue
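A minimal sketch of building and applying a 16-orientation Gabor filter bank is shown below; the kernel size and the remaining Gabor parameters are illustrative assumptions, and the binary CNN itself is not shown.

```python
# Minimal sketch: a bank of 16 Gabor filters at different orientations applied
# to a video frame, producing texture maps to feed a CNN.
import cv2
import numpy as np

def gabor_bank(n_orientations=16, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    # One kernel per orientation, evenly spaced over 180 degrees.
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
            for theta in np.linspace(0, np.pi, n_orientations, endpoint=False)]

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)       # hypothetical video frame
responses = np.stack([cv2.filter2D(frame, cv2.CV_32F, k) for k in gabor_bank()],
                     axis=-1)
# `responses` (H, W, 16) would then be fed to the binary CNN classifier
# in place of the raw colour channels.
```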
Cryptography algorithms play a critical role in protecting information technology against the various attacks witnessed in the digital era. Many studies and algorithms have been developed to address the security issues of information systems. Traditional cryptography algorithms are characterized by computationally complex operations. Lightweight algorithms, on the other hand, are the way to solve most of the security issues that arise when applying traditional cryptography in constrained devices, and symmetric ciphers are widely applied to ensure the security of data communication in such constrained devices. In this study, we propose a hybrid algorithm based on two cryptographic algorithms, PRESENT and Salsa20. Also, a 2D logistic map of a chaotic system is a
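The excerpt breaks off before explaining how the 2D logistic map is used, so the sketch below only illustrates one commonly used coupled form of the map generating pseudo-random key material; the coupling constant r, the seed values, the clamping guard, and the byte-extraction step are all illustrative assumptions rather than the paper's construction.

```python
# Minimal sketch: pseudo-random key material from a coupled 2D logistic map.
def logistic_2d_bytes(x, y, r=1.19, n_bytes=32, burn_in=200):
    out = bytearray()
    clamp = lambda v: min(max(v, 1e-12), 1 - 1e-12)   # guard against drifting out of (0, 1)
    for i in range(burn_in + n_bytes):
        x = clamp(r * (3 * y + 1) * x * (1 - x))      # coupled 2D logistic map step
        y = clamp(r * (3 * x + 1) * y * (1 - y))
        if i >= burn_in:                               # discard the transient iterations
            out.append(int(x * 256) % 256)             # quantize the state to one byte
    return bytes(out)

# Example: derive 32 bytes of key material from a secret initial state.
key_material = logistic_2d_bytes(x=0.3456, y=0.7891, n_bytes=32)
```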
Many consumers of electric power exceed the consumption limits permitted by the electrical power distribution stations, so we proposed a validation approach that works intelligently by applying machine learning (ML) technology to teach electricity consumers how to consume properly without wasting energy. The validation approach is one of a large set of intelligent processes related to energy consumption, called efficient energy consumption management (EECM) approaches, and it is connected with Internet of Things (IoT) technology and linked to the Google Firebase Cloud, where a utility center is used to check whether the consumption of the efficient energy is s
Convolutional neural networks (CNNs) are among the most widely utilized neural networks in various applications, including deep learning. In recent years, the continuing extension of CNNs into increasingly complicated domains has made their training process more difficult. Thus, researchers have adopted optimized hybrid algorithms to address this problem. In this work, a novel chaotic black hole algorithm-based approach was created for training a CNN to optimize its performance by avoiding entrapment in local minima. The logistic chaotic map was used to initialize the population instead of the uniform distribution. The proposed training algorithm was developed based on a specific benchmark problem for optical character recog
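The chaotic initialization step can be illustrated with the short sketch below, which fills a population matrix by iterating the logistic map instead of sampling a uniform distribution; the control parameter mu = 4.0, the seed, and the weight bounds are illustrative assumptions, and the black hole update rules are not shown.

```python
# Minimal sketch: logistic-chaotic-map population initialization for a metaheuristic.
import numpy as np

def chaotic_population(n_agents, dim, low=-1.0, high=1.0, mu=4.0, seed=0.7):
    x = seed
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        for j in range(dim):
            x = mu * x * (1 - x)               # logistic map iteration in (0, 1)
            pop[i, j] = low + x * (high - low) # scale to the search-space bounds
    return pop

# Each row is one candidate solution (e.g. flattened CNN weights) for the optimizer.
population = chaotic_population(n_agents=30, dim=100)
```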