Data scarcity is a major challenge when training deep learning (DL) models. DL requires large amounts of data to achieve exceptional performance; unfortunately, many applications have too little data to train DL frameworks. Labeled data is usually produced by manual annotation, which typically involves human annotators with extensive background knowledge, and this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically; in general, more data yields a better DL model, although performance is also application dependent. This issue is the main barrier that keeps many applications from adopting DL, and having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey starts by listing the learning techniques. Next, the types of DL architectures are introduced. After that, state-of-the-art solutions to the lack of training data are listed, such as Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Networks (PINNs), and the Deep Synthetic Minority Oversampling Technique (DeepSMOTE). These solutions are followed by tips on data acquisition prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset. The survey ends with a list of applications that suffer from data scarcity, and for each application several alternatives are proposed for generating more data, including Electromagnetic Imaging (EMI), Civil Structural Health Monitoring, Medical Imaging, Meteorology, Wireless Communications, Fluid Mechanics, Microelectromechanical Systems, and Cybersecurity. To the best of the authors' knowledge, this is the first review that offers a comprehensive overview of strategies to tackle data scarcity in DL.
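As a minimal illustration of one surveyed remedy, Transfer Learning, the sketch below (Python/PyTorch, a hedged example rather than any specific method from the survey) fine-tunes an ImageNet-pretrained ResNet-18 on a small target task by freezing the backbone and retraining only the classification head; num_classes and the training data are hypothetical placeholders.

    # Hedged sketch: transfer learning with a frozen pretrained backbone.
    import torch.nn as nn
    from torchvision import models

    num_classes = 5  # hypothetical small target task
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet weights
    for param in model.parameters():
        param.requires_grad = False                           # freeze pretrained features
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head
    # Only model.fc's parameters are now trained on the small labeled dataset.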
In this paper, an integrated quantum neural network (QNN), a class of feedforward neural networks (FFNNs), is implemented by combining emerging quantum computing (QC) with an artificial neural network (ANN) classifier. It is applied to a data classification task, with the Iris flower dataset used as the classification signals. For this purpose, independent component analysis (ICA) is used as a feature-extraction technique after normalization of these signals. The QNN architecture has inherently fuzzy hidden units that develop quantized representations of the sample information provided by the training dataset at various graded levels of certainty. Experimental results presented here show that…
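Because the quantum classifier itself is specific to the paper, the sketch below reproduces only the classical preprocessing pipeline described (normalization, then ICA feature extraction on the Iris data) and substitutes an ordinary scikit-learn MLP for the QNN; the component count, layer size, and split are assumptions.

    # Hedged sketch: normalize -> ICA features -> stand-in ANN classifier.
    from sklearn.datasets import load_iris
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FastICA
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X = StandardScaler().fit_transform(X)                          # normalization step
    X = FastICA(n_components=3, random_state=0).fit_transform(X)   # feature extraction
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    print(clf.fit(Xtr, ytr).score(Xte, yte))                       # held-out accuracy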
In this study, the mobile phone traces concern an ephemeral event that attracts large densities of people. This research aims to study the city pulse and the evolution of human mobility arising during a specific event (the Armada festival) by modelling and simulating the human mobility of the observed region from CDR (Call Detail Records) data. The pivotal questions of this research are: Why study human mobility? What are the human life patterns in the observed region inside Rouen city during the Armada festival? How can life patterns and individuals' mobility be extracted for this region from the mobile database (CDRs)? The radius-of-gyration parameter has been applied to elaborate human life patterns with regard to working and off days…
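For reference, the radius of gyration measures how far a user roams around the centre of mass of their visited locations, r_g = sqrt((1/N) * sum_i ||r_i - r_cm||^2). The sketch below computes it for planar coordinates in metres; real CDR work would first project latitude/longitude, a step assumed here rather than taken from the paper.

    # Hedged sketch: radius of gyration of one user's visited locations.
    import numpy as np

    def radius_of_gyration(points):
        """points: (N, 2) array of visited locations in metres."""
        points = np.asarray(points, dtype=float)
        center = points.mean(axis=0)                   # centre of mass of visits
        return float(np.sqrt(((points - center) ** 2).sum(axis=1).mean()))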
Different multilayer perceptron (MLP) ANN architectures have been trained by backpropagation (BP) and used to analyze Landsat TM images. Two training approaches have been applied: an ordinary approach (one hidden layer, M-H1-L, and two hidden layers, M-H1-H2-L) and a one-against-all strategy (one hidden layer, (M-H1-1)xL, and two hidden layers, (M-H1-H2-1)xL). Classification accuracy of up to 90% has been achieved using the one-against-all strategy with the two-hidden-layer architecture. The performance of the one-against-all approach is slightly better than that of the ordinary approach.
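The one-against-all scheme trains one binary M-H1-H2-1 network per class and assigns the class whose network responds most strongly. A hedged scikit-learn sketch follows; the synthetic data stands in for the Landsat TM bands (M = 7 inputs), and the layer sizes are assumptions rather than the paper's.

    # Hedged sketch: one-against-all training of two-hidden-layer MLPs.
    from sklearn.datasets import make_classification
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=7, n_informative=5,
                               n_classes=4, random_state=0)   # stand-in for TM bands
    ova = OneVsRestClassifier(MLPClassifier(hidden_layer_sizes=(16, 8),
                                            max_iter=1500, random_state=0))
    print(ova.fit(X, y).score(X, y))   # one binary network is trained per class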
Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal-processing operations. This paper introduces a scheme for hiding a signature image that can be as large as 25% of the host image data and hence can be used both for digital watermarking and for image/data hiding. The proposed algorithm uses an orthogonal discrete wavelet transform with two zero moments and improved time localization, called the discrete slantlet transform, for both the host and the signature image. A scaling factor in the frequency domain controls the quality of the watermarked images. Experimental results of the signature image…
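Since the slantlet transform is not available in common libraries, the hedged sketch below illustrates the same embed-in-transform-domain idea with PyWavelets and an ordinary Daubechies wavelet as a stand-in; alpha plays the role of the paper's frequency-domain scaling factor, and the dimensions assume the signature is smaller than the host.

    # Hedged sketch: embed a signature image into a host's wavelet subband.
    import numpy as np
    import pywt

    def embed(host, signature, alpha=0.1, wavelet="db2"):
        """host, signature: 2-D numpy arrays; signature smaller than host."""
        cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), wavelet)
        sA, _ = pywt.dwt2(signature.astype(float), wavelet)
        h, w = sA.shape
        cD[:h, :w] += alpha * sA          # scale signature into diagonal detail
        return pywt.idwt2((cA, (cH, cV, cD)), wavelet)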
Data compression offers an attractive approach to reducing communication costs by using the available bandwidth effectively, so it makes sense to pursue research on algorithms that can make the most effective use of the available network. It is also important to consider security, since the transmitted data is vulnerable to attacks. The basic aim of this work is to develop a module that combines compression and encryption on the same set of data, performing the two operations simultaneously. This is achieved by embedding encryption into compression algorithms, since cryptographic ciphers and entropy coders bear a certain resemblance in the sense of secrecy. First, in the secure compression module, the given text is p…
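The hedged sketch below shows only the compress-before-encrypt ordering with off-the-shelf pieces (zlib for entropy coding, Fernet for the cipher); the paper's actual module embeds encryption inside the compression algorithm itself, which standard libraries do not expose.

    # Hedged sketch: chain compression with encryption (not the paper's module).
    import zlib
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    cipher = Fernet(key)
    plaintext = b"text to transmit over the network"
    token = cipher.encrypt(zlib.compress(plaintext, level=9))  # compress, then encrypt
    assert zlib.decompress(cipher.decrypt(token)) == plaintext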
A mathematical model is constructed to study the combined effects of concentration and thermodiffusion on the nanoparticles of a Jeffrey fluid, with a magnetic field acting on the wave process in a three-dimensional rectangular porous-medium channel. The homotopy perturbation method (HPM) is used to solve the nonlinear, coupled partial differential equations. Numerical results were obtained for the temperature distribution, nanoparticle concentration, velocity, pressure rise, pressure gradient, friction force, and stream function. The graphs show that the fluid velocity rises with an increase in the mean volume flow rate and the magnetic parameter, while the velocity decreases with an increasing Darcy number and lateral walls. Also, t…
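For context, the HPM builds a homotopy from an easy linear problem to the full nonlinear one and expands the solution in powers of the embedding parameter; the block below is the standard general construction (He's method), not the paper's specific equations:

$$
H(v,p) = (1-p)\big[L(v) - L(u_0)\big] + p\big[L(v) + N(v) - f(r)\big] = 0, \qquad p \in [0,1],
$$
$$
v = v_0 + p\,v_1 + p^2 v_2 + \cdots, \qquad u = \lim_{p \to 1} v = \sum_{k \ge 0} v_k,
$$

where L and N are the linear and nonlinear parts of the governing operator, f(r) is the source term, and u_0 is an initial guess satisfying the boundary conditions.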
Abstract: Microfluidic devices present unique advantages for the development of efficient drug assays and screening, and microfluidic platforms can offer a more rapid and cost-effective alternative. Fluids are confined in devices whose significant dimensions are on the micrometer scale; due to this extreme confinement, the volumes used for drug assays are tiny (microliters to femtoliters).
In this research, a microfluidic chip consisting of micro-channels carved on an acrylic (polymethyl methacrylate, PMMA) substrate was designed and fabricated using a carbon dioxide (CO2) laser machine. The CO2 laser parameters influence the width, depth, and roughness of the chip's channels. In order to have regular…
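To make the confinement concrete, a hedged back-of-the-envelope check follows; the channel dimensions are hypothetical, not taken from the paper.

    # Hedged sketch: volume of a hypothetical laser-carved micro-channel.
    width_m, depth_m, length_m = 200e-6, 100e-6, 20e-3   # 200 um x 100 um x 20 mm
    volume_litres = width_m * depth_m * length_m * 1e3   # 1 m^3 = 1000 L
    print(f"{volume_litres * 1e6:.2f} microliters")      # ~0.40 uL per channel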
Background: The synthesis and characterization of novel liquid crystalline compounds have garnered significant attention due to their potential applications in the biomedical sciences, including drug delivery systems, biosensing, and diagnostic tools. This study focuses on synthesizing and characterizing new thiazolothiadiazole-based liquid crystals and evaluating their mesophase properties. Methods: A series of novel compounds containing 5H-thiazolo[4,3-b][1,3,4]thiadiazole units were synthesized via multi-step chemical reactions. The synthesis involved the reaction of chloroethyl acetate with 4-hydroxybenzaldehyde to yield an aldehyde intermediate, followed by subsequent transformations using hydrazine hydrate, ethyl acetoacetate, and 1,2…