Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve outstanding performance, yet many applications have too little data to train DL frameworks. Labeled data are usually produced by manual annotation, which typically requires human annotators with extensive domain knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically, and in general more data yields a better DL model, although performance is also application dependent. This issue is the main barrier that leads many applications to dismiss DL, and having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey starts by listing the learning techniques. Next, the types of DL architectures are introduced. After that, state-of-the-art solutions to the lack of training data are listed, such as Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Networks (PINNs), and the Deep Synthetic Minority Oversampling Technique (DeepSMOTE). These solutions are followed by tips on data acquisition prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset. The survey ends with a list of applications that suffer from data scarcity, and several alternatives are proposed for generating more data in each application, including Electromagnetic Imaging (EMI), Civil Structural Health Monitoring, Medical Imaging, Meteorology, Wireless Communications, Fluid Mechanics, Microelectromechanical Systems (MEMS), and Cybersecurity. To the best of the authors' knowledge, this is the first review that offers a comprehensive overview of strategies to tackle data scarcity in DL.
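As a concrete illustration of one of these strategies, the minimal sketch below (not taken from the survey) shows transfer learning with a frozen pretrained backbone, assuming PyTorch/torchvision and a hypothetical 10-class target task, so that only a small classification head needs to be fitted on the scarce target data.

```python
# Minimal transfer-learning sketch (illustrative only, not from the survey).
# Assumes PyTorch + torchvision and a hypothetical 10-class target task.
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on a large source dataset (ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so the scarce target data
# only has to fit a small classification head.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the 10-class target task;
# only this layer's parameters are updated during training.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A standard training loop over the small labeled target dataset
# would then update only backbone.fc.
```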
Economics / University of Mosul
Abstract
Excessive buying has become widespread in our society, especially of cosmetics, while at the same time organizations increasingly resort to marketing deception to make quick profits. Accordingly, the research problem was framed in several questions, including:
Is there a significant effect of consumption culture on marketing deception?
Cadmium is a group IIB element classified as a heavy metal that affects human health and the environment. The present work concerns the biosorption of Cd(II) ions from aqueous solution using the outer layer of onions. Adsorption of these ions was found to be pH dependent, and the maximum removal of the ions by the outer layer of onions was 99.7%.
A pot experiment was carried out at the College of Agriculture, Baghdad University, during the autumn season of 2007. Thirteen treatments were formulated to evaluate the effectiveness of four phosphorus applications (0, 60, 60×2, and 120 kg P ha-1) and three zinc applications (0, 25×2 mg Zn L-1, and 50 mg Zn kg soil-1), along with inoculating bean seeds with a mixture of strains 889 and 1865 plus a non-inoculated treatment, on nodulation, yield, and protein content in seeds (N%). The results showed that inoculated plants exceeded non-inoculated ones in all the studied characteristics. Moreover, P and Zn applications at the rates of 60×2 kg ha-1 and 25×2 mg L-1, respectively, significantly increased nodulation, yield, and protein content in seeds.
Abstract
In this paper, some semi-parametric spatial models were estimated. These models are the semi-parametric spatial error model (SPSEM), which suffers from the problem of spatial error dependence, and the semi-parametric spatial autoregressive model (SPSAR). The maximum likelihood method was used to estimate the spatial error parameter (λ) in the SPSEM model and the spatial dependence parameter (ρ) in the SPSAR model, while non-parametric methods were used to estimate the smoothing function m(x) for these two models. These non-parametric methods include the local linear estimator (LLE), which requires finding the smoothing parameter.
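For reference, the two models can be written in their standard semi-parametric spatial forms (a sketch using the usual notation, which may differ in detail from the paper's), where y is the response vector, W a known spatial weight matrix, m(·) the unknown smooth function, and ε a noise term:

```latex
% Standard forms (assumed notation, not quoted from the paper)
% SPSAR: spatial lag of the response, with dependence parameter rho
y = \rho W y + m(x) + \varepsilon
% SPSEM: spatial dependence in the errors, with parameter lambda
y = m(x) + u, \qquad u = \lambda W u + \varepsilon
```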