Data scarcity is a major challenge in training deep learning (DL) models, which typically require large amounts of data to achieve strong performance. Unfortunately, many applications have datasets that are too small or otherwise inadequate for training DL frameworks. Labeled data usually has to be produced by manual annotation, which requires human annotators with extensive domain knowledge and is costly, time-consuming, and error-prone. Because DL frameworks learn representations automatically from large volumes of labeled data, more data generally yields a better model, although the amount required is application dependent. This issue is the main barrier preventing many applications from adopting DL: having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey first lists the learning techniques and then introduces the main types of DL architectures. Next, state-of-the-art solutions to the lack of training data are reviewed, including Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Networks (PINNs), and the Deep Synthetic Minority Oversampling Technique (DeepSMOTE). These solutions are followed by practical tips on data acquisition prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset. The survey ends with a list of applications that suffer from data scarcity, and for each it proposes alternatives for generating more data, including Electromagnetic Imaging (EMI), Civil Structural Health Monitoring, Medical Imaging, Meteorology, Wireless Communications, Fluid Mechanics, Microelectromechanical Systems, and Cybersecurity. To the best of the authors' knowledge, this is the first review that offers a comprehensive overview of strategies to tackle data scarcity in DL.
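As a concrete illustration of one of the strategies listed above, the sketch below shows transfer learning: a convolutional backbone pretrained on ImageNet is frozen and only a new classification head is trained on a small labeled dataset. It is a minimal sketch, assuming PyTorch and torchvision are installed and that the scarce dataset sits in a hypothetical `data/train` folder laid out for `ImageFolder`; it is not taken from the survey itself.

```python
# Minimal transfer-learning sketch (assumes recent PyTorch + torchvision and a small
# labeled image dataset in the hypothetical folder data/train).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Reuse a backbone pretrained on ImageNet and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace only the classification head; this is the part trained on the scarce data.
num_classes = 3  # hypothetical number of classes in the small dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
])
train_set = datasets.ImageFolder("data/train", transform=transform)  # hypothetical path
loader = DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few epochs are often enough when only the head is trained
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```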
The current paper proposes a new estimator for the parameters of the linear regression model under Big Data conditions. The diversity of Big Data variables raises many challenges for researchers seeking new methods to estimate the parameters of the linear regression model. The data were collected by the Central Statistical Organization of Iraq, and child labor in Iraq was chosen as the case study. Child labor is a critical phenomenon from which both society and education suffer, and it affects the future of the next generation. Two methods were selected to estimate the parameters …
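The abstract is cut off before the two estimation methods are named, so as a neutral illustration the snippet below computes the ordinary least squares estimate, the usual baseline in any such comparison, from the normal equations on simulated data. It is a generic sketch, not the estimator proposed in the paper.

```python
# Ordinary least squares via the normal equations on simulated data
# (a generic baseline, not the estimator proposed in the paper).
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept + 3 regressors
beta_true = np.array([2.0, 1.5, -0.5, 0.8])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# beta_hat = (X'X)^{-1} X'y, solved without explicitly forming the inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print("estimated coefficients:", beta_hat)
```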
Research on the automated extraction of essential data from an electrocardiography (ECG) recording has been a significant topic for a long time. The main focus of the digital processing is to locate the fiducial points that mark the beginning and end of the P, QRS, and T waves based on their waveform properties. The unavoidable noise introduced during ECG data collection and the inherent physiological differences among individuals make it challenging to accurately identify these reference points, resulting in suboptimal performance. This is done through several primary stages that rely on preliminary processing of the ECG electrical signal through a set of steps (preparing the raw data and converting them into files that …
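To make the idea of fiducial-point picking concrete, the sketch below detects R-peak-like maxima on a synthetic ECG-style trace with `scipy.signal.find_peaks`. The sampling rate, peak height threshold, and refractory distance are assumptions chosen for the toy signal; this is not the processing pipeline described in the abstract.

```python
# Toy R-peak detection on a synthetic ECG-like trace using scipy.signal.find_peaks.
# Illustrative only; not the multi-stage pipeline described above.
import numpy as np
from scipy.signal import find_peaks

fs = 360  # assumed sampling rate in Hz (a common ECG rate)
t = np.arange(0, 10, 1 / fs)

# Build a synthetic trace: sharp "R peaks" every ~0.8 s plus baseline noise.
signal = np.zeros_like(t)
for beat in np.arange(0.5, 10, 0.8):
    signal += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
signal += 0.05 * np.random.default_rng(1).normal(size=t.size)

# Require a minimum height and a refractory distance between detected peaks.
peaks, _ = find_peaks(signal, height=0.5, distance=int(0.4 * fs))
print("detected beats:", len(peaks))
print("heart rate estimate (bpm):", 60 * len(peaks) / t[-1])
```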
The current study aims at identifying the impact of using a learning acceleration model on the mathematics achievement of third intermediate grade students. To achieve this, the researchers chose Al-Kholood Secondary School for Girls, affiliated with the General Directorate of Babylon Education / Hashemite Education Department, for the academic year (2021/2021). The sample consisted of (70) female students from the third intermediate grade, with (35) students in each of the two research groups. The two researchers prepared an achievement test consisting of (25) objective multiple-choice items, and its psychometric properties were confirmed. After the completion of the experiment, the achievement test was …
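In a two-group design like this, the between-groups comparison of achievement scores is typically an independent-samples t-test. The sketch below shows that computation on hypothetical score vectors; the actual scores and the test used in the study are not reported in the abstract.

```python
# Independent-samples t-test on hypothetical achievement scores (0-25 items correct).
# Illustrative only; the real scores and the study's chosen statistic are not given above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
experimental = rng.integers(12, 26, size=35)  # 35 students taught with the acceleration model
control = rng.integers(8, 22, size=35)        # 35 students taught conventionally

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```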
The current research aims at identifying which dimensions of organizational learning abilities are more influential on the university's knowledge capital, and the extent to which they can be applied effectively at Wasit University. The research treats organizational learning abilities as an explanatory variable with four dimensions (experimentation and openness, knowledge sharing and transfer, dialogue, and interaction with the external environment), and knowledge capital as the response variable, also with four dimensions (human capital, structural capital, client capital, and operational capital). The research problem is framed by the following question …
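One common way to ask which learning dimension is "more influential" is a multiple regression of a knowledge-capital score on the four dimension scores. The sketch below illustrates that on simulated survey averages; the variable names mirror the dimensions listed above, but the data and the analysis are not those of the study.

```python
# Multiple regression of a knowledge-capital score on four organizational-learning
# dimensions, using simulated survey averages (illustrative, not the study's analysis).
import numpy as np

rng = np.random.default_rng(7)
n = 200  # hypothetical number of respondents
dims = rng.uniform(1, 5, size=(n, 4))  # experimentation, sharing, dialogue, external interaction
true_weights = np.array([0.4, 0.3, 0.1, 0.2])
knowledge_capital = 1.0 + dims @ true_weights + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), dims])
coef, *_ = np.linalg.lstsq(X, knowledge_capital, rcond=None)
labels = ["intercept", "experimentation/openness", "knowledge sharing",
          "dialogue", "external environment"]
for name, b in zip(labels, coef):
    print(f"{name}: {b:.3f}")
```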
A total of nine swab samples were collected from inflamed teeth and gingiva of the human oral cavity at a dental clinic in Baghdad. All specimens were cultured on Mitis Salivarius agar medium, and the isolated pure bacterial colonies were then identified using VITEK2. Three samples were diagnosed and identified as Staphylococcus lentus. One of the three isolates, which showed distinctive heavy growth on the medium, was selected for further analysis in this study. The paper disk diffusion method was used to detect the antibacterial activity of three mouthwash solutions (Zak, Colgate, and Listerine). The results showed that Colgate was the most active solution, with higher antibacterial activity compared with the other two solutions …
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayesian (NB). This paper investigates the performance of these two classification methods on the Car Evaluation dataset. Two models were built, one for each algorithm, and the results were compared. Our experimental results indicate that the BNN classifier yields higher accuracy than the NB classifier but is less efficient, because it is time-consuming and difficult to analyze due to its black-box nature.
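For readers who want to reproduce a comparison of this kind, the sketch below trains a backpropagation-based multilayer perceptron and a categorical Naïve Bayes classifier with scikit-learn on the Car Evaluation data. The local file path and column names are assumptions about the standard UCI distribution of the dataset, and the two models are generic stand-ins, not the exact configurations used in the paper.

```python
# Rough comparison of a backpropagation MLP and Naive Bayes on the Car Evaluation data.
# Assumes the UCI "car.data" CSV has been downloaded locally; the path and column names
# follow the standard UCI distribution and are not taken from the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import accuracy_score

cols = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
df = pd.read_csv("car.data", names=cols)  # hypothetical local copy of the UCI file

# All features are categorical; integer-encode them (one-hot would also be reasonable
# for the MLP, but ordinal codes keep the sketch short).
X = OrdinalEncoder().fit_transform(df[cols[:-1]])
y = df["class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    ("MLP (backprop)", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
    ("Naive Bayes", CategoricalNB(min_categories=4)),  # each feature has at most 4 levels
]
for name, clf in models:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```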
Reliable data transfer and energy efficiency are essential considerations for network performance in resource-constrained underwater environments. One efficient approach to data routing in underwater wireless sensor networks (UWSNs) is clustering, in which data packets are transferred from sensor nodes to a cluster head (CH). The data packets are then forwarded to a sink node in a single-hop or multi-hop manner, which can increase the energy depletion of the CH relative to other nodes. While several mechanisms have been proposed for cluster formation and CH selection to ensure efficient delivery of data packets, less attention has been given to massive data co…
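The sketch below is a toy illustration of energy-aware cluster-head rotation in a clustered sensor field: in each round, the node with the most residual energy in every cluster is elected CH, members transmit to their CH, and the CH forwards to the sink. The distance-squared radio cost and all constants are simplifying assumptions; this is not any specific UWSN protocol discussed in the paper.

```python
# Toy energy-aware cluster-head (CH) rotation for a clustered sensor field.
# Simplified distance^2 energy cost; illustrative only, not a specific UWSN protocol.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_clusters = 40, 4
pos = rng.uniform(0, 100, size=(n_nodes, 2))          # node positions in a 100 m x 100 m field
energy = np.full(n_nodes, 2.0)                        # residual energy in joules
cluster = rng.integers(0, n_clusters, size=n_nodes)   # fixed cluster membership (assumed given)
sink = np.array([50.0, 110.0])                        # sink placed just outside the field

E_ELEC, E_AMP, BITS = 50e-9, 100e-12, 4000            # simplified radio-model constants

def tx_cost(dist):
    """Energy to transmit one packet over distance dist (J)."""
    return BITS * (E_ELEC + E_AMP * dist ** 2)

for rnd in range(100):
    for c in range(n_clusters):
        members = np.where(cluster == c)[0]
        ch = members[np.argmax(energy[members])]               # elect richest node as CH
        for m in members:
            if m == ch:
                continue
            d = np.linalg.norm(pos[m] - pos[ch])
            energy[m] -= tx_cost(d)                            # member -> CH transmission
            energy[ch] -= BITS * E_ELEC                        # CH reception cost
        energy[ch] -= tx_cost(np.linalg.norm(pos[ch] - sink))  # CH -> sink transmission

print("min / mean residual energy after 100 rounds:",
      round(energy.min(), 3), "/", round(energy.mean(), 3))
```

Because the CH role rotates to the node with the most residual energy each round, the forwarding burden is spread across the cluster instead of draining a single node, which is the motivation for CH selection schemes mentioned above.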