Data scarcity is a major challenge when training deep learning (DL) models, which demand large amounts of data to achieve strong performance. Unfortunately, many applications have too little data to train DL frameworks, and obtaining labeled data usually requires manual annotation by human experts with extensive domain knowledge, a process that is costly, time-consuming, and error-prone. Since DL frameworks learn representations automatically from large amounts of labeled data, more data generally yields a better model, although performance is also application dependent. This barrier leads many applications to dismiss DL altogether; having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey begins by listing learning techniques and then introduces the types of DL architectures. After that, state-of-the-art solutions to the lack of training data are reviewed, including Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Networks (PINNs), and the Deep Synthetic Minority Oversampling Technique (DeepSMOTE). These solutions are followed by practical tips on data acquisition prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset.
The survey ends with a list of applications that suffer from data scarcity, and for each it proposes alternatives for generating more data, including Electromagnetic Imaging (EMI), civil structural health monitoring, medical imaging, meteorology, wireless communications, fluid mechanics, microelectromechanical systems, and cybersecurity. To the best of the authors' knowledge, this is the first review to offer a comprehensive overview of strategies for tackling data scarcity in DL.
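Of the remedies listed for imbalanced datasets, SMOTE-style oversampling is the most self-contained to illustrate. The sketch below implements the classic feature-space SMOTE interpolation (not DeepSMOTE's encoder-decoder variant) in plain numpy; the function name and parameters are illustrative, not taken from the surveyed paper.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Classic SMOTE: synthesize n_new points on segments between a
    minority-class sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per sample
    base = rng.integers(0, n, size=n_new)       # random anchor samples
    nbr = nn[base, rng.integers(0, k, size=n_new)]
    lam = rng.random((n_new, 1))                # interpolation coefficients
    return X_min[base] + lam * (X_min[nbr] - X_min[base])
```

Because each synthetic point is a convex combination of two real minority samples, the new points stay inside the minority class's convex hull, which is what makes the technique a plausible data-augmentation strategy for tabular or embedded features.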
The term "nano gold," also known as "gold nanoparticles," is commonly used. These particles are extremely small, with a diameter of less than 100 nm, which is only a fraction of the width of a human hair. Due to their tiny size, nano gold particles are often found in a colloidal solution, where they are suspended in a liquid stabilizer. This colloidal gold is essentially another name for nano gold. The main method for producing gold nanoparticles in a colloidal solution is the citrate synthesis technique, which involves combining different solutions to precipitate the gold nanoparticles. In biological systems, copper complexes play a significant role at the active sites of many metalloproteins. These complexes have potential applications in
Carbazone derivatives (CDs) (semicarbazone, thiosemicarbazone) are produced by the condensation reaction between an aldehyde (or ketone) and a carbazide derivative (semicarbazide, thiosemicarbazide). CDs and their metal complexes exhibit a wide range of applications, extending from the medicinal to the pharmaceutical field, owing to their significant pharmacological characteristics such as anti-fungal, anti-bacterial, anti-cancer, anti-human-immunodeficiency-virus, anti-inflammatory, anti-neoplastic, corrosion-inhibiting, antioxidant, and antiradical activity. This paper reviews the definition, importance, and various applications of carbazone derivatives with transitional meta
Abstract:
Research Topic: Ruling on the sale of big data
Its objectives: a statement of what big data is, its importance, its sources, and its governing ruling.
The methodology is inductive, comparative, and critical.
Among the most important results: big data may not be infringed upon, as it constitutes valuable property; and it is permissible to sell big data so long as it does not contain data of users who have not consented to its sale.
Recommendation: follow up with studies addressing the rulings on this issue.
Subject Terms
Ruling, Sale, Big Data, Jurists' Opinions
Abstract: Portable communication devices such as WLAN, WiMAX, LTE, ISM, and 5G utilize one or more of the triple bands at 2.3–2.7 GHz, 3.4–3.6 GHz, and 5–6 GHz, and suffer from the effect of multipath problems because they are used in urban regions. To date, no one has performed a review of the antennas used for these types of wireless communications. This study reviewed two types of microstrip antennas (slot and fractal) that have been reported by researchers (as single elements), using a survey that evaluated several important antenna specifications from previous research, such as operating bandwidth, gain, efficiency, axial ratio bandwidth (ARBW), and size. The weaknesses in the design of all antennas were carefully identified to de
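As a concrete example of the design quantities such a survey compares, the textbook transmission-line model gives a rectangular microstrip patch's width and length from the target resonant frequency and substrate. This is a generic design sketch under standard assumptions, not a design taken from the surveyed papers.

```python
import math

C = 3e8  # speed of light, m/s

def patch_dimensions(f0, eps_r, h):
    """Rectangular microstrip patch width and length (in metres) from
    the transmission-line model design equations."""
    W = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    # effective permittivity accounts for fringing fields in air
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    # fringing-field length extension at each radiating edge
    dL = (0.412 * h * (eps_eff + 0.3) * (W / h + 0.264)
          / ((eps_eff - 0.258) * (W / h + 0.8)))
    L = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL
    return W, L

# e.g. a 2.4 GHz patch on FR4 (eps_r = 4.4, h = 1.6 mm)
W, L = patch_dimensions(2.4e9, 4.4, 1.6e-3)
```

For these inputs the model yields a patch of roughly 38 mm x 29 mm, which sets the baseline size that the slot and fractal geometries surveyed above try to shrink.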
The artistic mediation in theater connects the human elements with the material elements in the theatrical production process. The human artistic mediator plays a crucial role in achieving the artistic and expressive goals of the theatrical work, requiring high skills and expertise. Their role involves coordinating and executing the technical aspects of production, such as lighting, sound, set design, costumes, and acting, using their skills and techniques to transform the theatrical text into a successful stage performance. The artistic mediator significantly impacts the quality of the theatrical work and creates a distinct viewing experience for the audience. Technological advancements and continuous updates in theater contribute to th
INTRODUCTION: A range of tools and technologies are at disposal for the purpose of defect detection. These include but are not limited to sensors, Statistical Process Control (SPC) software, Artificial Intelligence (AI) and machine learning (ML) algorithms, X-ray systems, ultrasound systems, and eddy current systems. OBJECTIVES: The determination of the suitable instrument or combination of instruments is contingent upon the precise production procedure and the category of flaw being identified. In certain cases, defects may necessitate real-time monitoring and analysis through the use of sensors and SPC software, whereas more comprehensive analysis may be required for other defects through the utilization of X-ray or ultrasound sy
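The SPC software mentioned above reduces, at its core, to control-chart logic. Below is a minimal sketch of a Shewhart individuals (I-MR) chart that estimates sigma from the average moving range (using the standard d2 = 1.128 constant for subgroups of two) and flags measurements outside the 3-sigma limits; the function names are illustrative.

```python
import numpy as np

def individuals_chart_limits(x):
    """Control limits for a Shewhart individuals chart, with sigma
    estimated from the average moving range (d2 = 1.128)."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))   # average moving range
    center = x.mean()
    sigma_hat = mr_bar / 1.128
    return center, center - 3 * sigma_hat, center + 3 * sigma_hat

def out_of_control(x):
    """Indices of measurements that fall outside the control limits."""
    center, lcl, ucl = individuals_chart_limits(x)
    return [i for i, v in enumerate(x) if v < lcl or v > ucl]
```

In a real-time setting the limits would be frozen from an in-control baseline run rather than recomputed over data that may already contain defects; this sketch recomputes them for brevity.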
This study attempts to focus on the relationship between employment policies and societal changes in Iraq. The construction of operational policy in communities in crisis remains fraught with challenges and risks, especially in countries with long-standing conflicts and crises. It is important in this context to realize those policies and to build the foundations of human security, poverty alleviation, and unemployment reduction, finding effective ways to help the community achieve stability and reduce the risk of a renewed or repeated cycle of violence; but that would require a radical rethinking, including rethinking the way risks and challenges are evaluated and managed. Such a project should be based on a clear roadmap, visions of development, and a clea
Face recognition is a crucial biometric technology used in various security and identification applications. Ensuring accuracy and reliability in facial recognition systems requires robust feature extraction and secure processing methods. This study presents an accurate facial recognition model using a feature extraction approach within a cloud environment. First, the facial images undergo preprocessing, including grayscale conversion, histogram equalization, Viola-Jones face detection, and resizing. Then, features are extracted using a hybrid approach that combines Linear Discriminant Analysis (LDA) and Gray-Level Co-occurrence Matrix (GLCM). The extracted features are encrypted using the Data Encryption Standard (DES) for security
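Two of the pipeline stages named in this abstract (histogram equalization and GLCM features) can be sketched in plain numpy. This is a simplified illustration, not the authors' implementation: Viola-Jones detection, LDA, and DES are omitted, and the 8-level quantization and horizontal offset-1 co-occurrence are assumptions chosen for brevity.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    return cdf[img].astype(np.uint8)

def glcm_features(img, levels=8):
    """GLCM at horizontal offset 1 plus two Haralick-style statistics:
    contrast and energy."""
    q = img.astype(int) * levels // 256        # quantize gray levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                      # joint probability matrix
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```

A perfectly uniform image gives zero contrast and energy 1, the degenerate extreme of these texture statistics; real faces yield intermediate values that, concatenated with the LDA projections, form the feature vector the abstract describes.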