The continued expansion of the IoT has transformed industries, but it has also exposed IoT networks to increasingly sophisticated cyberattacks. Traditional security measures have several limitations in this setting; protecting distributed, adaptive IoT systems requires new approaches. This research presents a deep-learning-based threat intelligence framework for IoT networks that maintains compliance with IEEE standards. The goal of the study is to interweave artificial intelligence with standardization frameworks and thereby improve the identification, protection against, and mitigation of cyber threats in IoT environments. The study proceeds systematically, beginning with IoT-specific threat data drawn from the publicly available CICIDS2017 and IoT-23 datasets. Feature extraction and classification of network anomalies are carried out with deep learning models such as CNNs and LSTMs. The proposed system complies with IEEE standards such as IEEE 802.15.4 for secure IoT transmission and IEEE P2413 for IoT architecture. A testbed is developed to deploy the model and assess its effectiveness in terms of overall accuracy, detection ratio, and time to detect an event. The findings show that threat intelligence systems built with deep learning provide robust security for IoT networks when designed according to IEEE guidelines. The proposed model sustains a high detection rate, scales well, and protects against emerging forms of attack. This research thus offers a standards-compliant cybersecurity approach that fosters trust and reliability in IoT applications across industrial sectors. Future work can extend the system to newer technologies such as edge computing.
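To make the pipeline concrete, below is a minimal sketch of the kind of CNN-LSTM anomaly classifier the abstract describes, written in Keras. The window length, the 78-feature input (typical of CICIDS2017 flow records), and all layer sizes are illustrative assumptions rather than the authors' configuration.

```python
from tensorflow.keras import layers, models

NUM_FEATURES = 78   # assumed flow-feature count (CICIDS2017-style records)
TIMESTEPS = 10      # assumed window of consecutive flow records

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, NUM_FEATURES)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),  # local pattern extraction
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                      # temporal dependencies
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                # benign vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# With preprocessed windows X of shape (N, TIMESTEPS, NUM_FEATURES) and
# binary labels y of shape (N,):
# model.fit(X, y, epochs=20, batch_size=256, validation_split=0.2)
```

The Conv1D front end extracts local patterns within each window of flow features while the LSTM captures dependencies across consecutive records; this division of labor is the usual motivation for combining the two architectures.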
Product quality inspection is an important stage in every production route, in which the quality of the produced goods is assessed and compared against the desired specifications. Traditional inspection relies on manual methods, which incurs high costs and consumes considerable time. In contrast, modern inspection systems based on techniques such as computer vision are more accurate and efficient. However, building a computer vision system with classic techniques still demands substantial effort, because features must be manually selected and extracted from digital images, which also adds labor costs for the system engineers.
 
Metasurface polarizers are essential optical components in modern integrated optics and play a vital role in many optical applications, including Quantum Key Distribution systems in quantum cryptography. However, the inverse design of high-efficiency metasurface polarizers depends on properly predicting structural dimensions from the required optical response. Deep learning neural networks can efficiently assist the inverse design process, minimizing both time and simulation resource requirements while achieving better results than traditional optimization methods. Here, a COMSOL Multiphysics surrogate model and deep neural networks are utilized to design a metasurface grating structure with a high extinction ratio.
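As a sketch of how a deep network can drive the inverse design step, the snippet below maps a target optical response to candidate grating dimensions. The 100-point spectrum input, the three outputs (period, ridge width, height), and the layer sizes are assumptions for illustration; in practice, the training pairs would come from COMSOL-simulated (dimensions, response) data.

```python
from tensorflow.keras import layers, models

inverse_net = models.Sequential([
    layers.Input(shape=(100,)),            # target extinction-ratio spectrum (assumed 100 points)
    layers.Dense(256, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(3, activation="linear"),  # predicted period, ridge width, height (assumed, in nm)
])
inverse_net.compile(optimizer="adam", loss="mse")

# Training pairs would come from COMSOL parameter sweeps:
# inverse_net.fit(simulated_spectra, simulated_dimensions, epochs=100)
```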
The emergence of SARS-CoV-2, the virus responsible for the COVID-19 pandemic, has resulted in a global health crisis, leading to widespread illness, death, and disruption of daily life. A COVID-19 vaccine is crucial to controlling the spread of the virus, which will help end the pandemic and restore normalcy to society. Messenger RNA (mRNA) vaccines led the way as the swiftest vaccine candidates for COVID-19, but they face key potential limitations, including spontaneous degradation. To address the mRNA degradation issue, Stanford University academics and the Eterna community sponsored a Kaggle competition. This study aims to build a deep learning (DL) model that predicts deterioration rates at each base of the mRNA molecule.
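A minimal sketch of such a per-base predictor, loosely following an OpenVaccine-style setup, is shown below; the sequence length, one-hot encoding, and five target channels are assumptions, not the study's exact architecture.

```python
from tensorflow.keras import layers, models

SEQ_LEN = 107    # assumed sequence length (OpenVaccine-style)
VOCAB = 4        # one-hot channels for the bases A, C, G, U
N_TARGETS = 5    # assumed number of per-base degradation measurements

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, VOCAB)),
    layers.Bidirectional(layers.GRU(128, return_sequences=True)),
    layers.Bidirectional(layers.GRU(128, return_sequences=True)),
    layers.Dense(N_TARGETS),   # one vector of degradation rates per base
])
model.compile(optimizer="adam", loss="mse")
```

Because `return_sequences=True` keeps an output at every position, the final dense layer emits a degradation-rate vector for each base rather than a single sequence-level score.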
Objective:
This study aims to assess the compliance of patients with essential hypertension with respect to antihypertensive medications, follow-up, dietary pattern, and health habits; to identify the associated long-term complications; and to determine the relationship between patients' compliance and demographic characteristics such as age, gender, level of education, and duration of disease.
Methodology:
A descriptive study was carried out in Nasiriyah Teaching Hospital to achieve the stated objectives.
Results:
The results of the study revealed a significant association between educational level and total patients' compliance, and a significant association between the duration of disease and patients' compliance.
Lung cancer is among the most common dangerous diseases and, if treated late, can lead to death. It is far more treatable when successfully discovered at an early stage, before it worsens. Characterizing the size, shape, and location of lymphatic nodes can identify the spread of the disease around these nodes. Thus, identifying lung cancer at an early stage is remarkably helpful for doctors. Lung cancer can be diagnosed successfully by expert doctors; however, limited experience may lead to misdiagnosis and cause medical issues for patients. Among computer-assisted systems, many methods and strategies can be used to predict the malignancy level of a cancer, which plays a significant role in providing precise abnormality detection.
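As one illustration of a computer-assisted approach, the sketch below is a small CNN that classifies CT-scan patches into malignancy levels. The patch size, network depth, and five-level output are hypothetical choices, not a method taken from the abstract.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),         # assumed grayscale CT patch
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(5, activation="softmax"),   # assumed 5 malignancy levels
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```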
The evolution of the Internet of Things (IoT) has connected billions of heterogeneous physical devices, improving the quality of human life by collecting data from their environments. However, this huge volume of data requires large storage and high computational capability, and cloud computing can be used to store it. The data of IoT devices is transferred using two types of protocols: Message Queuing Telemetry Transport (MQTT) and Hypertext Transfer Protocol (HTTP). This paper aims to build a high-performance, more reliable system through efficient use of resources; thus, load balancing in cloud computing is used to dynamically distribute the workload across nodes and avoid overloading any individual resource.
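To illustrate the core idea, the sketch below implements the simplest dynamic distribution policy, round-robin, which rotates incoming IoT messages across cloud nodes so that no single node is overloaded. The node names are placeholders, and the paper's actual balancing policy may be more sophisticated.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Dispatches each incoming message to the next node in rotation."""

    def __init__(self, nodes):
        self._nodes = cycle(nodes)   # endless rotation over the node list

    def dispatch(self, message):
        node = next(self._nodes)     # pick the next node in turn
        print(f"routing {message!r} to {node}")
        return node

balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
for reading in ["temp=21.5", "humidity=40", "temp=21.7"]:
    balancer.dispatch(reading)       # node-1, node-2, node-3 in turn
```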
Artificial intelligence techniques reach us in several forms; some are useful, but they can also be exploited in ways that harm us. One of these forms is the deepfake. Deepfake techniques are used to completely modify video (or image) content so that it displays something that was never originally in it. The danger of deepfake technology is its impact on society through the loss of confidence in everything that is published. Therefore, in this paper, we focus on deepfake detection technology from the view of two concepts: deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of i) the environment of deepfake creation and detection, and ii) how deep learning and forensic tools have contributed to the detection of deepfakes.
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to automatically learn representations; in general, more data yields a better DL model, although performance is also application dependent. This issue is the main barrier for many applications.