Natural gas and oil are among the mainstays of the global economy. However, the pipelines that transport these resources face many issues, including aging infrastructure, environmental impacts, and vulnerability to sabotage. Such issues can result in leaks that require significant effort to detect and pinpoint. The objective of this project is to develop and implement a method for detecting oil spills caused by leaking pipelines using aerial images captured by a drone equipped with a Raspberry Pi 4. Using the message queuing telemetry transport (MQTT) Internet of Things (IoT) protocol, the acquired images and the global positioning system (GPS) coordinates of their acquisition are sent to the base station. At the base station, the images are analyzed with the dense extreme inception network for edge detection (DexiNed), a deep learning edge detector built on holistically-nested edge detection (HED) and extreme inception (Xception) networks, which is capable of finding many contours in an image. Moreover, the CIELAB (LAB) color space is employed to locate black-colored contours, which may indicate oil spills. The proposed method eliminates smaller contours and calculates the area of the larger ones; if a contour's area exceeds a certain threshold, it is classified as a spill, otherwise it is stored in a database for further review. In the experiments, spill sizes of 1 m², 2 m², and 3 m² were established at three separate test locations, and the drone was operated at three different heights (5 m, 10 m, and 15 m) to capture the scenes. The results show that efficient detection can be achieved at a height of 10 m using the DexiNed algorithm. Statistical comparison with other edge detection methods using standard metrics, such as the per-image best threshold (OIS = 0.867), the fixed contour threshold (ODS = 0.859), and average precision (AP = 0.905), validates the effectiveness of DexiNed in generating thin edge maps and identifying oil slicks.
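As an illustration of the contour-filtering step described above, the following sketch detects dark contours in an aerial image and classifies large ones as candidate spills. It assumes OpenCV for the LAB conversion and contour extraction; the thresholds, function names, and the placeholder edge mask are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' exact code): find dark contours in an
# aerial image and classify large ones as candidate oil spills. In the described
# pipeline the edge map would come from DexiNed; here a closed binary mask is assumed.
import cv2
import numpy as np

def classify_spills(image_bgr, edge_map, min_area_px=5000, l_max=60):
    """edge_map: binary (0/255) mask, e.g. a thresholded and closed DexiNed output."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    lightness = lab[:, :, 0]                       # L channel: low values = dark pixels
    contours, _ = cv2.findContours(edge_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spills, review = [], []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 100:                             # drop very small contours (noise)
            continue
        mask = np.zeros(lightness.shape, dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, -1)
        if cv2.mean(lightness, mask=mask)[0] > l_max:
            continue                               # contour interior is not dark enough
        (spills if area >= min_area_px else review).append(c)
    return spills, review                          # 'review' items would go to the database
```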
The rise of edge-cloud continuum computing is a result of the growing significance of edge computing, which has become a complement to, or a substitute for, traditional cloud services. The convergence of networking and computing presents a notable challenge due to their distinct historical development. Task scheduling is a major challenge in the edge-cloud continuum: the selection of the execution location of tasks is crucial in meeting the quality-of-service (QoS) requirements of applications. An efficient scheduling strategy for distributing workloads among virtual machines in the edge-cloud continuum data center is therefore mandatory to ensure the fulfilment of QoS requirements for both customers and service providers.
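The abstract does not describe the proposed scheduling policy itself; purely as an illustration of QoS-aware placement, the following sketch assigns each task to the cheapest virtual machine that can still meet its latency budget. All class names, parameters, and numbers are hypothetical.

```python
# Hypothetical sketch of QoS-aware task placement (not the policy proposed in the
# abstract): pick the cheapest virtual machine whose estimated completion time
# still meets the task's latency budget. All values below are made up.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    network_latency_ms: float   # round-trip latency to reach the VM
    cost_per_ms: float          # illustrative cost of compute time
    speed_factor: float         # relative execution speed

@dataclass
class Task:
    name: str
    base_runtime_ms: float      # runtime on a reference machine
    deadline_ms: float          # QoS latency budget

def place(task: Task, vms: list[VM]) -> VM | None:
    feasible = []
    for vm in vms:
        finish = vm.network_latency_ms + task.base_runtime_ms / vm.speed_factor
        if finish <= task.deadline_ms:
            feasible.append((finish * vm.cost_per_ms, vm))
    if not feasible:
        return None
    return min(feasible, key=lambda entry: entry[0])[1]   # cheapest feasible VM

edge = VM("edge-vm", network_latency_ms=5, cost_per_ms=0.004, speed_factor=0.5)
cloud = VM("cloud-vm", network_latency_ms=60, cost_per_ms=0.001, speed_factor=2.0)
print(place(Task("video-frame", base_runtime_ms=40, deadline_ms=100), [edge, cloud]).name)
```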
The purpose of the study is to identify the teaching techniques that mathematics teachers use in light of brain-based learning theory. The sample is composed of 90 teachers: 50 male and 40 female. The results show no significant differences between the mean responses of male and female teachers. Additionally, through classroom observation the author found a lack of use of brain-based learning techniques. Thus, the researcher recommends involving teachers in remedial courses to enhance their ability to create a classroom environment that fosters brain-based learning skills.
In this work, different weights of pure zinc powder particles suspended in 4 ml of base engine oil were used.
The intensity of the Kα line was measured for the suspended particles as well as for the mixture consisting of zinc particles blended with the base engine oil. A calibration curve was drawn between the Kα line intensity and the zinc concentration at different operating conditions. The lower limit of detection (LLD) and the sensitivity (m) of the spectrometer were determined for different zinc concentrations (wt%). The LLD and m results for samples analyzed at an operating condition of 30 kV and 17 mA were better than those for samples analyzed at 25 kV and 15 mA.
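The abstract does not give the formulas it uses for these quantities; under the common X-ray fluorescence convention that the sensitivity m is the slope of the intensity-versus-concentration calibration line and that LLD = (3/m)·√(Ib/t), with Ib the background count rate and t the counting time, a minimal sketch of the calculation could look as follows. All numerical values are made up.

```python
# Illustrative sketch, assuming common XRF conventions: sensitivity m is the slope
# of the intensity-vs-concentration calibration line, and the lower limit of
# detection is LLD = (3 / m) * sqrt(Ib / t). The numbers below are made up.
import numpy as np

conc_wt = np.array([0.5, 1.0, 2.0, 4.0])           # zinc concentration, wt%
i_kalpha = np.array([120.0, 235.0, 470.0, 950.0])  # measured Ka intensity, counts/s

m, intercept = np.polyfit(conc_wt, i_kalpha, 1)    # sensitivity = slope (counts/s per wt%)
i_background = 15.0                                 # background count rate, counts/s (assumed)
t_count = 100.0                                     # counting time, s (assumed)

lld = (3.0 / m) * np.sqrt(i_background / t_count)
print(f"sensitivity m = {m:.1f} counts/s per wt%, LLD = {lld:.4f} wt%")
```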
Convolutional neural networks (CNNs) are among the most widely used neural networks in various applications, including deep learning. In recent years, the continuing extension of CNNs into increasingly complicated domains has made their training process more difficult, so researchers have adopted optimized hybrid algorithms to address this problem. In this work, a novel chaotic black hole algorithm-based approach was created for training a CNN, optimizing its performance by avoiding entrapment in local minima. The logistic chaotic map was used to initialize the population instead of the uniform distribution. The proposed training algorithm was developed based on a specific benchmark problem for optical character recognition.
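As a sketch of the initialization step only (not of the full black hole training loop), the following assumes the standard logistic map x_{n+1} = r·x_n·(1 − x_n) with r = 4; the population size, dimensionality, bounds, and seed are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of chaotic population initialization with the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), r = 4 (fully chaotic regime). Bounds, sizes, and
# the seeding scheme are illustrative assumptions, not the paper's settings.
import numpy as np

def logistic_chaotic_population(pop_size, dim, lower, upper, r=4.0, seed=0.7):
    x = seed
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = r * x * (1.0 - x)                      # iterate the logistic map in (0, 1)
            pop[i, j] = lower + x * (upper - lower)    # map chaotic value into search bounds
    return pop

# e.g. 20 candidate weight vectors of dimension 50 in [-1, 1]
population = logistic_chaotic_population(20, 50, -1.0, 1.0)
print(population.shape, population.min(), population.max())
```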
Prostate cancer is the commonest male cancer and the second leading cause of cancer-related death in men. Over many decades, prostate cancer detection has represented a continuous challenge to urologists. Although all urologists and pathologists agree that tissue diagnosis is essential, especially before commencing active surgical or radiation treatment, the best way to obtain the biopsy has always been the major hurdle. The heterogeneity of the tumor pathology is clearly reflected in its radiological appearance. Ultrasound has been proven to be of limited sensitivity and specificity in detecting prostate cancer; however, it was the only available targeting technique for years and was used to guide the biopsy needle passed transrectally or transperineally.
Image compression is a serious issue in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself and may also exploit the limitations of human visual perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with multiresolution base and thresholding techniques, and the second stage incorporates near-lossless compression.
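The abstract does not specify the polynomial model order or block layout; as an illustration of the model-plus-residual idea, the following sketch fits a first-order (planar) polynomial to each image block and keeps the residual. The block size, model order, and variable names are assumptions.

```python
# Illustrative sketch of block-based polynomial (planar) modelling: each block is
# approximated by a first-order surface a0 + a1*x + a2*y, and the residual is what
# remains after subtracting the model. Block size and model order are assumptions.
import numpy as np

def encode_block(block):
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    model = (A @ coeffs).reshape(h, w)
    residual = block.astype(float) - model         # residual would be thresholded/coded next
    return coeffs, residual

block = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 block
coeffs, residual = encode_block(block)
print(coeffs, np.abs(residual).max())
```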
Voice activity detection (VAD) is considered an important pre-processing step in speech processing systems such as speech enhancement, speech recognition, and gender and age identification. VAD helps reduce the time required to process speech data and improves final system accuracy by focusing the work on the voiced part of the speech. An automatic technique for VAD using a fuzzy-neuro approach (FN-AVAD) is presented in this paper. The aim of this work is to alleviate the problem of choosing the best threshold value in traditional VAD methods and to achieve automaticity by combining fuzzy clustering and machine learning techniques. Four features are extracted from each speech segment, including short-term energy and zero-crossing rate.
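The following sketch computes two of the named frame features, short-term energy and zero-crossing rate; the frame length, hop size, and remaining features are not given in the abstract, so the values below are illustrative assumptions.

```python
# Minimal sketch of two of the frame features named in the abstract: short-term
# energy and zero-crossing rate. Frame length and hop size are illustrative values.
import numpy as np

def frame_features(signal, frame_len=400, hop=160):   # e.g. 25 ms / 10 ms at 16 kHz
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.sum(frame.astype(float) ** 2) / frame_len)
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
        feats.append((energy, zcr))
    return np.array(feats)

rng = np.random.default_rng(0)
noise = rng.standard_normal(16000)                     # 1 s of toy "speech"
print(frame_features(noise)[:3])
```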
Text categorization refers to the process of grouping text or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to English, only a few studies have been done to categorize and classify Arabic text. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years based on the dataset, year, algorithm, and accuracy achieved.
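Purely as an illustration of the three phases named above (preprocessing, feature extraction, classification), the following generic sketch builds a TF-IDF plus linear-SVM pipeline with scikit-learn; it does not reproduce any specific system surveyed in the paper, and the toy documents and labels are made up.

```python
# Generic illustration of the three phases named in the abstract: preprocessing
# (tokenization inside the vectorizer), feature extraction (TF-IDF weights), and
# classification (linear SVM). Toy Arabic documents and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

docs = ["خبر رياضي عن كرة القدم", "مقال اقتصادي عن الأسواق", "تقرير رياضي جديد"]
labels = ["sport", "economy", "sport"]

clf = Pipeline([
    ("features", TfidfVectorizer()),   # feature extraction: TF-IDF term weights
    ("classifier", LinearSVC()),       # classification: linear support vector machine
])
clf.fit(docs, labels)
print(clf.predict(["مباراة كرة قدم"]))
```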
Breast cancer is a heterogeneous disease characterized by molecular complexity. This research utilized three genetic expression profiles—gene expression, deoxyribonucleic acid (DNA) methylation, and micro ribonucleic acid (miRNA) expression—to deepen the understanding of breast cancer biology and contribute to the development of a reliable survival rate prediction model. During the preprocessing phase, principal component analysis (PCA) was applied to reduce the dimensionality of each dataset before computing consensus features across the three omics datasets. By integrating these datasets with the consensus features, the ability of the proposed multimodal deep learning model to uncover deep connections within the data was significantly improved.
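As a sketch of the per-omics dimensionality reduction step described above, the following applies PCA independently to each modality with scikit-learn before any integration; the component count and the toy data shapes are illustrative assumptions.

```python
# Minimal sketch of the per-omics PCA step described in the abstract: reduce each
# of the three expression matrices (samples x features) before any integration.
# The number of components and the toy data shapes are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
omics = {
    "gene_expression": rng.standard_normal((200, 5000)),
    "dna_methylation": rng.standard_normal((200, 8000)),
    "mirna_expression": rng.standard_normal((200, 400)),
}

reduced = {name: PCA(n_components=50).fit_transform(X) for name, X in omics.items()}
for name, Z in reduced.items():
    print(name, Z.shape)   # each modality now has 200 samples x 50 components
```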