Natural gas and oil are mainstays of the global economy. However, many issues surround the pipelines that transport these resources, including aging infrastructure, environmental impacts, and vulnerability to sabotage. Such issues can result in pipeline leakages that require significant effort to detect and pinpoint. The objective of this project is to develop and implement a method for detecting oil spills caused by leaking oil pipelines using aerial images captured by a drone equipped with a Raspberry Pi 4. Using the message queuing telemetry transport Internet of Things (MQTT IoT) protocol, the acquired images and the global positioning system (GPS) coordinates of their acquisition are sent to the base station. At the base station, the images are analyzed with the dense extreme inception network for edge detection (DexiNed), a deep learning approach that builds on holistically-nested edge detection (HED) and the extreme inception (Xception) architecture, to identify contours. This algorithm is capable of finding many contours in an image. Moreover, the CIELAB (LAB) color space is employed to locate black-colored contours, which may indicate oil spills. The suggested method eliminates smaller contours and calculates the area of the larger ones. If a contour's area exceeds a certain threshold, it is classified as a spill; otherwise, it is stored in a database for further review. In the experiments, spill sizes of 1 m², 2 m², and 3 m² were established at three separate test locations. The drone was operated at three different heights (5 m, 10 m, and 15 m) to capture the scenes. The results show that efficient detection can be achieved at a height of 10 m using the DexiNed algorithm. Statistical comparison with other edge detection methods using standard metrics, namely per-image best threshold (OIS = 0.867), fixed contour threshold (ODS = 0.859), and average precision (AP = 0.905), validates the effectiveness of the DexiNed algorithm in generating thin edge maps and identifying oil slicks.
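The pipeline described above can be sketched with standard image-processing primitives. The snippet below is a minimal, hedged illustration: DexiNed is a trained network, so a classical Canny detector stands in for its edge map, and the LAB lightness limit, minimum contour size, area threshold, and pixel-to-ground scale are illustrative assumptions rather than values from the study.

```python
import cv2
import numpy as np

def detect_spills(image_bgr, min_contour_px=500, area_threshold_m2=1.0,
                  m2_per_pixel=0.0005, lightness_max=60):
    # 1) Edge map: Canny stands in for the DexiNed network here.
    edges = cv2.Canny(image_bgr, 50, 150)

    # 2) CIELAB dark-pixel mask: candidate oil pixels have low L*.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    dark_mask = (lab[:, :, 0] < lightness_max).astype(np.uint8) * 255

    # 3) Keep edges inside dark regions, close small gaps, extract contours.
    candidate = cv2.bitwise_and(edges, dark_mask)
    candidate = cv2.morphologyEx(candidate, cv2.MORPH_CLOSE,
                                 np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # 4) Drop small contours, convert pixel area to ground area, classify.
    results = []
    for c in contours:
        area_px = cv2.contourArea(c)
        if area_px < min_contour_px:
            continue
        area_m2 = area_px * m2_per_pixel
        label = "spill" if area_m2 >= area_threshold_m2 else "store for review"
        results.append((label, round(area_m2, 2)))
    return results
```

In a full implementation the edge map would come from a DexiNed inference pass and the pixel-to-ground scale from the drone altitude and camera intrinsics.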
A novel encapsulated deep eutectic solvent (DES) was introduced for biodiesel production via a two-step process. The DES was encapsulated in medical capsules and used to reduce the free fatty acid (FFA) content of acidic crude palm oil (ACPO) to the minimum acceptable level (< 1%). The DES was synthesized from methyltriphenylphosphonium bromide (MTPB) and p-toluenesulfonic acid (PTSA). Operating conditions such as capsule dosage, reaction time, molar ratio, and reaction temperature were optimized. Under the optimum operating conditions, the FFA content of ACPO was reduced from 9.61% to less than 1%, indicating that the encapsulated MTPB-DES exhibited high catalytic activity in FFA esterification.
Carbonate reservoirs are an essential source of hydrocarbons worldwide, and their petrophysical properties play a crucial role in hydrocarbon production. The most critical petrophysical properties of carbonate reservoirs are porosity, permeability, and water saturation. A tight reservoir is a reservoir with low porosity and permeability, which makes it difficult for fluids to move from one side to another. The primary goal of this study is to evaluate the reservoir properties and lithological identification of the Sadi Formation in the Halfaya oil field, considered one of Iraq's most significant oilfields, located 35 km south of Amarah. The Sadi Formation consists of four units: A, B1, B2, and B3. Sadi A was excluded as it was not filled with hydrocarbons.
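As a point of reference for how water saturation is commonly estimated in such evaluations, the snippet below applies Archie's equation. It is only an illustration: the abstract does not give the study's actual workflow, and the rock constants (a, m, n), formation water resistivity, and the example inputs are assumptions.

```python
# Archie's equation: Sw = ((a * Rw) / (phi^m * Rt))^(1/n)
def archie_sw(phi, rt, rw=0.03, a=1.0, m=2.0, n=2.0):
    """Water saturation from porosity phi (fraction), true resistivity rt
    (ohm.m), and formation water resistivity rw (ohm.m)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Example: 12% porosity, Rt = 20 ohm.m  ->  Sw ~ 0.32 (illustrative numbers).
print(round(archie_sw(phi=0.12, rt=20.0), 2))
```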
The design of the future will remain a confusing and puzzling issue, raising misgivings and worry while also inspiring a spirit of adventure toward progress and toward ways of revival, creativity, and modernism. The prevalence of a certain culture or a certain product in design depends on the available techniques, since computers and their artistic techniques have become very important and vital for reinforcing the image in a design. It is therefore necessary to link these techniques in a suitable way to reshape the mentality by which the design is formed. From what has been said, (there has been no utilization of all the modern and available graphic techniques in the design process).
Cognitive radios have the potential to greatly improve spectral efficiency in wireless networks. Cognitive radios are considered lower-priority or secondary users of spectrum allocated to a primary user. Their fundamental requirement is to avoid interference with potential primary users in their vicinity. Spectrum sensing has been identified as a key enabling functionality to ensure that cognitive radios do not interfere with primary users, by reliably detecting primary user signals. In addition, reliable sensing creates spectrum opportunities for increasing the capacity of cognitive networks. One of the key challenges in spectrum sensing is the robust detection of primary signals in highly negative signal-to-noise ratio (SNR) regimes.
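To make the sensing task concrete, the sketch below shows a simple energy detector, one of the most common spectrum sensing schemes; the truncated abstract does not say which detector the paper studies, and the noise variance, false-alarm target, and example signal are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_var, p_fa=0.01):
    n = samples.size
    # Test statistic: average received energy.
    stat = np.sum(np.abs(samples) ** 2) / n
    # Threshold for a target false-alarm probability (Gaussian approximation
    # of the chi-square statistic for real-valued noise).
    threshold = noise_var * (1.0 + norm.ppf(1.0 - p_fa) * np.sqrt(2.0 / n))
    return stat > threshold, stat, threshold

# Example: weak BPSK-like primary signal (~ -10 dB SNR) buried in unit-variance noise.
rng = np.random.default_rng(0)
noise_var = 1.0
signal = 0.3 * rng.choice([-1.0, 1.0], size=10_000)
received = signal + rng.normal(0.0, np.sqrt(noise_var), size=10_000)
print(energy_detect(received, noise_var))
```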
Much research deals with constructing efficient solutions for real-world problems that have multiple conflicting objectives. In this paper, we construct a decision method for multi-objective problems by building a mathematical model that combines the conflicting objective functions into a single objective function. We also present some theoretical results concerning this problem. A real application problem is presented to show the efficiency of our model and method. Finally, results were obtained on a set of randomly generated problems.
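One standard way to combine conflicting objectives into a single objective function is a weighted-sum scalarization; the abstract does not specify the combination actually used, so the sketch below, with illustrative objectives and weights, should be read only as an example of the general idea.

```python
import numpy as np
from scipy.optimize import minimize

# Two conflicting objectives over x in R^2 (illustrative).
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2

def scalarized(x, w=(0.5, 0.5)):
    # Single objective = weighted sum of the individual objectives.
    return w[0] * f1(x) + w[1] * f2(x)

result = minimize(scalarized, x0=np.zeros(2))
print(result.x, f1(result.x), f2(result.x))   # one compromise (Pareto) solution
```

Varying the weights traces out different compromise solutions on the Pareto front.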
Heart disease is a significant and impactful health condition that ranks as the leading cause of death in many countries. To aid physicians in diagnosing cardiovascular diseases, clinical datasets are available for reference. However, with the rise of big data and large medical datasets, it has become increasingly challenging for medical practitioners to accurately predict heart disease, because the abundance of unrelated and redundant features increases computational complexity and degrades accuracy. This study therefore aims to identify the most discriminative features within high-dimensional datasets, minimizing complexity and improving accuracy, through an Extra Tree-based feature selection technique. The study assesses the efficacy of this approach.
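The feature-selection step can be illustrated with scikit-learn's ExtraTreesClassifier. The sketch below uses a synthetic stand-in for a clinical dataset; the number of retained features, estimator settings, and the downstream classifier are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a high-dimensional clinical dataset.
X, y = make_classification(n_samples=500, n_features=50, n_informative=8,
                           n_redundant=20, random_state=0)

# Rank features by Extra Trees impurity-based importance and keep the top 10.
selector = SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0),
                           threshold=-np.inf, max_features=10)
X_reduced = selector.fit_transform(X, y)

# Compare a simple classifier before and after feature selection.
clf = LogisticRegression(max_iter=1000)
print("all features:", cross_val_score(clf, X, y, cv=5).mean())
print("selected    :", cross_val_score(clf, X_reduced, y, cv=5).mean())
```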
In this paper, we present a proposed enhancement of image compression using the RLE algorithm. The proposal decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], mostly increases the size of the original image when applied to color images. The enhanced algorithm is tested on a sample of ten 24-bit true-color BMP images, and an application was built in Visual Basic 6.0 to show the image size before and after compression and to compute the compression ratio for both the standard RLE and the enhanced RLE algorithm.
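For reference, the sketch below shows a plain byte-level run-length encoder and decoder. The paper's specific enhancement for 24-bit color data is not described in the abstract and is therefore not reproduced; only the baseline (count, value) scheme is illustrated.

```python
def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        # Count the run of identical bytes, capped at 255 so it fits one byte.
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])   # emit a (count, value) pair
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    out = bytearray()
    for count, value in zip(encoded[::2], encoded[1::2]):
        out += bytes([value]) * count
    return bytes(out)

sample = b"\x00" * 100 + b"\xff" * 3 + b"\x10"
packed = rle_encode(sample)
assert rle_decode(packed) == sample
print(len(sample), "->", len(packed))   # 104 -> 6 bytes
```

On highly varied 24-bit color data the same scheme can expand the file, which is the behaviour the enhancement targets.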
Image segmentation can be defined as partitioning a digital image into useful regions (segments) whose elements share certain attributes that differ from the pixels constituting other parts of the image. Two phases were followed in this paper. In the first phase, the images were pre-processed before segmentation using the statistical confidence intervals for estimating unknown observations suggested by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique. The researcher concluded that, in the case of utilizing
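Bernsen's technique is a local (adaptive) thresholding method: each pixel is compared to the midpoint of the local minimum and maximum within a window, with a contrast test to handle flat regions. The sketch below is a generic implementation; the window size, contrast limit, and global fallback threshold are illustrative defaults, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def bernsen_threshold(gray, window=15, contrast_min=15, global_thresh=128):
    gray = gray.astype(np.float32)
    local_min = minimum_filter(gray, size=window)
    local_max = maximum_filter(gray, size=window)
    midgray = (local_min + local_max) / 2.0
    contrast = local_max - local_min

    # High-contrast pixels: threshold at the local midpoint.
    # Low-contrast pixels: decide by comparing the midpoint to a global threshold.
    high_contrast = contrast >= contrast_min
    binary = np.where(high_contrast, gray > midgray, midgray > global_thresh)
    return binary.astype(np.uint8) * 255
```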
In recent years, with the rapid development of digital content identification, automatic classification of images has become one of the most challenging tasks in the field of computer vision. Automatically understanding and analyzing images remains far more difficult for a system than it is for human vision. Some research has addressed this issue with low-level classification systems, but the output was restricted to basic image features, and such approaches still fail to classify images accurately. To achieve the results expected in this field, this study proposes an approach based on a deep learning algorithm.
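Since the abstract names a deep learning algorithm but not a specific architecture, the sketch below shows a small convolutional classifier in Keras purely as an illustration; the input size, layer widths, and ten-class output are assumptions, not the study's model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(64, 64, 3), num_classes=10):
    # Small CNN: two conv/pool stages followed by a dense classification head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```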
Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but it suffers from a high encoding time. Fractal image compression requires partitioning the image into ranges. In this work, we introduce an improved partitioning process based on a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of the technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the visual quality acceptable.
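The range-merging idea can be sketched as follows: compute simple statistics for each range block and greedily merge neighbouring blocks whose statistics agree. The block size, the choice of mean and variance as measures, and the tolerances are illustrative assumptions; the abstract does not specify the exact measures used.

```python
import numpy as np

def block_stats(image, block=8):
    # Mean and variance for each non-overlapping block (keyed by top-left corner).
    h, w = image.shape
    stats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            region = image[y:y + block, x:x + block].astype(np.float64)
            stats[(y, x)] = (region.mean(), region.var())
    return stats

def merge_similar_ranges(stats, block=8, mean_tol=2.0, var_tol=5.0):
    # Greedily merge each block with its right neighbour when both statistics agree,
    # so the encoder has fewer range blocks to match against the domain pool.
    merged, used = [], set()
    for (y, x), (m, v) in stats.items():
        if (y, x) in used:
            continue
        right = (y, x + block)
        if right in stats and right not in used:
            m2, v2 = stats[right]
            if abs(m - m2) <= mean_tol and abs(v - v2) <= var_tol:
                merged.append(((y, x), right))   # one merged range instead of two
                used.update({(y, x), right})
                continue
        merged.append(((y, x),))
        used.add((y, x))
    return merged
```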