The successful application of deep learning opens up possibilities for various applications in viticulture, including disease detection, plant health monitoring, and grapevine variety identification. As the field continues to advance, further refinements of models and datasets can be expected, potentially leading to even more accurate and efficient classification systems for grapevine leaves and beyond. Overall, this research provides valuable insights into the potential of deep learning for agricultural applications and paves the way for future studies in this domain. This work employs convolutional neural network (CNN) architectures to classify grapevine leaf images by adapting the VGG-16 and VGG-19 models and identifying the better performer of the two. A publicly available dataset of 500 images in 5 classes (100 images per class) was used. The empirical results show an accuracy of 99.6% for the VGG-16 model, while VGG-19 achieves 100% accuracy. Based on these findings, it can be inferred that VGG-19 outperforms VGG-16 in classifying grapevine leaf images. © 2024 Universitas Ahmad Dahlan. All Rights Reserved.
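As an illustration of the transfer-learning setup described above, the following is a minimal sketch (not the authors' exact pipeline) that adapts a pre-trained VGG-19 from torchvision to a 5-class leaf dataset; the folder layout, image size, batch size, learning rate, and epoch count are assumptions.

```python
# Minimal sketch, assuming torchvision >= 0.13 and an ImageFolder-style dataset;
# hyperparameters below are illustrative, not the values reported in the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),                     # VGG input resolution
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-directory per grapevine leaf class.
train_set = datasets.ImageFolder("grapevine_leaves/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 5)               # replace 1000-way head with 5 classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):                                # assumed number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping `vgg19` for `vgg16` under the same setup gives the VGG-16 variant used for comparison.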
Natural gas and oil are among the mainstays of the global economy. However, the pipelines that transport these resources face many issues, including aging infrastructure, environmental impacts, and vulnerability to sabotage. Such issues can cause leaks that require significant effort to detect and locate. The objective of this project is to develop and implement a method for detecting oil spills caused by leaking pipelines using aerial images captured by a drone equipped with a Raspberry Pi 4. Using the Message Queuing Telemetry Transport (MQTT) Internet of Things (IoT) protocol, the acquired images and the global positioning system (GPS) coordinates of their acquisition points are …
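A minimal sketch of the transmission step, assuming the paho-mqtt client on the Raspberry Pi: each aerial image and its GPS fix are packed into one JSON payload and published to a broker. The broker address, topic name, and file name are placeholders, not the project's actual configuration.

```python
# Minimal sketch, assuming paho-mqtt; broker, topic, and file names are placeholders.
import base64
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"        # hypothetical broker address
TOPIC = "pipeline/leak-survey"       # hypothetical topic

def publish_capture(image_path: str, lat: float, lon: float) -> None:
    """Send one aerial image plus its GPS coordinates as a single MQTT message."""
    with open(image_path, "rb") as f:
        payload = json.dumps({
            "image": base64.b64encode(f.read()).decode("ascii"),
            "gps": {"lat": lat, "lon": lon},
        })
    client = mqtt.Client()           # paho-mqtt 1.x-style constructor
    client.connect(BROKER, 1883)
    client.loop_start()              # background network loop for QoS 1 delivery
    client.publish(TOPIC, payload, qos=1).wait_for_publish()
    client.loop_stop()
    client.disconnect()

# Example call with a placeholder file name and coordinates.
publish_capture("frame_0001.jpg", 33.3152, 44.3661)
```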
General Background: Deep image matting is a fundamental task in computer vision, enabling precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advances in deep learning-based methods, preserving fine details such as hair and transparency remains a challenge. Knowledge Gap: Existing approaches struggle with accuracy and efficiency, necessitating novel techniques to enhance matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha matte estimation, proposing a lightweight U-Net model that incorporates color-space fusion and preprocessing. Results: Experiments using the Adobe Composition-1k …
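A minimal sketch of the core idea rather than the paper's exact architecture: a small U-Net-style network whose input is an RGB image fused with a second color space (HSV is assumed here) and whose output is a single-channel alpha matte.

```python
# Minimal sketch in PyTorch; channel counts and depth are assumptions, not the paper's model.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyMattingUNet(nn.Module):
    def __init__(self, in_ch=6):                 # 3 RGB + 3 HSV channels
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)           # alpha matte logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d))        # alpha values in [0, 1]

# Color-space fusion: stack RGB and HSV tensors along the channel axis.
rgb = torch.rand(1, 3, 256, 256)
hsv = torch.rand(1, 3, 256, 256)                  # HSV conversion assumed done upstream
alpha = TinyMattingUNet()(torch.cat([rgb, hsv], dim=1))
print(alpha.shape)                                # torch.Size([1, 1, 256, 256])
```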
The need for robotic systems has become urgent in various fields, especially in video surveillance and live broadcasting. The main goal of this work is to design and implement a rover robotic monitoring system based on a Raspberry Pi 4 Model B, which controls the overall system and displays live video from a webcam (USB camera), and to use the You Only Look Once version 5 (YOLOv5) algorithm to detect, recognize, and display objects in real time. This deep learning algorithm is highly accurate and fast, and is implemented with Python, OpenCV, and PyTorch code and the Common Objects in Context (COCO) 2020 dataset. This robot can move in all directions and in different places, especially in …
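A minimal sketch of the detection loop, assuming the public ultralytics/yolov5 repository loaded through torch.hub and a USB webcam at index 0; the rover-control and streaming sides are omitted, and the model variant is an assumption.

```python
# Minimal sketch: real-time YOLOv5 detection on webcam frames; the rover logic is not shown.
import cv2
import torch

# Small COCO-pretrained YOLOv5 model from the public hub repository (assumed variant).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
cap = cv2.VideoCapture(0)                         # USB webcam (index assumed)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # model expects RGB input
    results = model(rgb)                          # run detection on the frame
    annotated = results.render()[0]               # frame with boxes and class labels drawn
    cv2.imshow("YOLOv5", cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):         # press q to stop
        break

cap.release()
cv2.destroyAllWindows()
```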
Estimating an individual's age from a photograph of their face is critical in many applications, including intelligence and defense, border security, human-machine interaction, and soft biometric recognition. Recent progress in this discipline has centered on deep learning. These solutions require designing and training deep neural networks specifically for this task. In addition, pre-trained deep neural networks are used for facial recognition and are fine-tuned for accurate results. The purpose of this study was to offer a method for estimating human age from the frontal view of the face that is as accurate as possible and takes …
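A minimal sketch of the fine-tuning idea rather than the study's exact model: a frozen ImageNet-pretrained ResNet-18 backbone (an assumed choice) with a small regression head that predicts age in years from an aligned frontal face crop.

```python
# Minimal sketch; backbone, head size, loss, and the placeholder batch are assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                   # keep pre-trained features fixed
backbone.fc = nn.Sequential(                  # replace the classifier with a regression head
    nn.Linear(backbone.fc.in_features, 128),
    nn.ReLU(inplace=True),
    nn.Linear(128, 1),                        # predicted age in years
)

criterion = nn.L1Loss()                       # mean absolute error in years
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

faces = torch.rand(8, 3, 224, 224)            # placeholder batch of face crops
ages = torch.tensor([[23.], [41.], [35.], [19.], [62.], [30.], [27.], [55.]])
loss = criterion(backbone(faces), ages)
loss.backward()
optimizer.step()
```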
Monaural source separation is challenging because only a single channel is available, yet the range of possible solutions is unlimited. In this paper, a monaural source separation model based on a hybrid deep learning architecture, consisting of a convolutional neural network (CNN), a dense neural network (DNN), and a recurrent neural network (RNN), is presented. A trial-and-error method is used to optimize the number of layers in the proposed model. Moreover, the effects of the learning rate, optimization algorithms, and the number of epochs on the separation performance are explored. Our model was evaluated using the MIR-1K dataset for singing voice separation. Moreover, the proposed approach achieved …
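A minimal sketch of how such a hybrid CNN, dense, and recurrent mask estimator can be assembled for spectrogram-domain separation; layer sizes and the number of frequency bins are assumptions, not the paper's tuned configuration.

```python
# Minimal sketch: a soft-mask estimator over magnitude spectrogram frames.
import torch
import torch.nn as nn

class HybridSeparator(nn.Module):
    def __init__(self, n_bins=513, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                        # local time-frequency features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.ReLU(),
        )
        self.dnn = nn.Sequential(nn.Linear(n_bins, hidden), nn.ReLU())
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)   # temporal context
        self.mask = nn.Linear(hidden, n_bins)

    def forward(self, spec):                             # spec: (batch, frames, bins)
        x = self.cnn(spec.unsqueeze(1)).squeeze(1)       # back to (batch, frames, bins)
        x, _ = self.rnn(self.dnn(x))
        return torch.sigmoid(self.mask(x))               # soft mask for the vocal source

mix = torch.rand(2, 100, 513)            # magnitude spectrogram of the mixture
vocals = HybridSeparator()(mix) * mix    # masked spectrogram estimate of the singing voice
```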
Artificial intelligence techniques reach us in several forms, some of which are useful but can be exploited in ways that harm us. One of these forms is the deepfake. Deepfakes are used to completely modify video (or image) content so that it displays something that was not originally there. The danger of deepfake technology is its impact on society through the loss of confidence in everything that is published. Therefore, in this paper, we focus on deepfake detection technology from the perspective of two concepts: deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of i) the environment of deepfake creation and detection, and ii) how deep learning and forensic tools have contributed to the detection …
In this research, a set of gray-scale texture images from the Brodatz database was studied by building a feature database for the images using the gray-level co-occurrence matrix (GLCM), with a pixel distance of one unit and four angles (0°, 45°, 90°, 135°). The k-means classifier was used to group the images into between two and eight classes, for all angles used in the co-occurrence matrix. The distribution of images across the classes was compared pairwise (by projecting one class onto another); the distribution of images was uneven, with one class being dominant. The classification results were studied for all cases using the confusion matrix between every …
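A minimal sketch of the feature-extraction and clustering pipeline, assuming scikit-image and scikit-learn: a GLCM at distance 1 for the four angles, summarized by a few Haralick-style properties, then grouped with k-means. The property list, cluster count, and random patches standing in for the Brodatz images are illustrative assumptions.

```python
# Minimal sketch: GLCM texture features followed by k-means clustering.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]    # 0, 45, 90, 135 degrees
PROPS = ["contrast", "homogeneity", "energy", "correlation"]

def glcm_features(image: np.ndarray) -> np.ndarray:
    """image: 2-D uint8 gray-level texture patch."""
    glcm = graycomatrix(image, distances=[1], angles=ANGLES,
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

# Placeholder stand-in for the Brodatz patches.
images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
features = np.vstack([glcm_features(im) for im in images])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(labels)
```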