Pavement crack and pothole identification are important tasks in transportation maintenance and road safety. This study offers a novel image-processing technique for the automatic detection of cracks and potholes in asphalt pavement. Different types of distress (transverse cracks, longitudinal cracks, alligator cracking, and potholes) can be identified with this technique. The goal of this research is to evaluate road surface damage by extracting cracks and potholes from images and videos, categorizing them, and comparing the manual and the automated assessment methods. The proposed method was tested on 50 images; the results showed that it can detect cracks and potholes and identify their severity levels with a moderate validity of 76%. Two kinds of methods, manual and automated, are used for distress evaluation in pavement condition assessment. A committee of three expert engineers in the maintenance department of the Mayoralty of Baghdad performed the manual assessment of a highway in Baghdad city using the Pavement Condition Index (PCI), while the automated assessment processed videos of the road. Compared with the manual method, the automated method achieved an accuracy of 88.44% for this case study. The suggested method proved to be an encouraging solution for identifying cracks and potholes in asphalt pavement and sorting their severity, and could replace manual road damage assessment.
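The abstract above does not disclose the paper's exact pipeline, but the core idea of extracting dark crack pixels and grading severity by their extent can be sketched as follows. This is a minimal illustration, not the authors' method: the intensity threshold and the severity cutoffs are illustrative assumptions.

```python
import numpy as np

def detect_cracks(gray, dark_thresh=60):
    """Mark dark pixels as crack candidates.

    `dark_thresh` is a hypothetical intensity cutoff; cracks and potholes
    typically appear darker than the surrounding pavement surface.
    """
    return gray < dark_thresh

def severity(mask):
    """Grade severity by the fraction of crack pixels (assumed cutoffs)."""
    ratio = mask.mean()
    if ratio < 0.01:
        return "low"
    if ratio < 0.05:
        return "medium"
    return "high"

# Usage: a synthetic 100x100 bright pavement patch with a 2-pixel-wide
# dark vertical crack; 200 of 10,000 pixels are dark (ratio 0.02).
img = np.full((100, 100), 200, dtype=np.uint8)
img[:, 40:42] = 30
mask = detect_cracks(img)
print(severity(mask))  # ratio 0.02 falls in the "medium" band
```

A real system would add noise filtering and morphological cleanup before classification; this sketch only shows the thresholding and area-ratio logic.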
Prediction of material characteristics is currently one of the topical areas of application of machine learning methods. The aim of this work is to develop machine learning models for determining the rheological properties of polymers from experimental stress relaxation curves. The paper presents an overview of the main directions of metaheuristic approaches (local search, evolutionary algorithms) to solving combinatorial optimization problems. Metaheuristic algorithms for solving some important combinatorial optimization problems are described, with special emphasis on the construction of decision trees. A comparative analysis of algorithms for solving the regression problem with the CatBoost Regressor has been carried out. The object of
Sorting and grading agricultural crops by hand is a cumbersome and arduous process, with high costs and increased labor as well as lower sorting and grading quality compared to automatic sorting. This highlights the importance of deep learning, which includes artificial neural networks for prediction, and the importance of automated sorting in terms of efficiency, quality, and accuracy of sorting and grading. This work applies an artificial neural network to predict values and select what is good and suitable among agricultural crops, especially local lemons.
A new method is presented in this work to detect the existence of hidden
data embedded as a secret message in images. This method must be applied only to images that have the same visible properties (similar in appearance), where the human eye cannot detect the difference between them.
This method is based on an Image Quality Metric (the Structural Content metric): the original image is compared with the stego image, and the size of the hidden data is determined. We applied the method to four different images; in each case it detected the hidden data and determined its exact size.
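The Structural Content (SC) metric referred to above is a standard image quality measure: the ratio of the total squared intensity of the original image to that of the distorted (here, stego) image, which equals 1.0 for identical images. A minimal sketch of computing it (the function name is my own, not the paper's):

```python
import numpy as np

def structural_content(original, stego):
    """Structural Content metric: SC = sum(original^2) / sum(stego^2).

    SC == 1.0 when the two images are identical; embedding hidden data
    perturbs pixel values and pushes SC away from 1.0.
    """
    o = original.astype(np.float64)
    s = stego.astype(np.float64)
    return (o ** 2).sum() / (s ** 2).sum()
```

How the deviation of SC from 1.0 is mapped to an estimated payload size is specific to the paper and is not reproduced here.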
Many images require large storage space. Even with the continued evolution of computer storage technology, there is a pressing need to reduce the storage space required for images by compressing them effectively using the wavelet transform method.
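The wavelet approach mentioned above can be illustrated with the simplest member of the family, a one-level Haar transform: the signal is split into pairwise averages (a coarse approximation) and differences (details), and small detail coefficients can then be zeroed to save space. This is a generic sketch under that assumption, not the abstract's specific scheme:

```python
import numpy as np

def haar_1d(x):
    """One level of the Haar wavelet transform on an even-length signal.

    Returns pairwise averages (approximation) followed by pairwise
    half-differences (detail coefficients).
    """
    x = x.astype(np.float64)
    avg = (x[0::2] + x[1::2]) / 2.0
    diff = (x[0::2] - x[1::2]) / 2.0
    return np.concatenate([avg, diff])

def compress(signal, thresh=1.0):
    """Zero out small detail coefficients; fewer nonzeros to store."""
    c = haar_1d(signal)
    c[np.abs(c) < thresh] = 0.0
    return c
```

Real codecs (e.g. JPEG 2000) apply the transform in 2D over multiple levels and follow it with quantization and entropy coding; this sketch shows only the transform-and-threshold core.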
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital media.
Rapid worldwide urbanization and drastic population growth have increased the demand for new road construction, which will cause a substantial amount of natural resources such as aggregates to be consumed. The use of recycled concrete aggregate could be one of the possible ways to offset the aggregate shortage problem and reduce environmental pollution. This paper reports an experimental study of unbound granular material using recycled concrete aggregate for pavement subbase construction. Five percentages of recycled concrete aggregate obtained from two different sources, with originally designed compressive strengths of 20–30 MPa and 31–40 MPa, at three particle size levels, i.e., coarse, fine, and extra fine, were tested.
Image compression is a serious issue in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself and may additionally exploit the limitations of human vision or perception to discard imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the second incorporates near-lossless compression
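The model-plus-residual idea described above can be illustrated with the simplest possible predictor, a left-neighbor model: each pixel is predicted from its predecessor, and only the (typically small) residuals are stored. This is a generic sketch of predictive coding, not the paper's polynomial model:

```python
import numpy as np

def predict_residual(row):
    """Left-neighbor predictor: residual[i] = row[i] - row[i-1].

    Smooth image rows yield small residuals, which compress well.
    """
    r = row.astype(np.int64)
    res = np.empty_like(r)
    res[0] = r[0]           # first pixel has no neighbor; stored as-is
    res[1:] = r[1:] - r[:-1]
    return res

def reconstruct(res):
    """Cumulative sum exactly inverts the left-neighbor predictor."""
    return np.cumsum(res)

# Usage: a smooth row turns into small residuals around zero.
row = np.array([100, 102, 101, 105])
print(predict_residual(row))  # [100, 2, -1, 4]
```

The paper's lossy variant would additionally quantize or threshold the residuals before storage, trading exact reconstruction for a higher compression ratio.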
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are extracted using feature detection/extraction techniques and feature description. In computer vision, features represent informative data. The human eye readily extracts information from a raw image, but a computer cannot directly recognize image information. This is why various feature extraction techniques have been presented and have progressed rapidly. This paper presents a general overview of the feature extraction categories for images.
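One of the most basic feature maps a computer can compute where the eye sees structure directly is the gradient magnitude, which highlights edges. A minimal finite-difference sketch (illustrative only; the survey above covers many richer descriptors):

```python
import numpy as np

def gradient_features(img):
    """Finite-difference gradient magnitude as a basic edge-feature map.

    gx/gy hold horizontal/vertical intensity differences; their
    Euclidean norm is large along edges and zero in flat regions.
    """
    g = img.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:] = g[:, 1:] - g[:, :-1]
    gy[1:, :] = g[1:, :] - g[:-1, :]
    return np.hypot(gx, gy)
```

Detectors such as SIFT or ORB build on exactly this kind of gradient information, adding scale selection and rotation-invariant descriptors.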
Digital tampering identification, which detects picture modification, is a significant area of image analysis research. Over the last five years, this area has grown to achieve exceptional precision using machine learning and deep learning-based strategies. Synthesis and reinforcement-based learning techniques must now evolve to keep pace with the research. However, before undertaking any experimentation, a scientist must first comprehend the current state of the art in that domain. Diverse paths, associated outcomes, and analysis lay the groundwork for successful experimentation and superior results. Before starting experiments, universal image forensics approaches must be thoroughly researched. As a result, this review of various
Recently, increasing material prices, coupled with more acute environmental awareness and the implementation of regulation, have driven a strong movement toward the adoption of sustainable construction technology. In the pavement industry, using low-temperature asphalt mixes and recycled concrete aggregate is viewed as an effective engineering solution to the challenges posed by climate change and sustainable development. However, to date, no research has investigated these two factors simultaneously for pavement material. This paper reports on initial work which attempts to address this shortcoming. First, a novel treatment method is used to improve the quality of recycled concrete coarse aggregates. Thereafter, the treated recycled