Pavement crack and pothole identification is an important task in transportation maintenance and road safety. This study presents a novel image-processing technique for the automatic detection of cracks and potholes in asphalt pavement. The technique can identify different distress types, including transverse, longitudinal, and alligator cracks as well as potholes. The goal of this research is to evaluate road surface damage by extracting and categorizing cracks and potholes from images and videos, and by comparing manual and automated assessment methods. The proposed method was tested on 50 images; the results showed that it can detect cracks and potholes and identify their severity levels with a moderate validity of 76%. Two kinds of distress evaluation, manual and automated, were used to assess pavement condition. A committee of three expert engineers from the maintenance department of the Mayoralty of Baghdad performed the manual assessment of a highway in Baghdad city using the Pavement Condition Index (PCI), while the automated assessment processed videos of the same road. Compared with the manual method, the automated method reached an accuracy of 88.44% for this case study. The suggested method therefore appears to be an encouraging solution for identifying cracks and potholes in asphalt pavements and grading their severity, and could replace manual road damage assessment.
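The core idea of such detection pipelines can be sketched in a few lines. The code below is a minimal illustration, not the paper's actual pipeline: it flags pixels that are much darker than the surrounding asphalt and maps the flagged fraction to a coarse severity level. The function names, the `k` and ratio constants, and the synthetic pavement image are all assumptions for demonstration.

```python
import numpy as np

def detect_crack_mask(gray, k=2.0):
    """Flag pixels much darker than the pavement background.

    Cracks and potholes appear as dark regions on bright asphalt, so a
    simple statistical threshold (mean - k * std) isolates candidates.
    """
    thr = gray.mean() - k * gray.std()
    return gray < thr

def severity_level(mask):
    """Map the fraction of flagged pixels to a coarse severity label."""
    ratio = mask.mean()
    if ratio < 0.01:
        return "low"
    elif ratio < 0.05:
        return "medium"
    return "high"

# Synthetic 100x100 pavement patch: bright asphalt with a dark crack band.
rng = np.random.default_rng(0)
img = rng.normal(180, 10, (100, 100))
img[:, 48:52] = 40                      # longitudinal crack, 4 px wide
mask = detect_crack_mask(img)
severity = severity_level(mask)
```

A real system would add morphological cleanup and shape analysis to distinguish transverse, longitudinal, and alligator cracks from potholes; the sketch only shows the thresholding and severity-grading steps.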
This paper examines how the quality of the gradient of an image degrades as a function of noise. The cross-correlation coefficient (ccc) between the derivatives of the original image before and after introducing noise shows a dramatic decline compared with the corresponding coefficient computed on the images themselves before taking derivatives. Mathematical equations were constructed to model the relation between the ccc and the noise parameter.
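The effect described above is easy to reproduce numerically. The sketch below (an assumption-laden illustration, not the paper's exact experiment) computes the cross-correlation coefficient between a clean and a noisy image, and between their gradient magnitudes, showing that the gradient correlation drops much faster:

```python
import numpy as np

def ccc(a, b):
    """Cross-correlation coefficient between two images (flattened)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def gradient_mag(img):
    """Gradient magnitude via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                   np.linspace(0, 4 * np.pi, 128))
img = 100 * np.sin(x) * np.cos(y) + 128        # smooth test image
noisy = img + rng.normal(0, 10, img.shape)     # additive Gaussian noise

c_img = ccc(img, noisy)                        # stays close to 1
c_grad = ccc(gradient_mag(img), gradient_mag(noisy))  # declines sharply
```

Differentiation amplifies high-frequency content, and additive noise is spectrally flat, so the gradient's signal-to-noise ratio is much worse than the image's; this is why `c_grad` falls well below `c_img`.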
Image captioning is the process of adding an explicit, coherent description of the contents of an image. This is done using recent deep learning techniques, combining computer vision and natural language processing, to understand the contents of the image and generate an appropriate caption. Multiple datasets suitable for many applications have been proposed. The biggest challenge for natural language processing researchers is that these datasets do not cover all languages. Researchers have therefore translated the most famous English datasets with Google Translate so that image contents can be understood in their mother tongue. The review proposed in this paper aims to enhance the understanding o
In this paper, a method is proposed to increase the compression ratio for color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information, the compression ratio is reduced to prevent loss of information, while in smooth regions that carry little information, a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
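The block-adaptive idea can be sketched with a plain DCT. The following is a minimal illustration under assumptions of my own (8x8 blocks, block variance as the "importance" measure, fixed 6x6 vs 2x2 coefficient retention, image dimensions divisible by 8); it is not the paper's actual rate-allocation scheme:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are the cosine basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0] /= np.sqrt(2.0)
    return c

def compress_block(block, keep, C):
    """2-D DCT the block and zero all but the top-left keep x keep coefficients."""
    coef = C @ block @ C.T
    mask = np.zeros_like(coef)
    mask[:keep, :keep] = 1.0
    return C.T @ (coef * mask) @ C

def adaptive_compress(img, var_thr=100.0):
    """Keep 6x6 coefficients in detailed blocks, only 2x2 in smooth ones."""
    C = dct_matrix(8)
    out = np.empty(img.shape, dtype=float)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            b = img[i:i + 8, j:j + 8].astype(float)
            keep = 6 if b.var() > var_thr else 2   # block-importance proxy
            out[i:i + 8, j:j + 8] = compress_block(b, keep, C)
    return out
```

Discarding more coefficients in smooth blocks raises the overall compression ratio while concentrating the bit budget where detail matters, which is the essence of the proposed approach.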
Background: Determination of sex and estimation of stature from the skeleton are vital to medicolegal investigations. The skull is composed of hard tissue and is the best-preserved part of the skeleton after death; hence, in many cases it is the only part available for forensic examination. A lateral cephalogram is ideal for skull examination, as it shows various anatomical landmarks in a single radiograph. This study was undertaken to evaluate the accuracy of a digital cephalometric system as a quick, easy, and reproducible supplementary tool for sex determination in an Iraqi sample across different age ranges, using certain linear and angular craniofacial measurements to predict sex. Materials and Method: The sample consisted of 113 true lateral cephalograms
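Sex prediction from linear and angular measurements is typically a two-class discriminant problem. The sketch below is purely illustrative: it applies Fisher's linear discriminant to two hypothetical measurement groups, and every number in it (the feature choice, means, and spreads) is invented for demonstration, not taken from the study:

```python
import numpy as np

def fisher_lda(males, females):
    """Fisher linear discriminant from two groups of measurement vectors.

    Returns projection weights w and a midpoint threshold: a new sample x
    is classified as male when w @ x exceeds the threshold.
    """
    sw = np.cov(males, rowvar=False) + np.cov(females, rowvar=False)
    w = np.linalg.solve(sw, males.mean(axis=0) - females.mean(axis=0))
    thr = float(w @ (males.mean(axis=0) + females.mean(axis=0)) / 2)
    return w, thr

# Hypothetical (cranial length mm, gonial angle deg) samples -- illustrative
# values only, not the study's data.
rng = np.random.default_rng(4)
males = rng.normal([185.0, 122.0], [4.0, 3.0], (40, 2))
females = rng.normal([176.0, 128.0], [4.0, 3.0], (40, 2))
w, thr = fisher_lda(males, females)
```

The discriminant direction weights each craniofacial measurement by how well it separates the groups relative to their within-group scatter, which is the standard statistical framing for this kind of accuracy evaluation.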
In this paper, an algorithm is introduced through which more data can be embedded in the spatial domain than with regular methods. The secret data are compressed using Huffman coding, and the compressed data are then embedded using the Laplacian sharpening method. Laplacian filters are used to determine effective hiding places: based on a threshold value, the locations with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information through Huffman coding while at the same time increasing the security of the algorithm by hiding the data in places with the highest edge values, where changes are less noticeable.
The perform
This paper suggests two methods of recognition, both of which depend on extracting principal component analysis features in the wavelet (multi-wavelet) domain. In the first method, the recognition space is enlarged by calculating the eigenstructure of the diagonal sub-image details at five depths of the wavelet transform; the effective eigen range selected here forms the basis for image recognition. In the second method, an invariant wavelet space at all projections is obtained. A new recursive form that represents an invariant space for any image resolution obtained from the wavelet transform is adopted. In this way, all the major problems that affect the image and
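The first method's feature pipeline can be sketched with a plain Haar wavelet in place of the paper's multi-wavelet. The code below is an assumption-laden illustration: it extracts the diagonal (HH) detail subband at five depths, stacks them into a feature vector, and computes a PCA basis over a set of such vectors via SVD:

```python
import numpy as np

def haar_step(img):
    """One Haar DWT level: returns (LL, HH), approximation and diagonal detail."""
    a = img[0::2, :] + img[1::2, :]
    d = img[0::2, :] - img[1::2, :]
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, hh

def diagonal_features(img, levels=5):
    """Concatenate the diagonal (HH) subbands over several decomposition depths."""
    feats, cur = [], img.astype(float)
    for _ in range(levels):
        cur, hh = haar_step(cur)
        feats.append(hh.ravel())
    return np.concatenate(feats)

def pca_basis(X, k):
    """Top-k principal directions of the row-wise feature matrix X."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]
```

Projecting each image's feature vector onto the retained eigen range then gives the compact representation used for recognition.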
In recent years, remote sensing applications have attracted great interest because they offer many advantages, benefits, and possibilities. Satellites are among the most important remote sensing platforms: they provide multispectral images that allow the study of many problems, such as changes in the ecological cover or the biodiversity of the earth's surface, and they illustrate the biological diversity of the studied areas by presenting different regions of a scene as a function of the characteristic wavelength. Thresholding is a commonly used operation for image segmentation; it seeks to extract a monochrome image from a gray-level image by segmenting the image into two regions (for
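Two-region thresholding of a gray-level scene is commonly done with Otsu's method, which picks the cut that maximizes between-class variance. The sketch below (one standard choice of thresholding rule, with a synthetic bimodal scene whose class means are assumptions for demonstration) shows the whole operation:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: threshold maximizing the between-class variance
    of the two resulting pixel populations (8-bit input)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

rng = np.random.default_rng(3)
# Synthetic bimodal scene: dark cover (~60) and bright bare ground (~190).
img = np.where(rng.random((128, 128)) < 0.5,
               rng.normal(60, 15, (128, 128)),
               rng.normal(190, 15, (128, 128))).clip(0, 255).astype(np.uint8)
t = otsu_threshold(img)
binary = img > t        # the extracted monochrome (two-region) image
```

For a multispectral scene the same operation is applied per band, or to an index image derived from several bands.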
The usefulness of remote sensing techniques in environmental engineering and other sciences is to save time, cost, and effort, and to collect more accurate information under a monitoring mechanism. In this research, a number of statistical models were used to determine the best relationships between each water quality parameter and the mean reflectance values generated for different channels of a radiometer operated to simulate Thematic Mapper satellite imagery. Among these models are regression models, which enable us to ascertain and utilize a relation between a variable of interest, called the dependent variable, and one or more independent variables.
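A single-band regression of this kind can be sketched with ordinary least squares. The code below is illustrative: the choice of turbidity as the water quality parameter and all the synthetic numbers are assumptions, not the study's data:

```python
import numpy as np

def fit_linear(reflectance, quality):
    """Ordinary least squares fit: quality ~ b0 + b1 * reflectance.

    Returns the intercept, slope, and coefficient of determination R^2.
    """
    A = np.column_stack([np.ones_like(reflectance), reflectance])
    coef, *_ = np.linalg.lstsq(A, quality, rcond=None)
    pred = A @ coef
    ss_res = float(np.sum((quality - pred) ** 2))
    ss_tot = float(np.sum((quality - quality.mean()) ** 2))
    return float(coef[0]), float(coef[1]), 1.0 - ss_res / ss_tot

# Illustrative synthetic data: turbidity (NTU) rising with band reflectance.
rng = np.random.default_rng(6)
reflectance = rng.uniform(0.05, 0.35, 60)
turbidity = 2.0 + 30.0 * reflectance + rng.normal(0, 0.3, 60)
b0, b1, r2 = fit_linear(reflectance, turbidity)
```

Extending `A` with additional columns (one per radiometer channel) turns this into the multiple-regression case with several independent variables; R^2 is then the model-selection criterion for choosing the best channel combination.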