Image classification is the process of finding common features in images from various classes and using them to categorize and label those images. The key obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of the approach taken here is to evaluate the convolutional features retrieved from deep learning models and to train machine learning classifiers on them. This study proposes a new "hybrid learning" approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven classifiers. A hybrid supervised learning system is suggested that exploits the rich intermediate features extracted by deep learning, compared with traditional feature extraction, to boost classification accuracy and performance parameters. All classifiers are given the same set of features in order to determine and verify which classifier yields the best classification under the proposed hybrid learning approach. To achieve this, the performance of the classifiers was assessed on a genuine dataset captured by our camera system. The simulation results show that the support vector machine (SVM) has a mean square error of 0.011, a total accuracy of 98.80%, and an F1 score of 0.99. The LR classifier comes second with a mean square error of 0.035, a total accuracy of 96.42%, and an F1 score of 0.96, and the ANN classifier comes third with a mean square error of 0.047, a total accuracy of 95.23%, and an F1 score of 0.94. RF, WKNN, DT, and NB follow with accuracy ratios of 91.66%, 90.47%, 79.76%, and 75%, respectively. As a result, the main contribution is the enhancement of the classification performance parameters for images of varying brightness and clarity using the proposed hybrid learning approach.
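As an illustration of the hybrid pipeline described above, the sketch below extracts VGG-16 convolutional features and feeds them to an SVM. The library choices (Keras, scikit-learn) and the placeholder dataset variables are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of the "hybrid learning" pipeline: VGG-16 convolutional
# features feeding a classical classifier. Library choices and the image
# list are assumptions, not the study's exact setup.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# VGG-16 without its dense head acts as a fixed convolutional feature extractor.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths):
    """Return one 512-dimensional VGG-16 feature vector per image path."""
    feats = []
    for p in paths:
        img = image.load_img(p, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        feats.append(extractor.predict(x, verbose=0).ravel())
    return np.vstack(feats)

# `paths` and `labels` are placeholders for the camera dataset used in the study.
# X = extract_features(paths)
# X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, stratify=labels)
# clf = SVC(kernel="rbf")   # the other six classifiers plug into the same features
# clf.fit(X_tr, y_tr)
# pred = clf.predict(X_te)
# print(accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro"))
```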
Medical image segmentation has been one of the most actively studied fields over the past few decades. With the development of modern imaging modalities such as magnetic resonance imaging (MRI) and computed tomography (CT), physicians and technicians nowadays have to process an ever-increasing number and size of medical images. Efficient and accurate computational segmentation algorithms are therefore necessary to extract the desired information from these large data sets. Moreover, sophisticated segmentation algorithms can help physicians better delineate the anatomical structures present in the input images, enhance the accuracy of medical diagnosis, and facilitate the best treatment planning. Many of the proposed algorithms could perform w
Pavement crack and pothole identification are important tasks in transportation maintenance and road safety. This study offers a novel technique for automatic asphalt pavement crack and pothole detection based on image processing. Different types of cracks (transverse, longitudinal, alligator-type, and potholes) can be identified with such techniques. The goal of this research is to evaluate road surface damage by extracting cracks and potholes, categorizing them from images and videos, and comparing the manual and automated methods. The proposed method was tested on 50 images. The results obtained from image processing showed that the proposed method can detect cracks and potholes and identify their severity levels wit
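A generic image-processing sketch of the kind of crack/pothole highlighting described above is shown below. The thresholds, kernel size, severity cut-offs, and the input file name ("pavement.jpg") are illustrative assumptions, not the pipeline evaluated in the study.

```python
# Simple edge-based defect highlighting for a pavement photo (illustrative only).
import cv2
import numpy as np

img = cv2.imread("pavement.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

# Cracks appear as thin dark edges: detect them, then close small gaps.
edges = cv2.Canny(blur, 50, 150)
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Group edge pixels into defects and use contour area as a crude severity proxy.
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area > 100:  # ignore small noise
        severity = "high" if area > 2000 else "medium" if area > 500 else "low"
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
        print(f"defect at ({x},{y}) area={area:.0f} severity={severity}")

cv2.imwrite("pavement_annotated.jpg", img)
```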
An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. The objective of Content-Based Image Retrieval (CBIR) methods is essentially to extract, from large image databases, a specified number of images similar in visual and semantic content to a so-called query image. The researchers developed a new retrieval mechanism that is mainly based on two procedures. The first procedure relies on extracting the statistical features of the original, traditional image using the histogram and statistical characteristics (mean, standard deviation). The second procedure relies on the T-
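The sketch below illustrates the first procedure only: describing each image by its grey-level histogram plus mean and standard deviation, then ranking the database by distance to the query descriptor. The file names and the Euclidean distance are illustrative assumptions.

```python
# Histogram + mean/std descriptor and a simple nearest-descriptor retrieval.
import cv2
import numpy as np

def describe(path, bins=64):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
    hist /= hist.sum()                                   # normalised histogram
    return np.concatenate([hist, [gray.mean() / 255.0, gray.std() / 255.0]])

def retrieve(query_path, database_paths, top_k=5):
    q = describe(query_path)
    scored = [(np.linalg.norm(q - describe(p)), p) for p in database_paths]
    return [p for _, p in sorted(scored)[:top_k]]        # closest descriptors first

# Example: retrieve("query.jpg", ["db1.jpg", "db2.jpg", "db3.jpg"])
```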
With the great diversity of contemporary approaches and visions, and the development that has swept the field of graphic design, it has become imperative for practitioners of contemporary design to research and investigate in accordance with modern intellectual treatises and methods of criticism, because design work requires that both the designer and the recipient know the mechanics of typographic text analysis in a world dense with texts and images of varied vocabulary and graphics. The designer, before anyone else, manages the process of analysis in order to understand the visual messages offered to others and the intent behind them. In the midst of this world, semiotic approaches directly overlap with such diverse offer
We explore the transform coefficients of fractal coding and exploit a new method to improve the compression capabilities of these schemes. In most standard encoder/decoder systems, quantization and de-quantization are managed as a separate step; here we introduce a new method in which they are managed simultaneously with the encoding. Additional compression is achieved by this method while maintaining high image quality, as shown later.
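One way to read "managed simultaneously" is to quantize the affine transform coefficients inside the fractal matching loop, so the stored quantized values are the ones used to choose the best domain block. The sketch below shows that idea under assumed block handling, bit budgets, and coefficient ranges; it is not the paper's exact scheme.

```python
# Folding quantization into fractal encoding: the scale/offset of each
# range-domain pairing is quantized *before* the matching error is computed.
import numpy as np

def quantize(value, lo, hi, bits):
    """Uniformly quantize into 2**bits levels over [lo, hi]; return the
    reconstructed (de-quantized) value."""
    levels = 2 ** bits - 1
    q = np.clip(round((value - lo) / (hi - lo) * levels), 0, levels)
    return lo + q * (hi - lo) / levels

def encode_range_block(range_blk, domain_blks, s_bits=5, o_bits=7):
    """Pick the domain block whose *quantized* affine map best predicts range_blk."""
    best = None
    r = range_blk.ravel().astype(float)
    for idx, d in enumerate(domain_blks):
        dv = d.ravel().astype(float)
        s, o = np.polyfit(dv, r, 1)                # least-squares r ≈ s*d + o
        sq = quantize(s, -1.5, 1.5, s_bits)        # quantize inside the search
        oq = quantize(o, 0.0, 255.0, o_bits)
        err = np.sum((r - (sq * dv + oq)) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, sq, oq)
    return best  # (error, domain index, quantized scale, quantized offset)
```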
In this paper, three techniques for image compression are implemented. The proposed techniques are a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform. Daubechies and Haar wavelets are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured for each level. To obtain good compression, image data properties were measured, such as image entropy (He) and percent root-
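A hedged sketch of the 3-D two-level DWT step is given below using PyWavelets: decompose a volume, discard small detail coefficients, and estimate the resulting compression ratio and the image entropy He. The wavelet name matches the Haar family mentioned above; the threshold and the toy volume are assumptions.

```python
# 3-D two-level DWT compression sketch with PyWavelets (illustrative only).
import numpy as np
import pywt

volume = np.random.rand(64, 64, 64)                        # stand-in for 3-D image data

coeffs = pywt.wavedecn(volume, wavelet="haar", level=2)    # 3-D, two levels
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = 0.05 * np.max(np.abs(arr))
arr_compressed = np.where(np.abs(arr) > threshold, arr, 0.0)

# Crude compression ratio: total coefficients over surviving (non-zero) ones.
cr = arr.size / np.count_nonzero(arr_compressed)
print(f"compression ratio ~ {cr:.2f}")

# Image entropy He of the original volume (8-bit histogram).
hist, _ = np.histogram((volume * 255).astype(np.uint8), bins=256, range=(0, 256))
p = hist / hist.sum()
He = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"entropy He ~ {He:.2f} bits")

# Reconstruction from the thresholded coefficients.
recon = pywt.waverecn(
    pywt.array_to_coeffs(arr_compressed, slices, output_format="wavedecn"),
    wavelet="haar",
)
```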
Grain size and shape are important yield indicators, and wheat grain width provides a hint for re-examining the visual markers of grain weight. A digital vernier caliper was used to measure length, width, and thickness. The data set consisted of 1296 wheat grains, with measurements for each grain; in addition, the average weight (We) of every twenty-four grains was measured and recorded. The measured length (L), width (W), thickness (T), weight (We), and volume (V) were used to develop two mathematical models that were passed to multiple regression. The results of the weight model demonstrated that the length and width of the grai
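The multiple-regression step can be sketched as below: fit grain weight against length, width, and thickness. The measurements here are random placeholders; the actual models were fitted on the 1296-grain data set.

```python
# Illustrative multiple linear regression of grain weight on L, W, T.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
L = rng.uniform(5.0, 8.0, 200)        # grain length (mm), placeholder values
W = rng.uniform(2.5, 4.0, 200)        # grain width (mm)
T = rng.uniform(2.0, 3.5, 200)        # grain thickness (mm)
We = 0.01 * L * W * T + rng.normal(0, 0.002, 200)   # synthetic weight (g)

X = np.column_stack([L, W, T])
model = LinearRegression().fit(X, We)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2:", r2_score(We, model.predict(X)))
```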
In the reverse engineering approach, a massive amount of point data is gathered during data acquisition, which leads to larger file sizes and longer data-handling times. In addition, fitting surfaces to these data points is time-consuming and demands particular skills. In the present work, a method for obtaining the control points of any profile is presented. Several image-modification steps were carried out using the SolidWorks program, and a parametric equation of the proposed profile was derived using the Bezier technique with the adopted control points. Finally, the proposed profile was machined using a 3-axis CNC milling machine, and a comparison of dimensions was carried out betwe
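The Bezier step can be illustrated by evaluating the parametric profile defined by a set of control points via Bernstein polynomials, as in the sketch below. The control points shown are hypothetical, not the ones adopted in the study.

```python
# Evaluate a Bezier curve B(t) = sum_i C(n,i) * t^i * (1-t)^(n-i) * P_i.
import numpy as np
from math import comb

def bezier(control_points, num=100):
    """Return `num` points on the Bezier curve defined by the control points."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, num)
    curve = np.zeros((num, P.shape[1]))
    for i in range(n + 1):
        basis = comb(n, i) * t**i * (1 - t)**(n - i)   # Bernstein basis
        curve += np.outer(basis, P[i])
    return curve

# Hypothetical 2-D control points of a profile; the sampled curve could then be
# exported as a toolpath for the 3-axis CNC machining step.
profile = bezier([(0, 0), (10, 25), (30, 25), (40, 0)], num=50)
print(profile[:3])
```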
With the increasing development of digital media and communication, the need for protection and security methods has become a very important factor, since exchanging and transmitting data over communication channels demands effort to protect these data from unauthorized access.
This paper presents a new method to protect a color image from unauthorized access using watermarking. The watermarking algorithm hides the encoded mark image in the frequency domain using the Discrete Cosine Transform. The main principle of the algorithm is to encode a repeated mark in the cover color image. The watermark image bits are spread by repeating the mark and arranging it in an encoded manner, which gives the algorithm more robustness and security. The propos
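A minimal sketch of DCT-domain watermark embedding in a color image is given below: the watermark bits are repeated across 8x8 blocks of one channel and a mid-frequency coefficient is nudged for each bit. The block size, coefficient position, channel choice, and strength are illustrative assumptions, not the paper's exact scheme.

```python
# Block-wise DCT watermark embedding (illustrative sketch).
import cv2
import numpy as np

def embed_watermark(cover_path, bits, strength=8.0):
    img = cv2.imread(cover_path).astype(np.float32)
    blue = img[:, :, 0]                      # embed in the blue channel
    h, w = blue.shape
    k = 0
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = cv2.dct(blue[y:y+8, x:x+8])
            bit = bits[k % len(bits)]        # repeat the mark across all blocks
            block[3, 4] += strength if bit else -strength
            blue[y:y+8, x:x+8] = cv2.idct(block)
            k += 1
    img[:, :, 0] = blue
    return np.clip(img, 0, 255).astype(np.uint8)

# Example: watermarked = embed_watermark("cover.png", bits=[1, 0, 1, 1, 0])
#          cv2.imwrite("watermarked.png", watermarked)
```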