The availability of different processing levels for satellite images makes it important to measure their suitability for classification tasks. This study investigates the impact of the Landsat data processing level on the accuracy of land cover classification using a support vector machine (SVM) classifier. The classification accuracies of Landsat 8 (LS8) and Landsat 9 (LS9) data at different processing levels vary notably. For LS9, Collection 2 Level 2 (C2L2) achieved the highest accuracy (86.55%) with the polynomial kernel of the SVM classifier, surpassing the Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) correction (85.31%) and Collection 2 Level 1 (C2L1) (84.93%). The LS8 data exhibit similar behavior. Conversely, when the maximum-likelihood classifier was used, the highest accuracy (83.06%) was achieved with FLAASH. The results demonstrate significant variations in accuracy across land cover classes, which emphasizes the importance of per-class accuracy. They also highlight the critical role of preprocessing techniques and classifier selection in optimizing classification processes and land cover mapping accuracy for remote sensing geospatial applications. Finally, the actual differences in classification accuracy between processing levels are larger than those indicated by the confusion matrix, so considering alternative evaluation methods is critical when reference images are unavailable.
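A minimal sketch of the per-pixel SVM workflow this abstract describes, using scikit-learn with a polynomial kernel. The synthetic band values, class count, and kernel degree are illustrative assumptions, not the study's actual data or settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Stand-in data: rows are pixels, columns are 7 Landsat band values,
# labels are hypothetical land cover classes (e.g., water/vegetation/urban).
X, y = make_classification(n_samples=3000, n_features=7, n_informative=5,
                           n_classes=3, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

clf = SVC(kernel="poly", degree=3)  # polynomial kernel, as in the study
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("overall accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))  # basis for per-class accuracies
```

The confusion matrix is what per-class accuracy is read from; the abstract's closing caveat is that overall figures derived from it can understate the real differences between processing levels.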
Constructing orthogonal matrices of different sizes is both complex and important, as such matrices are used in applications such as image processing and communications (e.g., CDMA and OFDM). In this paper we introduce a new method for constructing orthogonal matrices using tensor products of two or more orthogonal matrices with real and imaginary entries, and we apply it to image and communication-signal processing. The output matrices are also orthogonal, and processing with the new method is very easy compared with classical methods that rely on basic proofs. The results for communication signals and images are reasonable and acceptable, but further research is needed.
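A minimal sketch of the tensor (Kronecker) product construction: if A and B are orthogonal (or unitary, when entries are complex), their Kronecker product is too. The particular 2x2 matrices below are illustrative choices, not the ones used in the paper:

```python
import numpy as np

A = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)   # 2x2 real orthogonal (Hadamard-type)
B = np.array([[1,  1j],
              [1j, 1]]) / np.sqrt(2)   # 2x2 unitary with imaginary entries

C = np.kron(A, B)                      # 4x4 tensor-product matrix

# Verify orthogonality/unitarity: C^H C should equal the identity.
print(np.allclose(C.conj().T @ C, np.eye(4)))  # True
```

Repeating the product with further orthogonal factors yields larger orthogonal matrices of size equal to the product of the factor sizes, which is what makes the construction convenient for block transforms in CDMA/OFDM-style applications.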
This study deals with the orthographic processing ability of homophones, which can account for variance in word recognition and production skills attributable to phonological processing. The study aims at: A) investigating whether students can recognize the correct usage and spelling of different homophones by choosing the appropriate word among those that overlap in both phonology and orthography; B) assessing spelling production by associating the written form of the homophone with its use in a sentence comprehension task. To achieve these aims, two tests were constructed and distributed to 50 first-stage students at the College of Education (Ibn-Rushd) for the academic year 2010-2011. The two tests were exposed to a jury of
Laboratory experience in Iraq with cold asphalt concrete mixtures is very limited, and the design and use of cold-mixed asphalt concrete have had no technical requirements. In this study, two asphalt concrete mixtures used for the base course were prepared in the laboratory using conventional cold-mixing techniques to test cold asphalt mixture (CAM) against aging and moisture susceptibility. Cold asphalt mixture specimens were prepared in the lab with cutback and emulsion binders, different fillers, and curing times. The cutback and emulsion contents and the filler proportions were selected based on the Marshall test results. The first mixture used medium-setting cationic emulsion (MSCE) as a binder, hydrate
Shadow removal is crucial for robot and machine vision because the accuracy of object detection is greatly influenced by the uncertainty and ambiguity of the visual scene. In this paper, we introduce a new algorithm for shadow detection and removal based on Gaussian functions of different shapes, orientations, and spatial extents. Here, the contrast information of the visual scene is utilized for shadow detection and removal through five consecutive processing stages. In the first stage, contrast filtering is performed to obtain the contrast information of the image. The second stage involves a normalization process that suppresses noise and generates a balanced intensity at a specific position compared to the neighboring intensit
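A minimal sketch of the kind of oriented Gaussian contrast filtering and local normalization the first two stages describe. The kernel shape, the rotation parameter, and the normalization formula are assumptions for illustration; the paper's exact equations are not reproduced here:

```python
import numpy as np
from scipy.signal import fftconvolve

def rotated_gaussian(size, sigma_x, sigma_y, theta):
    """Anisotropic 2D Gaussian kernel rotated by angle theta (radians)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    g = np.exp(-(xr**2 / (2 * sigma_x**2) + yr**2 / (2 * sigma_y**2)))
    return g / g.sum()

def contrast_normalize(image, size=31, sigma_x=8.0, sigma_y=3.0,
                       theta=0.0, eps=1e-6):
    g = rotated_gaussian(size, sigma_x, sigma_y, theta)
    local_mean = fftconvolve(image, g, mode="same")
    local_var = fftconvolve((image - local_mean) ** 2, g, mode="same")
    # Contrast: deviation from the local background, balanced against
    # the local spread so intensities are comparable across the scene.
    return (image - local_mean) / np.sqrt(local_var + eps)

img = np.random.default_rng(1).random((128, 128))  # stand-in for a scene
contrast = contrast_normalize(img, theta=np.pi / 4)
```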
The current research deals with the study and analysis of documentary programs on the RT satellite channel. The research problem is identified in a major question: what are the methods of technical processing of documentary programs in satellite channels? The goal is to identify the channel's handling of its documentary programs. The research is descriptive: the researcher used the survey method and the content-analysis method to analyze (12) documentary programs, determined by the comprehensive inventory method within the temporal scope of the research, extending from 1/10/2019 to 29/12/2019. The researcher obtained several results, the most important
In the reverse engineering approach, a massive amount of point data is gathered during data acquisition, which leads to larger file sizes and longer data-handling times. In addition, fitting surfaces to these data points is time-consuming and demands particular skills. In the present work, a method for obtaining the control points of any profile is presented. Several image-modification processes are explained using the SolidWorks program, and a parametric equation of the proposed profile is derived using the Bezier technique with the adopted control points. Finally, the proposed profile was machined using a 3-axis CNC milling machine, and a comparison of dimensions was carried out betwe
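A minimal sketch of evaluating a parametric Bezier profile from its control points via Bernstein polynomials, the standard form of the Bezier technique the abstract names. The control points below are hypothetical; the paper derives its own from the processed image:

```python
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    """Evaluate B(t) = sum_i C(n,i) * t^i * (1-t)^(n-i) * P_i for t in [0, 1]."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = np.zeros((n_samples, P.shape[1]))
    for i in range(n + 1):
        basis = comb(n, i) * t**i * (1 - t)**(n - i)  # Bernstein basis
        curve += basis * P[i]
    return curve

# Hypothetical (x, y) control points of a 2D profile, e.g. for a CNC toolpath.
pts = [(0, 0), (1, 2), (3, 3), (4, 0)]
profile = bezier(pts)  # sampled points along the parametric curve
```

The sampled curve can then be exported as a toolpath or compared point-by-point against the machined part, which is the dimension comparison the abstract ends on.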
In many video and image processing applications, the frames are partitioned into blocks, which are extracted and processed sequentially. In this paper, we propose a fast algorithm for calculating features of overlapping image blocks. We assume the features are projections of the block onto separable 2D basis functions (usually orthogonal polynomials), where we benefit from the symmetry with respect to the spatial variables. The main idea is based on the construction of auxiliary matrices that virtually extend the original image and make it possible to avoid time-consuming computation in loops. These matrices can be pre-calculated, stored, and used repeatedly, since they are independent of the image itself. We validated experimentally th
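A minimal sketch of the separable block projections the abstract refers to: with a 1D orthonormal polynomial basis stored as rows of U, the feature matrix of a block B is U @ B @ U.T. This shows only the straightforward per-block computation as a reference; the paper's contribution is avoiding this loop via precomputed auxiliary matrices, which is not reproduced here:

```python
import numpy as np

def polynomial_basis(n, order):
    """Orthonormal 1D polynomial basis on n points (QR of a Vandermonde matrix)."""
    x = np.linspace(-1, 1, n)
    V = np.vander(x, order, increasing=True)  # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                    # orthonormalize the columns
    return Q.T                                # shape (order, n)

block_size, order = 8, 4
U = polynomial_basis(block_size, order)

img = np.random.default_rng(2).random((64, 64))  # stand-in image

# Features of every overlapping block (naive reference implementation).
rows = cols = 64 - block_size + 1
features = np.empty((rows, cols, order, order))
for r in range(rows):
    for c in range(cols):
        B = img[r:r + block_size, c:c + block_size]
        features[r, c] = U @ B @ U.T  # projection onto the separable 2D basis
```

Because U depends only on the block size and basis order, it is precomputable and image-independent, the same property the paper exploits for its auxiliary matrices.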
