Three-dimensional (3D) image and medical image processing, which are forms of big data analysis, have attracted significant attention in recent years, and efficient 3D object recognition techniques would benefit both. To date, however, most proposed 3D object recognition methods face a major challenge: high computational complexity, because complexity and execution time grow as the dimensionality of the object grows, as is the case for 3D objects. Finding a method that achieves high recognition accuracy at low computational cost is therefore essential. To this end, this paper presents an efficient method for 3D object recognition with low computational complexity. Specifically, the proposed method uses a fast overlapped block-processing technique that handles higher-order polynomials and high-dimensional objects and reduces the computational complexity of feature extraction. The method exploits Charlier polynomials and their moments for feature extraction, together with a support vector machine (SVM) for classification. The presented method is evaluated on the well-known McGill benchmark dataset and compared with existing 3D object recognition methods. The results show that the proposed approach achieves high recognition rates under different noisy environments, has the potential to mitigate noise distortion, and outperforms existing methods in computation time under both noise-free and noisy conditions.
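As a rough illustration of the overlapped block-processing idea, the sketch below extracts features from overlapping 3D blocks and classifies them with an SVM; the per-block statistics are only a stand-in for the Charlier moment descriptors, and the block size, step, and synthetic data are illustrative assumptions.

```python
# Minimal sketch of overlapped block-wise feature extraction followed by SVM
# classification. The per-block mean/std is a placeholder for Charlier moments.
import numpy as np
from sklearn.svm import SVC

def overlapped_block_features(volume, block_size=8, step=4):
    """Slide an overlapping window over a 3D volume and collect per-block statistics."""
    d, h, w = volume.shape
    feats = []
    for z in range(0, d - block_size + 1, step):
        for y in range(0, h - block_size + 1, step):
            for x in range(0, w - block_size + 1, step):
                block = volume[z:z + block_size, y:y + block_size, x:x + block_size]
                # Stand-in for the Charlier moment descriptors of this block.
                feats.extend([block.mean(), block.std()])
    return np.asarray(feats)

# Usage with synthetic data: 20 random 32x32x32 "objects" from two classes.
rng = np.random.default_rng(0)
X = np.stack([overlapped_block_features(rng.random((32, 32, 32))) for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```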
A hand gesture recognition system provides a robust and innovative solution for nonverbal communication through human-computer interaction. Deep learning models have excellent potential for use in recognition applications. To overcome related issues, most previous studies have proposed new model architectures or fine-tuned pre-trained models. Furthermore, these studies relied on a single standard dataset for both training and testing; thus, the accuracy of these studies is reasonable. Unlike those works, the current study investigates two deep learning models with intermediate layers for recognizing static hand gesture images. Both models were tested on different datasets, adjusted to suit each dataset, and then trained under different m
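As a loose illustration of cross-dataset evaluation (the paper's two models are not specified here), the sketch below trains a small CNN on one synthetic gesture set and evaluates it on another; the architecture, input size, and class count are assumptions, not the study's models.

```python
# Minimal sketch of training a small CNN gesture classifier on one dataset and
# evaluating it on a different one (cross-dataset testing).
import numpy as np
import tensorflow as tf

num_classes = 10  # assumed number of static gestures
model = tf.keras.Sequential([
    tf.keras.Input((64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),   # intermediate pooling layer
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Stand-ins for two distinct datasets: train on A, test on B.
xa, ya = np.random.rand(100, 64, 64, 1), np.random.randint(0, num_classes, 100)
xb, yb = np.random.rand(40, 64, 64, 1), np.random.randint(0, num_classes, 40)
model.fit(xa, ya, epochs=1, verbose=0)
print(model.evaluate(xb, yb, verbose=0))   # [loss, accuracy] on the unseen dataset
```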
This investigation proposes an offline signature identification system that uses rotation compensation based on features stored in a database. The proposed system contains five principal stages: (1) data acquisition, (2) signature data file loading, (3) signature preprocessing, (4) feature extraction, and (5) feature matching. Feature extraction includes determining the center-point coordinates and the rotation-compensation angle (θ), applying the rotation compensation, and determining the discriminating features and statistical condition. In this work, seven essential collections of features are utilized to acquire the characteristics: (i) density (D), (ii) average (A), (iii) s
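A minimal sketch of the rotation-compensation step, assuming the orientation angle θ is estimated from second-order image moments of the signature ink; the function names, toy data, and moment-based angle estimate are illustrative assumptions rather than the paper's exact procedure.

```python
# Estimate the signature's centre point and principal-axis angle from image
# moments, then rotate the binary signature image to a canonical orientation.
import numpy as np
from scipy import ndimage

def rotation_compensate(binary_sig):
    ys, xs = np.nonzero(binary_sig)              # foreground (ink) pixels
    cy, cx = ys.mean(), xs.mean()                # centre-point coordinates
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # orientation angle (radians)
    # Rotate by -theta so every signature is compared in the same orientation.
    return ndimage.rotate(binary_sig.astype(float), np.degrees(-theta), reshape=False, order=0)

sig = np.zeros((64, 128)); sig[20:30, 10:110] = 1    # toy "signature" stroke
compensated = rotation_compensate(sig)
print(compensated.shape)
```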
Face recognition is a crucial biometric technology used in various security and identification applications. Ensuring accuracy and reliability in facial recognition systems requires robust feature extraction and secure processing methods. This study presents an accurate facial recognition model using a feature extraction approach within a cloud environment. First, the facial images undergo preprocessing, including grayscale conversion, histogram equalization, Viola-Jones face detection, and resizing. Then, features are extracted using a hybrid approach that combines Linear Discriminant Analysis (LDA) and the Gray-Level Co-occurrence Matrix (GLCM). The extracted features are encrypted using the Data Encryption Standard (DES) for security
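A minimal sketch of the preprocessing and GLCM parts of this pipeline using OpenCV and scikit-image; the LDA projection and DES encryption would follow, and the synthetic input image and parameters are assumptions.

```python
# Grayscale conversion, histogram equalization, Viola-Jones detection, resizing,
# then GLCM texture statistics for one input image.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def extract_face_features(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)           # grayscale conversion
    gray = cv2.equalizeHist(gray)                                  # histogram equalization
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, 1.1, 5)                 # Viola-Jones detection
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))            # resizing
    glcm = graycomatrix(face, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # A few common GLCM descriptors; the hybrid LDA+GLCM vector would extend these.
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# Stand-in input; a real face image from the dataset would be loaded here
# (random noise will most likely yield no detection and return None).
img = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
print(extract_face_features(img))
```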
This work implements a face recognition system with two stages: feature extraction and classification. The feature extraction stage consists of Self-Organizing Maps (SOMs) in a hierarchical format, in conjunction with Gabor filters and local image sampling. Different types of SOMs were used, and their results were compared.
The second stage, classification, consists of a self-organizing map neural network whose goal is to find the image most similar to the input image. The proposed algorithm was implemented using C++ packages, and this work is a successful classifier for a face database consisting of 20
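A minimal sketch of the two-stage idea, with Gabor filtering plus local sampling for features and a tiny hand-rolled SOM for matching; the original work uses hierarchical SOMs in C++, so the grid size, filter parameters, and synthetic data here are illustrative assumptions.

```python
# Gabor features with local sampling, then a small 1-D SOM used to find the
# best-matching unit for a query face.
import numpy as np
import cv2

def gabor_features(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    feats = []
    for t in thetas:
        kern = cv2.getGaborKernel((9, 9), 2.0, t, 6.0, 0.5)       # Gabor filter bank
        resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
        feats.append(cv2.resize(resp, (8, 8)).ravel())             # local image sampling
    return np.hstack(feats)

def train_som(data, grid=4, iters=200, lr=0.5, rng=np.random.default_rng(0)):
    w = rng.random((grid, data.shape[1]))                          # 1-D SOM for brevity
    for it in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))                # best-matching unit
        for j in range(grid):
            h = np.exp(-((j - bmu) ** 2) / 2.0)                    # neighbourhood function
            w[j] += lr * (1 - it / iters) * h * (x - w[j])
    return w

faces = np.random.randint(0, 255, (20, 32, 32), dtype=np.uint8)   # stand-in face database
X = np.stack([gabor_features(f) for f in faces])
som = train_som(X)
query = gabor_features(faces[3])
print("winning unit:", np.argmin(((som - query) ** 2).sum(axis=1)))
```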
Zernike moments have been widely used in many shape-based image retrieval studies due to their powerful shape representation. However, their strengths and weaknesses have not been clearly highlighted in previous studies, so their representational power could not be fully utilized. In this paper, a method to fully capture the shape-representation properties of Zernike moments is implemented and tested on a single object in binary and grey-level images. The proposed method works by determining the boundary of the shape object and then resizing the object to the boundary of the image. Three case studies were made. Case 1 is the Zernike moments implementation on the original shape-object image. In Case 2, the centroid of the s
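A minimal sketch of the rescaling idea, assuming the mahotas library for the Zernike computation; the toy shape, output size, and moment degree are illustrative assumptions.

```python
# Find the object's bounding box, resize the object so it fills the image
# boundary, then compute Zernike moments of the rescaled shape.
import numpy as np
import mahotas

def zernike_after_rescale(binary_img, out_size=128, degree=8):
    ys, xs = np.nonzero(binary_img)
    crop = binary_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]   # object boundary
    # Nearest-neighbour resize of the cropped object to the full image boundary.
    ry = np.linspace(0, crop.shape[0] - 1, out_size).astype(int)
    rx = np.linspace(0, crop.shape[1] - 1, out_size).astype(int)
    resized = crop[np.ix_(ry, rx)].astype(float)
    return mahotas.features.zernike_moments(resized, radius=out_size // 2, degree=degree)

img = np.zeros((256, 256), bool); img[100:140, 80:200] = True          # toy shape
print(zernike_after_rescale(img)[:5])
```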
In this paper, an approach to object tracking inspired by the human oculomotor system is proposed and verified experimentally. The developed approach is divided into two phases: a fast-tracking (saccadic) phase and a smooth-pursuit phase. In the first phase, the field of view is segmented into four regions analogous to the retinal periphery in the oculomotor system. When the object of interest enters these regions, the developed vision system responds by changing the pan and tilt angles so that the object lies in the fovea area, and then the second phase is activated. A fuzzy logic method is implemented in the saccadic phase as an intelligent decision maker to select the values of the pan and tilt angle based
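A minimal sketch of the saccadic-phase decision, using a hand-rolled fuzzy rule base that maps the object's offset from the image centre to a pan-angle correction; the membership functions, rule outputs, and gains are assumptions, not the paper's controller.

```python
# Tiny fuzzy decision maker: fuzzify the normalised horizontal error, apply
# three rules (left / centre / right), and defuzzify with a weighted average.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_pan_step(offset_px, half_width=320):
    e = offset_px / half_width                        # normalised error in [-1, 1]
    # Memberships: object far left, in the fovea (centre), far right.
    mu = {"left": tri(e, -1.5, -1.0, 0.0),
          "centre": tri(e, -0.3, 0.0, 0.3),
          "right": tri(e, 0.0, 1.0, 1.5)}
    # Rule consequents: large negative, zero, large positive pan step (degrees).
    outs = {"left": -10.0, "centre": 0.0, "right": 10.0}
    num = sum(mu[k] * outs[k] for k in mu)
    den = sum(mu.values()) + 1e-9
    return num / den                                   # weighted-average defuzzification

print(fuzzy_pan_step(250), fuzzy_pan_step(-40), fuzzy_pan_step(5))
```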
Deepfakes have become possible using artificial intelligence techniques that replace one person's face with another person's face (primarily a public figure's), making the latter appear to do or say things they never did. Therefore, contributing a solution for video credibility has become a critical goal that we address in this paper. Our work exploits the visible artifacts (blur inconsistencies) generated by the manipulation process. We analyze focus quality and its ability to detect these artifacts. The focus measure operators in this paper include the image Laplacian and image gradient groups, which are very fast to compute and do not need a large dataset for training. The results showed that i) the Laplacian
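A minimal sketch of two such focus-measure operators, one from the Laplacian group (variance of Laplacian) and one from the gradient group (Tenengrad), compared between a hypothetical face region and the background; the frame, regions, and any decision threshold are illustrative assumptions.

```python
# Two classic focus measures; a blur inconsistency would show up as a mismatch
# between the scores of the (manipulated) face region and its surroundings.
import cv2
import numpy as np

def variance_of_laplacian(gray):
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def tenengrad(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

frame = np.random.randint(0, 255, (240, 320), dtype=np.uint8)      # stand-in video frame
face_region, background = frame[60:180, 100:220], frame[:60, :]     # hypothetical regions
print(variance_of_laplacian(face_region), variance_of_laplacian(background))
print(tenengrad(face_region), tenengrad(background))
```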
Land Use / Land Cover (LULC) classification is one of the basic tasks that decision makers and map makers rely on to evaluate infrastructure using different types of satellite data, despite the large spectral differences or overlaps within the same land cover class, as well as image aberration and tilt, which can negatively affect classification performance. The main objective of this study is to develop a method for classifying land cover from high-resolution satellite images using an object-based approach. Maximum-likelihood pixel-based supervised classification and object-based classification were both examined on a QuickBird satellite image of Karbala, Iraq. This study illustrated that
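A minimal sketch of the pixel-based maximum-likelihood baseline, modelling each class as a multivariate Gaussian over band values (equivalent to quadratic discriminant analysis); the bands, classes, and training pixels are synthetic stand-ins for labelled QuickBird samples.

```python
# Fit a Gaussian per class from training pixels, then assign every image pixel
# to the most likely class (pixel-based maximum-likelihood classification).
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n_bands, h, w = 4, 50, 50
image = rng.random((h, w, n_bands))                        # stand-in multispectral image

# Hypothetical training samples: band vectors labelled with three LULC classes.
X_train = np.vstack([rng.normal(loc=m, scale=0.1, size=(100, n_bands)) for m in (0.2, 0.5, 0.8)])
y_train = np.repeat([0, 1, 2], 100)

ml = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X_train, y_train)
lulc_map = ml.predict(image.reshape(-1, n_bands)).reshape(h, w)
print(np.bincount(lulc_map.ravel()))                        # pixels per class
```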