Computer vision seeks to mimic the human visual system and plays an essential role in artificial intelligence. It is based on different signal preprocessing techniques; therefore, developing efficient techniques becomes essential to achieving fast and reliable processing. Various signal preprocessing operations have been used for computer vision, including smoothing, signal analysis, resizing, sharpening, and enhancement, to reduce unwanted distortions, perform segmentation, and improve image features. For example, to reduce the noise in a disturbed signal, smoothing kernels can be effectively used. This is achieved by convolving the disturbed signal with smoothing kernels. In addition, orthogonal moments (OMs) are a crucial technique in signal preprocessing, serving as key descriptors for signal analysis and recognition. OMs are obtained by the projection of orthogonal polynomials (OPs) onto the signal domain. However, when dealing with 3D signals, the traditional approach of convolving kernels with the signal and computing OMs afterwards significantly increases the computational cost of computer vision algorithms. To address this issue, this paper develops a novel mathematical model to embed the kernel directly into the OP functions, seamlessly integrating these two processes into a more efficient and accurate approach. The proposed model allows the computation of OMs for smoothed versions of 3D signals directly, thereby reducing computational overhead. Extensive experiments conducted on 3D objects demonstrate that the proposed method outperforms traditional approaches across various metrics. The average recognition accuracy improves to 83.85% when the polynomial order is increased to 10. Experimental results show that the proposed method exhibits higher accuracy and lower computational costs compared to the benchmark methods under various conditions for a wide range of parameter values.
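The traditional two-step pipeline the abstract describes (smooth first, then project onto an orthogonal polynomial basis) can be sketched as follows. This is a generic illustration only, not the paper's embedded-kernel model: the Gaussian smoothing, the Legendre basis, and the grid sizes are all illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.ndimage import gaussian_filter

def legendre_basis(order, n):
    # sample Legendre polynomials P_0..P_order on n points in [-1, 1]
    x = np.linspace(-1.0, 1.0, n)
    return np.stack([legendre.Legendre.basis(p)(x) for p in range(order + 1)])

def smoothed_moments(signal, order, sigma=1.0):
    # baseline pipeline: smooth the 3D signal first, then project onto a
    # separable 3D polynomial basis to obtain the moments M[p, q, r]
    smoothed = gaussian_filter(signal, sigma=sigma)
    bx = legendre_basis(order, signal.shape[0])
    by = legendre_basis(order, signal.shape[1])
    bz = legendre_basis(order, signal.shape[2])
    return np.einsum('pi,qj,rk,ijk->pqr', bx, by, bz, smoothed)

f = np.random.default_rng(0).random((16, 16, 16))
M = smoothed_moments(f, order=10)
print(M.shape)  # (11, 11, 11)
```

The cost the paper targets is visible here: the convolution pass over the full 3D volume happens before every moment computation, whereas embedding the kernel into the polynomial functions would fold both steps into a single projection.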
In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme in order to recognize Ekman’s six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method is very effective when compared with traditional word-based text classification methods. We have also found that our method works best if the sizes of text in all classes used for training are similar, and that performance significantly improves with increased data.
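The general idea behind compression-based classification can be sketched with an off-the-shelf compressor: a document is assigned to the class whose training text compresses it most cheaply. This sketch uses zlib as a stand-in for PPM, and the two toy corpora and class names are invented for illustration; the paper's actual PPM models and datasets differ.

```python
import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode('utf-8'), level=9))

def classify(doc, corpora):
    # assign doc to the class minimising the extra bytes needed to
    # encode it after that class's training text -- a crude proxy for
    # the cross-entropy a PPM model would assign
    def extra_bytes(train):
        return compressed_size(train + doc) - compressed_size(train)
    return min(corpora, key=lambda c: extra_bytes(corpora[c]))

corpora = {
    "Happiness": "what a wonderful joyful delightful day full of smiles and laughter",
    "Anger": "furious rage shouting slammed the door in blind fury and hate",
}
print(classify("so joyful and full of laughter", corpora))
```

The sensitivity to class size noted in the abstract shows up directly in this formulation: a larger training corpus gives the compressor more material to match against, biasing the minimum toward that class.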
This paper presents a study of wavelet self-organizing maps (WSOM) for face recognition. The WSOM is a feed-forward network that estimates an optimized wavelet basis for the discrete wavelet transform (DWT) on the basis of the distribution of the input data, where wavelet basis transforms are used as activation functions.
Text documents are unstructured and high dimensional. Effective feature selection is required to select the most important and significant features from the sparse feature space. Thus, this paper proposes an embedded feature selection technique based on Term Frequency-Inverse Document Frequency (TF-IDF) and Support Vector Machine-Recursive Feature Elimination (SVM-RFE) for unstructured and high-dimensional text classification. This technique has the ability to measure a feature’s importance in a high-dimensional text document. In addition, it aims to increase the efficiency of the feature selection, hence obtaining promising text classification accuracy. TF-IDF acts as a filter approach that measures feature importance in the text.
An efficient modification and a novel technique combining the homotopy concept with the Adomian decomposition method (ADM) to obtain an accurate analytical solution for the Riccati matrix delay differential equation (RMDDE) is introduced in this paper. Both methods are very efficient and effective. The whole integral part of ADM is used instead of the integral part of the homotopy technique. The major feature of the current technique is that it gives a large convergence region of iterative approximate solutions. The results acquired by this technique give better approximations over a larger region than previous methods. Finally, the results were obtained via an efficient and easy technique, which may be applied to other non-linear problems.
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique to create a random distribution for the model parameters, which are dependent on time t. The LHS technique gives the MLHFD method the advantage of producing fast variation of the parameter values via a number of multidimensional simulations (100, 1000 and 5000). The generated Latin hypercube sample, which is random or non-deterministic in nature, is further integrated into the finite difference scheme.
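The Latin hypercube sampling step can be sketched with SciPy's quasi-Monte Carlo module. The three parameters, their ranges, and the sample size of 100 are illustrative assumptions standing in for the cocaine abuse model's actual parameters.

```python
import numpy as np
from scipy.stats import qmc

# draw a Latin hypercube sample for three hypothetical model
# parameters and scale each column to its own plausible range
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=100)            # 100 points in [0, 1)^3
lo, hi = [0.1, 0.01, 0.5], [0.9, 0.1, 2.0]
params = qmc.scale(unit, lo, hi)

# LHS stratifies each axis into n equal-probability bins and places
# exactly one sample in each, so every column covers 100 distinct bins
bins = np.floor(unit * 100).astype(int)
print(all(len(set(bins[:, j])) == 100 for j in range(3)))  # True
```

This stratification is what lets a modest number of simulations (100, 1000, 5000) cover the parameter space far more evenly than plain random sampling would.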
In this paper, a computer simulation is implemented to generate optical aberrations by means of Zernike polynomials. Defocus, astigmatism, coma, and spherical Zernike aberrations were simulated in a subroutine using a MATLAB function and applied as a phase error in the aperture function of an imaging system. The study demonstrated that the Point Spread Function (PSF) and Modulation Transfer Function (MTF) are affected by these optical aberrations. Areas under the MTF for different radii of the aperture of the imaging system have been computed to assess the quality and efficiency of optical imaging systems. Phase conjugation of these types of aberrations has been utilized in order to correct a distorted wavefront. The results showed that
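The simulation idea (a Zernike term applied as a phase error in the aperture function, with the PSF obtained via Fourier transform) can be sketched as follows. This uses Python/NumPy rather than the paper's MATLAB code, and the grid size and defocus coefficient are illustrative assumptions.

```python
import numpy as np

n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
r = np.hypot(x, y)
pupil = (r <= 1.0).astype(float)        # circular aperture function

# Zernike defocus term Z_2^0 = sqrt(3) * (2r^2 - 1), applied as a
# phase error across the aperture (coefficient in radians)
defocus = np.sqrt(3.0) * (2.0 * r**2 - 1.0)

def psf(phase_coeff):
    # PSF = squared magnitude of the Fourier transform of the
    # (possibly aberrated) complex aperture function
    aperture = pupil * np.exp(1j * phase_coeff * defocus)
    field = np.fft.fftshift(np.fft.fft2(aperture))
    return np.abs(field) ** 2

ideal, aberrated = psf(0.0), psf(2.0)
# defocus spreads energy away from the peak, so the ratio of peak
# intensities (a Strehl-like measure) drops below one
print(aberrated.max() / ideal.max() < 1.0)  # True
```

Phase conjugation corresponds to multiplying the aperture by `exp(-1j * phase_coeff * defocus)`, which cancels the phase error and restores the ideal PSF.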
Many satellite systems take cover images, like QuickBird, of terrain, so that these images can be used to construct 3D models like Triangulated Irregular Networks (TIN) and Digital Elevation Models (DEM). This paper presents the production of a 3D TIN for Al-Karkh University of Science in Baghdad - Iraq using QuickBird image data with a pixel resolution of 0.6 m. The recent generations of high-resolution satellite imaging systems open a new era of digital mapping and earth observation. They provide not only multi-spectral and high-resolution data but also the capability for stereo mapping. The result of this study is the production of 3D satellite images of the university by merging a 1 m DEM with the satellite image for the ROI using the ArcGIS package Version
Extracting moving objects from a video sequence is one of the most important steps in video-based analysis. Background subtraction is the most commonly used moving object detection method in video, in which the extracted object is fed to a higher-level process (i.e. object localization, object tracking). The main requirement of a background subtraction method is to construct a stationary background model and then to compare every new incoming frame with it in order to detect the moving object. Relying on the assumption that the background occurs with the highest appearance frequency, a background reconstruction algorithm is proposed based on the pixel intensity classification (PIC) approach.
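The two steps above (reconstructing a stationary background from the frame history, then thresholding each new frame against it) can be sketched as follows. The per-pixel mode used here is an illustrative stand-in for the frequency assumption; it is not the paper's exact PIC algorithm, and the threshold and toy frames are invented.

```python
import numpy as np

def reconstruct_background(frames):
    # per-pixel most frequent intensity over the frame history, in the
    # spirit of the highest-appearance-frequency assumption
    stack = np.stack(frames)                     # shape (t, h, w)
    def pixel_mode(values):
        vals, counts = np.unique(values, return_counts=True)
        return vals[counts.argmax()]
    return np.apply_along_axis(pixel_mode, 0, stack).astype(np.uint8)

def detect_moving(frame, background, threshold=25):
    # background subtraction: mark pixels whose absolute difference
    # from the background model exceeds the threshold as foreground
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

frames = [np.full((8, 8), 50, np.uint8) for _ in range(5)]
frames[2][2:4, 2:4] = 200        # a bright object visible in frame 2 only
bg = reconstruct_background(frames)
mask = detect_moving(frames[2], bg)
print(int(bg[0, 0]), int(mask.sum()))  # 50 4
```

Because the object appears in only one of the five frames, the mode recovers the clean background at every pixel, and subtraction isolates the four object pixels.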