The huge number of documents on the internet has led to a rapidly growing need for text classification (TC), which is used to organize these text documents. In this research paper, a new model based on the Extreme Learning Machine (ELM) is proposed. The proposed model consists of several phases: preprocessing, feature extraction, Multiple Linear Regression (MLR), and ELM. The basic idea of the model is the calculation of feature weights using MLR. These feature weights, together with the extracted features, are introduced as input to the ELM, producing a Weighted Extreme Learning Machine (WELM). The results showed that the proposed WELM is highly competitive compared to the standard ELM.
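A minimal sketch of the weighting idea described above, assuming TF-IDF-style feature vectors, scikit-learn's LinearRegression for the MLR step, and a hand-rolled single-hidden-layer ELM; the toy data and all names are illustrative, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): MLR-derived feature weights
# feeding a basic single-hidden-layer ELM for text classification.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy stand-in for TF-IDF document vectors and binary class labels.
X = rng.random((200, 50))           # 200 documents, 50 extracted features
y = rng.integers(0, 2, 200)         # two classes

# 1) MLR step: regress the labels on the features and use the absolute
#    coefficients as per-feature weights.
mlr = LinearRegression().fit(X, y)
feature_weights = np.abs(mlr.coef_)
Xw = X * feature_weights            # weighted features -> "WELM" input

# 2) Basic ELM: random hidden layer, analytic output weights (pseudoinverse).
n_hidden = 100
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(Xw @ W + b)))   # sigmoid hidden activations
T = np.eye(2)[y]                          # one-hot targets
beta = np.linalg.pinv(H) @ T              # output weights

pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```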
Examination of skewness makes academics more aware of the importance of accurate statistical analysis. Most phenomena contain a certain degree of skewness, which gives rise to what is called "asymmetry" and, consequently, to the importance of the skew normal family. The epsilon skew normal distribution ESN(μ, σ, ε) is one of the probability distributions that provide a more flexible model, because the skewness parameter makes it possible to move from a normal to a skewed distribution. Theoretically, estimating the parameters of a linear regression model whose error term has a non-zero mean is a major challenge, as there is no explicit formula to calculate …
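For illustration, a small sketch of the ESN(μ, σ, ε) density in the commonly used Mudholkar-Hutson parameterization; the paper's exact parameterization is not shown here, so treat this form as an assumption. Setting ε = 0 recovers the ordinary normal density.

```python
# Sketch of the epsilon skew normal ESN(mu, sigma, eps) density, assuming the
# Mudholkar-Hutson parameterization (the paper's form may differ).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def esn_pdf(x, mu=0.0, sigma=1.0, eps=0.0):
    x = np.asarray(x, dtype=float)
    # Different scale on each side of mu controls the skewness.
    scale = np.where(x < mu, sigma * (1 + eps), sigma * (1 - eps))
    return norm.pdf((x - mu) / scale) / sigma

area, _ = quad(lambda t: esn_pdf(t, eps=0.4), -np.inf, np.inf)
print(area)   # ~1.0: still a valid density despite the skew
```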
Background/Objectives: The current research aims to develop a modified image representation framework for Content-Based Image Retrieval (CBIR) based on a grayscale input image, Zernike Moments (ZMs) properties, Local Binary Pattern (LBP), the Y color space, the Slantlet Transform (SLT), and the Discrete Wavelet Transform (DWT). Methods/Statistical analysis: This study surveyed and analysed three standard datasets: WANG V1.0, WANG V2.0, and Caltech 101. The latter contains images of objects belonging to 101 classes, with approximately 40-800 images per category. The proposed infrastructure seeks to describe and operationalize the CBIR system through an automated attribute extraction system premised on CN…
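A compact illustration of two of the named descriptors (an LBP histogram and a one-level 2-D DWT) extracted from a grayscale image, using scikit-image and PyWavelets; the full pipeline with Zernike Moments, the Y color space, and the SLT is not reproduced here.

```python
# Illustrative extraction of two of the named descriptors (LBP and DWT)
# from a grayscale image; not the paper's full ZM/Y/SLT/DWT pipeline.
import numpy as np
import pywt
from skimage import data
from skimage.feature import local_binary_pattern

gray = data.camera()                          # sample grayscale image

# Local Binary Pattern histogram (uniform patterns, 8 neighbours, radius 1).
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# One-level 2-D Discrete Wavelet Transform; keep sub-band energies as features.
cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
dwt_energy = [np.mean(np.abs(c)) for c in (cA, cH, cV, cD)]

feature_vector = np.concatenate([lbp_hist, dwt_energy])
print(feature_vector.shape)                   # compact CBIR descriptor
```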
The main task of creating new digital images of different skin diseases is to increase the resolution of the specific textures and colors of each skin disease. In this paper, the performance of generative adversarial networks is optimized to generate multicolor and histological-color digital images of a variety of skin diseases (melanoma, birthmarks, and basal cell carcinomas). Two generative adversarial network architectures were built using two models: the first generates new dermatology images through training, and the second is a discrimination model whose main task is to identify the generated digital images as either real or fake. The gray wolf swarm algorithm and the whale swarm algorithm …
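A generic generator/discriminator pair and one adversarial training step, only to illustrate the two-model setup described above; the paper's architectures and the gray-wolf/whale-based optimization are not reproduced, and the network sizes below are arbitrary placeholders.

```python
# Generic GAN skeleton (illustration only; not the paper's architecture or
# its swarm-optimized hyperparameters).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 3 * 32 * 32

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())          # generator
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                           # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, img_dim) * 2 - 1        # stand-in for real skin images

# Discriminator step: real images -> label 1, generated images -> label 0.
fake = G(torch.randn(16, latent_dim)).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator label generated images as real.
fake = G(torch.randn(16, latent_dim))
loss_g = bce(D(fake), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```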
Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes, and it concerns celebrities and everyone else because deepfakes are easy to manufacture. Deepfakes are hard to recognize by people and by current approaches, especially high-quality ones. As a defense against deepfake techniques, various methods for detecting deepfakes in images have been suggested. Most of them have limitations, such as working with only one face in an image, or requiring the face to be facing forward with both eyes and the mouth open, depending on which part of the face they work on. Beyond that, few studies focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect …
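A sketch of one typical pre-processing step for a deepfake detector, assuming OpenCV's bundled Haar cascade for face localization; the exact pre-processing choices studied in the paper are not reproduced here.

```python
# Illustrative pre-processing for a deepfake detector: locate the largest
# face, crop it, resize, and normalize before classification.
import cv2
import numpy as np

def preprocess_face(image_bgr, size=224):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                   # no face found; skip frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest face only
    crop = cv2.resize(image_bgr[y:y + h, x:x + w], (size, size))
    return crop.astype(np.float32) / 255.0            # normalize to [0, 1]

# Usage (hypothetical file name): preprocess_face(cv2.imread("frame.jpg"))
```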
Universal image steganalysis has become an important issue due to the curse of dimensionality of natural image features. Deep neural networks, especially deep convolutional networks, have been widely used in the design of universal image steganalysis. This paper describes the effect of selecting a suitable number of decomposition levels during image pre-processing with the Dual Tree Complex Wavelet Transform. This value may significantly affect the detection accuracy used to evaluate the performance of the proposed system. The proposed system is evaluated using three content-adaptive embedding methods, namely Highly Undetectable steGO (HUGO), Wavelet Obtained Weights (WOW), and UNIversal WAvelet Relative Distortion (UNIWARD). The obtained …
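A minimal sketch of varying the number of DT-CWT decomposition levels during pre-processing, assuming the open-source `dtcwt` Python package (`pip install dtcwt`); this is not the paper's implementation, and the random array merely stands in for a cover/stego image.

```python
# Sketch: sweep the number of DT-CWT levels used to pre-process an image
# before feeding sub-bands to a CNN steganalyzer (assumes the `dtcwt` package).
import numpy as np
import dtcwt

image = np.random.rand(256, 256)              # stand-in cover/stego image

transform = dtcwt.Transform2d()
for nlevels in (1, 2, 3, 4):                  # the parameter studied above
    pyramid = transform.forward(image, nlevels=nlevels)
    # pyramid.lowpass and pyramid.highpasses would feed the detector.
    print(nlevels, pyramid.lowpass.shape, len(pyramid.highpasses))
```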
Human pose estimation is a crucial topic in the computer vision field and has become a research hotspot in much work related to human behavior. Human pose estimation can be understood as the problem of recognizing and connecting human key points. The paper presents an optimized symmetric spatial transformer network designed to connect with a single-person pose estimation network in order to propose high-quality human target frames from inaccurate human bounding boxes; it introduces parametric pose non-maximum suppression to eliminate redundant pose estimates and applies an elimination rule to remove similar poses so as to obtain unique human pose estimation results. The experimental results demonstrate that the proposed technique can pre…
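To illustrate the redundancy-elimination idea, a simplified greedy pose NMS that keeps the highest-scoring pose and drops candidates whose keypoints lie, on average, closer than a threshold to an already-kept pose; the paper's parametric criterion is more elaborate, and the threshold and data below are arbitrary.

```python
# Simplified greedy pose non-maximum suppression (illustration only).
import numpy as np

def pose_nms(poses, scores, dist_thresh=10.0):
    """poses: (N, K, 2) keypoint coordinates, scores: (N,) confidences."""
    order = np.argsort(scores)[::-1]          # highest-confidence pose first
    kept = []
    for i in order:
        redundant = any(
            np.linalg.norm(poses[i] - poses[j], axis=1).mean() < dist_thresh
            for j in kept)
        if not redundant:
            kept.append(i)
    return kept

poses = np.random.rand(5, 17, 2) * 100        # 5 candidate poses, 17 joints
scores = np.random.rand(5)
print(pose_nms(poses, scores))                # indices of the unique poses
```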
In recent years, researchers have shown increased interest in determining the optimal sample size needed to obtain sufficient accuracy of estimation and high-precision parameters when evaluating a large number of tests in the diagnostic field at the same time. In this research, two methods were used to determine the optimal sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is estimated, for the sample size given by each method in high-dimensional data, using artificial intelligence, namely an artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data…
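A rough sketch of how a concentration bound can be inverted to pick a sample size; the paper's exact derivation is not reproduced, so the textbook one-sided form of Bennett's inequality assumed below, P(S_n ≥ nε) ≤ exp(−(nσ²/b²)·h(bε/σ²)) with h(u) = (1+u)log(1+u) − u for observations bounded by b, should be treated as an assumption.

```python
# Sketch: choose n so the (assumed) Bennett bound on P(|mean error| >= eps)
# falls below delta. Not the paper's derivation; constants are illustrative.
import math

def bennett_sample_size(eps, delta, sigma2, b):
    h = lambda u: (1 + u) * math.log(1 + u) - u
    u = b * eps / sigma2
    n = (b ** 2 / sigma2) * math.log(2 / delta) / h(u)   # factor 2: two-sided
    return math.ceil(n)

print(bennett_sample_size(eps=0.05, delta=0.05, sigma2=0.25, b=1.0))
```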
Some chaotic maps for the chaotic firefly algorithm were selected to perform variable selection on data about blood and vascular diseases obtained from Nasiriyah General Hospital. The data were tested and found to follow a Gamma distribution, and it was concluded that the Chebyshev map method is more efficient than the Sinusoidal map method according to the mean square error criterion.
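The two chaotic maps compared above, written in the forms commonly used to drive chaos-enhanced firefly algorithms; the constants (e.g., a = 2.3 in the sinusoidal map) are the usual literature choices and are assumed here, not taken from the paper.

```python
# Common forms of the Chebyshev and Sinusoidal chaotic maps (assumed forms).
import math

def chebyshev_map(x, k):
    # x_{k+1} = cos(k * arccos(x_k)), with x clipped to [-1, 1] for safety.
    return math.cos(k * math.acos(max(-1.0, min(1.0, x))))

def sinusoidal_map(x, a=2.3):
    # x_{k+1} = a * x_k^2 * sin(pi * x_k)
    return a * x * x * math.sin(math.pi * x)

x_c, x_s = 0.7, 0.7
for k in range(1, 6):                       # a few iterations of each map
    x_c, x_s = chebyshev_map(x_c, k), sinusoidal_map(x_s)
    print(k, round(x_c, 4), round(x_s, 4))
```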
In mixture experiments, the response variable depends on the proportions of the components of the mixture. In our research, we compare the Scheffé model with the Kronecker model for mixture experiments, especially when the experimental region is restricted. Because mixture experiments suffer from high correlation and multicollinearity among the explanatory variables, which affects the calculation of the Fisher information matrix of the regression model, we used the generalized inverse and the stepwise regression procedure to estimate the parameters of the mixture model.
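A sketch of fitting a Scheffé quadratic mixture model with a generalized (Moore-Penrose) inverse for three components with x1 + x2 + x3 = 1; the data and coefficients are synthetic and only illustrate the model form y = Σ bᵢxᵢ + Σ_{i<j} bᵢⱼxᵢxⱼ (no intercept), not the paper's experiments or its Kronecker comparison.

```python
# Synthetic Scheffe quadratic mixture model fitted via the generalized inverse.
import numpy as np

rng = np.random.default_rng(1)
X = rng.dirichlet(alpha=[1, 1, 1], size=30)          # mixture proportions
x1, x2, x3 = X.T

# Design matrix of the Scheffe quadratic canonical polynomial (no intercept).
D = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
beta_true = np.array([2.0, 1.0, 3.0, 4.0, -2.0, 1.5])
y = D @ beta_true + rng.normal(scale=0.1, size=30)

beta_hat = np.linalg.pinv(D) @ y                     # generalized-inverse fit
print(np.round(beta_hat, 2))
```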
Curing of concrete is the maintenance of a satisfactory moisture content and temperature for a period of time immediately following placing, so that the desired properties develop. Accelerated curing is advantageous where early strength gain in concrete is important. Exposing concrete specimens to accelerated curing conditions permits the specimens to develop a significant portion of their ultimate strength within a short period of time (1-2 days), depending on the method of the curing cycle. Three accelerated curing test methods are adopted in this study: the warm water, autogenous, and proposed test methods. The results of this study have shown good correlation between the accelerated strength, especially for …
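For illustration only, a minimal regression/correlation check between accelerated-cured strength and later-age strength of the kind such correlation studies report; all numbers below are synthetic placeholders, not the study's test data.

```python
# Synthetic example of correlating accelerated strength with 28-day strength.
import numpy as np

accel = np.array([14.0, 16.5, 18.2, 20.1, 22.4, 24.8])   # MPa, accelerated
day28 = np.array([27.5, 31.0, 34.2, 37.8, 42.1, 46.5])   # MPa, 28-day

slope, intercept = np.polyfit(accel, day28, 1)           # linear calibration
r = np.corrcoef(accel, day28)[0, 1]
print(f"28-day ~= {slope:.2f} * accelerated + {intercept:.2f}, r = {r:.3f}")
```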