Text-based image clustering (TBIC) is an insufficient approach for clustering related web images, since it is challenging to capture the visual features of images through textual information alone. In content-based image clustering (CBIC), image data are clustered on the basis of visual features such as texture, color, boundaries, and shapes. In this paper, an effective CBIC technique is presented that uses the texture and statistical features of the images. The statistical features, or color moments (mean, standard deviation, variance, skewness, and kurtosis), are extracted from the images, collected into a one-dimensional array, and a genetic algorithm (GA) is then applied for image clustering. This feature extraction provides high discriminability and helps the GA reach the solution faster and more accurately.
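A minimal sketch of the color-moment extraction described above; the function name and the flattening into one vector per image are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def color_moments(image):
    """Extract the five statistical moments per channel of an RGB image
    and flatten them into a single 1-D feature vector for clustering."""
    features = []
    for c in range(image.shape[2]):          # one pass per color channel
        channel = image[..., c].ravel().astype(np.float64)
        features += [
            channel.mean(),                  # mean
            channel.std(),                   # standard deviation
            channel.var(),                   # variance
            skew(channel),                   # skewness
            kurtosis(channel),               # kurtosis
        ]
    return np.array(features)                # 15 values for an RGB image
```

Each image then contributes one such vector, and the GA clusters the vectors rather than the raw pixels.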
The study aims to achieve several objectives: to follow up on scientific developments and transformations in the modern concepts of the Holonic Manufacturing System (HMS) in order to identify ways of switching to artificial-intelligence approaches; to clarify how the genetic algorithm operates within the HMS; and to benefit from the advantages of both, achieving the maximum savings in machine time and cost. Combining the HMS approach with the genetic algorithm allows optimal maintenance timing and minimizes the total cost, which in turn enables the workers on these machines to control the vacations in the…
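As a rough illustration of the GA mechanism the study refers to, the sketch below minimizes a hypothetical total-maintenance-cost function over candidate maintenance intervals; the cost model and all parameters are assumptions for the example, not the paper's:

```python
import random

def total_cost(interval):
    # Hypothetical trade-off: frequent maintenance raises service cost,
    # infrequent maintenance raises expected failure cost.
    return 500.0 / interval + 2.0 * interval

def genetic_search(pop_size=30, generations=100, mutation_rate=0.2):
    population = [random.uniform(1, 60) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=total_cost)          # fitness = lower total cost
        survivors = population[: pop_size // 2]  # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = (a + b) / 2                   # arithmetic crossover
            if random.random() < mutation_rate:
                child += random.gauss(0, 2)       # mutation
            children.append(max(1.0, child))
        population = survivors + children
    return min(population, key=total_cost)

best_interval = genetic_search()
```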
In the present work, an image compression method has been modified by combining the Absolute Moment Block Truncation Coding (AMBTC) algorithm with VQ-based image coding. First, the AMBTC algorithm with a condition based on Weber's law is used to distinguish low-detail and high-detail blocks in the original image. For a low-detail block (i.e., a uniform block such as background), the coder transmits only its mean on the channel instead of the two reconstruction mean values and the bit map for that block. A high-detail block is coded by the proposed fast encoding algorithm for the vector quantization method based on the Triangular Inequality Theorem (TIE), after which the coder transmits the two reconstruction mean values (i.e., H&L)…
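A minimal sketch of AMBTC coding for one block, with a Weber-style relative-contrast test standing in for the paper's exact condition (the threshold and test form are assumptions):

```python
import numpy as np

def ambtc_encode_block(block, weber_threshold=0.03):
    """AMBTC coding of one image block with a Weber-style smoothness test.
    The exact Weber's-law condition used in the paper may differ."""
    mean = block.mean()
    spread = block.max() - block.min()
    # Weber-style test: relative contrast against the local mean intensity.
    if spread == 0 or (mean > 0 and spread / mean < weber_threshold):
        return ("uniform", mean)                 # transmit only the mean
    bitmap = block >= mean                       # 1 bit per pixel
    high = block[bitmap].mean()                  # H: mean of bright pixels
    low = block[~bitmap].mean()                  # L: mean of dark pixels
    return ("detailed", high, low, bitmap)
```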
ChaCha20 is a stream cipher used as a lightweight alternative on the many CPUs that lack dedicated AES instructions; as stated by Google, that is why it is used on many devices, such as mobile devices, for authentication in the TLS protocol. This paper proposes an improvement of the ChaCha20 stream cipher based on tent and Chebyshev functions (IChacha20). The main objectives of the proposed IChacha20 algorithm are to increase the security layer, to design a robust structure for IChacha20 that can resist various types of attacks, and to apply the proposed algorithm to the encryption of colour images and their transmission in a secure manner. The test results proved that the MSE, PSNR, UQI and NCC metrics…
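The paper's IChacha20 construction is not reproduced here; the sketch below only illustrates the two chaotic maps it builds on, a tent map and a Chebyshev map, combined into a simple byte keystream (the mixing rule and all parameters are illustrative, not the paper's key schedule):

```python
import math

def tent_map(x, r=1.9999):
    """Classic tent map on (0, 1)."""
    return r * x if x < 0.5 else r * (1.0 - x)

def chebyshev_map(x, k=4):
    """Chebyshev map on [-1, 1]."""
    return math.cos(k * math.acos(x))

def chaotic_keystream(n_bytes, x0=0.37, y0=0.21):
    """Mix the two map trajectories into a byte stream (illustrative only)."""
    x, y, out = x0, y0, bytearray()
    for _ in range(n_bytes):
        x = tent_map(x)
        y = chebyshev_map(2.0 * y - 1.0) * 0.5 + 0.5   # rescale back to (0, 1)
        out.append((int(x * 256) ^ int(y * 256)) & 0xFF)
    return bytes(out)
```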
Statistics plays an important role in studying the characteristics of diverse populations: by using statistical methods, the researcher can make appropriate decisions to reject or accept statistical hypotheses. In this paper, a statistical analysis of variables related to patients infected with the Coronavirus was conducted using multivariate analysis of variance (MANOVA), together with an assessment of the effect of these variables.
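A minimal sketch of the MANOVA procedure using statsmodels on a stand-in data frame; the column names and values are placeholders, not the study's data:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Placeholder data: two measured variables per patient plus a grouping factor.
df = pd.DataFrame({
    "oxygen": [94, 91, 88, 96, 85, 90, 93, 87],
    "temp":   [37.2, 38.1, 39.0, 36.8, 39.4, 38.2, 37.0, 38.8],
    "group":  ["mild", "mild", "severe", "mild",
               "severe", "severe", "mild", "severe"],
})

# Test whether the mean vector (oxygen, temp) differs across groups.
fit = MANOVA.from_formula("oxygen + temp ~ group", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc.
```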
Speech is the essential way for humans to interact with one another or with machines. However, it is always contaminated by various types of environmental noise. Speech enhancement algorithms (SEA) have therefore emerged as a significant approach in the speech-processing field for suppressing background noise and recovering the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The clean signal is estimated by exploiting Laplacian modeling of speech and noise based on the coefficient distribution of an orthogonal transform (the Discrete Krawtchouk-Tchebichef transform). The Discrete Krawtchouk-Tchebichef transform…
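This is not the paper's two-stage Laplacian-MMSE estimator; the sketch below only shows the generic shape of transform-domain MMSE suppression (a Wiener-type gain per coefficient), with the DCT standing in for the Discrete Krawtchouk-Tchebichef transform:

```python
import numpy as np
from scipy.fft import dct, idct

def mmse_like_enhance(frame, noise_power):
    """Wiener-type per-coefficient suppression in a transform domain.
    The DCT stands in for the Krawtchouk-Tchebichef transform, and the
    gain rule is a generic MMSE-style stand-in, not the paper's estimator."""
    coeffs = dct(frame, norm="ortho")
    signal_power = np.maximum(coeffs**2 - noise_power, 1e-12)
    gain = signal_power / (signal_power + noise_power)   # Wiener gain in (0, 1)
    return idct(gain * coeffs, norm="ortho")
```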
A novel median filter based on the crow search optimization algorithm (OMF) is suggested to reduce random salt-and-pepper noise and improve the quality of RGB-colored and gray images. The fundamental idea of the approach is that the crow optimization algorithm first detects noise pixels and then replaces them with an optimum median value chosen by maximizing a fitness function. Finally, the standard measures peak signal-to-noise ratio (PSNR), structural similarity, absolute square error, and mean square error were used to test the performance of the suggested filters (the original and the improved median filter) in removing noise from images. The simulation was carried out in MATLAB R2019b, and the results…
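The quality metrics named above are standard; a minimal MSE/PSNR sketch for 8-bit images:

```python
import numpy as np

def mse(original, restored):
    """Mean square error between two images of the same shape."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    e = mse(original, restored)
    return float("inf") if e == 0 else 10.0 * np.log10(peak**2 / e)
```

Higher PSNR after filtering indicates better noise removal relative to the clean reference image.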
The Crow Search Algorithm (CSA) is one of the swarm intelligence algorithms developed recently, simulating the behavior of a crow in storing food and retrieving the extra food when required. In optimization terms, a crow represents a searcher, the surrounding environment represents the search space, and a randomly stored food location represents a feasible solution. Among all the food locations, the one where the maximum amount of food is stored is considered the global optimum solution, and the objective function represents the amount of food. By simulating this intelligent behavior of crows, the CSA attempts to find the optimum solutions to a variety of problems…
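A compact sketch of the CSA update rule just described, minimizing a toy objective; the awareness probability `ap` and flight length `fl` follow the usual formulation, and their values here are illustrative:

```python
import random

def csa_minimize(objective, dim=2, n_crows=20, iters=200,
                 ap=0.1, fl=2.0, lo=-10.0, hi=10.0):
    """Crow Search Algorithm: positions are searchers, memories are the
    best food-hiding places (solutions) each crow has found so far."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
    mem = [p[:] for p in pos]                       # each crow's best position
    for _ in range(iters):
        for i in range(n_crows):
            j = random.randrange(n_crows)           # crow i follows crow j
            if random.random() >= ap:               # j unaware: move toward its memory
                pos[i] = [min(hi, max(lo, pos[i][d] + random.random() * fl *
                              (mem[j][d] - pos[i][d]))) for d in range(dim)]
            else:                                   # j noticed: fly to a random place
                pos[i] = [random.uniform(lo, hi) for _ in range(dim)]
            if objective(pos[i]) < objective(mem[i]):
                mem[i] = pos[i][:]                  # update the crow's memory
    return min(mem, key=objective)

best = csa_minimize(lambda v: sum(x * x for x in v))   # sphere test function
```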
Estimation of linear regression parameters is usually based on the ordinary least squares method, which relies on several basic assumptions, so the accuracy of the estimated model parameters depends on the validity of these assumptions; among them are homogeneity of variance and the normal distribution of the errors. These assumptions are not achievable when studying a specific problem that may involve complex data spanning more than one model, and the use of the model then becomes unrealistic. The most successful technique for this purpose has been the robust estimation method of the minimizing maximum likelihood estimator (MM-estimator), which has proved its efficiency. To…
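statsmodels does not ship a full MM-estimator, so the sketch below uses the related M-estimation (RLM with Huber's T norm) as a stand-in for the robust fit the abstract describes, contrasted with OLS on outlier-contaminated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)
y[:5] += 15                                  # inject outliers that distort OLS

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                              # classical least squares
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # robust M-estimation
print(ols.params)   # pulled toward the outliers
print(rlm.params)   # stays close to the true (2.0, 0.5)
```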