Many-objective optimization (MaOO) algorithms, which aim to solve many-objective problems (MaOPs, i.e., problems with more than three objectives), are widely used in various areas such as industrial manufacturing, transportation, sustainability, and even the medical sector. Various MaOO approaches are available and employed to handle different MaOP cases. Meanwhile, the performance of MaOO algorithms is assessed based on the balance between the convergence and diversity of the non-dominated solutions, measured using different evaluation criteria of the quality performance indicators. Although many evaluation criteria are available, most evaluation and benchmarking of MaOO against state-of-the-art algorithms is performed using one or two performance indicators, without clear evidence or justification of the efficiency of these indicators over others. Thus, a unified set of the most suitable evaluation criteria for MaOO is needed. This study proposes a distinct unifying model for MaOO evaluation criteria using the fuzzy Delphi method. The study followed a systematic procedure to analyze 49 evaluation criteria, sub-criteria, and their performance indicators; a panel of 23 domain experts participated in this study. Lastly, the most suitable criteria were formulated into the unifying model and evaluated by experts to verify the appropriateness and suitability of the model in assessing MaOO algorithms fairly and effectively.
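The fuzzy Delphi screening step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the study's exact procedure: the Likert-to-triangular-fuzzy-number mapping, the consensus distance threshold (d ≤ 0.2), and the acceptance score threshold (≥ 0.5) are common textbook choices assumed here.

```python
# Hedged sketch of fuzzy Delphi criterion screening.
# Each expert rates a criterion on a 1-5 Likert scale; ratings are
# mapped to triangular fuzzy numbers (TFNs), averaged, and defuzzified.
SCALE = {1: (0.0, 0.0, 0.25), 2: (0.0, 0.25, 0.5), 3: (0.25, 0.5, 0.75),
         4: (0.5, 0.75, 1.0), 5: (0.75, 1.0, 1.0)}

def fuzzy_delphi(ratings, d_threshold=0.2, s_threshold=0.5):
    tfns = [SCALE[r] for r in ratings]
    n = len(tfns)
    # element-wise average TFN across experts
    avg = tuple(sum(t[i] for t in tfns) / n for i in range(3))
    # mean distance of each expert's TFN from the average (consensus measure)
    d = sum((sum((t[i] - avg[i]) ** 2 for i in range(3)) / 3) ** 0.5
            for t in tfns) / n
    score = sum(avg) / 3          # simple-mean defuzzification
    accepted = score >= s_threshold and d <= d_threshold
    return score, d, accepted
```

A criterion is retained only when experts agree (small d) and rate it highly (large defuzzified score).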
In recent years, gold nanoparticle synthesis has attracted great attention due to the particles' unique properties and tremendous applicability. In most of these studies, the citrate reduction method has been adopted. The aim of this study was to prepare and optimize monodisperse ultrafine particles by addition of a reducing agent to a gold salt, exploiting a seed-mediated growth mechanism. In this research, a gold nanoparticle suspension (G) was prepared by the traditional standard Turkevich method and optimized by studying the effect of different variables, such as reactant concentrations, preparation temperature, and stirring rate, on controlling the size and uniformity of the nanoparticles through the preparation of twenty formulas (G1-G20). Subsequently, the selected formula that pr…
Aggregate production planning (APP) is one of the most significant and complicated problems in production planning; it aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to make decisions concerning hiring, firing, overtime, subcontracting, and inventory levels. In this paper, we present a simulated annealing (SA) approach for multi-objective linear programming to solve APP. SA is considered a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and req…
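The SA metaheuristic the abstract refers to can be sketched on a toy cost model. This is an illustrative sketch only: the production/shortage cost function, neighbourhood move, and geometric cooling schedule below are assumptions, not the paper's APP formulation.

```python
import math
import random

# Minimal simulated-annealing sketch: accept downhill moves always,
# uphill moves with probability exp(-delta / temperature).
def simulated_annealing(cost, x0, neighbour, t0=1.0, alpha=0.95,
                        iters=2000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                     # geometric cooling
    return best, fbest

# Toy objective: pick a production level to meet demand 100 at minimal
# cost (unit production cost 2, shortage penalty 5 per unmet unit).
demand = 100
cost = lambda p: 2.0 * p + 5.0 * max(demand - p, 0)
move = lambda p, rng: max(0.0, p + rng.uniform(-5, 5))
best, fbest = simulated_annealing(cost, 0.0, move)
```

The minimum of this toy cost sits at production exactly matching demand; SA approaches it without needing gradients, which is why it suits the imprecise, mixed objectives of APP.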
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This condition places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Consequently, clustering is a common approach that divides an input space into several homogeneous zones; it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic…
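The FCM algorithm named in the abstract can be sketched as below. This is a generic textbook implementation under assumed settings (c clusters, fuzzifier m = 2, Euclidean distance), not the study's brain-tumor pipeline.

```python
import numpy as np

# Hedged sketch of fuzzy c-means: alternate between updating cluster
# centers (membership-weighted means) and fuzzy memberships.
def fcm(X, c=2, m=2.0, tol=1e-5, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # guard against zero distance
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

Unlike k-means, each point receives a graded membership in every cluster, which is useful when gene-expression or imaging data do not separate cleanly.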
The linear segment with parabolic blend (LSPB) trajectory deviates from the specified waypoints and is restricted in that the acceleration must be sufficiently high. In this work, it is proposed to combine a modified LSPB trajectory with particle swarm optimization (PSO) so as to create through-points on the trajectory. The assumption of the standard LSPB method, that the parabolic part is centered in time around the waypoints, is replaced by proposed coefficients for calculating the time duration of the linear part. These coefficients are functions of the velocities between through-points. The velocities are obtained by PSO so as to force the LSPB trajectory to pass exactly through the specified path points. Also, relations for velocity correction and exact v…
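For reference, the baseline single-segment LSPB profile that the paper modifies can be sketched as follows. The segment duration T and blend time tb here are illustrative inputs; this is the textbook trapezoidal-velocity form, not the paper's PSO-tuned variant.

```python
# Hedged sketch of one LSPB segment q(t) from q0 to q1 over duration T:
# parabolic blend (constant acceleration) for t in [0, tb], linear
# cruise for t in [tb, T - tb], parabolic blend to rest at t = T.
def lspb(q0, q1, T, tb):
    assert 0 < tb <= T / 2, "blend time must satisfy 0 < tb <= T/2"
    v = (q1 - q0) / (T - tb)        # cruise velocity
    a = v / tb                      # blend acceleration
    def q(t):
        if t < tb:                  # initial parabolic blend
            return q0 + 0.5 * a * t * t
        if t <= T - tb:             # linear segment
            return q0 + 0.5 * a * tb * tb + v * (t - tb)
        dt = T - t                  # final parabolic blend
        return q1 - 0.5 * a * dt * dt
    return q
```

The position and velocity are continuous at both blend boundaries; the paper's contribution concerns how tb and the segment velocities are chosen so the blended path passes exactly through intermediate waypoints.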
An easy, selective, and precise high-performance liquid chromatography (HPLC) method was developed and validated for the estimation of Piroxicam and Codeine phosphate. Chromatographic separation was accomplished on a C18 column (BDS Hypersil C18, 5 μm, 150 x 4.6 mm) using a mobile phase of methanol:phosphate buffer (60:40, v/v, pH 2.3); the flow rate was 1.1 mL/min and UV detection was at 214 nm. System suitability tests (SSTs) are typically performed to assess the suitability and effectiveness of the entire chromatography system. The retention time was found to be 3.95 minutes for Piroxicam and 1.46 minutes for Codeine phosphate. The developed method was validated for precision, limit of quantitation, specificity,…
Deep learning convolutional neural networks have been widely used to recognize and classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before the training process when developing a classification model. However, not every model produces good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such task, and accurate pronunciation is required in learning Qur'an reading. Thus, processing the pronunciation data and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to…
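The padding idea in the abstract can be illustrated simply: recorded utterances have different lengths, but a CNN needs fixed-size input. The sketch below zero-pads (or truncates) feature sequences to a common length; the target length and zero fill value are illustrative assumptions, not the paper's specific scheme.

```python
import numpy as np

# Hedged sketch: bring variable-length 1-D feature sequences to a
# common length so they can be batched into a CNN input tensor.
def pad_features(seqs, target_len=None):
    target_len = target_len or max(len(s) for s in seqs)
    out = np.zeros((len(seqs), target_len), dtype=np.float32)
    for i, s in enumerate(seqs):
        n = min(len(s), target_len)
        out[i, :n] = s[:n]          # truncate if longer, zero-pad if shorter
    return out
```

For spectrogram-like 2-D features the same idea applies along the time axis.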
In this study, an unknown force function dependent on space in the wave equation is investigated. Numerically, the wave equation is split into two parts: part one uses the finite-difference method (FDM), and part two uses the separation-of-variables method. This continues and modifies the technique for solving the inverse problem in (1,2): instead of the boundary element method (BEM) used in (1,2), the finite-difference method (FDM) is applied. Boundary data play the role of overdetermination data. The second part of the problem is inverse and ill-posed, since small errors in the extra boundary data cause large errors in the force solution. Zeroth-order Tikhonov regularization, with several regularization parameters, is employed to decrease the error…
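The zeroth-order Tikhonov regularization mentioned above can be sketched for a generic discretized linear system A f = b; the matrix A and the regularization parameter lam below are illustrative, not the paper's discretization.

```python
import numpy as np

# Hedged sketch of zeroth-order Tikhonov regularization:
# minimize ||A f - b||^2 + lam * ||f||^2, whose normal equations are
# (A^T A + lam I) f = A^T b.
def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Larger lam stabilizes the solution against noise in the overdetermination data at the cost of bias, which is why several parameter values are compared in studies of this kind.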