Feature selection (FS) constitutes a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude from predictive modeling. It is a crucial task that helps machine learning classifiers reduce error rates, computation time, and overfitting, and improve classification accuracy. It has demonstrated its efficacy in a myriad of domains, ranging from text classification (TC) and text mining to image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC. Therefore, this paper systematically explores the available studies of the different metaheuristic algorithms used for FS to improve TC. It contributes to the body of existing knowledge by answering four research questions (RQs): 1) What are the different FS approaches that apply metaheuristic algorithms to improve TC? 2) Does applying metaheuristic algorithms for TC lead to better accuracy than the typical FS methods? 3) How effective are modified and hybridized metaheuristic algorithms for text FS problems? 4) What are the gaps in the current studies and their future directions? These RQs led to a study of recent works on metaheuristic-based FS methods, their contributions, and their limitations. A final list of thirty-seven (37) related articles was extracted and investigated in line with our RQs to generate new knowledge in the domain of study. Most of the reviewed papers address TC with metaheuristic algorithms based on the wrapper and hybrid FS approaches. Future research should focus on hybrid FS approaches, as they handle complex optimization problems well and potentially open new research opportunities in this rapidly developing field.
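To make the wrapper idea concrete, the following minimal Python sketch (a generic illustration, not the method of any particular reviewed paper) evolves a binary feature mask with a very simple (1+1) mutation-based search and scores each candidate subset by a classifier's cross-validated accuracy; the corpus, classifier, and search settings are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

# Small two-class text corpus and a TF-IDF feature matrix
# (fetch_20newsgroups downloads the data on first use).
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
X = TfidfVectorizer(max_features=300).fit_transform(data.data)
y = data.target
rng = np.random.default_rng(0)

def fitness(mask):
    # Wrapper evaluation: cross-validated accuracy of the classifier
    # trained only on the selected feature subset.
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    return cross_val_score(MultinomialNB(), X[:, cols], y, cv=3).mean()

# Deliberately simple (1+1) mutation-based search over binary feature masks;
# the surveyed papers use richer metaheuristics (GA, PSO, GWO, ...).
best = rng.random(X.shape[1]) < 0.5
best_fit = fitness(best)
for _ in range(50):
    candidate = best.copy()
    flips = rng.random(X.shape[1]) < 0.05   # flip roughly 5% of the bits
    candidate[flips] = ~candidate[flips]
    cand_fit = fitness(candidate)
    if cand_fit >= best_fit:                # keep the better feature subset
        best, best_fit = candidate, cand_fit

print(f"selected {best.sum()} of {X.shape[1]} features, CV accuracy = {best_fit:.3f}")
```

Hybrid approaches typically replace the plain mutation step above with a filter-based pre-ranking or a combination of metaheuristics, while keeping the same wrapper-style fitness evaluation.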
The research studied and analyzed hybrid parallel-series systems of asymmetrical components by applying different simulation experiments to estimate the reliability function of those systems, using the maximum likelihood method as well as the standard Bayes method under both symmetric and asymmetric loss functions, assuming a Rayleigh lifetime distribution with an informative prior. The simulation experiments covered different sample sizes and default parameter values, which were then compared with one another based on mean squared error. The standard Bayes method under the entropy loss function was then applied and proved successful, on the experimental side, in estimating the reliability function.
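As a rough illustration of the maximum-likelihood side of such a comparison only (the Bayes estimators are omitted), the sketch below simulates Rayleigh lifetimes for a hypothetical two-branch parallel-series layout, estimates the scale parameters by maximum likelihood, and scores the resulting reliability estimate by mean squared error; the layout, sample size, and parameter values are assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_sigma2(sample):
    # Maximum-likelihood estimate of the Rayleigh scale parameter:
    # sigma_hat^2 = sum(t_i^2) / (2n)
    return np.sum(sample**2) / (2 * len(sample))

def reliability(t, sigma2):
    # Component reliability under a Rayleigh lifetime: R(t) = exp(-t^2 / (2*sigma^2))
    return np.exp(-t**2 / (2 * sigma2))

def system_reliability(t, branch1, branch2):
    # Hypothetical hybrid layout: two parallel branches, each a series of two components.
    r1 = np.prod([reliability(t, s) for s in branch1])   # series branch 1
    r2 = np.prod([reliability(t, s) for s in branch2])   # series branch 2
    return 1 - (1 - r1) * (1 - r2)                       # branches combined in parallel

true_s1, true_s2 = [1.0, 1.5], [2.0, 0.8]                # assumed default parameters
t_eval, n, reps = 1.0, 50, 1000
errors = []
for _ in range(reps):
    est_s1 = [mle_sigma2(rng.rayleigh(np.sqrt(s), n)) for s in true_s1]
    est_s2 = [mle_sigma2(rng.rayleigh(np.sqrt(s), n)) for s in true_s2]
    err = system_reliability(t_eval, est_s1, est_s2) - system_reliability(t_eval, true_s1, true_s2)
    errors.append(err**2)

print("Mean squared error of the MLE-based reliability estimate:", np.mean(errors))
```

Repeating the loop with a Bayes estimator of each scale parameter (under squared-error or entropy loss) in place of `mle_sigma2` would reproduce the kind of MSE comparison the abstract describes.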
Simple, economic, and sensitive mathematical spectrophotometric methods were developed for the estimation of 4-aminoantipyrine in the presence of its acidic product. The estimation of the binary mixture of 4-aminoantipyrine and its acidic product was achieved by first-derivative and second-derivative spectrophotometric methods, applying zero-crossing at a valley (255.9 nm and 234.5 nm) for 4-aminoantipyrine and at a peak (243.3 nm and 227.3 nm) for the acidic product. The coefficients of determination for the linear graphs were not less than 0.996, and the recovery percentages were found to be in the range of 96.555 to 102.160. The normal ratio spectrophotometric method (0DD) used 50 mg/L of the acidic product as a divisor, with measurement at 299.9 nm and a correlation …
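The zero-crossing derivative technique amounts to differentiating the absorbance spectrum and reading its amplitude at a wavelength where the interfering component's derivative is zero. The short sketch below illustrates that numerically; the wavelength grid, file name, and calibration coefficients are placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical inputs: a wavelength grid (nm) and the mixture's absorbance on
# that grid; in practice both come from the recorded spectrum.
wavelengths = np.linspace(200.0, 320.0, 601)        # 0.2 nm step (assumed)
absorbance = np.loadtxt("mixture_spectrum.txt")      # placeholder file name

# First- and second-derivative spectra, dA/dlambda and d2A/dlambda2.
d1 = np.gradient(absorbance, wavelengths)
d2 = np.gradient(d1, wavelengths)

def amplitude_at(spectrum, target_nm):
    # Derivative amplitude read at a zero-crossing wavelength of the other
    # component, so only the analyte of interest contributes to the signal.
    return spectrum[np.argmin(np.abs(wavelengths - target_nm))]

# Wavelengths quoted in the abstract for 4-aminoantipyrine (valleys).
signal_d1 = amplitude_at(d1, 255.9)
signal_d2 = amplitude_at(d2, 234.5)

# Concentration follows from a previously fitted calibration line; the slope
# and intercept below are placeholders, not values from the paper.
slope_d1, intercept_d1 = -0.012, 0.0005
print("4-aminoantipyrine (first derivative):", (signal_d1 - intercept_d1) / slope_d1)
print("second-derivative amplitude at 234.5 nm:", signal_d2, "(converted analogously)")
```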
The problem of missing data represents a major obstacle for researchers in the process of data analysis, since it recurs in all fields of study, including social, medical, astronomical, and clinical experiments.
The presence of such a problem within the data under study may negatively influence the analysis and may lead to misleading conclusions, because the results carry a large bias caused by the missing values. Despite the efficiency of wavelet methods, they too are affected by missing data, in addition to the resulting loss of estimation accuracy.
The available experimental data on the proton electronic stopping power of polyethylene, Mylar, Kapton, and polystyrene are compared with the Mathematica, SRIM2013, PSTAR, and libdEdx programs or databases. The comparison involves plotting both the experimental and database data for each polymer to discuss their agreement. Further, we use the standard deviation of the mean normalized difference as a statistical measure of how closely the databases agree with the experimental data. We found that no single database can describe the experimental data for a given material at a given proton energy.
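For clarity, the agreement measure described above can be computed as follows; the stopping-power numbers in this sketch are placeholders, not measured values, and the exact definition used in the paper may differ slightly.

```python
import numpy as np

def agreement_stats(experimental, database):
    # Normalized difference between a database value and the experiment,
    # expressed relative to the experimental stopping power.
    norm_diff = (database - experimental) / experimental
    # Mean normalized difference and its standard deviation serve as the
    # agreement measures between a database and the experimental data.
    return norm_diff.mean(), norm_diff.std(ddof=1)

# Hypothetical example: stopping powers (MeV cm^2/g) for one polymer at a few
# proton energies, compared against one database.
exp_sp  = np.array([212.0, 150.3, 118.7, 99.2])
db_sp   = np.array([208.5, 151.9, 117.1, 100.4])
mean_nd, std_nd = agreement_stats(exp_sp, db_sp)
print(f"mean normalized difference = {mean_nd:.4f}, std = {std_nd:.4f}")
```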
The objective of the study is to determine which has the better predictive ability, the logistic regression model or the linear discriminant function, first using the original data and then using the principal components to reduce the dimensionality of the variables. The data come from the 2012 socio-economic family survey of Baghdad province and comprise a sample of 615 observations on 13 variables, 12 of which are explanatory variables; the dependent variable is the number of workers and the unemployed.
A comparison of the two methods above was conducted, and it became clear from the comparison that the logistic regression model is better than the linear discriminant function.
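A minimal scikit-learn sketch of this kind of comparison is shown below; the synthetic arrays stand in for the actual survey data, and PCA stands in for the dimension-reduction step, so the numbers it prints are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder data: X would hold the 12 explanatory variables for the 615
# observations, y the employment-status label from the survey.
rng = np.random.default_rng(0)
X = rng.normal(size=(615, 12))
y = rng.integers(0, 2, size=615)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "linear discriminant": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    # Compare each classifier on the original variables and on a reduced
    # representation (PCA with an arbitrary number of components).
    full = cross_val_score(model, X, y, cv=5).mean()
    reduced = cross_val_score(make_pipeline(PCA(n_components=5), model), X, y, cv=5).mean()
    print(f"{name}: accuracy on original data = {full:.3f}, on components = {reduced:.3f}")
```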
Pathological blood clots in blood vessels, which often lead to cardiovascular diseases, are one of the most common causes of death in humans. Therefore, enzymatic therapy to degrade blood clots is vital. To achieve this goal, bromelain was immobilized and used for the biodegradation of blood clots. Bromelain was extracted from pineapple fruit pulp (Ananas comosus) and purified by ion-exchange chromatography after precipitation with ammonium sulphate (0-80%), resulting in a yield of 70%, a purification fold of 1.42, and a specific activity of 1175 U/mg. Bromelain was covalently immobilized on functionalized multi-walled carbon nanotubes (MWCNT), with an enzyme loading of 71.35%. The results of the characterization of the free and immobilized bromelain …
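For readers less familiar with the purification figures quoted above, the standard definitions below relate yield, purification fold, specific activity, and enzyme loading; they are the usual textbook forms, and the paper itself may define enzyme loading slightly differently (e.g., by activity rather than protein).

```latex
\[
\text{specific activity} = \frac{\text{total activity (U)}}{\text{total protein (mg)}},
\qquad
\text{purification fold} = \frac{\text{specific activity}_{\text{purified}}}{\text{specific activity}_{\text{crude}}}
\]
\[
\text{yield (\%)} = \frac{\text{total activity}_{\text{purified}}}{\text{total activity}_{\text{crude}}}\times 100,
\qquad
\text{enzyme loading (\%)} = \frac{\text{protein bound to the MWCNT support}}{\text{protein offered}}\times 100
\]
```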
The research is a detailed comparative study of selected opinions of Imam Alsamarqandi on some subjects of washing and touching the Gracious Quran. The value of this study is that it relates to one aspect of the duties obliged on Muslims, such as purity. The study has tried to collect the opinions of scholars from eight doctrines alongside the selections of Alsamarqandi, to make a comparison between them and to show how Alsamarqandi was able to derive legal rulings from his sources, so that researchers may gain knowledge of the methods of the famous scholars. Finally, we ask God to bless what is right, to accept this study, and to make it part of our good deeds, Ameen.