This paper proposes an improved solution for EEG-based brain language signal classification using machine learning and optimization algorithms. The aim is to improve brain signal classification for language processing tasks by achieving higher accuracy and faster processing. Feature extraction is performed using a modified Discrete Wavelet Transform (DWT), which decomposes the EEG signals into significant frequency components and thereby captures signal characteristics more effectively. A Gray Wolf Optimization (GWO) algorithm is then applied to select the optimal features, retaining those with maximum relevance while minimizing redundancy, which improves the overall performance of the classification model. For classification, a hybrid Support Vector Machine (SVM) and Neural Network (NN) model is presented: it combines the SVM's ability to handle high-dimensional feature spaces with the neural network's capacity for non-linear pattern learning. The model was trained and tested on an EEG dataset and achieved a classification accuracy of 97%, indicating the robustness and efficacy of the method. The results suggest that this improved classifier can be used in brain–computer interface systems and neurological evaluations. The combination of machine learning and optimization techniques establishes this approach as a promising direction for further research in EEG signal processing for brain language recognition.
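As a rough illustration of the feature-extraction and classification stages described above, the sketch below (assuming synthetic single-channel EEG trials and a standard, unmodified wavelet decomposition) computes DWT sub-band statistics and trains an SVM; the modified DWT, the GWO feature-selection step, and the NN half of the hybrid model are not reproduced.

```python
# Minimal sketch: DWT sub-band statistics + SVM on synthetic data.
# All data, wavelet choices, and parameters are illustrative assumptions.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def dwt_features(signal, wavelet="db4", level=4):
    """Decompose one EEG channel and summarize each sub-band statistically."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        feats += [band.mean(), band.std(), np.abs(band).max(), (band ** 2).sum()]
    return np.array(feats)

# Hypothetical data: 200 single-channel trials, 512 samples each, 2 classes.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 512))
y = rng.integers(0, 2, size=200)

X = np.vstack([dwt_features(trial) for trial in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```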
High Power Amplifiers (HPAs), which are used in wireless communication, are distinctly characterized by nonlinear behavior. HPA linearity can be achieved by backing the amplifier off into its linear region, at the expense of power performance. Meanwhile, the Orthogonal Frequency Division Multiplexing (OFDM) signal has a highly fluctuating envelope, so a large back-off into the linear operating region is required, which leads to a significant loss in power efficiency. Back-off is therefore not a satisfactory solution. Digital predistorters based on the Simplicial Canonical Piecewise-Linear (SCPWL) model are widely employed to compensate for the nonlinear distortion introduced by the HPA in OFDM systems. In this paper, the genetic algorithm…
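As a hedged illustration of the SCPWL model form only, the sketch below fits a one-dimensional SCPWL curve to a toy saturating AM/AM characteristic by least squares; the breakpoints, the amplifier curve, and the fitting method are assumptions, and the paper's GA-based coefficient search and complex-baseband predistorter are not reproduced.

```python
# Minimal sketch of a 1-D SCPWL curve fit (least squares); all inputs are toys.
import numpy as np

def scpwl_design_matrix(x, breakpoints):
    """Columns: 1, x, |x - beta_k| for each breakpoint beta_k."""
    cols = [np.ones_like(x), x] + [np.abs(x - b) for b in breakpoints]
    return np.column_stack(cols)

# Hypothetical AM/AM curve of a saturating amplifier (placeholder, not a
# measured HPA characteristic).
x = np.linspace(0.0, 1.0, 400)          # normalized input amplitude
y = np.tanh(2.5 * x)                    # toy nonlinear output amplitude

betas = np.linspace(0.1, 0.9, 8)        # assumed breakpoint grid
A = scpwl_design_matrix(x, betas)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ coeffs
print("max fit error:", np.abs(y - y_hat).max())
```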
Heuristic approaches are traditionally applied to find the optimal size and location of Flexible AC Transmission Systems (FACTS) devices in power systems. The Genetic Algorithm (GA) has been applied to power engineering optimization problems, giving better results than classical methods. This paper shows the application of GA to the optimal sizing and allocation of a Static Compensator (STATCOM) in a power system. STATCOM devices are used to increase transmission system capacity and enhance voltage stability by regulating the voltage at their terminals through control of the amount of reactive power injected into or absorbed from the power system. The standard IEEE 5-bus system is used as an example to illustrate the technique…
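A minimal GA sketch for choosing a STATCOM bus and Mvar rating is given below; the fitness function is a toy voltage-deviation surrogate rather than a power-flow solution, and the bus voltages, rating limits, and GA settings are illustrative assumptions.

```python
# Minimal GA sketch for (bus, Mvar) placement; fitness is a toy surrogate.
import random

BUSES = [1, 2, 3, 4, 5]                 # 5-bus placement candidates (assumed labels)
BASE_VOLT = {1: 1.00, 2: 0.97, 3: 0.94, 4: 0.96, 5: 0.93}   # assumed p.u. profile

def fitness(bus, mvar):
    """Toy surrogate: reactive support reduces voltage deviation, most at the chosen bus."""
    dev = 0.0
    for b, v in BASE_VOLT.items():
        support = 0.002 * mvar if b == bus else 0.0005 * mvar
        dev += abs(1.0 - min(v + support, 1.05))
    return -dev                          # GA maximizes fitness

def random_individual():
    return (random.choice(BUSES), random.uniform(0.0, 50.0))   # (bus, Mvar)

def crossover(p1, p2):
    return (p1[0], 0.5 * (p1[1] + p2[1]))

def mutate(ind, rate=0.2):
    bus, mvar = ind
    if random.random() < rate:
        bus = random.choice(BUSES)
    if random.random() < rate:
        mvar = min(max(mvar + random.gauss(0.0, 5.0), 0.0), 50.0)
    return (bus, mvar)

pop = [random_individual() for _ in range(30)]
for _ in range(60):                      # generations
    pop.sort(key=lambda ind: fitness(*ind), reverse=True)
    parents = pop[:10]                   # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=lambda ind: fitness(*ind))
print("best placement (bus, Mvar):", best)
```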
The objective of the current research is to find an optimum design of moderately thick hybrid laminated composite plates under a static constraint. The stacking sequence and ply angles are optimized to achieve minimum deflection for hybrid laminated composite plates consisting of glass and carbon long-fiber reinforcements embedded in an epoxy matrix, with known plate dimensions and loading. The plate analysis adopts first-order shear deformation theory and uses Navier's solution together with a Genetic Algorithm to reach the objective. A program written in MATLAB finds the stacking sequence and ply angles that give minimum deflection, and the results are compared with ANSYS.
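The sketch below illustrates only one building block of such an optimization: the laminate bending-stiffness matrix computed from a candidate stacking sequence via classical lamination theory. The paper's FSDT/Navier deflection solution and GA search are not reproduced, and the material constants shown are assumed rather than the paper's glass/carbon/epoxy data.

```python
# Minimal classical-lamination-theory sketch: bending stiffness D for a layup.
import numpy as np

def reduced_stiffness(E1, E2, G12, v12):
    v21 = v12 * E2 / E1
    d = 1.0 - v12 * v21
    return np.array([[E1 / d, v12 * E2 / d, 0.0],
                     [v12 * E2 / d, E2 / d, 0.0],
                     [0.0, 0.0, G12]])

def transformed_Q(Q, theta_deg):
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Q11, Q12, Q22, Q66 = Q[0, 0], Q[0, 1], Q[1, 1], Q[2, 2]
    Qb = np.empty((3, 3))
    Qb[0, 0] = Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4
    Qb[1, 1] = Q11*s**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*c**4
    Qb[0, 1] = Qb[1, 0] = (Q11 + Q22 - 4*Q66)*s**2*c**2 + Q12*(s**4 + c**4)
    Qb[2, 2] = (Q11 + Q22 - 2*Q12 - 2*Q66)*s**2*c**2 + Q66*(s**4 + c**4)
    Qb[0, 2] = Qb[2, 0] = (Q11 - Q12 - 2*Q66)*s*c**3 + (Q12 - Q22 + 2*Q66)*s**3*c
    Qb[1, 2] = Qb[2, 1] = (Q11 - Q12 - 2*Q66)*s**3*c + (Q12 - Q22 + 2*Q66)*s*c**3
    return Qb

def bending_stiffness(angles_deg, ply_t, Q):
    n = len(angles_deg)
    z = np.linspace(-n * ply_t / 2, n * ply_t / 2, n + 1)   # ply interface coordinates
    D = np.zeros((3, 3))
    for k, th in enumerate(angles_deg):
        D += transformed_Q(Q, th) * (z[k + 1]**3 - z[k]**3) / 3.0
    return D

# Assumed carbon/epoxy-like constants and an example symmetric layup.
Q = reduced_stiffness(E1=140e9, E2=10e9, G12=5e9, v12=0.3)
print(bending_stiffness([0, 45, -45, 90, 90, -45, 45, 0], ply_t=0.125e-3, Q=Q))
```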
Artificial intelligence algorithms have been used in recent years in many scientific fields. We suggest employing the flower pollination algorithm in the environmental field to find the best estimate of the semi-parametric regression function with measurement errors in the explanatory variables and the dependent variable; such measurement errors, rather than exact measurements, appear frequently in fields such as chemistry, the biological sciences, medicine, and epidemiological studies. We estimate the regression function of the semi-parametric model by estimating its parametric and non-parametric components; the parametric component is estimated using instrumental-variables methods (the Wald method, Bartlett's method, and Durbin…
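A minimal flower-pollination-algorithm sketch is shown below, minimizing a toy least-squares objective as a stand-in for the paper's semi-parametric, errors-in-variables criterion; the instrumental-variables estimators (Wald, Bartlett, Durbin) are not reproduced, and all algorithm settings are assumptions.

```python
# Minimal flower pollination algorithm (FPA) on a toy regression objective.
import math
import numpy as np

rng = np.random.default_rng(1)

def levy(dim, beta=1.5):
    """Mantegna-style Levy step used for global pollination."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def objective(theta, X, y):
    return np.sum((y - X @ theta) ** 2)      # placeholder regression loss

# Toy data standing in for the parametric part of the semi-parametric model.
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.1, size=100)

n_flowers, dim, p_switch = 20, 3, 0.8
pop = rng.normal(size=(n_flowers, dim))
fit = np.array([objective(f, X, y) for f in pop])
best = pop[fit.argmin()].copy()

for _ in range(300):
    for i in range(n_flowers):
        if rng.random() < p_switch:                         # global pollination
            cand = pop[i] + levy(dim) * (best - pop[i])
        else:                                               # local pollination
            j, k = rng.choice(n_flowers, 2, replace=False)
            cand = pop[i] + rng.random() * (pop[j] - pop[k])
        if objective(cand, X, y) < fit[i]:
            pop[i], fit[i] = cand, objective(cand, X, y)
            if fit[i] < objective(best, X, y):
                best = cand.copy()

print("FPA estimate:", best)
```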
Problem: Cancer is regarded as one of the world's deadliest diseases. Machine learning and its newer branch, deep learning, can facilitate dealing with cancer, especially in the field of cancer prevention and detection. Traditional ways of analyzing cancer data have their limits, and cancer data is growing quickly, which makes deep learning, with its powerful ability to analyze and process such data, well suited to move the field forward. Aims: In the current study, a deep-learning medical support system for the prediction of lung cancer is presented. Methods: The study uses three different deep learning models (EfficientNetB3, ResNet50, and ResNet101) with the transfer learning concept. The three models are trained using a…
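A minimal transfer-learning sketch with one of the named backbones (ResNet50) is given below; the dataset path, image size, class count, and training settings are placeholders rather than the study's configuration.

```python
# Minimal transfer-learning sketch: frozen ResNet50 backbone + new classifier head.
import tensorflow as tf

NUM_CLASSES = 3          # assumed, e.g. normal / benign / malignant
IMG_SIZE = (224, 224)

base = tf.keras.applications.ResNet50(weights="imagenet",
                                      include_top=False,
                                      input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of labeled lung images (the path is a placeholder).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "lung_images/train", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=5)
```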
A single-mode–no-core–single-mode fiber structure with a section of tuned no-core fiber (NCF) diameter for sensing changes in relative humidity (RH) has been experimentally demonstrated. The sensor performance with tuned NCF diameter was investigated to maximize the evanescent fields. Tuned diameters of 100, 80, and 60 μm were obtained by a chemical etching process based on hydrofluoric acid immersion. The highest wavelength sensitivity, 184.57 pm/%RH over the RH range of 30%–100%, was obtained when the no-core fiber diameter was 60 μm, and the sensor response was measured in real time.
There are many methods of searching a large amount of data to find one particular piece of information, such as finding a person's name in a mobile phone's contact records. Certain methods of organizing data make the search process more efficient; their objective is to find the element at the least cost (least time). The binary search algorithm is faster than sequential search and other commonly used search algorithms. This research develops the binary search algorithm by using a new structure called Triple, in which data are represented as triples consisting of three locations (1-Top, 2-Left, and 3-Right). The binary search algorithm divides the search interval in half, a process that determines the maximum number of comparisons (average-case com…
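A minimal sketch of classic binary search on a sorted list is shown below; the paper's Triple structure (Top/Left/Right) is its own contribution and is not reproduced here.

```python
# Minimal sketch of classic binary search on a sorted list (O(log n) comparisons).
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2          # halve the search interval
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

names = ["Ali", "Hana", "Omar", "Sara", "Zaid"]      # e.g. sorted contact names
print(binary_search(names, "Sara"))                  # -> 3
```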
The objective of this work is to study the influence of end milling process parameters, tool material, and tool geometry on multi-response outputs for 4032 Al alloy. This is done by proposing an approach that combines the Taguchi method with grey relational analysis. Three cutting parameters have been selected (spindle speed, feed rate, and depth of cut) with three levels for each parameter. Three tools with different materials and geometries have also been used; the experimental tests and runs were designed based on the Taguchi L9 orthogonal array. The end milling process with several output characteristics is evaluated using grey relational analysis. The results of the analysis of variance (ANOVA) showed that the major parameters influencing the multi-objective response w…
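A minimal grey-relational-analysis sketch for ranking nine experimental runs with two responses is shown below; the response values are made-up placeholders (not the paper's measurements), and both responses are treated as smaller-the-better.

```python
# Minimal grey relational analysis: normalize, deviation, coefficient, grade.
import numpy as np

# Rows: 9 runs; columns: e.g. surface roughness, cutting force (assumed data).
responses = np.array([
    [0.82, 210], [0.75, 195], [0.91, 230],
    [0.68, 180], [0.79, 205], [0.88, 225],
    [0.62, 170], [0.73, 190], [0.85, 215],
])

# Step 1: normalize, smaller-the-better: (max - x) / (max - min), per column.
norm = (responses.max(axis=0) - responses) / (responses.max(axis=0) - responses.min(axis=0))

# Step 2: deviation from the ideal (normalized value of 1).
delta = 1.0 - norm

# Step 3: grey relational coefficients with distinguishing coefficient zeta = 0.5.
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 4: grey relational grade = mean coefficient per run; higher is better.
grade = grc.mean(axis=1)
print("run ranking (best first):", np.argsort(-grade) + 1)
```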