In high-dimensional semiparametric regression, balancing accuracy and interpretability often requires combining dimension reduction with variable selection. This study introduces two novel methods for dimension reduction in additive partial linear models: (i) minimum average variance estimation (MAVE) combined with the adaptive least absolute shrinkage and selection operator (MAVE-ALASSO) and (ii) MAVE with smoothly clipped absolute deviation (MAVE-SCAD). These methods leverage the flexibility of MAVE for sufficient dimension reduction while incorporating adaptive penalties to ensure sparse and interpretable models. The performance of both methods is evaluated through simulations using the mean squared error and variable selection criteria, assessing the correct detection of zero coefficients and the false omission of nonzero coefficients. A practical application involving financial data from the Baghdad Soft Drinks Company demonstrates their utility in identifying key predictors of stock market value. The results indicate that MAVE-SCAD performs well in high-dimensional and complex scenarios, whereas MAVE-ALASSO is better suited to small samples, producing more parsimonious models. These results highlight the effectiveness of these two methods in addressing key challenges in semiparametric modeling.
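As a rough illustration of the penalization component only (not the full MAVE-ALASSO procedure described above), the following minimal sketch shows the standard adaptive LASSO column-rescaling trick; the OLS pilot estimate, the penalty level, and the toy data are illustrative assumptions.

```python
# Minimal sketch of the adaptive LASSO step only (not the paper's MAVE-ALASSO);
# the pilot estimate, penalty level, and toy data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
    """Adaptive LASSO via the usual column-rescaling trick."""
    pilot = LinearRegression().fit(X, y).coef_       # pilot (initial) estimate
    weights = np.abs(pilot) ** gamma + 1e-8          # adaptive weights
    X_scaled = X * weights                           # rescale columns
    fit = Lasso(alpha=alpha, max_iter=10_000).fit(X_scaled, y)
    return fit.coef_ * weights                       # map back to the original scale

# toy example: only the first two predictors are active
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=200)
print(np.round(adaptive_lasso(X, y), 3))
```

The rescaling trick lets an ordinary LASSO solver impose coefficient-specific penalties, which is what drives the sparse, interpretable fits described above.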
The linear segment with parabolic blend (LSPB) trajectory deviates from the specified waypoints and is restricted by the requirement that the acceleration be sufficiently high. In this work, a modified LSPB trajectory combined with particle swarm optimization (PSO) is proposed so as to create through points on the trajectory. The assumption of the standard LSPB method that the parabolic part is centered in time around the waypoints is replaced by proposed coefficients for calculating the time duration of the linear part. These coefficients are functions of the velocities between the through points. The velocities are obtained by PSO so as to force the LSPB trajectory to pass exactly through the specified path points. Also, relations for velocity correction and exact v
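For context, the baseline profile being modified can be sketched as the textbook LSPB (trapezoidal-velocity) interpolation between two points; the function below is a minimal sketch of that standard profile only, not the PSO-modified version, and the parameter names (q0, qf, tf, v) are illustrative assumptions.

```python
# Minimal sketch of a textbook LSPB profile between two points; this is the
# baseline only, not the paper's PSO-modified method.
import numpy as np

def lspb(q0, qf, tf, v):
    """Return a callable q(t) for a linear segment with parabolic blends."""
    if not (abs(qf - q0) / tf < abs(v) <= 2 * abs(qf - q0) / tf):
        raise ValueError("cruise velocity v is infeasible for this move")
    tb = (q0 - qf + v * tf) / v              # blend (parabolic) duration
    a = v / tb                               # blend acceleration

    def q(t):
        if t <= tb:                          # initial parabolic blend
            return q0 + 0.5 * a * t ** 2
        if t <= tf - tb:                     # constant-velocity linear segment
            return (qf + q0 - v * tf) / 2 + v * t
        return qf - 0.5 * a * (tf - t) ** 2  # final parabolic blend

    return q

traj = lspb(q0=0.0, qf=1.0, tf=2.0, v=0.75)
print([round(traj(t), 3) for t in np.linspace(0.0, 2.0, 5)])
```

In the standard profile the blends are centered on the waypoints, which is exactly the assumption the proposed coefficients replace.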
Feature selection (FS) constitutes a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude for predictive modeling. It is a crucial task that aids machine learning classifiers in reducing error rates, computation time, and overfitting, and in improving classification accuracy. It has demonstrated its efficacy in many domains, including text classification (TC), text mining, and image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC. Therefore, a comprehensive overview was systematically
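As a rough illustration of how a metaheuristic is typically wrapped around a classifier for FS, the sketch below scores one binary feature mask by cross-validated accuracy with a small penalty on subset size; the weights, classifier, and dataset are illustrative assumptions, not the setups surveyed in the review.

```python
# Minimal sketch of a wrapper-style FS fitness function that a metaheuristic
# could maximise; weights, classifier, and dataset are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask, w_acc=0.95, w_size=0.05):
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0                                   # empty subsets are invalid
    acc = cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()
    return w_acc * acc + w_size * (1 - mask.mean())  # favour small, accurate subsets

rng = np.random.default_rng(0)
mask = rng.random(X.shape[1]) < 0.5                  # one candidate a metaheuristic might propose
print(round(fitness(mask), 4))
```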
Among the metaheuristic algorithms, population-based algorithms are explorative search algorithms that are superior to local search algorithms in terms of exploring the search space to find globally optimal solutions. However, the primary downside of such algorithms is their low exploitative capability, which limits their ability to search the neighborhood of the current solutions for more optimal ones. The firefly algorithm (FA) is a population-based algorithm that has been widely used in clustering problems. However, FA suffers from premature convergence when no neighborhood search strategies are employed to improve the quality of clustering solutions in the neighborhood region and to explore the global regions of the search space. On the
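For reference, a minimal sketch of the canonical firefly update (attraction toward brighter fireflies plus a damped random walk) on a generic objective is given below; the sphere objective and parameter values are illustrative assumptions rather than the clustering formulation discussed here.

```python
# Minimal sketch of the canonical firefly algorithm on a generic objective;
# the objective and parameters are illustrative assumptions, not the paper's
# clustering setup.
import numpy as np

def firefly(objective, dim, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(n, dim))            # initial firefly positions
    light = np.apply_along_axis(objective, 1, pop)     # brightness = fitness (lower is better)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:                # j is brighter, so i moves toward j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
                    light[i] = objective(pop[i])
        alpha *= 0.97                                   # slowly damp the random walk
    best = np.argmin(light)
    return pop[best], light[best]

x, fx = firefly(lambda v: np.sum(v ** 2), dim=5)
print(np.round(x, 3), round(fx, 6))
```

The pairwise attraction step is the global (explorative) component; the absence of any dedicated local refinement around good solutions is what the premature-convergence criticism above refers to.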
This paper discusses an optimal path planning algorithm based on an Adaptive Multi-Objective Particle Swarm Optimization Algorithm (AMOPSO) for two case studies. In the first case, a single robot must reach a goal in a static environment that contains two obstacles and two danger sources. The second case improves the ability of five robots to reach their goals by the shortest way. The proposed algorithm solves the optimization problem in the first case by finding the minimum distance from the initial to the goal position while also ensuring that the generated path keeps a maximum distance from the danger zones. For the second case, it finds the shortest path for every robot, without any collision between the robots, in the shortest time. In ord
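To make the two objectives concrete, the sketch below evaluates a candidate waypoint path by total length plus an inverse-distance penalty to the nearest danger source; the weighted-sum form and the coordinates are illustrative assumptions, not the AMOPSO formulation itself.

```python
# Minimal sketch of a two-objective path cost (length vs. danger clearance);
# the weighting scheme and coordinates are illustrative assumptions.
import numpy as np

def path_cost(waypoints, dangers, w_len=1.0, w_danger=5.0):
    pts = np.asarray(waypoints, dtype=float)
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))   # total path length
    # danger term: inverse distance from each waypoint to its nearest danger source
    d = np.min(np.linalg.norm(pts[:, None, :] - np.asarray(dangers)[None, :, :], axis=2), axis=1)
    danger = np.sum(1.0 / (d + 1e-6))
    return w_len * length + w_danger * danger

path = [(0, 0), (2, 1), (4, 3), (6, 6)]
dangers = [(3, 2), (5, 5)]
print(round(path_cost(path, dangers), 3))
```

A swarm optimizer would treat the waypoint coordinates as particle positions and minimize this kind of cost, which is how "shortest path" and "maximum distance from danger zones" are traded off in one search.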
Negotiations are distinguished by being a simple and accessible means between conflicting parties, and at the same time an effective one, as the conflicting parties seek an understanding on the most effective way to resolve their dispute. However, negotiations are not always appropriate for resolving international disputes, especially when there is a disparity in power between the negotiating countries, when goodwill is lacking, or when one of the parties is absent or less flexible. The internal circumstances of one of the conflicting countries may also play a negative or positive role in the success of the negotiations, apart from the influence of external variables in that, a
Most Internet of Vehicles (IoV) applications are delay-sensitive and require resources for data storage and task processing, which is very difficult for vehicles to afford. Such tasks are often offloaded to more powerful entities, like cloud and fog servers. Fog computing is a decentralized infrastructure located between the data sources and the cloud, and it supplies several benefits that make it a non-trivial extension of the cloud. The high volume of data generated by vehicles' sensors, together with the limited computation capabilities of vehicles, has imposed several challenges on VANET systems. Therefore, VANETs are integrated with fog computing to form a paradigm, namely Vehicular Fog Computing (VFC), which provides low-latency services to mo
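As a rough illustration of the latency trade-off that motivates offloading to fog rather than cloud, the sketch below compares completion times for local, fog, and cloud execution of one task; all rates, sizes, and round-trip times are illustrative assumptions.

```python
# Minimal sketch of a latency-driven offloading choice; all numbers are
# illustrative assumptions, not measurements from the paper.
def completion_time(task_bits, cpu_cycles, cpu_rate_hz, link_rate_bps=None, rtt_s=0.0):
    """Transmission delay (if offloaded) plus processing delay."""
    tx = task_bits / link_rate_bps if link_rate_bps else 0.0
    return rtt_s + tx + cpu_cycles / cpu_rate_hz

task_bits, cpu_cycles = 8e6, 5e9          # 8 Mb of data, 5 Gcycles of work
options = {
    "local": completion_time(task_bits, cpu_cycles, cpu_rate_hz=1e9),
    "fog":   completion_time(task_bits, cpu_cycles, cpu_rate_hz=10e9,
                             link_rate_bps=50e6, rtt_s=0.005),
    "cloud": completion_time(task_bits, cpu_cycles, cpu_rate_hz=20e9,
                             link_rate_bps=10e6, rtt_s=0.08),
}
print(min(options, key=options.get), {k: round(v, 3) for k, v in options.items()})
```

Under these assumed numbers the fog option wins because its short transmission path outweighs the cloud's faster processors, which is the core argument for placing computation near the vehicles.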
XML is being incorporated into the foundation of E-business data applications. This paper addresses the problem of the freeform information stored in any organization and how XML, used with this new approach, will make search operations efficient and less time consuming. The paper introduces a new solution and methodology developed to capture and manage such unstructured freeform information (multi-information), based on XML schema technologies, the neural network idea, and an object-oriented relational database, in order to provide a practical solution for efficiently managing a multi freeform information system.
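As a small illustration of storing freeform records as XML and querying them, the sketch below uses Python's standard xml.etree.ElementTree; the element names and topics are illustrative assumptions, not the schema proposed in the paper.

```python
# Minimal sketch of wrapping freeform records in XML and querying them with
# XPath-style lookups; element names are illustrative assumptions.
import xml.etree.ElementTree as ET

store = ET.Element("records")
for rid, (topic, text) in enumerate([
    ("finance", "Quarterly revenue commentary in free text ..."),
    ("hr", "Unstructured meeting notes about hiring ..."),
]):
    rec = ET.SubElement(store, "record", id=str(rid), topic=topic)
    ET.SubElement(rec, "body").text = text

# retrieve all records tagged with a given topic
hits = store.findall(".//record[@topic='finance']")
print([r.get("id") for r in hits])
```

Tagging each freeform record with structured attributes is what makes such searches cheap; a schema (XSD) over the record element would add the validation layer the abstract alludes to.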