This research studies the problem of scheduling jobs on a single machine to minimize multi-criteria and multi-objective functions. The first problem is minimizing the multi-criteria (∑𝐶𝑗, ∑𝑉𝑗, 𝐸𝑚𝑎𝑥), which comprise total completion time, total late work, and maximum earliness; the second is minimizing the multi-objective function ∑𝐶𝑗 + ∑𝑉𝑗 + 𝐸𝑚𝑎𝑥. A mathematical model is created to address both problems, together with rules that provide efficient (optimal) solutions. It is also proven that each optimal solution for ∑𝐶𝑗 + ∑𝑉𝑗 + 𝐸𝑚𝑎𝑥 is an efficient solution to the problem (∑𝐶𝑗, ∑𝑉𝑗, 𝐸𝑚𝑎𝑥). Because these problems are NP-hard, determining their complete set of efficient (optimal) solutions is difficult; therefore, several special cases that yield efficient (optimal) solutions are identified and proven, and the significance of the Dominance Rule (DR), which can be applied to this problem to enhance the efficient solutions, is highlighted.
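As a point of reference, the three criteria can be evaluated for a given job sequence as in the minimal sketch below, assuming the standard definitions 𝐶𝑗 (completion time), 𝑉𝑗 = min(max(𝐶𝑗 − 𝑑𝑗, 0), 𝑝𝑗) (late work), and 𝐸𝑗 = max(𝑑𝑗 − 𝐶𝑗, 0) (earliness); the processing times and due dates used here are hypothetical:

```python
# Minimal sketch: evaluating the three scheduling criteria for one job sequence.
# Assumes the standard definitions above; the job data are hypothetical.

def evaluate(sequence, p, d):
    t = 0
    total_completion, total_late_work, max_earliness = 0, 0, 0
    for j in sequence:
        t += p[j]                       # completion time C_j of job j
        total_completion += t           # accumulates sum C_j
        tardiness = max(t - d[j], 0)
        total_late_work += min(tardiness, p[j])               # late work V_j
        max_earliness = max(max_earliness, max(d[j] - t, 0))  # E_max
    return total_completion, total_late_work, max_earliness

p = {1: 3, 2: 5, 3: 2}   # hypothetical processing times
d = {1: 4, 2: 6, 3: 10}  # hypothetical due dates
print(evaluate([1, 3, 2], p, d))  # -> (18, 4, 5); their sum is the combined objective
```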
Future wireless communication systems must be able to accommodate a large number of users and simultaneously provide high data rates at the required quality of service. In this paper, a method is proposed to perform the N-Discrete Hartley Transform (N-DHT) mapper, which is equivalent to 4-Quadrature Amplitude Modulation (QAM), 16-QAM, 64-QAM, 256-QAM, etc. in spectral efficiency. The N-DHT mapper is chosen in the Multi-Carrier Code Division Multiple Access (MC-CDMA) structure to serve as a data mapper in place of conventional data mapping techniques such as the QPSK and QAM schemes. The proposed system is simulated using MATLAB and compared with conventional MC-CDMA over Additive White Gaussian Noise, flat, and multi-path selective fading channels.
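For reference, the discrete Hartley transform at the core of such a mapper uses the real-valued cas kernel, cas(θ) = cos θ + sin θ. The sketch below illustrates the N-point DHT itself, not the paper's specific constellation-mapping design:

```python
import numpy as np

# Minimal sketch of the N-point discrete Hartley transform (DHT) using the
# cas kernel cas(t) = cos(t) + sin(t). Illustrative only; the paper's N-DHT
# mapper builds on this transform, but its mapping rules are not shown here.

def dht(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    cas = np.cos(2 * np.pi * k * n / N) + np.sin(2 * np.pi * k * n / N)
    return cas @ x

x = np.array([1.0, 0.0, -1.0, 0.5])
X = dht(x)
print(np.allclose(dht(X) / len(x), x))  # True: the DHT is its own inverse up to 1/N
```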
Most Internet of Vehicles (IoV) applications are delay-sensitive and require resources for data storage and task processing, which are very difficult for vehicles to afford. Such tasks are often offloaded to more powerful entities, like cloud and fog servers. Fog computing is a decentralized infrastructure located between the data source and the cloud, supplying several benefits that make it a non-trivial extension of the cloud. The high volume of data generated by vehicles' sensors and the limited computation capabilities of vehicles have imposed several challenges on VANET systems. Therefore, VANETs are integrated with fog computing to form a paradigm named Vehicular Fog Computing (VFC), which provides low-latency services to mobile vehicles.
Pharmaceutical-instigated pollution is a major concern, especially in relation to aquatic environments and drugs such as the antibiotic meropenem. Adsorbents such as multi-walled carbon nanotubes offer potential as a means of removing polluting meropenem and similar compounds from water. To evaluate the effectiveness of multi-walled carbon nanotubes in this capacity, various experimental parameters, including contact time, initial concentration, pH, temperature, and adsorbent dose, have been investigated. The Langmuir and Freundlich isotherm models have been used; the data obtained using a modified Langmuir model have been consistent with the experimental ones, and the best pH value has been determined.
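For reference, the two isotherms take their standard forms, with q_e the equilibrium adsorption capacity, C_e the equilibrium concentration, and q_m, K_L, K_F, n fitted constants (the paper's fitted values are not reproduced here):

```latex
% Langmuir isotherm (monolayer adsorption):
q_e = \frac{q_m K_L C_e}{1 + K_L C_e}
% Freundlich isotherm (heterogeneous surface):
q_e = K_F \, C_e^{1/n}
```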
XML is being incorporated into the foundation of e-business data applications. This paper addresses the problem of the freeform information stored in any organization and how XML, using this new approach, makes search operations efficient and time-saving. The paper introduces a new solution and methodology developed to capture and manage such unstructured freeform information (multi-information), depending on the use of XML Schema technologies, the neural network idea, and an object-oriented relational database, in order to provide a practical solution for efficiently managing a multi-freeform information system.
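As a small illustration of the XML Schema side of such an approach, captured freeform records can be validated against a schema before storage; the sketch below uses Python's lxml library, and the schema, record structure, and field names are hypothetical:

```python
from lxml import etree

# Hypothetical schema for a captured freeform record; in practice the schema
# would model the organization's own freeform information types.
SCHEMA = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="record">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="body" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

DOC = b"<record><title>Memo</title><body>Freeform text ...</body></record>"

schema = etree.XMLSchema(etree.fromstring(SCHEMA))
doc = etree.fromstring(DOC)
print(schema.validate(doc))  # True if the captured record conforms to the schema
```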
In high-dimensional semiparametric regression, balancing accuracy and interpretability often requires combining dimension reduction with variable selection. This study introduces two novel methods for dimension reduction in additive partial linear models: (i) minimum average variance estimation (MAVE) combined with the adaptive least absolute shrinkage and selection operator (MAVE-ALASSO) and (ii) MAVE with smoothly clipped absolute deviation (MAVE-SCAD). These methods leverage the flexibility of MAVE for sufficient dimension reduction while incorporating adaptive penalties to ensure sparse and interpretable models. The performance of both methods is evaluated through simulations using the mean squared error and variable selection criteria.
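For context, the two penalties take their standard forms; λ is a tuning parameter, the ALASSO weights w_j come from an initial estimate, and SCAD uses a > 2. These are the generic definitions, not the paper's specific tuning:

```latex
% Adaptive LASSO penalty, with weights w_j = 1 / |\hat\beta_j|^{\gamma}:
P_\lambda(\beta) = \lambda \sum_{j} w_j \, |\beta_j|

% SCAD penalty, defined through its derivative (a > 2, commonly a = 3.7):
p'_\lambda(t) = \lambda \left\{ I(t \le \lambda)
  + \frac{(a\lambda - t)_{+}}{(a - 1)\lambda} \, I(t > \lambda) \right\}, \quad t > 0
```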
Today's companies operate in an ongoing competitive business environment and try to achieve excellence in their industry by marketing their products and capturing the largest possible market share to ensure their continued existence. The just-in-time production concept, which in essence emphasizes reducing inventory in the production process to a minimum, and the marketing information system concept, which in essence emphasizes documenting all events related to the marketing of the product delivered by the production process, together constitute a subject that deserves research and investigation, given their well-known standing in the fields of production management and marketing management.
Each phenomenon involves several variables. By studying these variables, we find a mathematical formula for the joint distribution, and the copula is a useful tool for measuring the amount of correlation; here the survival function was used to measure the relationship of age to the level of creatinine remaining in a person's blood. The SPSS program was used to extract the influential variables from a group of variables using factor analysis, and then the Clayton copula function, which builds joint bivariate distributions from multivariate distributions, was applied: the bivariate distribution was calculated, and then the survival function value was computed for a sample of size 50 drawn from Yarmouk Hospital.
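For reference, the bivariate Clayton copula takes its standard form, with dependence parameter θ > 0 (the paper's estimated θ is not reproduced here):

```latex
% Bivariate Clayton copula, dependence parameter \theta > 0:
C_\theta(u, v) = \left( u^{-\theta} + v^{-\theta} - 1 \right)^{-1/\theta},
\qquad u, v \in (0, 1]
```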