Optimizing system performance in dynamic, heterogeneous environments depends on the efficient management of computational tasks, so this paper examines task scheduling and resource allocation algorithms in depth. The work evaluates five algorithms: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), the Firefly Algorithm (FA), and Simulated Annealing (SA), across workloads produced by varying the task-to-node ratio. Finish time and deadline adherence are identified as the two key performance metrics for gauging the efficacy of an algorithm, and a comprehensive investigation of the behavior of these algorithms across the different workloads was carried out. The experimental results reveal distinct patterns in algorithmic behavior by workload. In the 15-task, 5-node scenario, GA and PSO outperform all others, completing 100 percent of tasks before their deadlines, while ACO failed on Task 5. Likewise, with 10 tasks and 5 nodes, GA and PSO completed 100 percent of tasks before their deadlines, whereas ACO stumbled on certain tasks. Building on these results, the study proposes a more extensive system that adapts its algorithmic approach to workload characteristics. The proposed system offers an integrated approach to the ill-structured problem of task scheduling and resource allocation: an intelligent, aggressive scheduling scheme that runs asynchronously when a large number of tasks is submitted for completion and that dynamically aborts tasks whenever system load and utilization rise excessively. It presents a detailed method for choosing algorithms based on semantic features, aiming at flexibility, and the quantifiable statistical results of the experiments demonstrate empirically how each algorithm performed under the various settings.
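As a point of reference for the two metrics named in this abstract, the following is a minimal, hypothetical sketch: it computes a schedule's finish time (makespan) and its deadline-met percentage under a simple greedy earliest-deadline-first baseline with made-up task and node data, not under any of the five metaheuristics the study evaluates.

```python
# Hypothetical illustration: finish time and deadline-met share of a greedy schedule.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    length: float     # work units required
    deadline: float   # latest acceptable finish time

def greedy_schedule(tasks, node_speeds):
    """Assign tasks (earliest deadline first) to the node that would finish them soonest."""
    node_free = [0.0] * len(node_speeds)          # time at which each node becomes free
    finish = {}
    for t in sorted(tasks, key=lambda t: t.deadline):
        best = min(range(len(node_speeds)),
                   key=lambda i: node_free[i] + t.length / node_speeds[i])
        node_free[best] += t.length / node_speeds[best]
        finish[t.name] = node_free[best]
    return finish

tasks = [Task(f"T{i}", length=10 + 2 * i, deadline=20 + 4 * i) for i in range(10)]
finish = greedy_schedule(tasks, node_speeds=[1.0, 1.5, 2.0, 1.0, 0.5])   # 10 tasks, 5 nodes
met = sum(finish[t.name] <= t.deadline for t in tasks) / len(tasks)
print("finish time =", round(max(finish.values()), 2), "| deadline-met =", f"{met:.0%}")
```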
The current research treats production quality decisions as the most important decisions, because they go hand in hand with customer satisfaction, by monitoring the quality of drinking water in Iraq delivered through the pipeline network associated with the water treatment projects on the Tigris and Euphrates rivers. One of the quality-control tools used was the C-chart, drawn by specifying the center line and the upper and lower control limits, to diagnose whether the production system as a whole lies within the scope of quality control or not, and to determine the strength and significance of the correlation between the quantities of water supplied and customers' actual needs. The research reached a number of …
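For context, the C-chart referred to above follows the standard construction: the center line is the average count of nonconformities, c̄, and the control limits are c̄ ± 3√c̄ (with the lower limit floored at zero). A brief sketch with illustrative counts, not the study's water-quality data:

```python
# C-chart control limits for counts of nonconformities (illustrative data only).
import numpy as np

defects = np.array([4, 6, 3, 5, 7, 2, 4, 5, 6, 3])    # nonconformities per inspected unit
c_bar = defects.mean()                                 # center line
ucl = c_bar + 3 * np.sqrt(c_bar)                       # upper control limit
lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))             # lower control limit, floored at 0
out_of_control = (defects > ucl) | (defects < lcl)
print(f"CL={c_bar:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, points out of control: {out_of_control.sum()}")
```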
In this paper, reliable computational methods (RCMs) based on the standard monomial polynomials are executed to solve the problem of Jeffery-Hamel flow (JHF). In addition, convenient basis functions, namely the Bernoulli, Euler, and Laguerre polynomials, are used to enhance the reliability of the computational methods. Using such functions turns the problem into a solvable set of nonlinear algebraic equations that Mathematica® 12 can solve. The JHF problem is solved with the help of Improved Reliable Computational Methods (I-RCMs), and a review of the methods is given. Published results are also used for comparison. As further evidence of the accuracy and dependability of the proposed methods, the maximum error remainder …
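As a rough illustration of how such a problem reduces to a nonlinear algebraic system, the sketch below collocates a truncated monomial series against the commonly used dimensionless JHF equation F''' + 2αRe·F·F' + 4α²F' = 0 with F(0)=1, F'(0)=0, F(1)=0. Both the equation form and the parameter values are assumptions for illustration, and this plain collocation scheme is not the paper's I-RCM procedure.

```python
# Plain collocation of a monomial series for the (assumed) dimensionless JHF equation.
import math
import numpy as np
from scipy.optimize import fsolve

alpha, Re, N = math.radians(5.0), 50.0, 8       # illustrative half-angle, Reynolds number, degree

def deriv(c, x, m=0):
    """m-th derivative of F(eta) = sum_k c[k]*eta**k, evaluated at eta = x."""
    return sum(c[k] * math.perm(k, m) * x**(k - m) for k in range(m, len(c)))

def residuals(c):
    eta = np.linspace(0.05, 0.95, N - 2)        # interior collocation points
    ode = [deriv(c, x, 3) + 2 * alpha * Re * deriv(c, x) * deriv(c, x, 1)
           + 4 * alpha**2 * deriv(c, x, 1) for x in eta]
    bcs = [deriv(c, 0) - 1.0, deriv(c, 0, 1), deriv(c, 1)]
    return ode + bcs                            # (N - 2) + 3 = N + 1 equations

c0 = np.zeros(N + 1)
c0[0], c0[2] = 1.0, -1.0                        # start from the low-Re profile 1 - eta**2
coeffs = fsolve(residuals, c0)
print("F(0.5) ≈", deriv(coeffs, 0.5))
```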
Regression analysis is the foundation stone of statistics, and it mostly relies on the ordinary least squares (OLS) method. As is well known, however, this method requires several conditions to hold in order to operate accurately; otherwise its results can be unreliable, and the absence of certain conditions makes it impossible to complete the analysis at all. Among those conditions is the absence of the multicollinearity problem, which we detect among the independent variables using the Farrar-Glauber test, in addition to the requirement that the data be linear; when this last condition is not met, the analysis resorts to the …
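For reference, the overall Farrar-Glauber test mentioned above is a chi-square test on the determinant of the regressors' correlation matrix, χ² = −[n − 1 − (2p + 5)/6]·ln|R| with p(p − 1)/2 degrees of freedom. A small sketch on synthetic collinear data, not the study's data:

```python
# Farrar-Glauber overall chi-square test for multicollinearity (synthetic data).
import numpy as np
from scipy.stats import chi2

def farrar_glauber(X):
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)                      # correlation matrix of regressors
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=200), rng.normal(size=200)])
stat, pval = farrar_glauber(X)
print(f"chi2 = {stat:.1f}, p-value = {pval:.3g}")         # tiny p-value -> multicollinearity present
```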
This paper presents a proposed method for content-based image retrieval (CBIR) that uses the Discrete Cosine Transform combined with the Kekre Wavelet Transform (DCT/KWT), and the Daubechies Wavelet Transform combined with the Kekre Wavelet Transform (D4/KWT), to extract features in a distributed database system whose clients and server form a star topology: the client sends the query image, and the server (which holds the database) does all the work and then sends the retrieved images back to the client. Two comparisons are made: first, DCT against DCT/KWT, and second, D4 against D4/KWT. The work is evaluated on an image database of 200 images in 4 categories, and retrieval performance is measured with respect to two similarity measures, namely Euclidean distance (ED) and the sum of absolute differences …
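The following is an assumption-level sketch of the general CBIR idea on random images: low-frequency 2-D DCT coefficients are used as a feature vector, and database images are ranked by Euclidean distance or by the sum of absolute differences. It is not the paper's DCT/KWT or D4/KWT pipeline and omits the client/server distribution.

```python
# Toy CBIR ranking with DCT features and two similarity measures (illustrative only).
import numpy as np
from scipy.fft import dctn

def dct_features(img, k=8):
    """Keep the top-left k x k block of the 2-D DCT as the feature vector."""
    return dctn(img.astype(float), norm="ortho")[:k, :k].ravel()

def rank(query, database, metric="ed"):
    q = dct_features(query)
    scored = []
    for name, img in database.items():
        f = dct_features(img)
        d = np.linalg.norm(q - f) if metric == "ed" else np.abs(q - f).sum()
        scored.append((d, name))
    return sorted(scored)                        # smallest distance first

rng = np.random.default_rng(1)
db = {f"img{i}": rng.random((64, 64)) for i in range(4)}
print(rank(rng.random((64, 64)), db, metric="ed")[:2])
```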
Until the second half of the nineteenth century, psychology depended on philosophy for its experience, meaning that it relied on individuals' feelings toward psychological experiences and on rapid observation. Organized scientific experimentation and observation using instruments came into use only later.
Experimentation looks for the causes of phenomena and how they come about, and it also reveals the causal relationships among phenomena. In their simplest form, experiments in the field of psychology involve only one effective variable, called the independent variable, as well as another, affected variable, called the dependent variable.
With the escalation of cybercriminal activities, the demand for forensic investigations into these crimes has grown significantly. However, the concept of systematic pre-preparation for potential forensic examinations during the software design phase, known as forensic readiness, has only recently gained attention. Against the backdrop of surging urban crime rates, this study aims to conduct a rigorous and precise analysis and forecast of crime rates in Los Angeles, employing advanced Artificial Intelligence (AI) technologies. This research amalgamates diverse datasets encompassing crime history, various socio-economic indicators, and geographical locations to attain a comprehensive understanding of how crimes manifest within the city. Lev…
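Purely as a generic illustration of the kind of supervised forecasting described, the sketch below fits a tabular regressor to synthetic features and counts; the features, data, and model are hypothetical and are not the study's.

```python
# Generic crime-count regression on synthetic data (not the study's data or model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),              # hypothetical income index of the area
    rng.uniform(0, 1, n),              # hypothetical unemployment rate
    rng.uniform(0, 24, n),             # hour of day
    rng.uniform(-118.7, -118.1, n),    # longitude (roughly the Los Angeles range)
    rng.uniform(33.7, 34.3, n),        # latitude
])
y = 50 + 30 * X[:, 1] - 20 * X[:, 0] + rng.normal(0, 5, n)    # synthetic monthly counts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```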
The classical normal linear regression model rests on several hypotheses, one of which is the homoscedasticity of the errors. As is known, when problems such as heteroscedasticity are present, the ordinary least squares (OLS) estimators lose their desirable properties, and in addition the statistical inference based on them becomes unacceptable. Accordingly, we put forward two alternatives: the first is Generalized Least Squares, denoted (GLS), and the second is robust estimation of the covariance matrix of the parameters estimated by OLS. The GLS method is appropriate and reliable, since its estimators are efficient and statistical inference proceeds on an acceptable …
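For context, a small sketch of the two remedies contrasted above on synthetic heteroscedastic data: OLS with a heteroscedasticity-robust (White-type) covariance matrix, and a GLS-type weighted fit. The data-generating process and the weights are illustrative assumptions, not the paper's.

```python
# OLS, OLS with robust (HC1) covariance, and GLS-type weighted least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
X = sm.add_constant(x)
y = 2 + 3 * x + rng.normal(0, 0.5 * x)            # error variance grows with x

ols = sm.OLS(y, X).fit()                          # classical standard errors (invalid here)
robust = sm.OLS(y, X).fit(cov_type="HC1")         # same coefficients, robust covariance matrix
wls = sm.WLS(y, X, weights=1 / x**2).fit()        # GLS-type reweighting by 1/Var(error)

for name, res in [("OLS", ols), ("OLS+HC1", robust), ("WLS", wls)]:
    print(f"{name:8s} slope = {res.params[1]:.3f}  se = {res.bse[1]:.3f}")
```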
This research studies the linear regression model when the random errors are autocorrelated and normally distributed, as used in linear regression analysis of the relationship between variables; through this relationship the value of one variable can be predicted from the values of the other variables. Four methods were compared (the least squares method, the unweighted average method, the Theil method, and the Laplace method) using the mean square error (MSE) criterion and simulation, and the study covered four sample sizes (15, 30, 60, 100). The results showed that the least squares method is best. The four methods were then applied to data on buckwheat production and cultivated area for the provinces of Iraq for the years 2010, 2011, 2012, …
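A simulation sketch in the spirit of this comparison, with synthetic AR(1) errors and the study's four sample sizes; only least squares and the Theil estimator are shown, and the design is an assumption rather than the paper's.

```python
# Compare OLS and Theil slope estimates by MSE under autocorrelated errors (synthetic).
import numpy as np
from scipy.stats import theilslopes

def simulate(n, reps=500, beta=2.0, rho=0.7, rng=np.random.default_rng(0)):
    err_ols, err_theil = [], []
    for _ in range(reps):
        x = rng.uniform(0, 10, n)
        e = np.zeros(n)
        for t in range(1, n):                    # AR(1) error process
            e[t] = rho * e[t - 1] + rng.normal()
        y = 1.0 + beta * x + e
        err_ols.append((np.polyfit(x, y, 1)[0] - beta) ** 2)
        err_theil.append((theilslopes(y, x)[0] - beta) ** 2)
    return np.mean(err_ols), np.mean(err_theil)

for n in (15, 30, 60, 100):                      # the sample sizes used in the study
    print(n, simulate(n))
```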