To ensure that a software/hardware product is of sufficient quality and functionality, the many individual software components that make up an application must be tested and evaluated thoroughly. Many approaches to software testing exist, including combinatorial testing and covering arrays. However, dealing with problems such as the two-way combinatorial explosion introduces a further challenge: execution time. This research introduces a parallel implementation of the TWGH algorithm based on a client-server architecture. Several experiments were conducted to demonstrate the efficiency of this technique, and their results were used to measure the speedup and compare it with the results obtained from other methodologies. The speedup studies showed that TWGH scales well, and the proposed method achieved a speedup of roughly eight.
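As a rough illustration of the kind of parallel speedup measurement described above, the sketch below enumerates all two-way parameter interactions and times their evaluation serially and with a worker pool. It is a simplified stand-in, not the TWGH algorithm itself; the parameter set, the evaluate function, and the simulated workload are hypothetical.

```python
# Minimal sketch: two-way (pairwise) interaction enumeration plus a
# client-server style worker pool, used only to measure speedup.
from itertools import combinations, product
from multiprocessing import Pool
import time

def two_way_interactions(parameters):
    """Enumerate every value pair for every pair of parameters."""
    pairs = []
    for (p1, vals1), (p2, vals2) in combinations(parameters.items(), 2):
        pairs.extend((p1, v1, p2, v2) for v1, v2 in product(vals1, vals2))
    return pairs

def evaluate(pair):
    """Placeholder for the per-interaction work done by each worker."""
    time.sleep(0.001)  # simulate computation
    return pair

if __name__ == "__main__":
    params = {"os": ["linux", "windows"],
              "db": ["mysql", "pg", "sqlite"],
              "browser": ["chrome", "firefox", "edge"]}
    pairs = two_way_interactions(params)

    start = time.perf_counter()
    list(map(evaluate, pairs))            # serial baseline
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:                  # parallel workers
        pool.map(evaluate, pairs)
    t_parallel = time.perf_counter() - start

    print(f"speedup = {t_serial / t_parallel:.2f}")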
The internet of medical things (IoMT) is expected to become one of the most widely distributed technologies worldwide. Using fifth-generation (5G) transmission, market opportunities and hazards related to the IoMT can be better detected and addressed. This framework describes a strategy for proactively addressing concerns and offering a forum to promote development, change attitudes, and maintain people's confidence in the broader healthcare system without compromising security. It is combined with a data offloading system to speed up the transmission of medical data and improve the quality of service (QoS). Building on this, we propose the enriched energy efficient fuzzy (EEEF) data offloading technique to enhance the delivery of data.
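For illustration only, the sketch below shows a generic fuzzy offloading decision based on two assumed inputs (battery level and link quality) with triangular membership functions. It is not the EEEF technique from the abstract; the rules, inputs, and thresholds are all hypothetical.

```python
# Illustrative fuzzy offloading decision (Mamdani-style rules with a
# weighted-average defuzzification shortcut). Not the EEEF method.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_score(battery_pct, link_quality_pct):
    """Crisp score in [0, 1]; higher favours offloading to the edge/cloud."""
    low_battery  = tri(battery_pct, -1, 0, 50)
    high_battery = tri(battery_pct, 50, 100, 101)
    good_link    = tri(link_quality_pct, 40, 100, 101)
    bad_link     = tri(link_quality_pct, -1, 0, 60)

    rules = [
        (min(low_battery, good_link), 1.0),   # offload
        (min(high_battery, bad_link), 0.0),   # process locally
        (min(low_battery, bad_link), 0.5),    # partial offload
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

print(offload_score(battery_pct=20, link_quality_pct=85))  # favours offloading
```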
Electrochemical machining (ECM) is an important process for removing metal by anodic dissolution. In this work, electrochemical machining was used to remove material from a (7025) aluminum alloy workpiece using a potassium chloride (KCl) electrolyte; the tool was made of copper. The input parameters optimized were current, inter-electrode gap, and electrolyte concentration, with surface roughness (Ra) as the output. The experiments were carried out at currents of (30, 50, 70) A, gaps of (1.00, 1.25, 1.50) mm, and electrolyte concentrations of (100, 200, 300) g/L. Analysis of variance (ANOVA) was used to identify the factors with the greatest influence on surface roughness, and current was found to be the most influential factor.
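A minimal sketch of the kind of factorial ANOVA described above is shown below. The factor levels follow the abstract, but the Ra responses are generated synthetically and are illustrative only; this is not a reproduction of the reported experiment.

```python
# Hedged sketch: factorial ANOVA of surface roughness (Ra) against current,
# gap and electrolyte concentration with hypothetical Ra values.
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

levels = list(product([30, 50, 70], [1.00, 1.25, 1.50], [100, 200, 300]))
df = pd.DataFrame(levels, columns=["current", "gap", "conc"])

# Hypothetical responses: current dominates, with small gap/concentration
# effects and random noise.
rng = np.random.default_rng(0)
df["Ra"] = (0.02 * df["current"] + 0.3 * df["gap"]
            + 0.001 * df["conc"] + rng.normal(0, 0.05, len(df)))

model = ols("Ra ~ C(current) + C(gap) + C(conc)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-values indicate relative influence
```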
Nowadays, cloud computing has attracted the attention of large companies due to its high potential, flexibility, and profitability in providing multiple hardware and software resources to serve connected users. Given the scale of modern data centers and the dynamic nature of their resource provisioning, effective scheduling techniques are needed to manage these resources while satisfying the goals of both cloud providers and cloud users. Task scheduling in cloud computing is considered an NP-hard problem that cannot be easily solved by classical optimization methods. Thus, both heuristic and meta-heuristic techniques have been utilized to provide optimal or near-optimal solutions within an acceptable time frame for such problems. In th
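To make the scheduling idea concrete, the sketch below implements a simple greedy heuristic that assigns each task to the virtual machine yielding the earliest completion time. The VM speeds and task lengths are hypothetical, and this is not the technique proposed in the (truncated) abstract above.

```python
# Greedy "minimum completion time" task scheduler for a set of VMs.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    mips: float                      # processing speed
    ready_time: float = 0.0          # when the VM next becomes free
    assigned: list = field(default_factory=list)

def schedule(task_lengths, vms):
    """Assign each task to the VM with the earliest finish time; return makespan."""
    for length in sorted(task_lengths, reverse=True):   # longest tasks first
        best = min(vms, key=lambda vm: vm.ready_time + length / vm.mips)
        best.ready_time += length / best.mips
        best.assigned.append(length)
    return max(vm.ready_time for vm in vms)

vms = [VM("vm1", 1000), VM("vm2", 2000), VM("vm3", 500)]
tasks = [4000, 12000, 2500, 8000, 6000, 3000]   # hypothetical task lengths
print("makespan:", schedule(tasks, vms))
for vm in vms:
    print(vm.name, vm.assigned)
```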
ABSTRACT
The research focuses on the key issue of how best to test financial stability in the banking sector, given that financial stability cannot be achieved unless the financial sector in general, and the banking sector in particular, is able to perform its key role in addressing economic and social development requirements under the laws and regulations that govern the banking sector. This is the only way to increase its ability to deal with any risks or negative effects experienced by banks and other financial institutions. The goal of the research is to evaluate the stability of the banking system in Iraq through the use of a set of econometric an
Survival analysis is widely applied to data describing the lifetime of an item until the occurrence of an event of interest, such as death or another event under study. The purpose of this paper is to use a dynamic approach within the deep learning neural network method: a dynamic neural network suited to the nature of discrete survival data and time-varying effects. This neural network is trained with the Levenberg-Marquardt (L-M) algorithm, and the method is called the Proposed Dynamic Artificial Neural Network (PDANN). A comparison was then made with another method that relies entirely on the Bayesian methodology, called the Maximum A Posteriori (MAP) method. This method was carried out using numerical algorithms re
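As an illustration of discrete-time survival modelling with a neural network, the sketch below expands hypothetical data into person-period format and fits a small classifier to the interval hazard. It is not the paper's PDANN; the covariates are simulated, and scikit-learn's 'lbfgs' solver stands in for Levenberg-Marquardt training.

```python
# Hedged sketch: discrete-time (person-period) survival data fed to a small
# neural network that predicts the hazard of the event in each interval.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=(n, 2))              # hypothetical covariates
time_obs = rng.integers(1, 11, size=n)   # observed discrete time (1..10)
event = rng.integers(0, 2, size=n)       # 1 = event occurred, 0 = censored

# Expand to person-period rows: one row per subject per interval at risk.
rows, labels = [], []
for xi, t, e in zip(x, time_obs, event):
    for interval in range(1, t + 1):
        rows.append(np.concatenate(([interval], xi)))
        labels.append(1 if (e == 1 and interval == t) else 0)
X, y = np.array(rows), np.array(labels)

net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=2000)
net.fit(X, y)

# Predicted hazard for a new subject at interval 3 (illustrative values):
print(net.predict_proba([[3, 0.1, -0.4]])[0, 1])
```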