To expedite the learning process, a group of algorithms known as parallel machine learning algorithms can be executed simultaneously on several computers or processors. As data grows in both size and complexity, and as businesses seek efficient ways to mine that data for insights, algorithms like these will become increasingly crucial. Data parallelism, model parallelism, and hybrid techniques are just some of the methods described in this article for speeding up machine learning algorithms. We also cover the benefits and threats associated with parallel machine learning, such as data splitting, communication, and scalability. We compare how well various methods perform on a variety of machine learning tasks and datasets, and we discuss the advantages and disadvantages of these methods. Finally, we offer our thoughts on where this field of study is headed and where further research is needed. The importance of parallel machine learning for businesses that want to glean insights from massive datasets is emphasised, and the paper provides a thorough introduction to the discipline.
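As a rough illustration of the data-parallelism idea mentioned in this abstract, the following minimal sketch splits a training set across several workers, lets each worker compute a gradient on its own shard, and averages the gradients into a single synchronous update. It is not code from the paper; the linear model, learning rate, and shard count are assumed for illustration only.

```python
# Minimal, self-contained sketch of data parallelism: the dataset is split
# across workers, each worker computes a gradient on its shard, and the
# gradients are averaged into one synchronous update (illustrative only).
import numpy as np

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model on one data shard."""
    residual = X @ w - y
    return 2.0 * X.T @ residual / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                   # full dataset
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + rng.normal(scale=0.1, size=1000)

w = np.zeros(5)
num_workers = 4
shards = list(zip(np.array_split(X, num_workers), np.array_split(y, num_workers)))

for step in range(200):
    # Each worker's gradient could be computed in parallel; the coordinator
    # then averages them and applies a single update.
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    w -= 0.05 * np.mean(grads, axis=0)

print("learned weights:", np.round(w, 2))
```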
Quality inspection of products is an important stage in every production route, in which the quality of the produced goods is estimated and compared with the desired specifications. Traditional inspection relies on manual methods that incur high costs and consume a great deal of time. In contrast, today's inspection systems, which use modern techniques such as computer vision, are more accurate and efficient. However, the amount of work needed to build a computer vision system based on classic techniques is relatively large, because features must be manually selected and extracted from digital images, which also adds labor costs for the system engineers. In this research, we pr
Diagnosing heart disease has become a very important topic for researchers specializing in artificial intelligence, because intelligent methods are now involved in the diagnosis of most diseases, especially after the Corona pandemic, which pushed the world towards artificial intelligence. The basic idea of this research is therefore to shed light on the diagnosis of heart diseases by relying on deep learning with a pre-trained model (EfficientNet-B3), using the electrical signals of the electrocardiogram and resampling the signal before feeding it to the neural network, with only trimming operations applied, because it is an electrical signal whose parameters cannot be changed. The data set (China Physiological Signal Challenge - CPSC 2018) was ad
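The preprocessing idea described here (resampling and trimming an ECG signal to a fixed length without otherwise altering it) can be sketched as follows. This is an illustrative example, not the paper's code; the target length and sampling rate are assumed.

```python
# Illustrative sketch: an ECG lead is trimmed and resampled to a fixed length
# so it can be fed to a pre-trained network; the signal values themselves are
# not otherwise modified. Target length and sampling rate are assumptions.
import numpy as np
from scipy.signal import resample

def prepare_ecg(signal: np.ndarray, target_len: int = 3000) -> np.ndarray:
    """Trim an overly long 1-D ECG signal, then resample it to target_len samples."""
    if len(signal) > target_len:
        signal = signal[:target_len]       # trim long recordings
    return resample(signal, target_len)    # resample to the fixed input size

# Example: a synthetic 10-second recording at 500 Hz reduced to 3000 samples.
raw = np.sin(np.linspace(0, 20 * np.pi, 5000))
model_input = prepare_ecg(raw)
print(model_input.shape)  # (3000,)
```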
The complexity and variety of language used in policy and academic documents make the automatic classification of research papers according to the United Nations Sustainable Development Goals (SDGs) difficult. Using both pre-trained and contextual word embeddings to increase semantic understanding, this study presents a complete deep learning pipeline combining Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures, which aims primarily to improve the comprehensibility and accuracy of SDG text classification and thereby enable more effective policy monitoring and research evaluation. Successful document representation via Global Vector (GloVe), Bidirectional Encoder Representations from Tra
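A minimal Keras sketch of the kind of BiLSTM + CNN text classifier this abstract describes is shown below. The vocabulary size, sequence length, embedding dimension, and the 17 SDG classes are assumptions for illustration; loading actual GloVe or BERT representations is omitted.

```python
# Minimal sketch of a CNN + BiLSTM text classifier (not the paper's exact model).
# Vocabulary size, sequence length, embedding size, and 17 SDG classes assumed.
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, EMBED_DIM, NUM_CLASSES = 20000, 300, 100, 17

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    # Pre-trained GloVe vectors could be supplied as initial Embedding weights.
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Convolution extracts local n-gram features from the embedded sequence.
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(2),
    # BiLSTM captures longer-range context in both directions.
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```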
A Wearable Robotic Knee (WRK) is a mobile device designed to assist disabled individuals in moving freely in undefined environments without external support. An advanced controller is required to track the output trajectory of a WRK device and to resolve uncertainties caused by modeling errors and external disturbances. During the performance of a task, disturbances arise from changes in the external load and in dynamic working conditions, such as holding weights while performing the task. The aim of this study is to address these issues and enhance output trajectory tracking using an adaptive robust controller based on a Radial Basis Function (RBF) Neural Network (NN) system and Hamilton
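The RBF neural network that such an adaptive controller relies on can be illustrated with the small sketch below: Gaussian basis functions whose weighted sum approximates an unknown nonlinear term, with weights adapted online from the approximation error. This is not the paper's controller; centers, widths, and the adaptation gain are assumed.

```python
# Illustrative RBF network building block (not the paper's controller):
# Gaussian basis functions approximate an unknown nonlinear term, and the
# weights are updated online to reduce the approximation error.
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)   # RBF centers over the expected state range
width = 0.5                            # common Gaussian width (assumed)
weights = np.zeros_like(centers)       # adaptive weights, updated online
eta = 0.1                              # adaptation gain (assumed)

def rbf_features(x: float) -> np.ndarray:
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def unknown_dynamics(x: float) -> float:
    return 0.8 * np.sin(2.0 * x)       # stands in for unmodeled dynamics

# Online adaptation: move the weights to reduce the approximation error.
for x in np.random.uniform(-2.0, 2.0, 2000):
    phi = rbf_features(x)
    error = unknown_dynamics(x) - weights @ phi
    weights += eta * error * phi

x_test = 1.0
print(round(weights @ rbf_features(x_test), 3), round(unknown_dynamics(x_test), 3))
```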
The aim of this research is to diagnose the impact of the competitive dimensions represented by quality, cost, time, and flexibility on the efficiency of e-learning. The research adopted the descriptive analytical method, identifying the impact of these dimensions on the efficiency of e-learning and using statistical methods to derive the results. The research concluded that the competitive dimensions do affect the efficiency of e-learning: the models for each of the research hypotheses proved statistically significant at the 5% level, and each of these dimensions has a positive impact on the dependent variable. The research recommended
E-Learning packages are content and instructional methods delivered on a computer (whether over the Internet or an intranet) and designed to build knowledge and skills related to individual or organizational goals. This definition addresses: the what: training delivered in digital form; the how: by content and instructional methods that help learn the content; the why: improving organizational performance by building job-relevant knowledge and skills in workers. This paper presents the design and implementation of a learning package for the Prolog programming language, built with the Visual Basic.NET 2010 programming language in conjunction with Microsoft Office Access 2007. The package also introduces several fac
To maintain a sustained competitive position in the contemporary knowledge-economy environment, organizations, as open social systems, must have the ability to learn and know how to adapt to rapid changes so that organizational objectives are achieved efficiently and effectively. A multilevel approach is adopted, proposing that organizational learning suffers from a lack of attention to the strategic competitive performance of the organization. This remains almost implicit in all models of organizational learning, and there is little focus on how learning organizations achieve sustainable competitive advantage. A dynamic model that captures t
Deep learning algorithms have recently achieved great success, especially in the field of computer vision. This research aims to describe a classification method applied to a dataset of multiple types of images (Synthetic Aperture Radar (SAR) images and non-SAR images). For this classification, transfer learning was used, followed by fine-tuning, with architectures pre-trained on the well-known ImageNet image database. The VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input data consist of five classes: the SAR image class (houses) and the non-SAR image classes (Cats, Dogs, Horses, and Humans). The Conv
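The transfer-learning setup this abstract describes (ImageNet-pretrained VGG16 as a frozen feature extractor with a new classifier head for the five classes) can be sketched as follows. The input size, classifier layer sizes, and training hyperparameters are assumptions; this is not the paper's exact configuration.

```python
# Illustrative sketch of VGG16 transfer learning: the ImageNet-pretrained
# convolutional base is frozen and a small new classifier is trained on top
# for the five classes (SAR houses, cats, dogs, horses, humans). Assumed setup.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 5

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # new classifier head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```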