Project delays are among the most persistent challenges facing the construction industry, owing to the sector's complexity and the interdependence of its underlying delay risk factors. Machine learning offers a well-suited set of techniques for analyzing such complex systems. This study aimed to develop an efficient predictive data tool that examines and learns delay sources from historical construction project data using decision tree and naïve Bayesian classification algorithms. An intensive review of the available data was conducted to explore the real causes of construction project delays. The results show that the postponement of interim payments is the leading delay factor attributable to the employer, while the least significant is the contractor abandoning the job site, that is, repeated and unjustified stoppage of work at the site without permission or notice from the client's representatives. The developed model was applied to about 97 projects and used as a prediction model; the decision tree model showed the higher prediction accuracy.
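The abstract does not give implementation details, so the following is only a minimal sketch of the described two-classifier comparison, assuming a tabular dataset of historical projects with categorical delay-cause attributes and a delayed/on-time label (the file name and column names are hypothetical):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import accuracy_score

# Hypothetical historical-project records: one row per project,
# categorical delay-cause attributes plus a binary "delayed" label.
df = pd.read_csv("projects.csv")
enc = OrdinalEncoder()
X = enc.fit_transform(df.drop(columns=["delayed"]))
y = df["delayed"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# min_categories covers category codes that appear only in the test split.
n_cats = [len(c) for c in enc.categories_]
models = [("decision tree", DecisionTreeClassifier(max_depth=5)),
          ("naive Bayes", CategoricalNB(min_categories=n_cats))]

for name, model in models:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.2f}")

Comparing the two printed accuracies mirrors the study's finding that the decision tree model gives the higher prediction accuracy.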
Purpose: To provide practical knowledge of the requirements of a detailed feasibility study for selecting an investment project.
Findings: Directing the private sector towards investing in productive projects, such as the pre-cast reinforced concrete project, as it achieves a financial return while also conserving foreign currency by reducing imports and exploiting available natural resources.
Practical implications: A detailed feasibility study is essential for determining whether the project can be implemented.
The precast concrete method is one of the best modern construction methods.
Two unsupervised classifiers for optimum multilevel thresholding are presented: fast Otsu and k-means. These nonparametric methods provide an efficient procedure for separating regions (classes) by selecting optimum levels, either on the gray levels of the image histogram (the Otsu classifier) or on the gray levels of the image intensities (the k-means classifier), which represent the threshold values of the classes. To compare the experimental results of these classifiers, the computation time is recorded, along with the number of iterations the k-means classifier needs to converge to the optimum class centers. The variation in the recorded computation time for the k-means classifier is discussed.
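Both techniques have standard off-the-shelf implementations; the sketch below illustrates the comparison using scikit-image's multi-Otsu and scikit-learn's k-means (the three-class setting and the sample image are assumptions, not details from the paper):

import numpy as np
from skimage import data
from skimage.filters import threshold_multiotsu
from sklearn.cluster import KMeans

img = data.camera()  # sample grayscale image

# Multilevel Otsu: thresholds are chosen on the gray-level histogram.
otsu_thresholds = threshold_multiotsu(img, classes=3)
otsu_labels = np.digitize(img, bins=otsu_thresholds)

# k-means: cluster the pixel intensities directly; threshold values
# fall midway between adjacent sorted cluster centers.
km = KMeans(n_clusters=3, n_init=10, random_state=0)
km_labels = km.fit_predict(img.reshape(-1, 1)).reshape(img.shape)
centers = np.sort(km.cluster_centers_.ravel())
km_thresholds = (centers[:-1] + centers[1:]) / 2

print("Otsu thresholds:   ", otsu_thresholds)
print("k-means thresholds:", km_thresholds)
print("k-means iterations:", km.n_iter_)

The n_iter_ attribute exposes the iteration count whose variation the paper discusses for the k-means classifier.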
The support vector machine (SVM) is a supervised learning model that can be used for classification or regression, depending on the dataset. SVM classifies data points by determining the best hyperplane between two or more groups. Working with enormous datasets, however, can cause a variety of issues, including reduced accuracy and long computation times. In this research, SVM was extended by applying several kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model was illustrated and summarized in an algorithm using kernel tricks. The proposed method was examined using three simulation datasets with different sample sizes.
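A minimal sketch of such a kernel comparison on simulated data is shown below, assuming libsvm's sigmoid kernel as a stand-in for the paper's "multi-layer" kernel (the sample size and feature count are arbitrary assumptions):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Simulated dataset standing in for the paper's simulation studies.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 'sigmoid' is libsvm's multilayer-perceptron-style kernel, used here
# as a stand-in for the "multi-layer" kernel named in the abstract.
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel, gamma="scale").fit(X_tr, y_tr)
    print(f"{kernel:8s} accuracy = {clf.score(X_te, y_te):.3f}")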
Various theories have been proposed since the last century to predict the first sighting of a new crescent moon, but none of them uses machine or deep learning to process, interpret, and simulate the patterns hidden in databases. Many of these theories use interpolation and extrapolation techniques to identify sighting regions from such data. In this study, a pattern-recognition artificial neural network was trained to distinguish between visibility regions. Essential parameters of crescent moon sighting were collected from moon-sighting datasets and used to build an intelligent pattern recognition system to predict crescent sighting conditions. The proposed ANN learned the datasets with an accuracy of more than 72%.
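The abstract does not list the network architecture or the exact input parameters; the sketch below is a toy illustration of a pattern-recognition ANN for visibility classification, with an assumed feature set (moon altitude, arc of light, elongation, lag time) and a synthetic label rule invented purely so the example runs:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical sighting records: geometric parameters commonly used
# in crescent-visibility criteria; both the features and the labels
# here are synthetic assumptions, not the paper's dataset.
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 5, 0], [20, 15, 25, 80], size=(1000, 4))
y = (X[:, 0] + 2 * X[:, 1] > 25).astype(int)  # toy visibility rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)
print("visibility accuracy:", ann.score(scaler.transform(X_te), y_te))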
The expanding use of multiprocessor supercomputers has made a significant impact on the speed and size of many problems. The adoption of the standard Message Passing Interface (MPI) protocol has enabled programmers to write portable and efficient codes across a wide variety of parallel architectures. Sorting is one of the most common operations performed by a computer. Because sorted data are easier to manipulate than randomly ordered data, many algorithms require sorted data. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. In this paper, sequential sorting algorithms and the parallel implementation of many of them are examined.
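The abstract does not specify which parallel sorting algorithms the paper implements, so the following is only a sketch of one common MPI pattern (scatter, local sort, gather, final merge), written with mpi4py and assuming the input size divides evenly among ranks:

# Run with, e.g.: mpiexec -n 4 python parallel_sort.py
import heapq
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000  # assumed to be divisible by the number of ranks
data = np.random.default_rng(0).integers(0, 10**9, N) if rank == 0 else None

# Scatter equal-sized chunks from the root to all processes.
local = np.empty(N // size, dtype=np.int64)
comm.Scatter(data, local, root=0)

local.sort()                       # each process sorts its own chunk
chunks = comm.gather(local, root=0)

if rank == 0:
    # k-way merge of the sorted chunks on the root process.
    result = np.fromiter(heapq.merge(*chunks), dtype=np.int64, count=N)
    assert np.all(result[:-1] <= result[1:])
    print("sorted", N, "integers across", size, "ranks")

The routing step the abstract alludes to is visible here as the Scatter/gather communication; more scalable schemes (e.g., sample sort) replace the final root-side merge with an all-to-all exchange.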