Construction contractors usually undertake multiple construction projects simultaneously. Such a situation involves sharing different types of resources, including money, equipment, and manpower, which can become a major challenge. This study addresses the financial aspects of working on multiple projects at a time. It deals with financial shortages by proposing a multi-project scheduling optimization model that maximizes profit while minimizing total project duration. A genetic algorithm and finance-based scheduling are used to produce feasible schedules that balance the financing requirements of activities at any time against the available funds. The model was tested in multiple scenarios and the results analyzed. In the first scenario, the negative cash flow is reduced from −693,784 to −634,514 in enterprise I and from −2,646,408 to −2,529,324 in enterprise II; in the second scenario, it is reduced to −612,768 with a profit of +200,116 in enterprise I and to −2,597,290 with a profit of +1,537,632 in enterprise II.
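Since the abstract does not give the model's encoding or fitness function, the following is a minimal sketch of a genetic algorithm for finance-constrained scheduling under assumed simplifications: activities run serially, each cost is paid at the activity's start, each revenue is received at its finish, and schedules whose funding requirement exceeds an assumed credit limit are penalized. All activity data, the credit limit, and the penalty weights are illustrative, not taken from the study.

```python
import random

# Illustrative activity data: (duration, cost paid at start, revenue at finish).
ACTIVITIES = [(3, 500, 700), (2, 300, 450), (4, 800, 1100), (1, 200, 260)]
CREDIT_LIMIT = 900        # assumed maximum allowed negative cash balance
POP_SIZE, GENERATIONS = 30, 150

def fitness(delays):
    """Profit minus penalties for exceeding the credit limit and for duration."""
    balance, worst, t = 0.0, 0.0, 0
    for (dur, cost, rev), delay in zip(ACTIVITIES, delays):
        t += delay            # delaying an activity postpones its outflow
        balance -= cost       # cash out when the activity starts
        worst = min(worst, balance)
        t += dur
        balance += rev        # cash in when the activity finishes
    penalty = 10.0 * max(0.0, -worst - CREDIT_LIMIT)  # funding infeasibility
    return balance - penalty - 5.0 * t                # also favour short schedules

def evolve():
    pop = [[random.randint(0, 4) for _ in ACTIVITIES] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP_SIZE // 2]              # truncation selection
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(ACTIVITIES))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # point mutation
                child[random.randrange(len(child))] = random.randint(0, 4)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best delay vector:", best, "fitness:", round(fitness(best), 1))
```

The chromosome is simply a vector of per-activity delays; delaying an activity postpones its cash outflow, which is the basic lever finance-based scheduling exploits.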
To ensure that a software/hardware product has sufficient quality and functionality, it is essential to thoroughly test and evaluate the many individual software components that make up the application. Many approaches exist for testing software, including combinatorial testing and covering arrays. Because of difficulties such as the two-way combinatorial explosion, a further problem arises: time. Using client-server architectures, this research introduces a parallel implementation of the TWGH algorithm. Experiments were conducted to demonstrate the efficiency of this technique, and their findings were used to determine the increase in speed and co
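The TWGH algorithm itself is not specified in the excerpt, so the sketch below illustrates only the underlying task it parallelizes: measuring two-way (pairwise) interaction coverage of a test suite, with the column pairs distributed over worker processes in a coordinator/worker (client-server-like) fashion. PARAMS and SUITE are made-up examples.

```python
from itertools import combinations, product
from multiprocessing import Pool

PARAMS = [2, 3, 3, 2]   # assumed number of values per parameter
SUITE = [(0, 1, 2, 0), (1, 0, 1, 1), (0, 2, 0, 1)]  # candidate test rows

def coverage_of_pair(pair):
    """Count how many value pairs of two columns the suite covers."""
    i, j = pair
    needed = set(product(range(PARAMS[i]), range(PARAMS[j])))
    covered = {(row[i], row[j]) for row in SUITE}
    return pair, len(needed & covered), len(needed)

if __name__ == "__main__":
    pairs = list(combinations(range(len(PARAMS)), 2))
    with Pool() as pool:                  # workers play the "server" side
        results = pool.map(coverage_of_pair, pairs)
    for pair, covered, total in results:
        print(f"columns {pair}: {covered}/{total} value pairs covered")
```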
Bootstrap is an important resampling technique that has recently received much attention from researchers. The presence of outliers in the original data set may cause serious problems for the classical bootstrap when the percentage of outliers in the resamples is higher than in the original data. Many methods have been proposed to overcome this problem, such as the Dynamic Robust Bootstrap for LTS (DRBLTS) and the Weighted Bootstrap with Probability (WBP). This paper examines the accuracy of parameter estimation by comparing the results of both methods. The bias, MSE, and RMSE are considered. The accuracy criterion is based on the RMSE value, since the method that provides the smaller RMSE value is considered more accurate.
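As a worked illustration of the comparison criterion, the sketch below computes bootstrap bias, MSE, and RMSE for a simple estimator (the sample mean) on data contaminated with outliers; DRBLTS and WBP themselves are more involved and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)
data[:5] = 40.0                  # contaminate with a few outliers
true_value = 5.0                 # known target in the simulation

B = 2000                         # number of bootstrap resamples
estimates = np.array([
    rng.choice(data, size=data.size, replace=True).mean() for _ in range(B)
])
bias = estimates.mean() - true_value
mse = np.mean((estimates - true_value) ** 2)
rmse = np.sqrt(mse)              # smaller RMSE -> more accurate method
print(f"bias={bias:.3f}  MSE={mse:.3f}  RMSE={rmse:.3f}")
```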
Background subtraction is a leading technique for detecting moving objects in video surveillance systems. Various background subtraction models have been applied to tackle different challenges in many surveillance environments. In this paper, we propose a model combining a pixel-based color histogram with Fuzzy C-means (FCM) to obtain the background model, using cosine similarity (CS) to measure the closeness between the current pixel and the background model and, according to a tuned threshold, classify the pixel as background or foreground. The performance of this model is benchmarked on the CDnet2014 dynamic-scenes dataset using statistical metrics. The results show a better performance against state-of-the-art methods.
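A minimal sketch of the classification step only, assuming the histograms are already built and normalized; the FCM-based construction and update of the background model are omitted, and the 0.9 threshold is an arbitrary stand-in for the tuned one.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """CS between two histograms; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_background(pixel_hist, model_hist, threshold=0.9):
    """Pixel belongs to the background when close enough to the model."""
    return cosine_similarity(pixel_hist, model_hist) >= threshold

model = np.array([0.10, 0.60, 0.30])   # assumed background colour histogram
pixel = np.array([0.15, 0.55, 0.30])   # current pixel's colour histogram
print("background" if is_background(pixel, model) else "foreground")
```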
This work addresses the assignment problem (AP) with fuzzy costs, where the objective is to minimize the total cost. Triangular or trapezoidal fuzzy numbers were assigned to each fuzzy cost. In addition, the assignment models were applied to linguistic variables, which were first converted to quantitative fuzzy data using Yager's ranking method. The results show that the quantitative data have a considerable effect when used in fuzzy mathematical models.
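As an illustration, the sketch below defuzzifies triangular fuzzy costs with Yager's ranking index, which for a triangular number (a, b, c) evaluates to (a + 2b + c)/4, and then solves the resulting crisp assignment problem with a Hungarian-method solver; the cost matrix is invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def yager_rank(tri):
    """Yager's ranking index of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + 2 * b + c) / 4.0

# Invented 3x3 fuzzy cost matrix; each entry is (low, mode, high).
fuzzy_costs = [
    [(2, 4, 6), (3, 5, 9), (1, 2, 3)],
    [(4, 6, 8), (1, 3, 5), (2, 4, 8)],
    [(3, 4, 5), (2, 6, 8), (5, 7, 9)],
]
crisp = np.array([[yager_rank(c) for c in row] for row in fuzzy_costs])
rows, cols = linear_sum_assignment(crisp)       # Hungarian-method solver
print(list(zip(rows.tolist(), cols.tolist())), "total:", crisp[rows, cols].sum())
```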
The δ-mixing of γ-transitions in ¹⁶⁸Er populated in the ¹⁶⁸Er(n,n′γ)¹⁶⁸Er reaction is calculated in the present work using the a₂-ratio method. This method was used in previous studies [4, 5, 6, 7] in cases where the second transition is pure, or for transitions that can be considered pure only; in this work, however, we applied the method to two cases: pure transitions and non-pure transitions. We take into account the experimental a₂ coefficients from previous works and the δ-values for one transition only [1]. The results obtained are, in general, in good agreement, within the associated errors, with those reported previously [1]; the discrepancies that occur are due to inaccuracies existing
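The excerpt does not spell the method out; for orientation, a₂-ratio analyses build on the standard angular-distribution expression for the a₂ coefficient of a transition of mixed multipolarity L and L′ = L + 1, quoted below from the usual formalism (B₂ is the orientation coefficient of the initial state J_i and the F₂ are the standard F-coefficients):

```latex
% a_2 coefficient of a gamma transition J_i -> J_f with mixed
% multipolarity L and L' = L + 1; B_2 is the orientation coefficient
% of the initial state, F_2 are the standard F-coefficients, and
% \delta is the multipole mixing ratio.
a_2 \;=\; B_2(J_i)\,
  \frac{F_2(L\,L\,J_f\,J_i) \;+\; 2\delta\,F_2(L\,L'\,J_f\,J_i)
        \;+\; \delta^{2}\,F_2(L'\,L'\,J_f\,J_i)}{1+\delta^{2}}
```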
Cryptography is the process of transforming a message to prevent unauthorized access to data. One of the main problems, and an important component, of secret-key cryptography is the key itself: for a higher level of secure communication, the key plays an important role, and both parties must hold a copy of the secret key, which, unfortunately, is not easy to achieve. The Triple Data Encryption Standard (3DES) algorithm is weak due to its weak key generation, so the key must be reconfigured to make the algorithm more secure, effective, and strong. A stronger encryption key enhances the security of 3DES. This paper proposes a combination of two efficient encryption algorithms to
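The paper's key-strengthening scheme is not given in the excerpt; as a generic illustration, the sketch below derives three pairwise-distinct 8-byte subkeys for 3DES keying option 1 from a passphrase with PBKDF2 (Python's standard hashlib), instead of reusing one weakly generated key. Parity-bit adjustment of the DES subkeys is omitted.

```python
import hashlib
import secrets

def derive_3des_keys(passphrase: str, salt: bytes):
    """Derive three independent 8-byte DES subkeys via PBKDF2-HMAC-SHA256."""
    material = hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode(), salt, 200_000, dklen=24
    )
    return material[0:8], material[8:16], material[16:24]

salt = secrets.token_bytes(16)            # fresh random salt per session
k1, k2, k3 = derive_3des_keys("correct horse battery staple", salt)
assert len({k1, k2, k3}) == 3, "subkeys should be pairwise distinct"
print(k1.hex(), k2.hex(), k3.hex())
```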
Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all data to a graph concept frame (CFG). As is well known, DBSCAN groups data points of the same kind into clusters, while points that fall outside the clusters are treated as noise or anomalies; the algorithm can thus detect abnormal points that lie beyond a set threshold (extreme values). However, not all anomalies are of this kind, unusual or far from a specific group; there is also data that does not occur repeatedly but is considered abnormal relative to the known group. The analysis showed DBSCAN using the
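Before any graph conversion, the baseline behaviour the paper builds on can be shown directly: points that DBSCAN labels −1 belong to no dense cluster and are flagged as anomalies. The sketch uses scikit-learn and synthetic data; the CFG construction is not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=0.5, size=(200, 2))   # one dense cluster
outliers = np.array([[4.0, 4.0], [-4.0, 3.5], [5.0, -4.2]])
X = np.vstack([normal, outliers])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
anomalies = X[labels == -1]        # -1 = noise, i.e. outside every cluster
print(f"{len(anomalies)} anomalous points detected")
```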
Variable selection is an essential task in statistical modeling. Several studies have tried to develop and standardize the process of variable selection, but it is difficult to do so. The first question researchers need to ask themselves is which variables are the most significant for describing a given dataset's response. In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined, and the posterior distributions for all the parameters are derived. The new variable selection method is tested using four simulated datasets and compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage and Selection Operator (LASSO)
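The paper's sampler is not reproduced in the excerpt; the sketch below shows the general technique with a Kuo–Mallick-style spike-and-slab Gibbs sampler, where binary inclusion indicators are resampled one at a time and their posterior means estimate each variable's inclusion probability. The noise variance, slab variance, and prior inclusion probability are fixed for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0])   # two active variables
y = X @ true_beta + rng.normal(size=n)

sigma2, tau2, pi_incl = 1.0, 10.0, 0.5   # fixed hyperparameters (assumed)
b = np.zeros(p)                          # slab coefficients
z = np.ones(p, dtype=int)                # inclusion indicators
incl_counts = np.zeros(p)
n_iter, burn = 2000, 500

for it in range(n_iter):
    for j in range(p):
        beta = b * z
        resid = y - X @ beta + X[:, j] * beta[j]      # residual without j
        if z[j] == 1:                                 # conjugate update of b_j
            prec = X[:, j] @ X[:, j] / sigma2 + 1.0 / tau2
            mean = (X[:, j] @ resid / sigma2) / prec
            b[j] = rng.normal(mean, np.sqrt(1.0 / prec))
        else:                                         # refresh b_j from the prior
            b[j] = rng.normal(0.0, np.sqrt(tau2))
        rss1 = np.sum((resid - X[:, j] * b[j]) ** 2)  # fit with variable j in
        rss0 = np.sum(resid ** 2)                     # fit with variable j out
        log_odds = np.log(pi_incl / (1 - pi_incl)) - (rss1 - rss0) / (2 * sigma2)
        prob = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30)))
        z[j] = int(rng.random() < prob)
    if it >= burn:
        incl_counts += z

print("posterior inclusion probabilities:", incl_counts / (n_iter - burn))
```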
Multiple myeloma (MM) is a heterogeneous plasma cell malignancy with various complications. Sclerostin is a Wingless-type (Wnt) inhibitor specifically expressed by osteocytes; it acts as a negative regulator of bone formation.
To assess plasma sclerostin levels in MM patients and find their correlations with clinical and laboratory data, including osteolytic bone disease and the International Staging System (ISS).
This cr