Bi-level programming minimizes or maximizes an objective function subject to constraints that themselves contain a second optimization problem. The problem has received a great deal of attention in the mathematical-programming community owing to the proliferation of applications and the use of evolutionary algorithms for this class of problem. Two methods for non-linear bi-level programming are used in this paper. The goal is to reach the optimal solution by simulation, using the Monte Carlo method with different small and large sample sizes. The research found that the branch-and-bound algorithm was preferable for solving the non-linear bi-level programming problem because it produced better results.
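The abstract above does not give the test problems, so the following is only a minimal sketch of the Monte Carlo simulation approach it mentions: sample candidate leader decisions, approximate each follower's best reply by sampling, and keep the pair that minimizes the leader's objective. Both objective functions (`F` for the leader, `f` for the follower) and the variable bounds are invented for illustration and do not come from the paper.

```python
import random

def follower_best(x, samples=2000):
    """Approximate the follower's reply: minimize f(x, y) over y in [0, 2].
    The lower-level objective f is an assumed example, not the paper's."""
    f = lambda y: (y - x) ** 2 + 0.5 * y
    return min((random.uniform(0.0, 2.0) for _ in range(samples)), key=f)

def monte_carlo_bilevel(samples=500, seed=0):
    """Sample leader decisions x in [0, 2] and evaluate the leader's
    objective F at the follower's (approximate) optimal reply."""
    random.seed(seed)
    F = lambda x, y: (x - 1.0) ** 2 + (y - 1.0) ** 2  # assumed upper-level objective
    return min(((x, follower_best(x)) for x in
                (random.uniform(0.0, 2.0) for _ in range(samples))),
               key=lambda p: F(*p))

x_star, y_star = monte_carlo_bilevel()
```

A branch-and-bound solver, which the paper found preferable, would instead partition the leader's feasible region and prune subregions whose bound on `F` cannot beat the incumbent; the sampling scheme above only converges statistically as the sample sizes grow.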
The rise of edge-cloud continuum computing is a result of the growing significance of edge computing, which has become a complement to, or substitute for, traditional cloud services. The convergence of networking and computing presents a notable challenge because of their distinct historical development. Task scheduling is a major challenge in the edge-cloud continuum: the choice of where tasks execute is crucial to meeting the quality-of-service (QoS) requirements of applications. An efficient scheduling strategy for distributing workloads among virtual machines in the edge-cloud continuum data center is therefore essential to fulfil the QoS requirements of both customer and service provider.
A three-stage learning algorithm for deep multilayer perceptrons (DMLPs) with effective weight initialisation based on a sparse auto-encoder is proposed in this paper, aiming to overcome the difficulty of training deep neural networks with limited training data in a high-dimensional feature space. At the first stage, unsupervised learning with a sparse auto-encoder obtains the initial weights of the feature-extraction layers of the DMLP. At the second stage, error back-propagation trains the DMLP while the weights obtained at the first stage for its feature-extraction layers are kept fixed. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Network structures an
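The three stages described in the abstract can be sketched on a toy problem. This is an assumed minimal illustration, not the paper's network or data: a one-hidden-layer perceptron, an L1 weight penalty standing in for the sparsity term, and synthetic inputs and labels.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))               # toy high-dimensional inputs (assumed)
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy binary labels (assumed)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Stage 1: unsupervised auto-encoder training gives initial weights W1
# for the feature-extraction layer (L1 gradient stands in for sparsity).
W1 = rng.normal(scale=0.1, size=(20, 8))
W_dec = rng.normal(scale=0.1, size=(8, 20))
for _ in range(200):
    H = sigmoid(X @ W1)
    err = H @ W_dec - X                      # reconstruction error
    dH = err @ W_dec.T * H * (1 - H)
    W_dec -= 0.2 * (H.T @ err / len(X))
    W1 -= 0.2 * (X.T @ dH / len(X) + 1e-3 * np.sign(W1))

# Stage 2: supervised training of the output layer only; W1 stays fixed.
W2 = rng.normal(scale=0.1, size=(8, 1))
for _ in range(300):
    H = sigmoid(X @ W1)
    p = sigmoid(H @ W2).ravel()
    W2 -= 1.0 * (H.T @ (p - y)[:, None] / len(X))

# Stage 3: back-propagation refines all weights jointly.
for _ in range(300):
    H = sigmoid(X @ W1)
    p = sigmoid(H @ W2).ravel()
    d_out = (p - y)[:, None] / len(X)
    dH = d_out @ W2.T * H * (1 - H)
    W2 -= 1.0 * (H.T @ d_out)
    W1 -= 1.0 * (X.T @ dH)

acc = ((sigmoid(sigmoid(X @ W1) @ W2).ravel() > 0.5) == (y > 0.5)).mean()
```

The design point the staging illustrates: freezing the pre-trained feature layer in stage 2 lets the randomly initialised output layer settle before stage 3 propagates gradients through the whole network.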
Construction joints are stopping places in the process of placing concrete; they are required because in many structures it is impractical to place concrete in one continuous operation. The amount of concrete that can be placed at one time is governed by the batching and mixing capacity and by the strength of the formwork. A good construction joint should provide adequate flexural and shear continuity across the interface.
In this study, the effect of location of construction joints on the performance of reinforced concrete structural elements is experimentally investigated.
Nineteen beam specimens with dimensions of 200×200×950 mm were tested. The variables investigated include the location of the construction joints
The article is devoted to Russian-Arabic translation, a particular theory of which has not been developed in domestic translation studies to the extent that the mechanisms of translation from and into European languages have been described. In this regard, and with the growing volume of Russian-Arabic translation, the issues of this particular theory of translation require significant additions and new approaches. The authors set out to determine the means of translation (cognitive and mental operations and language transformations) that contribute to achieving the most equivalent correspondences between such typologically different languages as Russian and Arabic. The work summarizes and analyzes the accumulated experience
… order to increase the level of security, as this system encrypts the secret image (by the Blowfish method) before sending it over the internet to the recipient. The Blowfish method is known for its strong security; nevertheless, its encryption time is long. In this research we apply a smoothing filter to the secret image, which decreases its size and consequently the encryption and decryption times. After encryption, the secret image is hidden inside another image, called the cover image, using one of two methods: "Two-LSB" or "Hiding most bits in blue pixels". Finally, we compare the results of the two methods to determine which is better according to the PSNR measure.
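The "Two-LSB" hiding step described above can be sketched on raw pixel bytes. This is an assumed minimal illustration, not the paper's implementation: the Blowfish-encrypted secret image is represented by a plain byte string, and pixels by a flat byte sequence.

```python
def embed_two_lsb(cover, secret):
    """Hide each pair of secret bits in the two least-significant bits of
    successive cover bytes (four cover bytes carry one secret byte)."""
    bits = [(b >> s) & 1 for b in secret for s in range(7, -1, -1)]
    assert len(bits) // 2 <= len(cover), "cover too small for payload"
    stego = bytearray(cover)
    for i in range(0, len(bits), 2):
        # Clear the two low bits, then write one bit pair, MSB first.
        stego[i // 2] = (stego[i // 2] & 0b11111100) | (bits[i] << 1) | bits[i + 1]
    return bytes(stego)

def extract_two_lsb(stego, n_bytes):
    """Recover n_bytes of payload by reading two bits per cover byte."""
    out = bytearray()
    for i in range(n_bytes):
        b = 0
        for j in range(4):
            b = (b << 2) | (stego[i * 4 + j] & 0b11)
        out.append(b)
    return bytes(out)
```

Because each cover byte changes by at most 3 in value, the visual distortion is small, which is what the PSNR comparison in the abstract quantifies.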

