The cost of pile foundations is part of the superstructure cost, and it has become necessary to reduce this cost by studying the available pile types and then making a decision on the optimal pile type in terms of cost, time of production, and quality. The main objective of this study is therefore to solve the time–cost–quality trade-off (TCQT) problem by finding an optimal pile type, with the target of minimizing cost and time while maximizing quality. Many pile types exist, but in this paper the researcher considered five, one of which is non-traditional. A model of the problem was developed, and the particle swarm optimization (PSO) algorithm, an evolutionary algorithm implemented with the help of MATLAB, was employed as a tool for the decision-making problem of choosing the best of the traded-off pile alternatives. The paper proposes a multi-objective optimization model that aims to optimize the time, cost, and quality of the pile types and to assist in selecting the most appropriate one. The researcher selected ten senior engineers, prepared questions, and conducted interviews and an open questionnaire with them. The individuals were selected from the private and state sectors, each with ten or more years of experience in pile foundation work. The personal interviews and field survey showed that most of the experts and engineers are not fully aware of new software techniques that could help them in choosing among alternatives, despite their belief in the usefulness of modern technology and software. Since this is a multi-objective optimization problem, running the PSO algorithm usually yields more than one optimal solution for the five proposed pile types. Finally, the researcher evaluated and discussed the output results and found that the pretensioned spun high-strength concrete (PHC) pile was the optimal pile type.
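The selection step described above can be sketched as a small weighted-sum PSO over discrete alternatives. All numbers below (costs, durations, quality scores, objective weights, and the PSO coefficients) are hypothetical placeholders, not values from the study; the swarm is deliberately seeded so that every alternative is evaluated at least once.

```python
import random

# Hypothetical, illustrative data for five pile alternatives:
# (cost per pile, production time in days, quality score 0..1).
PILES = {
    "bored":     (1200, 5.0, 0.80),
    "driven":    ( 900, 3.0, 0.75),
    "CFA":       (1000, 3.5, 0.78),
    "micropile": (1500, 4.0, 0.85),
    "PHC spun":  ( 800, 2.5, 0.90),
}
NAMES = list(PILES)

def scalarized(idx, w=(0.4, 0.3, 0.3)):
    """Weighted-sum objective: minimize cost and time, maximize quality.
    Each criterion is normalized by its maximum over the alternatives."""
    costs, times, quals = zip(*PILES.values())
    c, t, q = PILES[NAMES[idx]]
    return (w[0] * c / max(costs) + w[1] * t / max(times)
            - w[2] * q / max(quals))

def pso_select(n_particles=10, iters=50, seed=0):
    """Minimal PSO over a 1-D position that is rounded to a pile index."""
    rng = random.Random(seed)
    lo, hi = 0.0, len(NAMES) - 1.0
    # seed the swarm so every discrete alternative is covered initially
    pos = [float(i % len(NAMES)) for i in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                  # personal bests
    pbest_f = [scalarized(round(p)) for p in pos]
    g = pbest[pbest_f.index(min(pbest_f))]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (g - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            f = scalarized(round(pos[i]))
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < scalarized(round(g)):
                    g = pos[i]
    return NAMES[round(g)]
```

With these placeholder numbers the scalarized score happens to favor the PHC pile, matching the abstract's conclusion, but the ranking is entirely driven by the assumed data and weights.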
In drilling processes, the rheological properties indicate the nature of the flow and the composition of the drilling mud. Drilling-mud performance can be assessed in order to solve the problems of hole cleaning, fluid management, and hydraulics control. The rheology is typically described by the following parameters: yield point (YP) and plastic viscosity (μp). The YP/μp ratio is used as a measure of flow leveling: high YP/μp ratios are responsible for transporting well cuttings through laminar flow. Adequate values of YP/μp lie between 0 and 1 for the rheological models used in drilling, and this is what appeared in most of the models used in this study. The pressure loss …
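The two parameters and their ratio are normally obtained from Fann viscometer dial readings at 600 and 300 rpm; a minimal sketch using the standard API field equations (the dial readings in the example are illustrative):

```python
def mud_rheology(theta600, theta300):
    """Bingham-plastic parameters from Fann viscometer dial readings,
    using the standard API field equations."""
    pv = theta600 - theta300   # plastic viscosity, cP
    yp = theta300 - pv         # yield point, lbf/100 ft^2
    return pv, yp, yp / pv     # YP/PV ratio (hole-cleaning indicator)

# Example: dial readings of 45 at 600 rpm and 30 at 300 rpm
# give PV = 15 cP, YP = 15 lbf/100 ft^2, and YP/PV = 1.0.
```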
The research aims to establish the convergence between optimal costs and standard costs in calculating costs for the economic unit, to support efforts aimed at adopting optimal costs in cost accounting and in accounting thought in general, and to benefit from the theory of convergence between optimal and standard costs in arriving at actual costs in the economic unit in order to reduce them. It also addresses the possibility of adopting the concept of optimal costs in production cost calculations for the purposes of rationalizing administrative decisions and rationalizing the preparation of financial statements within management accounting.
The research concluded that
… in each relapse. Objectives: To study the different factors which might be associated with or lead to the occurrence of relapse in nephrotic syndrome.
Methods: A retrospective study of seventy patients with nephrotic syndrome, aged 1–14 years, who were diagnosed and treated in the Child's Central Teaching Hospital between 1 January and 1 July 2008.
The patients were divided into three groups: a frequent-relapses group, an infrequent-relapses group, and an undetermined group. The frequent- and infrequent-relapses groups were compared with regard to age, sex, type of presentation, and biochemical findings, which included total serum protein, serum albumin, and renal function tests, …
The aim of the research is to examine multiple-intelligence test item selection based on Howard Gardner's MI model using the generalized partial credit model. The researcher adopted the Gardner multiple intelligences scale, which consists of (102) items across eight sub-scales. The sample consisted of (550) students from the University of Baghdad, the University of Technology, Al-Mustansiriyah University, and the Iraqi University for the academic year (2019/2020). The assumptions of item response theory (unidimensionality, local independence, the item characteristic curve, the speed factor, and application) were verified, the data were analyzed according to the generalized partial credit model, and limits …
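The generalized partial credit model named above assigns each response category a probability built from cumulative step logits. A minimal sketch of the category probabilities (the symbols a, b, and theta follow the usual IRT notation, not the paper's):

```python
import math

def gpcm_probs(theta, a, b):
    """Category response probabilities under the generalized partial
    credit model (Muraki). theta: ability; a: item discrimination;
    b: list of step difficulties for categories 1..m."""
    # Cumulative logits: category k accumulates a*(theta - b_v)
    # over steps v = 1..k; category 0 contributes 0.
    logits = [0.0]
    for bv in b:
        logits.append(logits[-1] + a * (theta - bv))
    z = [math.exp(l) for l in logits]
    s = sum(z)
    return [zi / s for zi in z]
```

For a two-category item with a = 1 and a single step difficulty of 0, an examinee at theta = 0 is split 50/50 between the categories, and higher theta shifts mass to the upper category, as expected.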
Some degree of noise is always present in any electronic device that transmits or receives a signal. For televisions, this signal is the broadcast data transmitted over cable or received at the antenna; for digital cameras, the signal is the light which hits the camera sensor. In any case, noise is unavoidable. In this paper, electronic noise was generated on TV-satellite images by using variable resistors connected to the transmitting cable, and the contrast of edges was determined. The method was applied by capturing TV-satellite images (from the Al-Arabiya channel) with different resistors. The results show that increasing the resistance always produced higher noise.
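The resistance-to-noise relationship reported above can be illustrated with a toy model in which the noise amplitude is assumed proportional to the added resistance; the proportionality constant k and the gradient-based contrast measure are illustrative assumptions, not the paper's calibration.

```python
import random

def add_noise(image, resistance, k=0.01, seed=1):
    """Toy model: add zero-mean Gaussian noise whose standard deviation
    is assumed proportional to the series resistance (sigma = k * R)."""
    rng = random.Random(seed)
    sigma = k * resistance
    return [[px + rng.gauss(0.0, sigma) for px in row] for row in image]

def edge_contrast(image):
    """Mean absolute horizontal gradient as a simple edge-contrast measure."""
    diffs = [abs(row[j + 1] - row[j])
             for row in image for j in range(len(row) - 1)]
    return sum(diffs) / len(diffs)
```

Under this model, raising the resistance by a factor of ten scales the noise amplitude by the same factor, which is the qualitative trend the abstract reports.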
Abstract:
The aim of the research is to explain the role of quality costs, their importance, and their classification, and to clarify the most important tools that help to reduce costs.
In order to achieve the objective of the research and test its hypotheses, the descriptive approach was adopted, as well as the analytical approach in studying the applied data. The financial and production reports of the research-sample company were relied upon to provide the data, which were used to study and analyze those reports. A number of conclusions were reached, the most important being the following …
This paper presents a hybrid approach for solving the null-values problem; it hybridizes rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model: a large set of complete data, called the learning data, is used to find the decision-rule sets that are then used in solving the incomplete-data problem. The intelligent swarm algorithm, used for feature selection, is the bees algorithm, a heuristic search algorithm, combined with rough set theory as the evaluation function. Another feature-selection algorithm, ID3, is also presented; it works as a statistical algorithm instead of an intelligent one. The two approaches are compared in their performance for null-value estimation …
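The rough-set evaluation function used in such hybrids is typically the dependency degree: the fraction of objects whose values on the selected features determine the decision unambiguously (the positive region). A minimal sketch of that evaluation function alone, with a hypothetical toy table; the bees-algorithm search that would call it is omitted:

```python
from collections import defaultdict

def dependency(rows, features, decision):
    """Rough-set dependency degree gamma_B(D): the fraction of objects
    whose combination of values on `features` maps to exactly one
    decision value (i.e. lies in the positive region)."""
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[f] for f in features)
        groups[key].add(row[decision])
    consistent = {k for k, decs in groups.items() if len(decs) == 1}
    pos = sum(1 for row in rows
              if tuple(row[f] for f in features) in consistent)
    return pos / len(rows)
```

A feature subset scoring gamma = 1.0 determines the decision completely; subsets are compared by this score during the search.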
The secure transmission of data over the internet is achieved using steganography, the art and science of concealing information in unremarkable cover media so as not to arouse an observer's suspicion. In this paper the color cover image is divided into four equal parts, and for each part one channel (red, green, or blue) is selected, the choice depending on which color has the highest ratio in that part. The chosen channel is decomposed into four sub-bands {LL, HL, LH, HH} using the discrete wavelet transform. The secret image is divided into four n×n parts, and the DCT is applied to each part. Finally, the four sets of DCT coefficients are embedded in the four high-frequency {HH} sub-bands in …
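The decomposition step can be sketched with a one-level 2-D Haar DWT plus a simple additive embedding into the HH sub-band. The additive scheme and the strength factor alpha are illustrative assumptions; the paper embeds the DCT coefficients of the secret image, and the wavelet basis it uses is not stated.

```python
def haar2d(img):
    """One-level 2-D Haar DWT of an even-sized grayscale image,
    returning the LL, HL, LH, HH sub-bands."""
    def step(vec):  # pairwise averages (low-pass) and differences (high-pass)
        avg = [(vec[i] + vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
        dif = [(vec[i] - vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
        return avg, dif
    lo, hi = [], []
    for row in img:                      # transform rows
        a, d = step(row)
        lo.append(a); hi.append(d)
    def cols(mat):                       # transform columns of a half
        a_rows, d_rows = [], []
        for col in zip(*mat):
            a, d = step(list(col))
            a_rows.append(a); d_rows.append(d)
        return ([list(r) for r in zip(*a_rows)],
                [list(r) for r in zip(*d_rows)])
    LL, LH = cols(lo)
    HL, HH = cols(hi)
    return LL, HL, LH, HH

def embed(HH, payload, alpha=0.1):
    """Illustrative additive embedding of payload coefficients into HH."""
    return [[h + alpha * p for h, p in zip(hr, pr)]
            for hr, pr in zip(HH, payload)]
```

On a flat image every detail band is zero, so whatever appears in HH after embedding is exactly the scaled payload; on real images the payload rides on the existing high-frequency detail, which is what makes the HH band a low-visibility hiding place.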
In cognitive radio networks there are two important probabilities. The first is important to primary users and is called the probability of detection, as it indicates their level of protection from secondary users; the second is important to secondary users and is called the probability of false alarm, which determines their use of an unoccupied channel. Cooperative sensing can improve both the probability of detection and the probability of false alarm. A new approach to determining optimal values for these probabilities is proposed, designed to handle multiple secondary users by discovering an optimal threshold value for each individual detection curve and then jointly finding the optimal thresholds. To obtain the aggregated throughput over transmission …
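For the common energy-detector formulation, the threshold that meets a target false-alarm probability, and the detection probability it implies, follow from the Gaussian approximation of the test statistic. This is a standard textbook sketch, not the paper's exact model; the sample count and SNR in the example are illustrative.

```python
from statistics import NormalDist

def ed_threshold(n_samples, pfa):
    """Energy-detector threshold (in units of noise power) that yields a
    target false-alarm probability, via the Gaussian approximation
    Pfa = Q((lambda - N) / sqrt(2N))."""
    q_inv = NormalDist().inv_cdf(1.0 - pfa)        # Q^{-1}(Pfa)
    return n_samples + q_inv * (2.0 * n_samples) ** 0.5

def pd_awgn(n_samples, snr, thr):
    """Detection probability at that threshold for mean SNR `snr`
    (Gaussian approximation, AWGN channel)."""
    mu = n_samples * (1.0 + snr)
    sigma = (2.0 * n_samples) ** 0.5 * (1.0 + snr)
    return 1.0 - NormalDist(mu, sigma).cdf(thr)
```

Each secondary user's (Pfa, Pd) operating point traces out a detection curve as the threshold varies; a joint optimization like the one the abstract describes would pick one threshold per user along these curves.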
Recently, genetic algorithms (GAs) have frequently been used for optimizing the solution of estimation problems. One of the main advantages of these techniques is that they require no knowledge or gradient information about the response surface. The poor behavior of genetic algorithms on some problems, sometimes attributed to the design of the operators, has led to the development of other types of algorithms. One such algorithm is the compact genetic algorithm (cGA), which dramatically reduces the number of bits required to store the population and has a faster convergence speed. In this paper the compact genetic algorithm is used to optimize the maximum likelihood estimator of the first-order moving average model MA(1). Simulation results …
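The memory saving the abstract mentions comes from replacing the stored population with a single probability vector. A minimal sketch of the cGA on a toy objective; OneMax (count of one-bits) stands in for the MA(1) likelihood, and the bit width, virtual population size, and seed are illustrative:

```python
import random

def cga_onemax(n_bits=20, pop_size=50, seed=3, max_steps=100000):
    """Compact genetic algorithm: the population is represented by one
    probability vector p; each step samples two individuals, holds a
    tournament, and nudges p toward the winner by 1/pop_size per bit."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    def sample():
        return [1 if rng.random() < pi else 0 for pi in p]
    for _ in range(max_steps):
        if all(pi in (0.0, 1.0) for pi in p):   # fully converged
            break
        a, b = sample(), sample()
        if sum(a) < sum(b):                     # OneMax fitness: count of ones
            a, b = b, a                         # a is now the winner
        step = 1.0 / pop_size
        for i in range(n_bits):
            if a[i] != b[i]:
                p[i] = min(1.0, max(0.0, p[i] + (step if a[i] else -step)))
    return [round(pi) for pi in p]
```

Only the probability vector is stored, so memory is O(n_bits) regardless of the virtual population size; for a real MA(1) application the fitness `sum(a)` would be replaced by the log-likelihood of the decoded parameter.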