Linear programming occupies a prominent position in many fields and has wide applications; its importance lies in providing a means of studying the behavior of a large number of systems. It is also one of the simplest types of model that can be built to address industrial, commercial, military, and other problems and to obtain the optimal quantitative value. This research deals with the post-optimality solution, known as sensitivity analysis, using the principle of shadow prices. The scientific solution to a problem is not complete once the optimal solution is reached: any change in the model constants (the model inputs) changes the linear programming problem and affects the optimal solution, so a method is needed that reveals the impact of changing these constants on the optimal solution that has been reached. General concepts of the dual model and some related theorems are also addressed. For the sensitivity analysis, real data were used from a company that transports crude oil and its derivatives. The mathematical model was formulated, the optimal solution was obtained using the ready-made software WinQSB, and the shadow price values of the binding constraints were then calculated, in addition to …
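As a hedged illustration of the shadow-price idea only (a small textbook maximization, not the paper's actual WinQSB transport model or data), the following sketch solves a linear program with SciPy's `linprog` and reads the shadow prices of the constraints off the dual marginals:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -5.0]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

optimal = -res.fun
# Shadow price of each <= constraint: how much the optimum rises per unit
# increase of its right-hand side (dual marginals, negated for maximization).
shadow_prices = -res.ineqlin.marginals
print(optimal)        # 36.0
print(shadow_prices)  # [0.  1.5 1. ]
```

The nonbinding first constraint gets a shadow price of zero; only the binding constraints carry positive shadow prices, which is exactly the quantity the sensitivity analysis above interprets.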
It is well known that the presence of outliers in the data adversely affects the efficiency of estimation and the results of a study. In this paper, four methods of detecting outliers in the multiple linear regression model are studied in two cases: first, on real data; and second, after adding outliers to the data and attempting to detect them. The study is conducted for samples of different sizes and uses three measures to compare the methods: masking, swamping, and the standard error of the estimate.
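The abstract does not name its four detection methods, but a classic baseline for outlier detection in multiple linear regression is the internally studentized residual; a minimal NumPy sketch on simulated data (with one artificially planted outlier, an assumption for illustration only) is:

```python
import numpy as np

def studentized_residuals(X, y):
    """Internally studentized residuals for multiple linear regression.
    X: (n, p) predictors without intercept column; y: (n,) response."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])      # design matrix with intercept
    H = Xd @ np.linalg.pinv(Xd.T @ Xd) @ Xd.T  # hat (projection) matrix
    resid = y - H @ y
    s2 = resid @ resid / (n - Xd.shape[1])     # residual variance estimate
    return resid / np.sqrt(s2 * (1.0 - np.diag(H)))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=30)
y[5] += 10.0                                   # plant one gross outlier
r = studentized_residuals(X, y)
flags = np.abs(r) > 2                          # common rule-of-thumb cutoff
print(np.flatnonzero(flags))  # [5]
```

Single-residual rules like this are exactly where the masking and swamping phenomena mentioned above arise: clusters of outliers can hide each other (masking) or drag clean points over the cutoff (swamping).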
The linear segment with parabolic blend (LSPB) trajectory deviates from the specified waypoints and is restricted by the requirement that the acceleration be sufficiently high. In this work, a modified LSPB trajectory combined with particle swarm optimization (PSO) is proposed so as to create through points on the trajectory. The assumption of the standard LSPB method, that the parabolic part is centered in time around each waypoint, is replaced by proposed coefficients for calculating the time duration of the linear part. These coefficients are functions of the velocities between through points. The velocities are obtained by PSO so as to force the LSPB trajectory to pass exactly through the specified path points. Relations for velocity correction and exact …
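For context, the standard single-segment LSPB profile that the paper modifies can be sketched as follows (this is the textbook point-to-point version, not the paper's multi-waypoint PSO-tuned variant; the symbols q0, qf, tf, v are assumed names for start, goal, duration, and cruise velocity):

```python
def lspb(q0, qf, tf, v):
    """Single-segment LSPB (trapezoidal velocity profile): parabolic blend,
    constant-velocity linear middle, parabolic blend.
    Requires (qf - q0) / tf < v <= 2 * (qf - q0) / tf."""
    tb = (q0 - qf + v * tf) / v          # blend duration
    a = v / tb                           # blend acceleration
    def q(t):
        if t < tb:                       # accelerating parabolic blend
            return q0 + 0.5 * a * t * t
        if t < tf - tb:                  # linear segment at constant velocity
            return q0 + 0.5 * a * tb * tb + v * (t - tb)
        dt = tf - t                      # decelerating parabolic blend
        return qf - 0.5 * a * dt * dt
    return q

q = lspb(0.0, 1.0, tf=2.0, v=0.75)
print(q(0.0), q(1.0), q(2.0))            # 0.0, ~0.5 (midpoint), 1.0
```

The validity condition on v is the acceleration restriction the abstract refers to; choosing v too low leaves no room for the blends, which motivates replacing the centered-blend assumption with optimized timing coefficients.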
This article aims to estimate the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study the small-sample behavior for different functions, sample sizes, and variances. The results show that the wavelet smoother is the best, according to the mean average squared error criterion, in all cases considered.
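As a hedged sketch of the second method only (the kernel smoother; the wavelet smoother and the paper's specific simulation designs are not reproduced here), a Nadaraya-Watson estimator on a noisy sine curve looks like:

```python
import numpy as np

def kernel_smoother(x, y, grid, h=0.3):
    """Nadaraya-Watson estimator with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)  # kernel weights
    return (w @ y) / w.sum(axis=1)   # weighted local average of responses

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, np.pi, 100))
y = np.sin(x) + rng.normal(scale=0.2, size=100)  # noisy sine data
grid = np.linspace(0.5, np.pi - 0.5, 5)
est = kernel_smoother(x, y, grid)
print(est)  # close to sin(grid)
```

The bandwidth h plays the role that resolution level plays for the wavelet smoother, which is the kind of tuning the simulation comparison above varies.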
The research presents a proposed method to compare, or determine, the linear equivalence of the key-stream produced by linear or nonlinear key-stream generators.
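The abstract does not name its method, but the standard tool for measuring the linear equivalence (linear complexity) of a binary key-stream is the Berlekamp-Massey algorithm; a minimal GF(2) sketch, offered here only as background and not as the paper's proposed method, is:

```python
def berlekamp_massey(bits):
    """Linear complexity (linear equivalence) of a binary sequence,
    computed over GF(2) with the Berlekamp-Massey algorithm."""
    n = len(bits)
    c, b = [0] * n, [0] * n           # current and previous connection polys
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between bits[i] and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]  # C(x) += x^shift * B(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# 20 bits from a 4-stage LFSR with feedback s[n] = s[n-4] ^ s[n-3]
state = [1, 0, 0, 0]
seq = []
for _ in range(20):
    seq.append(state[0])
    state = state[1:] + [state[0] ^ state[1]]
print(berlekamp_massey(seq))  # 4
```

A maximal-length sequence from a 4-stage LFSR has linear complexity 4, so any key-stream whose measured complexity is that low is cryptographically weak regardless of how nonlinear its generator looks.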
This study suggests using recycled plastic waste to prepare polymer matrix composites (PMCs) for use in different applications. Composite materials were prepared by mixing polyester resin (UP) with plastic waste; two types of plastic waste were used in this work, polyethylene terephthalate (PET) and polyvinyl chloride (PVC), added as fillers in flake form at various weight fractions (0, 5, 10, 15, 20, and 25%). A Charpy impact test was performed on the prepared samples to determine the impact strength (I.S.), and flexural and hardness tests were carried out to determine the flexural strength and hardness values. Acoustic insulation and optical microscope tests were also conducted. In general, it is found that UP/PVC …
On the Internet nothing is fully secure, and since means of protecting our data are needed, passwords have become important in the electronic world. To prevent hacking and to protect databases that contain important information such as ID cards and banking details, the proposed system stores the username after hashing it with the SHA-256 algorithm, and strong passwords are saved to repel attackers using one of two methods. The first method adds a random salt to the password using a CSPRNG algorithm, then hashes it with SHA-256 and stores it on the website. The second method uses the PBKDF2 algorithm, which salts the passwords and stretches them (deriving the password) before they are hashed …
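The two storage methods described above can be sketched with Python's standard library (a minimal illustration; parameter choices such as the 16-byte salt and the iteration count are assumptions, not the paper's stated values):

```python
import hashlib
import hmac
import secrets

def hash_with_salt(password):
    """Method 1: CSPRNG random salt + a single SHA-256 pass."""
    salt = secrets.token_bytes(16)            # cryptographically secure salt
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def hash_pbkdf2(password, iterations=100_000):
    """Method 2: PBKDF2-HMAC-SHA256, which salts and stretches the password."""
    salt = secrets.token_bytes(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_pbkdf2(password, salt, stored, iterations=100_000):
    """Re-derive the key and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_pbkdf2("correct horse")
print(verify_pbkdf2("correct horse", salt, stored))  # True
print(verify_pbkdf2("wrong guess", salt, stored))    # False
```

The salt defeats precomputed rainbow tables, while PBKDF2's iteration count is what slows down brute-force guessing, which is why method two is the stronger of the pair.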
This study aimed to prepare a physical-nutritional program for women with polycystic ovary syndrome, and to identify the effect of this program on some body measurements and on the incidence of polycystic ovary syndrome in the research sample. A total of 12 women (aged 20-25 years) with polycystic ovary syndrome (PCOS) participated in the randomized controlled trial. They were divided equally into two groups (experimental and control). The experimental group received the physical-nutritional program alongside the treatment program, while the control group received only the instructions of the specialist doctor and the treatment program prepared by them. The two researchers applied their nutritional program …
In this paper, simulation studies and applications of the New Weibull-Inverse Lomax (NWIL) distribution are presented. In the simulation studies, sample sizes of 30, 50, 100, 200, 300, and 500 were considered, with 1,000 replications for each experiment. NWIL is a fat-tailed distribution, and its higher moments are not easily derived except with some approximations; however, the estimates have high precision with low variances. Finally, the usefulness of the NWIL distribution is illustrated by fitting two data sets.
A mixture model is used to model data that come from more than one component. In recent years it has become an effective tool for drawing inferences about the complex data we encounter in real life, and it can serve as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters, and observation membership was inferred and assessed for each method. The results showed that the flexible mixture model outperformed the …
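A standard way to fit such a model, sketched here under simplifying assumptions (two components, Gaussian errors, a crude slope initialization; this is the generic EM approach, not necessarily one of the paper's exact methods), is expectation-maximization for a mixture of linear regressions:

```python
import numpy as np

def em_mixture_regression(X, y, iters=50):
    """EM for a two-component mixture of linear regressions."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])          # design with intercept
    beta = np.array([[0.0, 1.0], [0.0, -1.0]])     # crude init: slopes +1/-1
    sigma2 = np.ones(2)
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities from Gaussian component densities
        mu = Xd @ beta.T                            # (n, 2) fitted means
        logd = -0.5 * ((y[:, None] - mu) ** 2 / sigma2
                       + np.log(2 * np.pi * sigma2))
        w = pi * np.exp(logd)
        gamma = w / w.sum(axis=1, keepdims=True)    # observation memberships
        # M-step: weighted least squares and weighted variance per component
        for k in range(2):
            Wk = gamma[:, k]
            A = Xd.T @ (Wk[:, None] * Xd)
            beta[k] = np.linalg.solve(A, Xd.T @ (Wk * y))
            resid = y - Xd @ beta[k]
            sigma2[k] = (Wk @ resid**2) / Wk.sum()
        pi = gamma.mean(axis=0)
    return beta, pi

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 200)
z = rng.integers(0, 2, 200)                        # hidden component labels
y = np.where(z == 0, 2.0 * x, -2.0 * x) + rng.normal(scale=0.1, size=200)
beta, pi = em_mixture_regression(x[:, None], y)
print(np.sort(beta[:, 1]))  # slopes close to [-2, 2]
```

The responsibilities `gamma` are exactly the inferred observation memberships the comparison above assesses: each row gives the probability that an observation belongs to each component.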