The bi-level programming problem is to minimize or maximize a target function subject to constraints that contain another target function. This problem has received a great deal of attention in the programming community due to the proliferation of applications and the use of evolutionary algorithms in addressing this kind of problem. Two methods for non-linear bi-level programming are used in this paper. The goal is to reach the optimal solution by simulation with the Monte Carlo method, using different small and large sample sizes. The research concluded that the branch-and-bound algorithm was preferable for solving the non-linear bi-level programming problem because it produced better results.
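A minimal sketch of the Monte Carlo simulation idea on a hypothetical non-linear bi-level problem (the objective functions, bounds, and sample sizes below are invented for illustration and are not the paper's test problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def follower_best(x, n_samples):
    """Approximate the follower's response: min over y in [-2, 2] of a
    hypothetical lower-level objective (y - x)^2 + x*y, by sampling."""
    y = rng.uniform(-2.0, 2.0, n_samples)
    values = (y - x) ** 2 + x * y
    return y[np.argmin(values)]

def leader_objective(x, y):
    """Hypothetical upper-level objective F(x, y)."""
    return x ** 2 + (y - 1.0) ** 2

def monte_carlo_bilevel(n_leader, n_follower):
    """Sample leader decisions, let the follower react, keep the best pair."""
    best = (None, None, np.inf)
    for x in rng.uniform(-2.0, 2.0, n_leader):
        y = follower_best(x, n_follower)
        f = leader_objective(x, y)
        if f < best[2]:
            best = (x, y, f)
    return best

# Compare a small and a large sample size, mirroring the study's design.
for n in (100, 10_000):
    x, y, f = monte_carlo_bilevel(n, 200)
    print(f"n={n:>6}: x={x:.3f}, y={y:.3f}, F={f:.4f}")
```

Larger leader samples explore the upper-level decision space more densely, which is why sample size matters in this approach.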
The purpose of this paper is to use dynamic programming to solve a deterministic periodic-review inventory model and then to find the optimal policies that the company must use in purchasing or production (in the practical application example, the Al Aksa company purchases generators from outside the country).
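A minimal sketch of the dynamic-programming recursion for a deterministic periodic-review model (the demands and cost parameters are hypothetical; the paper's actual data for the Al Aksa application are not reproduced here):

```python
from functools import lru_cache

demand = [3, 2, 4, 1]        # hypothetical per-period demand
K, c, h = 10.0, 2.0, 1.0     # assumed setup, unit purchase, and holding costs
MAX_INV = sum(demand)        # inventory never needs to exceed remaining demand

@lru_cache(maxsize=None)
def f(t, inv):
    """Minimum cost from period t onward, starting with `inv` units on hand."""
    if t == len(demand):
        return 0.0
    best = float("inf")
    for q in range(0, MAX_INV - inv + 1):      # candidate order quantities
        end_inv = inv + q - demand[t]
        if end_inv < 0:                        # demand must be met in full
            continue
        setup = K if q > 0 else 0.0
        cost = setup + c * q + h * end_inv + f(t + 1, end_inv)
        best = min(best, cost)
    return best

print("optimal total cost:", f(0, 0))
```

The optimal purchase policy is recovered by recording, at each state (t, inv), the order quantity q that attains the minimum.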
A sensitive turbidimetric method at 0–180° was used for the determination of mebeverine in drugs using two solar cells and six sources with CFIA. The method was based on the formation of an ion pair, a pinkish banana-colored precipitate, by the reaction of mebeverine hydrochloride with phosphotungstic acid. Turbidity was measured via the reflection of incident light colliding with the surface particles of the precipitate at 0–180°. All variables were optimized. The linear range of mebeverine hydrochloride was 0.05–12.5 mmol·L⁻¹, the limit of detection (S/N = 3, 3SB) was 521.92 ng/sample depending on dilution of the minimum concentration, with correlation coefficient r = 0.9966, while the R.S.D.% was …
Some cases of common fixed point theory for classes of generalized nonexpansive maps are studied. Also, we show that the Picard-Mann scheme can be employed to approximate the unique solution of a mixed-type Volterra-Fredholm functional nonlinear integral equation.
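The Picard-Mann hybrid scheme iterates y_n = (1 − α_n) x_n + α_n T(x_n), x_{n+1} = T(y_n). A minimal sketch on a toy map (the operator cos and the constant step α are illustrative; the paper applies the scheme to a mixed Volterra-Fredholm integral operator):

```python
import math

def picard_mann(T, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """Picard-Mann hybrid iteration: y = (1-alpha)*x + alpha*T(x); x_next = T(y)."""
    x = x0
    for n in range(max_iter):
        y = (1 - alpha) * x + alpha * T(x)
        x_next = T(y)
        if abs(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    return x, max_iter

# Toy contractive map T(x) = cos(x); its unique fixed point is ~0.739085.
fixed_point, iters = picard_mann(math.cos, x0=1.0)
print(fixed_point, iters)
```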
The aim of this research is to use a robust technique based on trimming, since maximum likelihood (ML) analysis often fails in the presence of outliers in the studied phenomenon: the MLE loses its advantages because of the bad influence caused by the outliers. To address this problem, new statistical methods have been developed that are not affected by outliers; these methods possess robustness or resistance. The maximum trimmed likelihood (MTL) is therefore a good alternative for achieving more acceptable and comparable results, and weights can be used to increase the efficiency of the resulting estimators and to increase the strength of the estimation using the maximum weighted trimmed likelihood (MWTL). In order to perform t…
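A minimal sketch of the maximum trimmed likelihood idea for a normal location estimate (the concentration-step heuristic, trimming amount, and data below are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def mtl_normal_mean(x, h, n_iter=50):
    """Maximum trimmed likelihood estimate of a normal mean.

    At each step, keep the h observations with the largest log-likelihood
    contributions under the current fit (those closest to the current mean),
    then re-estimate on that subset: a simple concentration-step heuristic.
    """
    mu, sigma = np.median(x), np.std(x)
    for _ in range(n_iter):
        loglik = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        keep = np.argsort(loglik)[-h:]           # the h best-fitting points
        mu, sigma = x[keep].mean(), x[keep].std()
    return mu

rng = np.random.default_rng(1)
clean = rng.normal(5.0, 1.0, 90)
outliers = rng.normal(30.0, 1.0, 10)             # 10% contamination
data = np.concatenate([clean, outliers])
print("ML (mean):", data.mean())                 # pulled toward the outliers
print("MTL      :", mtl_normal_mean(data, h=80)) # stays near 5
```

MWTL replaces the hard keep/drop decision with observation weights, which can improve the efficiency of the resulting estimator.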
The transmission and reception of data consume the most resources in Wireless Sensor Networks (WSNs). The energy supplied by the battery is the most important resource affecting a WSN's lifespan at the sensor node. Because sensor nodes run on their limited batteries, energy saving is necessary. Data aggregation can be defined as a procedure for eliminating redundant transmissions; it provides fused information to the base stations, which in turn improves energy effectiveness and increases the lifespan of energy-constrained WSNs. In this paper, a Perceptually Important Points Based Data Aggregation (PIP-DA) method for Wireless Sensor Networks is suggested to reduce redundant data before sending them to the …
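A minimal sketch of Perceptually Important Point selection by vertical distance, the reduction step that a PIP-based aggregation scheme relies on (the series length and number of retained points are illustrative; the PIP-DA protocol details are not reproduced here):

```python
import numpy as np

def vertical_distance(t, x, t1, x1, t2, x2):
    """Vertical distance from point (t, x) to the line through (t1,x1)-(t2,x2)."""
    slope = (x2 - x1) / (t2 - t1)
    return abs(x - (x1 + slope * (t - t1)))

def select_pips(series, n_pips):
    """Keep the n_pips perceptually most important points of a 1-D series."""
    t = np.arange(len(series), dtype=float)
    pips = [0, len(series) - 1]                  # endpoints are always kept
    while len(pips) < n_pips:
        pips.sort()
        best_i, best_d = None, -1.0
        for a, b in zip(pips, pips[1:]):         # each gap between adjacent PIPs
            for i in range(a + 1, b):
                d = vertical_distance(t[i], series[i],
                                      t[a], series[a], t[b], series[b])
                if d > best_d:
                    best_i, best_d = i, d
        pips.append(best_i)                      # add the most important point
    return sorted(pips)

rng = np.random.default_rng(2)
readings = np.sin(np.linspace(0, 4 * np.pi, 60)) + 0.05 * rng.normal(size=60)
kept = select_pips(readings, n_pips=10)
print("indices sent to the sink:", kept)         # 10 points instead of 60
```

Transmitting only the retained points (and reconstructing the rest at the sink) is what cuts the radio traffic that dominates a node's energy budget.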
In this research, we study the non-homogeneous Poisson process, one of the most important statistical topics with a role in scientific development, as it relates to accidents that occur in reality and are modeled as Poisson processes, because the occurrence of such accidents is related to time, whether the rate changes with time or remains stable. Our research clarifies the non-homogeneous Poisson process and uses one of its models, the exponentiated-Weibull model with three parameters (α, β, σ), as a function to estimate the time rate of occurrence of earthquakes in Erbil Governorate, as the governorate is adjacent to two countries …
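A minimal sketch of simulating a non-homogeneous Poisson process by Lewis-Shedler thinning (the power-law intensity below is a simple stand-in, not the fitted exponentiated-Weibull rate for the Erbil data):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_nhpp(rate, t_max, rate_max):
    """Lewis-Shedler thinning: simulate event times on [0, t_max] for an
    intensity `rate`, given a bound rate(t) <= rate_max on the interval."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)     # candidate from a homogeneous PP
        if t > t_max:
            return np.array(times)
        if rng.uniform() < rate(t) / rate_max:   # accept with prob rate(t)/rate_max
            times.append(t)

# Hypothetical increasing intensity: events become more frequent over time.
rate = lambda t: 0.5 * t ** 0.5
events = simulate_nhpp(rate, t_max=100.0, rate_max=rate(100.0))
print(len(events), "events; first five:", np.round(events[:5], 2))
```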
In this paper, the effective computational method (ECM) based on standard monomial polynomials has been implemented to solve the nonlinear Jeffery-Hamel flow problem. Moreover, novel effective computational methods have been developed and suggested in this study using suitable basis functions, namely Chebyshev, Bernstein, Legendre, and Hermite polynomials. Using these basis functions converts the nonlinear problem into a nonlinear algebraic system of equations, which is then solved using the Mathematica® 12 program. The developed effective computational methods (D-ECM) have been applied to solve the nonlinear Jeffery-Hamel flow problem, and a comparison between the methods is shown. Furthermore, the maximum …
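A minimal sketch of the basis-function idea: expand the unknown in a polynomial basis, collocate, and solve the resulting nonlinear algebraic system, shown here for a toy BVP u'' = u², u(0) = 0, u(1) = 1 with a Chebyshev basis and SciPy in place of Mathematica (the Jeffery-Hamel equations themselves are not reproduced):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import fsolve

N = 8                                    # number of basis coefficients
# Interior collocation points on (0, 1), from mapped Chebyshev extrema.
nodes = 0.5 * (np.cos(np.pi * np.arange(1, N - 1) / (N - 1)) + 1)

def residuals(coef):
    """Residuals of u'' - u^2 = 0 at the collocation nodes, plus the two BCs."""
    u  = C.Chebyshev(coef, domain=[0, 1])
    u2 = u.deriv(2)
    res = [u2(t) - u(t) ** 2 for t in nodes]
    res.append(u(0.0) - 0.0)             # boundary condition u(0) = 0
    res.append(u(1.0) - 1.0)             # boundary condition u(1) = 1
    return res

coef = fsolve(residuals, np.zeros(N))    # N equations in N unknowns
u = C.Chebyshev(coef, domain=[0, 1])
print("u(0.5) ≈", u(0.5))
```

Swapping the Chebyshev class for Legendre or Hermite bases changes only the expansion, which is the sense in which the paper compares basis choices.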
The research involves preparing gold nanoparticles (AuNPs) and studying the factors that influence the shape, size, and distribution ratio of the prepared particles according to the Turkevich method. These factors include reaction temperature, initial heating, concentration of gold ions, concentration and quantity of added citrate, reaction time, and order of reactant addition. The prepared gold nanoparticles were characterized by UV-Visible spectroscopy, X-ray diffraction, and scanning electron microscopy. The average size of the gold nanoparticles formed was in the range 20–35 nm. The amount of added citrate was varied and studied. In addition, the concentration of added gold ions was varied and the calibration curve …
Multivariate non-parametric control charts were used to monitor data generated by simulation, checking whether they are within control limits or not, since non-parametric methods do not require any assumptions about the distribution of the data. This research aims to apply multivariate non-parametric quality control methods, namely the multivariate Wilcoxon signed-rank chart, kernel principal component analysis (KPCA), and the k-nearest neighbor (k-NN) …
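A minimal sketch of the KPCA ingredient only: fit kernel principal components on in-control data and flag samples whose scaled score distance exceeds an empirical limit (the T²-like statistic, threshold, and data are illustrative assumptions; the Wilcoxon signed-rank and k-NN charts are not shown):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
in_control = rng.normal(0.0, 1.0, size=(200, 3))      # phase-I reference data

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
scores = kpca.fit_transform(in_control)

# Monitoring statistic: squared distance in kernel-PC score space, scaled by
# the in-control score variances (a T2-like statistic, assumed here).
var = scores.var(axis=0)
t2_ref = ((scores ** 2) / var).sum(axis=1)
limit = np.quantile(t2_ref, 0.99)                     # empirical 99% control limit

new_sample = np.array([[3.5, -3.0, 2.8]])             # a shifted observation
t2_new = ((kpca.transform(new_sample) ** 2) / var).sum()
print("T2 =", round(t2_new, 2), "limit =", round(limit, 2),
      "-> out of control" if t2_new > limit else "-> in control")
```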
This paper presents a new transform method for solving partial differential equations and finding suitably accurate solutions in a wider domain. It can be used to solve problems without resorting to the frequency domain. The new transform is combined with the homotopy perturbation method in order to solve three-dimensional second-order partial differential equations with initial conditions, and the convergence of the solution to the exact form is proved. The implementation of the suggested method demonstrates its usefulness in finding exact solutions, and the practical implications show the effectiveness of the approach, which is easily implemented. Finally, all algorithms …
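For reference, the standard homotopy perturbation construction that such combined transform-HPM methods build on is He's homotopy (the paper's specific transform is not reproduced here):

```latex
% He's homotopy for A(u) - f(r) = 0 with A = L + N (linear + nonlinear parts):
H(v,p) = (1-p)\bigl[L(v) - L(u_0)\bigr] + p\bigl[A(v) - f(r)\bigr] = 0,
\qquad p \in [0,1],
% expand in powers of the embedding parameter and let p -> 1:
v = \sum_{i=0}^{\infty} p^{i} v_i, \qquad
u = \lim_{p \to 1} v = v_0 + v_1 + v_2 + \cdots
```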