Algorithms using second-order B-splines [B2(x)] and third-order B-splines [B3(x)] are derived to solve 1st-, 2nd- and 3rd-order linear Fredholm integro-differential equations (FIDEs). These new procedures retain all the useful properties of B-spline functions and can be applied with comparatively greater computational ease and efficiency. The results of these algorithms are compared with those of the cubic spline function. Two numerical examples are given to validate the results of this method.
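As an illustration of this kind of B-spline procedure, the following is a minimal Python/SciPy sketch (not the authors' algorithm) of cubic B-spline collocation applied to an assumed first-order FIDE test problem y'(x) = f(x) + ∫₀¹ x·t·y(t) dt with y(0) = 0, where f is chosen so that the exact solution is y(x) = x; the knot vector, collocation points and quadrature grid are all illustrative choices.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import BSpline

# Assumed test problem:  y'(x) = f(x) + \int_0^1 x*t*y(t) dt,  y(0) = 0,
# with f(x) = 1 - x/3 so that the exact solution is y(x) = x.
k = 3                                                  # cubic B-splines
n = 8                                                  # sub-intervals of [0, 1]
breaks = np.linspace(0.0, 1.0, n + 1)
t = np.concatenate((np.zeros(k), breaks, np.ones(k)))  # clamped knot vector
N = len(t) - k - 1                                     # number of basis functions

def basis(j):
    """j-th B-spline basis function as a callable BSpline object."""
    c = np.zeros(N)
    c[j] = 1.0
    return BSpline(t, c, k)

# Greville abscissae as collocation points (the first one is replaced by the BC row)
greville = np.array([t[j + 1:j + k + 1].mean() for j in range(N)])
colloc = greville[1:]
s = np.linspace(0.0, 1.0, 401)        # quadrature grid for the integral term

A = np.zeros((N, N))
b = np.zeros(N)
for j in range(N):
    Bj = basis(j)
    A[0, j] = Bj(0.0)                 # boundary-condition row: y(0) = 0
    moment = trapezoid(s * Bj(s), s)  # \int_0^1 t*B_j(t) dt
    A[1:, j] = Bj.derivative()(colloc) - colloc * moment
b[1:] = 1.0 - colloc / 3.0

y = BSpline(t, np.linalg.solve(A, b), k)   # approximate solution
print(y(0.5))                              # should be close to the exact value 0.5
```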
The differential cross sections for Rhodium and Tantalum have been calculated using the Cross Section Calculations (CSC) program over the energy range 1 keV-1 MeV. The calculations are based on programming the Klein-Nishina and Rayleigh equations, together with the atomic form factors and the coherent scattering functions, in the Fortran90 language. The program proved very fast, gave accurate results, and demonstrates the possibility of applying such a model to obtain the total coefficient for any element or compound.
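For reference, the per-electron Klein-Nishina differential cross section that such a program evaluates is dσ/dΩ = (r_e²/2)(E'/E)²(E'/E + E/E' − sin²θ), with E'/E = 1/(1 + (E/mₑc²)(1 − cos θ)). The paper's code is in Fortran90; the sketch below is only an assumed Python re-implementation of this formula, without the atomic form factors or coherent functions.

```python
import numpy as np

R_E = 2.8179403262e-15        # classical electron radius [m]
ME_C2 = 0.51099895            # electron rest energy [MeV]

def klein_nishina(E_MeV, theta):
    """Per-electron Compton differential cross section d(sigma)/d(Omega) [m^2/sr]."""
    kappa = E_MeV / ME_C2
    ratio = 1.0 / (1.0 + kappa * (1.0 - np.cos(theta)))   # E'/E
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

# example: a 100 keV photon scattered through 60 degrees
print(klein_nishina(0.1, np.radians(60.0)))
```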
The aim of this research is to obtain numerical solutions of the linear Volterra integral equation of the second kind using numerical methods such as the Trapezoidal and Simpson's rules, and then to derive some statistical properties: the expected value, the variance, and the correlation coefficient between the numerical and exact solutions.
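A minimal sketch of the trapezoidal-rule scheme for a second-kind Volterra equation u(x) = f(x) + ∫₀ˣ K(x,t)u(t) dt is shown below; the kernel, free term and exact solution used for the check are assumed test data, not taken from the paper.

```python
import numpy as np

def volterra_trapezoid(f, K, a, b, n):
    """Trapezoidal-rule solution of u(x) = f(x) + int_a^x K(x,t) u(t) dt on [a, b]."""
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    u = np.zeros(n + 1)
    u[0] = f(x[0])
    for i in range(1, n + 1):
        s = 0.5 * K(x[i], x[0]) * u[0] + np.dot(K(x[i], x[1:i]), u[1:i])
        u[i] = (f(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return x, u

# assumed test problem: u(x) = 1 + int_0^x u(t) dt, exact solution u(x) = exp(x)
x, u = volterra_trapezoid(lambda x: 1.0, lambda x, t: np.ones_like(t), 0.0, 1.0, 100)
print(np.max(np.abs(u - np.exp(x))))     # O(h^2) discretization error
```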
The purpose behind building a linear regression model is to describe the real linear relation between each explanatory variable in the model and the dependent one, on the basis that the dependent variable is a linear function of the explanatory variables, so that the model can be used for prediction and control. This purpose does not come true without obtaining significant, stable and reasonable estimators for the parameters of the model, specifically the regression coefficients. The researcher found that the criterion he had suggested, "RUF", is accurate and sufficient to accomplish that purpose when multicollinearity exists, provided that an adequate model satisfying the standard assumptions on the error term can be specified.
The theory of probabilistic programming may be conceived in several different ways. As a method of programming, it analyses the implications of probabilistic variations in the parameter space of a linear or nonlinear programming model. The generating mechanism of such probabilistic variations in economic models may be incomplete information about changes in demand, production and technology, specification errors in the econometric relations presumed for different economic agents, uncertainty of various sorts, and the consequences of imperfect aggregation or disaggregation of economic variables. In this research we discuss the probabilistic programming problem when the coefficient bi is a random variable.
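One standard device for a random right-hand-side coefficient bi, assuming it is normally distributed, is the chance-constrained deterministic equivalent: P(a'x ≤ bi) ≥ α is equivalent to a'x ≤ μ + σΦ⁻¹(1 − α). The sketch below uses wholly illustrative data, not figures from the research, to show the idea with SciPy.

```python
from scipy.optimize import linprog
from scipy.stats import norm

# assumed data: maximise 3*x1 + 5*x2 subject to 2*x1 + 4*x2 <= b with
# probability >= alpha, where b ~ Normal(mu_b, sigma_b^2)
mu_b, sigma_b, alpha = 100.0, 10.0, 0.95
b_det = mu_b + sigma_b * norm.ppf(1.0 - alpha)   # deterministic-equivalent RHS

res = linprog(c=[-3.0, -5.0],                    # linprog minimises, so negate
              A_ub=[[2.0, 4.0]], b_ub=[b_det],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)
```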
In this paper, we designed a new efficient stream cipher cryptosystem that depends on a chaotic map to encrypt (decrypt) different types of digital images. The designed encryption system passed all basic efficiency criteria (such as randomness, MSE, PSNR, histogram analysis, and key space) that were applied to the key extracted from the random generator as well as to the digital images after completing the encryption process.
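A minimal structural sketch of such a chaotic stream cipher is given below, using a logistic-map keystream XOR-ed with the image bytes; the map, parameters and key are assumptions for illustration (not the paper's cryptosystem), and a bare logistic map on its own is not a cryptographically secure generator.

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.61803398875, r=3.99, burn_in=1000):
    """Generate n_bytes of keystream from the logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt(image_bytes, key=(0.61803398875, 3.99)):
    ks = logistic_keystream(image_bytes.size, *key)
    return image_bytes ^ ks             # XOR: the same call also decrypts

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
cipher = encrypt(img.ravel()).reshape(img.shape)
assert np.array_equal(encrypt(cipher.ravel()).reshape(img.shape), img)
```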
The study aims at building a mathematical model for aggregate production planning at the Baghdad Soft Drinks Company. The study is based on a set of aggregate planning strategies (control of working hours, and control of the storage level) for the purpose of exploiting the available resources and productive capacities in an optimal manner and minimizing production costs, using the Matlab program. The most important finding of the research is the importance of exploiting the production capacity available in the months when demand is less than that capacity, carrying the surplus into the subsequent months when demand exceeds the available capacity, and thereby minimizing the use of overtime.
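A small aggregate-production-planning linear program of this general kind can be sketched as follows; the costs, capacities and demands are illustrative assumptions, not the company's data or the study's MATLAB model.

```python
import numpy as np
from scipy.optimize import linprog

demand = np.array([800.0, 950.0, 1200.0, 1000.0])   # assumed monthly demand
T = len(demand)
cap_reg, cap_ot = 900.0, 200.0                       # assumed monthly capacities
c_reg, c_ot, c_inv = 10.0, 15.0, 2.0                 # assumed unit costs

# decision variables: [R_1..R_T (regular), O_1..O_T (overtime), I_1..I_T (inventory)]
c = np.concatenate([np.full(T, c_reg), np.full(T, c_ot), np.full(T, c_inv)])

# inventory balance: I_{t-1} + R_t + O_t - I_t = D_t  (with I_0 = 0)
A_eq = np.zeros((T, 3 * T))
for t in range(T):
    A_eq[t, t] = 1.0           # R_t
    A_eq[t, T + t] = 1.0       # O_t
    A_eq[t, 2 * T + t] = -1.0  # -I_t
    if t > 0:
        A_eq[t, 2 * T + t - 1] = 1.0   # +I_{t-1}
bounds = [(0, cap_reg)] * T + [(0, cap_ot)] * T + [(0, None)] * T

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds)
print(res.x[:T], res.x[T:2 * T], res.x[2 * T:], res.fun)
```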
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Signal compression is based on selecting a small number of approximation coefficients produced by the wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, the reflection coefficients and the predictor error. The compressed files contain the LP coefficients and the previous sample. These files are very small in size compared to the size of the original signals. The compression ratio is calculated from the sizes of the compressed and original files.
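For reference, a minimal sketch of the Levinson-Durbin recursion is shown below, applied here to the autocorrelation of an assumed stand-in frame rather than actual wavelet approximation coefficients.

```python
import numpy as np

def levinson_durbin(r, order):
    """r: autocorrelation r[0..order]; returns LP coefficients a,
    reflection coefficients k and final prediction error E."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    E = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / E
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * a[i - 1::-1]
        E *= (1.0 - k[i - 1] ** 2)
    return a, k, E

frame = np.random.randn(256)     # stand-in for a windowed coefficient frame
r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
a, k, E = levinson_durbin(r, order=10)
print(a, k, E)
```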
This study focuses on the oscillation of a second-order delay differential equation. To begin, the equation is introduced together with suitable conditions. All of this is supported by theorems and examples that illustrate the applicability and the strength of the obtained conditions.