Polyaniline nanofibers (PAni-NFs) were synthesized by a hydrothermal method at 90°C using different aniline concentrations (0.12, 0.16, and 0.2 g/l) and reaction times (2 h and 3 h). Characterization was carried out using X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, ultraviolet-visible (UV-Vis) absorption spectroscopy, thermogravimetric analysis (TGA), and field emission scanning electron microscopy (FE-SEM). The XRD patterns revealed the amorphous nature of all the produced samples, and FE-SEM showed that the polyaniline has a nanofiber-like structure. FTIR spectroscopy confirmed the chemical bonding of the formed PAni through its characteristic peaks at 1580, 1300-1240, and 821 cm⁻¹. The TGA results indicated enhanced thermal stability of the polyaniline at temperatures up to 600°C, with the PAni-0.12 g/l sample performing better than the other samples, and the optical parameters showed a decrease in the band gap (Eg).
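The abstract does not state how the optical band gap was extracted from the UV-Vis data; the Tauc relation written below is only the form commonly assumed for such an analysis (the exponent 2 corresponds to direct allowed transitions and is itself an assumption):

\[ (\alpha h\nu)^{2} = A\,(h\nu - E_g), \]

where \alpha is the absorption coefficient, h\nu the photon energy, A a constant, and E_g the optical band gap.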
The present work examines oxidative desulfurization in a batch system of model fuels with a sulfur content of 2250 ppm, using air as the oxidant and a ZnO/AC composite prepared by a thermal co-precipitation method. Several factors were studied: composite loading (1, 1.5, and 2.5 g), temperature (25 °C, 30 °C, and 40 °C), and reaction time (30, 45, and 60 minutes). The optimum conditions for the oxidative desulfurization of the model fuel were obtained using a Taguchi experimental design; the highest sulfur removal was about 33% at the optimum conditions. The kinetics and the effect of internal mass transfer were studied for the oxidative desulfurization of the model fuel, and an empirical kinetic model was derived for the model fuels.
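The abstract reports the removal percentage without defining it; the conventional definition, and the residual sulfur level it implies here, are sketched below for orientation:

\[ \text{Sulfur removal (\%)} = \frac{C_{S,0} - C_S}{C_{S,0}} \times 100, \]

so a removal of about 33% from an initial 2250 ppm corresponds to roughly 2250 \times (1 - 0.33) \approx 1508 ppm of residual sulfur.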
The techniques of fractional calculus are applied successfully in many branches of science and engineering. One of these techniques is the Elzaki Adomian decomposition method (EADM), which researchers had not previously studied with the Caputo-Fabrizio fractional derivative. This work aims to study the Elzaki Adomian decomposition method (EADM) for solving fractional differential equations with the Caputo-Fabrizio derivative. We present the algorithm of this method with the CF operator and discuss its convergence using the Cauchy series method; the method is then applied to solve the Burgers, heat-like, and coupled Burgers equations with the Caputo-Fabrizio operator. In conclusion, the method is convergent and effective for solving this type of equation.
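For reference, the Caputo-Fabrizio fractional derivative used above is conventionally defined, for 0 < \alpha < 1 and a normalization function M(\alpha) with M(0) = M(1) = 1, as

\[ {}^{CF}D^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha} \int_{0}^{t} f'(s)\, \exp\!\left( -\frac{\alpha (t-s)}{1-\alpha} \right) ds; \]

this standard definition is quoted here only for context, since the abstract does not reproduce it.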
We study one example of hyperbolic problems: the initial-boundary value problem for a string fixed at both ends. We look for the solution in the weak sense in suitable Sobolev spaces, and we use the energy technique together with Galerkin's method to study properties of the problem such as existence and uniqueness.
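A minimal sketch of the weak formulation such an analysis typically rests on, assuming the model problem u_{tt} - u_{xx} = f on (0, L) with u = 0 at both ends (the abstract does not spell out the exact equation): find u(t) \in H_0^1(0, L) such that, for every test function v \in H_0^1(0, L),

\[ \frac{d^2}{dt^2} \int_0^L u\, v\, dx + \int_0^L u_x v_x\, dx = \int_0^L f\, v\, dx, \]

with initial data u(0) = u_0 and u_t(0) = u_1. Galerkin's method replaces H_0^1(0, L) by finite-dimensional subspaces and passes to the limit with the help of energy estimates.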
The limitations of wireless sensor nodes are power, computational capability, and memory. This paper suggests a method to reduce the power consumption of a sensor node. The work is based on an analogy between the routing problem and the distribution of an electric field in a physical medium with a given density of charges. From this analogy, a set of partial differential equations (Poisson's equation) is obtained. A finite difference method is used to solve this set numerically, and a parallel implementation is then presented. The parallel implementation is based on domain decomposition, where the original computational domain is decomposed into several blocks, each of which is given to a processing element. All nodes then execute computations in parallel.
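A minimal sketch of the numerical core described above, not the paper's implementation: a Jacobi finite-difference solver for Poisson's equation on a unit square. The grid size, source term, and iteration count are illustrative assumptions.

# Sketch only: Jacobi finite-difference solver for -laplace(u) = rho on the unit
# square with u = 0 on the boundary (grid size and source are assumed values).
import numpy as np

def jacobi_poisson(rho, h, n_iter=5000):
    """Solve -laplace(u) = rho with homogeneous Dirichlet BCs by Jacobi iteration."""
    u = np.zeros_like(rho)
    for _ in range(n_iter):
        # 5-point stencil update on the interior points only
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] +
                                h * h * rho[1:-1, 1:-1])
    return u

n = 65                               # grid points per side (assumed)
h = 1.0 / (n - 1)                    # grid spacing
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2     # a single point "charge" in the middle
u = jacobi_poisson(rho, h)

# In a parallel domain-decomposition version, the grid would be split into blocks,
# one per processing element, with neighbouring blocks exchanging their boundary
# (halo) rows and columns after every Jacobi sweep.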
In this research, an adaptive Canny algorithm using a fast Otsu multithresholding method is presented, in which the fast Otsu multithresholding method is used to calculate the optimum maximum and minimum hysteresis values, which serve as automatic thresholds for the fourth stage of the Canny algorithm. The new adaptive Canny algorithm and the standard Canny algorithm (with manually chosen hysteresis values) were tested on a standard image (Lena) and a satellite image. The results confirmed the validity and accuracy of the new algorithm in finding image edges in both personal and satellite images as a pre-processing step for image segmentation.
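A minimal sketch of the general idea, not the authors' implementation: two multi-level Otsu thresholds computed from the grey-level histogram are fed to Canny as the low and high hysteresis limits. The use of scikit-image's threshold_multiotsu and OpenCV's Canny, and the three-class split, are assumptions made for illustration.

# Sketch only: automatic Canny hysteresis thresholds from multi-level Otsu.
import cv2
from skimage.filters import threshold_multiotsu

def adaptive_canny(gray):
    # Two Otsu thresholds splitting the grey-level histogram into three classes
    t_low, t_high = threshold_multiotsu(gray, classes=3)
    # Use them directly as the hysteresis limits of the Canny edge detector
    return cv2.Canny(gray, int(t_low), int(t_high))

gray = cv2.imread("lena.png", cv2.IMREAD_GRAYSCALE)   # assumed test image
edges = adaptive_canny(gray)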
Multipole mixing ratios for gamma transitions populated in the reaction have been studied by the least-squares fitting method; transition strengths for pure gamma transitions have also been calculated, taking into account the mean lifetimes of these levels.
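For context, the multipole mixing ratio \delta characterizing a mixed transition of multipolarities L and L+1 is conventionally defined so that its square gives the intensity ratio of the two components,

\[ \delta^{2} = \frac{I(L+1)}{I(L)}, \qquad \delta \propto \frac{\langle J_f \| L+1 \| J_i \rangle}{\langle J_f \| L \| J_i \rangle}; \]

the specific multipoles and levels involved are not recoverable from the abstract, so this is quoted only as the standard convention.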
The Cox regression model has been used to estimate a proportional hazards model for patients with hepatitis recorded at the Gastrointestinal and Hepatic Diseases Hospital in Iraq for the period 2002-2005. The data consist of age, gender, survival time, and terminal status. The Kaplan-Meier method has been applied to estimate the survival function and the hazard function.
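A minimal sketch of such an analysis, assuming the records are available as a table with columns named time, event, age, and gender (the column names, file name, and the use of the lifelines library are illustrative assumptions, not the authors' setup):

# Sketch only: Kaplan-Meier estimate and Cox proportional-hazards fit.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("hepatitis_survival.csv")   # assumed file; gender coded 0/1

# Non-parametric estimate of the survival function
kmf = KaplanMeierFitter()
kmf.fit(durations=df["time"], event_observed=df["event"])
print(kmf.survival_function_.head())

# Cox proportional-hazards model with age and gender as covariates
cph = CoxPHFitter()
cph.fit(df[["time", "event", "age", "gender"]], duration_col="time", event_col="event")
cph.print_summary()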
In this paper, we introduce and discuss an algorithm for the numerical solution of the two-dimensional fractional dispersion equation. The algorithm is based on an explicit finite difference approximation. Consistency, conditional stability, and convergence of this numerical method are described. Finally, a numerical example is presented to show the dispersion behavior according to the order of the fractional derivative, and we demonstrate that our explicit finite difference approximation is a computationally efficient method for solving the two-dimensional fractional dispersion equation.
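The abstract does not state the exact equation; a commonly studied form of the two-dimensional space-fractional dispersion equation, together with the shifted Grünwald formula on which explicit finite-difference schemes for it are usually built, is sketched below for orientation (the coefficient functions and fractional orders are assumptions of this sketch):

\[ \frac{\partial u}{\partial t} = d(x,y)\, \frac{\partial^{\alpha} u}{\partial x^{\alpha}} + e(x,y)\, \frac{\partial^{\beta} u}{\partial y^{\beta}} + q(x,y,t), \qquad 1 < \alpha, \beta \le 2, \]

\[ \frac{\partial^{\alpha} u}{\partial x^{\alpha}} \bigg|_{x_i} \approx \frac{1}{h^{\alpha}} \sum_{k=0}^{i+1} g_k^{(\alpha)}\, u\big(x_i - (k-1)h\big), \qquad g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}. \]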
Kepler's equation is used to solve various problems associated with celestial mechanics and orbital dynamics. It is an exact description of the motion of any two bodies in space under the effect of gravity. The equation represents the body's position in terms of polar coordinates; thus, it can also specify the time required for the body to complete its period along the orbit around another body. This paper is a review of previously published papers related to solving Kepler's equation for the eccentric anomaly. It aims to collect and assess the iterative initial values proposed for the eccentric anomaly over the previous forty years. Those initial values are tested to select the finest one based on the number of iterations, as well as the
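For reference, Kepler's equation relates the mean anomaly M to the eccentric anomaly E through M = E - e sin E, and the iterative schemes the review compares differ mainly in the choice of the starting value E_0. A minimal Newton-Raphson sketch follows; the starting guess E_0 = M and the tolerance are common illustrative choices, not the review's recommendation.

# Sketch only: Newton-Raphson solution of Kepler's equation M = E - e*sin(E).
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    E = M                              # starting guess for the eccentric anomaly
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M    # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)     # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.3)
print(E, E - 0.3 * math.sin(E))        # the second value should reproduce M = 1.0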
The Dagum regression model, introduced to address limitations of traditional econometric models, provides enhanced flexibility for analyzing data characterized by heavy tails and asymmetry, as is common in income and wealth distributions. This paper develops and applies the Dagum model, demonstrating its advantages over other distributions such as the log-normal and gamma distributions. The model's parameters are estimated using maximum likelihood estimation (MLE) and the method of moments (MoM). A simulation study evaluates both methods' performance across various sample sizes, showing that MoM tends to offer more robust and precise estimates, particularly in small samples. These findings provide valuable insights into the analysis
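For context, the three-parameter (Type I) Dagum distribution underlying the model is usually written with shape parameters a, p > 0 and scale parameter b > 0; this standard form is quoted here only for orientation, since the abstract does not reproduce it:

\[ F(x; a, b, p) = \left[ 1 + \left( \frac{x}{b} \right)^{-a} \right]^{-p}, \qquad f(x; a, b, p) = \frac{a p}{x}\, \frac{(x/b)^{a p}}{\left[ (x/b)^{a} + 1 \right]^{p+1}}, \qquad x > 0. \]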