Using the Turbo C programming language, an atmospheric Earth model is constructed from sea level to 86 km and used to determine atmospheric parameters in this study. Analytical derivations of these parameters are made using force-balance theory and the hydrostatic equation. The effects of altitude on density, pressure, temperature, gravitational acceleration, speed of sound, scale height, and molecular weight are examined. About 50% of the atmosphere's mass lies between sea level and 5.5 km. At an altitude of 50 km, g is about 9.65 m/s², roughly 1.5% lower than the sea-level value of 9.8 m/s²; at 86 km, g is close to 9.51 m/s², about 3% lower than 9.8 m/s². These results have been compared with an International Standard Atmosphere. The model differs from the actual atmosphere because weather fluctuations are not taken into consideration.
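The altitude dependence of g described above follows from the inverse-square law. A minimal Python sketch (the paper's code is in Turbo C; the sea-level value and mean Earth radius below are standard values assumed here, not taken from the paper):

```python
import math

G0 = 9.80665        # sea-level gravitational acceleration, m/s^2 (standard value)
R_EARTH = 6.371e6   # mean Earth radius, m (assumed; the paper may use a different value)

def gravity(h):
    """Gravitational acceleration at geometric altitude h (m), inverse-square law."""
    return G0 * (R_EARTH / (R_EARTH + h)) ** 2

# g decreases by roughly 1.5% at 50 km and about 3% at 86 km relative to sea level
for h_km in (0, 50, 86):
    print(f"h = {h_km:3d} km  g = {gravity(h_km * 1000):.3f} m/s^2")
```

With these constants, gravity(50_000) comes out near the paper's 9.65 m/s²; the small residual difference at 86 km depends on the exact Earth radius adopted.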
This paper discusses the reliability R of the (2+1) cascade model with inverse Weibull distribution. Reliability is derived when the strength and stress variables are inverse Weibull random variables with unknown scale parameter and known shape parameter. Six estimation methods (maximum likelihood, moments, least squares, weighted least squares, regression, and percentile) are used to estimate the reliability. The six methods are compared in a simulation study implemented in MATLAB 2016 using two statistical criteria, mean square error and mean absolute percentage error; the maximum likelihood method is found to be the best of the six estimators.
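For the inverse Weibull distribution F(x) = exp(-λ x^(-β)) with known shape β, the MLE of the scale λ has a closed form, λ̂ = n / Σ xᵢ^(-β), and for a single stress-strength pair with equal shapes the reliability reduces to R = λ_X/(λ_X + λ_Y). A hedged sketch of that single-component case (the paper treats the full (2+1) cascade, which this does not reproduce; all parameter values below are illustrative assumptions):

```python
import math, random

def rinvweibull(n, lam, beta, rng):
    """Inverse-transform sampling: F(x) = exp(-lam * x**-beta)."""
    return [(-math.log(rng.random()) / lam) ** (-1.0 / beta) for _ in range(n)]

def mle_scale(xs, beta):
    """Closed-form MLE of the scale lam when the shape beta is known."""
    return len(xs) / sum(x ** -beta for x in xs)

rng = random.Random(1)
beta = 2.0                      # known shape (assumed value)
lam_x, lam_y = 3.0, 1.0         # true scales of strength X and stress Y (assumed)
xs = rinvweibull(5000, lam_x, beta, rng)
ys = rinvweibull(5000, lam_y, beta, rng)
lx, ly = mle_scale(xs, beta), mle_scale(ys, beta)
R_hat = lx / (lx + ly)          # P(stress < strength) for equal shapes
print(R_hat)                    # true value here is 3/(3+1) = 0.75
```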
This study aims to estimate the accuracy of digital elevation models (DEMs) created from open-source Google Earth data, comparing them with the widely available DEM datasets Shuttle Radar Topography Mission (SRTM), version 3, and Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), version 2. The GPS technique is used in this study to produce a high-accuracy digital elevation raster that serves as a reference against the DEM datasets. Baghdad University, Al Jadriya campus, is selected as the study area, and 151 reference points were created within it to evaluate the results based on root-mean-square (RMS) error values.
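The RMS accuracy measure used above is simply the root-mean-square of the elevation differences at the checkpoints. A minimal Python sketch (the elevations below are made-up illustrations, not the paper's 151 reference points):

```python
import math

def rmse(dem_values, reference_values):
    """Root-mean-square error between DEM elevations and GPS reference elevations (m)."""
    diffs = [d - r for d, r in zip(dem_values, reference_values)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

# illustrative elevations (m) at a few checkpoints -- not the paper's data
gps_ref = [34.2, 35.1, 33.8, 36.0]
srtm    = [35.0, 34.5, 34.9, 37.2]
print(f"SRTM RMSE: {rmse(srtm, gps_ref):.2f} m")
```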
RESRAD is a computer model designed to estimate risks and radiation doses from residual radioactive materials in soil. Thirty-seven soil samples were collected from the area around the berms of the Al-Tuwaitha site, along with two background samples taken from an area about 3 km north of the site. The samples were measured by a gamma-ray spectrometry system using a high-purity germanium (HPGe) detector. The measurements showed three areas in the study region contaminated with 238U and 235U. Two scenarios were applied for each contaminated area to estimate the dose using the RESRAD (onsite) version 7.0 code. The total doses of the resident-farmer scenario for areas A, B, and C are 0.854, 0.033, and 2.15×10⁻³ mSv·yr⁻¹, respectively.
Permeability data are of major importance in all reservoir simulation studies, and their importance increases in mature oil and gas fields owing to the sensitivity of some improved-recovery methods to permeability. However, the industry holds a huge store of air permeability measurements against a small number of liquid permeability values, because of the relatively high cost of special core analysis.
The current study suggests a correlation to convert the air permeability data conventionally measured during laboratory core analysis into liquid permeability. This correlation provides a feasible estimate in cases of data loss and of poorly consolidated formations.
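One common way to build such an air-to-liquid conversion is a power-law correlation k_liquid = a·k_air^b fitted by least squares in log-log space; the paper's actual functional form is not given here, so the sketch below is an assumption for illustration, with made-up core-plug values:

```python
import math

def fit_power_law(k_air, k_liquid):
    """Least-squares fit of log k_l = log a + b log k_a, i.e. k_l = a * k_a**b."""
    xs = [math.log(k) for k in k_air]
    ys = [math.log(k) for k in k_liquid]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# illustrative core-plug permeabilities (mD), not from the paper
k_air = [10.0, 50.0, 120.0, 400.0]
k_liq = [7.1, 38.0, 95.0, 330.0]
a, b = fit_power_law(k_air, k_liq)
print(f"k_liquid ~ {a:.3f} * k_air^{b:.3f}")
```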
A new approach is presented in this study to determine the optimal edge-detection threshold value. The approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks, a small image with known edges is generated (the edges represent the lines between the adjoining blocks), so these simulated edges can be treated as true edges. The true simulated edges are compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing the mean square error between the simulated edge image and the edge image produced by each edge-detector method; the mean square error is computed for the total edge image (Er) and for the edge regions.
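The threshold-selection idea above can be sketched as a sweep that scores each candidate threshold by the MSE between the thresholded gradient map and the known simulated edge map. A minimal Python sketch (a plain horizontal-difference gradient stands in for the paper's edge detectors, and the two-block image is an illustration):

```python
def threshold_by_mse(image, true_edges, thresholds):
    """Pick the threshold minimizing MSE between detected and simulated edge maps."""
    rows, cols = len(image), len(image[0])
    best = None
    for t in thresholds:
        err = 0
        for r in range(rows):
            for c in range(cols):
                # simple horizontal-difference gradient as a stand-in detector
                grad = abs(image[r][c + 1] - image[r][c]) if c + 1 < cols else 0
                detected = 1 if grad > t else 0
                err += (detected - true_edges[r][c]) ** 2
        mse = err / (rows * cols)
        if best is None or mse < best[1]:
            best = (t, mse)
    return best  # (best threshold, its MSE)

# two homogeneous blocks with a noise-free contact line between columns 3 and 4
img = [[10] * 4 + [90] * 4 for _ in range(8)]
truth = [[1 if c == 3 else 0 for c in range(8)] for _ in range(8)]
t, mse = threshold_by_mse(img, truth, thresholds=range(0, 120, 10))
print(t, mse)
```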
In this paper, we propose a method using continuous wavelets to study multivariate fractional Brownian motion through the deviations of the transformed random process, in order to find an efficient estimate of the Hurst exponent using eigenvalue regression of the covariance matrix. Simulation experiments show that the proposed estimator performs efficiently in terms of bias, but its variance grows as the signal changes from short to long memory and the MASE increases accordingly. The estimation is carried out by calculating the eigenvalues of the variance-covariance matrix of Meyer's continuous-wavelet detail coefficients.
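The underlying scaling principle is that for fractional Brownian motion the variance of orthonormal wavelet detail coefficients grows as Var(d_j) ∝ 2^(j(2H+1)), so H can be read off a log-variance regression across scales. A hedged univariate sketch using Haar details on ordinary Brownian motion (H = 0.5); this is a simpler stand-in for the paper's Meyer-wavelet eigenvalue regression on multivariate fBm, not a reproduction of it:

```python
import math, random

def haar_details(x, j):
    """Orthonormal Haar detail coefficients of signal x at dyadic scale 2**j."""
    L = 2 ** j
    out = []
    for k in range(0, len(x) - L + 1, L):
        block = x[k:k + L]
        half = L // 2
        out.append((sum(block[:half]) - sum(block[half:])) / math.sqrt(L))
    return out

def hurst_wavelet(x, scales=range(1, 7)):
    """Estimate H from the slope of log2 Var(d_j) vs j: slope = 2H + 1."""
    js, lv = [], []
    for j in scales:
        d = haar_details(x, j)
        lv.append(math.log2(sum(c * c for c in d) / len(d)))
        js.append(j)
    n = len(js)
    jb, vb = sum(js) / n, sum(lv) / n
    slope = sum((a - jb) * (b - vb) for a, b in zip(js, lv)) \
            / sum((a - jb) ** 2 for a in js)
    return (slope - 1) / 2

rng = random.Random(7)
w, bm = 0.0, []
for _ in range(2 ** 14):           # ordinary Brownian motion: true H = 0.5
    w += rng.gauss(0, 1)
    bm.append(w)
print(round(hurst_wavelet(bm), 2))
```

The estimate is slightly biased low at small scales (the discrete sums only approximate the continuous wavelet integrals), which is why longer signals and larger scales improve it.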
Estimating an individual's age from a photograph of their face is critical in many applications, including intelligence and defense, border security, human-machine interaction, and soft biometric recognition. Recent progress in this discipline centers on deep learning: such solutions require designing and training deep neural networks dedicated to this task, and pre-trained deep neural networks are also employed for facial recognition and then fine-tuned for accurate results. The purpose of this study is to offer a method for estimating human age from the frontal view of the face as accurately as possible.
For modeling a photovoltaic module, it is necessary to calculate the basic parameters that control the current-voltage characteristic curves, which are not provided by the manufacturer. For a monocrystalline silicon module, the shunt resistance is generally high, and it is neglected in this model. In this study, three methods are presented for the four-parameter model: an explicit simplified method based on an analytical solution, a slope method based on manufacturer data, and an iterative method based on numerical resolution. The results of these methods were compared with experimentally measured data; the iterative method was more accurate than the other two, though more complex.
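The iterative method has to solve the implicit single-diode equation I = Iph − I0·(exp((V + I·Rs)/(n·Vt)) − 1), since I appears on both sides. A minimal fixed-point sketch (all parameter values are assumptions for demonstration, not the paper's module data, and the paper's exact iteration scheme may differ):

```python
import math

# Illustrative four-parameter single-diode model (shunt resistance neglected):
#   I = Iph - I0 * (exp((V + I*Rs) / (n * Vt)) - 1)
IPH = 8.21                  # photocurrent, A (assumed)
I0 = 1.2e-9                 # diode saturation current, A (assumed)
RS = 0.35                   # series resistance, ohm (assumed)
N_VT = 1.3 * 0.0257 * 36    # ideality factor * thermal voltage * cells (assumed)

def cell_current(v, tol=1e-9, max_iter=200):
    """Damped fixed-point iteration for I at terminal voltage v."""
    i = IPH
    for _ in range(max_iter):
        i_new = IPH - I0 * (math.exp((v + i * RS) / N_VT) - 1.0)
        if abs(i_new - i) < tol:
            return i_new
        i = 0.5 * (i + i_new)   # damping keeps the iteration stable near the knee
    return i

for v in (0.0, 10.0, 25.0):
    print(f"V = {v:5.1f} V  I = {cell_current(v):.3f} A")
```

The damping step is the usual trick here: near the knee of the I-V curve the undamped map overshoots, while averaging the old and new iterates keeps it contractive.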
The Dagum regression model, introduced to address limitations in traditional econometric models, provides enhanced flexibility for analyzing data characterized by heavy tails and asymmetry, which are common in income and wealth distributions. This paper develops and applies the Dagum model, demonstrating its advantages over other distributions such as the log-normal and gamma distributions. The model's parameters are estimated using maximum likelihood estimation (MLE) and the method of moments (MoM). A simulation study evaluates both methods' performance across various sample sizes, showing that MoM tends to offer more robust and precise estimates, particularly in small samples.
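Simulation studies like the one above need Dagum variates, and the distribution's closed-form CDF, F(x) = (1 + (x/b)^(-a))^(-p), makes inverse-transform sampling straightforward: x = b·(u^(-1/p) − 1)^(-1/a). A minimal Python sketch (the shape and scale values are assumed for illustration, not the paper's simulation settings):

```python
import random

def dagum_inv_cdf(u, a, b, p):
    """Quantile function of the Dagum distribution,
    F(x) = (1 + (x/b)**-a)**-p  (a, p shapes; b scale)."""
    return b * (u ** (-1.0 / p) - 1.0) ** (-1.0 / a)

def dagum_sample(n, a, b, p, rng):
    """Inverse-transform sampling from the Dagum distribution."""
    return [dagum_inv_cdf(rng.random(), a, b, p) for _ in range(n)]

rng = random.Random(42)
a, b, p = 3.0, 2.0, 0.8          # assumed shape/scale values for illustration
xs = sorted(dagum_sample(20000, a, b, p, rng))
sample_median = xs[len(xs) // 2]
true_median = dagum_inv_cdf(0.5, a, b, p)
print(round(sample_median, 3), round(true_median, 3))
```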