Carbon monoxide (CO) is an important indirect greenhouse gas due to its influence on the budgets of hydroxyl radicals (OH) and ozone (O3). Atmospheric CO observations on global and continental scales can only be made by remote sensing instruments situated in space. One such instrument is Measurements of Pollution in the Troposphere (MOPITT), designed to measure tropospheric CO and CH4 using a nadir-viewing geometry and launched aboard the Earth Observing System (EOS) Terra spacecraft on 18 December 1999. Retrieved monthly MOPITT data on a 1°×1° spatial grid were used to analyze the distribution of the CO surface mixing ratio over Iraq for the year 2010. The analysis shows that the CO surface mixing ratio fluctuates considerably between winter and summer. The mean and standard deviation of monthly CO were 172.076 ± 62.026 ppbv for the entire study period. CO values in winter were higher than in summer, and values over industrial and congested urban zones were higher than over the rest of the regions throughout the year. The maximum value occurred in the northern region (234.105 ppbv) in February at Erbil and was attributed to increased human activity, the geographic nature of the area, and climatic variations. The elevated CO values in the south-eastern region during the June–November period were due to emissions from oil extraction and the burning of agricultural residues in the paddy fields. A greater drawdown of CO occurred over the pristine desert environment of the western region (110.047 ppbv) in July at Al Anbar (41.5° lon. × 32.5° lat.). The monthly CO surface VMR maps for 2010 were generated using a kriging interpolation technique. MOPITT and similar satellite measurements are able to capture increases in atmospheric CO concentrations over different regions.
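The abstract states that the monthly CO surface VMR maps were generated with a kriging interpolation technique. The sketch below shows ordinary kriging of scattered station values onto a 1°×1° grid using the pykrige package; the coordinates, CO values, variogram model, and grid extent are illustrative assumptions, not the study's actual configuration.

```python
# Minimal ordinary-kriging sketch (illustrative values, not the study's data).
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical monthly CO surface VMR samples (lon, lat, ppbv) over Iraq.
lons = np.array([41.5, 43.0, 44.0, 45.5, 47.0])
lats = np.array([32.5, 33.5, 36.2, 31.0, 30.5])
co_ppbv = np.array([110.0, 165.0, 234.1, 180.0, 195.0])

# 1° x 1° target grid roughly covering Iraq (assumed extent).
grid_lon = np.arange(38.5, 49.0, 1.0)
grid_lat = np.arange(29.0, 38.0, 1.0)

# A spherical variogram is an assumption; the paper does not specify one.
ok = OrdinaryKriging(lons, lats, co_ppbv, variogram_model="spherical")
co_grid, variance = ok.execute("grid", grid_lon, grid_lat)
print(co_grid.shape)  # interpolated CO map, shape (len(grid_lat), len(grid_lon))
```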
The present work aims to study the effect of using an automatic thresholding technique to convert the edge features of images to binary images in order to separate the object from its background. The edge features of the sampled images are obtained from first-order edge detection operators (Roberts, Prewitt, and Sobel) and a second-order edge detection operator (the Laplacian). The optimum automatic threshold is calculated using the fast Otsu method. The study is applied to a personal image (Roben) and a satellite image to examine the compatibility of this procedure with two different kinds of images. The obtained results are discussed.
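As a rough illustration of this procedure, the sketch below computes a Sobel edge map with OpenCV and binarizes it with Otsu's automatic threshold; the file names and kernel size are placeholders, and Roberts, Prewitt, or Laplacian operators could be substituted in the same way.

```python
# Sketch: Sobel edge map binarized with Otsu's threshold (illustrative only).
import cv2
import numpy as np

gray = cv2.imread("input_image.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# First-order (Sobel) gradient magnitude.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.sqrt(gx**2 + gy**2))  # back to 8-bit range

# Otsu's method picks the threshold automatically (the 0 passed here is ignored).
t, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", t)
cv2.imwrite("edges_binary.png", binary)
```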
Face blurring is an important and complex process in advanced computer vision. Face blurring generally involves two main steps: the first detects the faces that appear in the frames, while the second tracks the detected faces based on the information extracted during the detection step. In the proposed method, an image is captured by the camera in real time, and the Viola-Jones algorithm is then used to detect multiple faces in the captured image; to reduce the time consumed in handling the entire captured image, the image background is removed and only the motion areas are processed.
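The detection step described here corresponds to OpenCV's Haar-cascade (Viola-Jones) detector. Below is a minimal sketch that detects faces in one captured frame and Gaussian-blurs them; the camera index, cascade file, and blur kernel size are assumptions, and the background-removal/motion-area step of the proposed method is not reproduced.

```python
# Sketch: Viola-Jones face detection followed by blurring (assumed parameters).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # camera index 0 is an assumption
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavily blurred copy.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    cv2.imwrite("blurred_frame.png", frame)
cap.release()
```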
Noor oil field is one of the smallest fields in Missan province. Twelve wells penetrate the Mishrif Formation in the Noor field, and eight of them were selected for this study. The Mishrif Formation is one of the most important reservoirs in the Noor field; it consists of one anticlinal dome and is bounded by the Khasib Formation at the top and the Rumaila Formation at the bottom. The reservoir was divided into eight units separated by isolated units, following the partition adopted in surrounding fields.
In this paper, frequency-distribution histograms of porosity, permeability, and water saturation were plotted for the MA unit of the Mishrif Formation in the Noor field, and then transformed to a normal distribution by applying the Box-Cox transformation algorithm.
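The Box-Cox step referred to above has the standard form y(λ) = (x^λ − 1)/λ for λ ≠ 0 and ln x for λ = 0. The sketch below applies it with SciPy to a synthetic, strictly positive porosity-like sample; the data are illustrative, not the MA-unit measurements.

```python
# Sketch: Box-Cox normalization of a skewed, positive-valued sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
porosity = rng.lognormal(mean=-2.0, sigma=0.5, size=500)  # synthetic, skewed

transformed, lam = stats.boxcox(porosity)  # lambda chosen by maximum likelihood
print(f"estimated lambda = {lam:.3f}")

# A normality check before/after the transform (Shapiro-Wilk p-values).
print("raw p =", stats.shapiro(porosity).pvalue)
print("transformed p =", stats.shapiro(transformed).pvalue)
```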
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework is fed a significant amount of labeled data to automatically learn representations. Ultimately, a larger amount of data generally yields a better DL model, and its performance is also application dependent. This issue is the main barrier for
In this research, we analyze several indicators and their classifications related to the teaching process and the scientific level of graduate studies in the university, using analysis of variance for ranked data with repeated measurements instead of the ordinary analysis of variance. We reach several conclusions about the important classifications for each indicator that affect the teaching process.
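A rank-based analysis of variance for repeated measurements is commonly carried out with the Friedman test; the abstract does not name a specific test, so the sketch below is an assumption about the general approach, using fabricated indicator scores.

```python
# Sketch: rank-based ANOVA for repeated measurements (Friedman test).
# The scores are fabricated placeholders for repeated ratings of one indicator.
from scipy import stats

# Each list is one repeated measurement across the same set of subjects.
m1 = [4, 3, 5, 2, 4, 3, 5, 4]
m2 = [3, 3, 4, 2, 3, 2, 4, 3]
m3 = [5, 4, 5, 3, 4, 4, 5, 5]

statistic, pvalue = stats.friedmanchisquare(m1, m2, m3)
print(f"Friedman chi-square = {statistic:.3f}, p = {pvalue:.3f}")
```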
This research aims to study dimension-reduction methods that overcome the curse of dimensionality when traditional methods fail to provide good parameter estimates, so the problem must be dealt with directly. Two approaches were used to address the problem of high-dimensional data: the first is the non-classical sliced inverse regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and the second is principal component analysis (PCA), the general method used for reducing dimensions. Both SIR and PCA are based on forming linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
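For concreteness, a minimal NumPy sketch of classical sliced inverse regression is given below; it estimates effective dimension-reduction directions by slicing the response and eigendecomposing the between-slice covariance of the standardized predictors. The proposed WSIR weighting is not reproduced here, and the slice count, component number, and toy data are arbitrary assumptions.

```python
# Minimal classical SIR sketch in NumPy (slice count and component number assumed).
import numpy as np

def sir_directions(X, y, n_slices=10, n_components=2):
    """Estimate effective dimension-reduction directions with classical SIR."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(Xc, rowvar=False)
    # Whitening matrix Sigma^(-1/2), assuming Sigma is positive definite.
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice the standardized predictors by the sorted response.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m_h = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m_h, m_h)
    # Leading eigenvectors of the between-slice covariance give the directions.
    w, v = np.linalg.eigh(M)
    top = v[:, np.argsort(w)[::-1][:n_components]]
    return inv_sqrt @ top  # back-transform to the original predictor scale

# Toy usage with synthetic data (not the study's data).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)
print(sir_directions(X, y).shape)  # (6, 2)
```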
In recent years, due to the economic benefits and technical advances of cloud computing, huge amounts of data have been outsourced to the cloud. To protect the privacy of their sensitive data, data owners have to encrypt their data prior to outsourcing it to the untrusted cloud servers. To facilitate searching over encrypted data, several approaches have been proposed. However, the majority of these approaches handle Boolean search but not ranked search, a widely accepted technique in current information retrieval (IR) systems to retrieve only the top-k relevant files. In this paper, we propose a distributed secure ranked search scheme over the encrypted cloud servers. Such a scheme allows the authorized user to
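Ranked search in the sense used here scores each file against the query keywords and returns only the top-k most relevant identifiers. The sketch below shows a plaintext TF-IDF relevance ranking purely to illustrate that IR step; it does not implement the paper's encrypted, distributed scheme, and the file collection is made up.

```python
# Sketch: plaintext top-k ranked keyword search (illustrates the IR step only).
import math
import heapq
from collections import Counter

files = {  # fabricated document collection
    "f1": "cloud data privacy encryption search",
    "f2": "ranked search over encrypted cloud data",
    "f3": "boolean search retrieval systems",
}

def score(query_terms, text, df, n_docs):
    tf = Counter(text.split())
    # Standard TF x IDF relevance score over the query terms.
    return sum(tf[t] * math.log(1 + n_docs / df[t]) for t in query_terms if df.get(t))

def top_k(query, k=2):
    terms = query.split()
    df = Counter(t for body in files.values() for t in set(body.split()))
    scores = {fid: score(terms, body, df, len(files)) for fid, body in files.items()}
    return heapq.nlargest(k, scores, key=scores.get)

print(top_k("encrypted cloud search"))  # e.g. ['f2', 'f1']
```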