Machine learning offers significant advantages for many problems in the oil and gas industry, especially for resolving complex challenges in reservoir characterization. Permeability is one of the most difficult petrophysical parameters to predict from conventional logging data. This study presents a clarified workflow methodology alongside comprehensive models. Its purpose is to provide a more robust technique for predicting permeability: previous studies of the Bazirgan field have attempted this, but their estimates were vague and their methods are outdated, making no allowance for the practical constraints of the permeability computation. To verify the reliability of the training data for zone-by-zone modeling, the work was split into two scenarios and applied to data from seven wells. All wellbore intervals were processed, including all five units of the Mishrif formation. The findings show that the more data are available, the more accurate the forecasting model becomes. In both scenarios, multi-resolution graph-based clustering demonstrated stable forecasts when compared against the other five machine learning models.
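Multi-resolution graph-based clustering is not a stock library algorithm, so the sketch below only reproduces the benchmarking side of the study: cross-validating several off-the-shelf regressors on a well-log feature table. The synthetic data, the feature choice (GR, RHOB, NPHI, DT), and the model line-up are illustrative assumptions, not the paper's dataset or its six models.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a well-log table: four log curves as features and
# core-measured permeability (mD) as the target. Purely illustrative.
rng = np.random.default_rng(42)
X = rng.random((500, 4))  # stand-in for GR, RHOB, NPHI, DT
y = np.exp(2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=500))

models = {
    "linear": LinearRegression(),
    "knn": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=7)),
    "svr": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    # Permeability is roughly log-normal, so fit and score in log10 space.
    scores = cross_val_score(model, X, np.log10(y), cv=cv, scoring="r2")
    print(f"{name:14s} mean R2 = {scores.mean():.3f}")
```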
Electrical discharge machining (EDM) is a novel thermoelectric manufacturing technique in which material is removed by a controlled spark-erosion process between two electrodes immersed in a dielectric medium. Because of the complexity of EDM, determining the cutting parameters that improve cutting performance is extremely difficult. Optimizing the operating parameters is therefore a critical processing step, particularly for non-traditional machining processes such as EDM. Even an adequate selection of processing parameters does not guarantee ideal conditions, because the processing time required for a given task is unpredictable. Multiple regression models and genetic algorithms are considered effective methods for determining ...
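A minimal sketch of the optimization idea named above: a regression model acts as a surrogate for the process response, and a genetic algorithm searches the parameter space against it. The response function, its coefficients, and the parameter bounds are invented for illustration; they are not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression surrogate for surface roughness Ra (um) as a
# function of discharge current I (A), pulse-on time Ton (us), and pulse-off
# time Toff (us). Coefficients are illustrative, not from the paper.
def predicted_ra(x):
    i, ton, toff = x
    return 1.2 + 0.35 * i + 0.01 * ton - 0.004 * toff + 0.002 * i * ton

BOUNDS = np.array([[4.0, 16.0], [50.0, 400.0], [10.0, 100.0]])  # assumed ranges

def random_population(n):
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    return lo + rng.random((n, 3)) * (hi - lo)

def genetic_minimise(generations=100, pop_size=40, mut_rate=0.2):
    pop = random_population(pop_size)
    for _ in range(generations):
        fitness = np.apply_along_axis(predicted_ra, 1, pop)
        elite = pop[np.argsort(fitness)[: pop_size // 2]]  # selection: keep best half
        # Crossover: blend random pairs of elite parents.
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        alpha = rng.random((pop_size, 1))
        pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
        # Mutation: randomly reset a few genes within the bounds.
        mask = rng.random(pop.shape) < mut_rate
        pop = np.where(mask, random_population(pop_size), pop)
    best = pop[np.argmin(np.apply_along_axis(predicted_ra, 1, pop))]
    return best, predicted_ra(best)

best_params, best_ra = genetic_minimise()
print("I, Ton, Toff:", best_params.round(2), "-> predicted Ra:", round(best_ra, 3))
```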
Biofilm formation is one of the most important virulence factors in Pseudomonas aeruginosa, as the biofilm acts as a barrier against the entry of antibiotics into the bacterial cell. Different environmental and nutritional conditions were tested to optimize biofilm formation by P. aeruginosa using a microtitre plate assay. The low-nutrient medium, tryptic soy broth (TSB), supported biofilm formation better than the high-nutrient medium, Luria broth (LB), and biofilm production was better at room temperature (25 °C) than at host temperature (37 °C). Moreover, staining with 0.1% crystal violet and reading the biofilm at a wavelength of 360 nm are considered essential factors in ...
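For the quantification step, a common convention is to call a strain a biofilm producer when its mean well absorbance exceeds a cut-off derived from the sterile-medium control (mean + 3 SD, as in the widely used Stepanović scheme). The readings below are invented; this is only a sketch of the classification arithmetic, not the paper's data.

```python
import numpy as np

# Hypothetical crystal-violet OD readings: eight replicate wells per
# condition plus a sterile-medium negative control.
od_tsb_25c = np.array([0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49, 0.51])
od_lb_37c  = np.array([0.21, 0.19, 0.23, 0.20, 0.22, 0.18, 0.24, 0.20])
control    = np.array([0.08, 0.07, 0.09, 0.08, 0.07, 0.08, 0.09, 0.08])

# Cut-off: mean control OD plus three standard deviations.
cutoff = control.mean() + 3 * control.std(ddof=1)
for name, od in [("TSB, 25 C", od_tsb_25c), ("LB, 37 C", od_lb_37c)]:
    verdict = "biofilm producer" if od.mean() > cutoff else "non-producer"
    print(f"{name}: mean OD = {od.mean():.2f} ({verdict})")
```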
Today, the problems of spatial data integration are further complicated by the rapid development of communication technologies and the increasing number of data sources available on the World Wide Web. Web-based geospatial data sources can be managed by different communities, and the data themselves can vary in quality, coverage, and purpose. Integrating such multiple geospatial datasets remains a challenge for geospatial data consumers. This paper concentrates on integrating the geometric and classification schemes of official data, such as Ordnance Survey (OS) national mapping data, with volunteered geographic information (VGI), such as data derived from the OpenStreetMap (OSM) project. Useful descriptions of ...
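One elementary building block of geometric integration is deciding whether a feature from one dataset and a feature from the other describe the same real-world object. Below is a minimal sketch using shapely and an intersection-over-union test; the footprints and the 0.7 threshold are invented for illustration and are not the paper's matching procedure.

```python
from shapely.geometry import Polygon

# Hypothetical building footprints: one from an authoritative dataset
# (OS-style) and one from a VGI source (OSM-style). Coordinates are invented.
os_building = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
osm_building = Polygon([(0.5, -0.3), (10.2, 0.2), (9.8, 8.4), (0.1, 7.9)])

# Simple geometric-matching criterion: overlap area relative to the union
# (intersection-over-union). A threshold decides whether the two features
# are treated as the same real-world object.
iou = (os_building.intersection(osm_building).area
       / os_building.union(osm_building).area)
print(f"IoU = {iou:.2f} ->", "match" if iou > 0.7 else "no match")
```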
This study was arranged in two sections. The first section compares proposed market sites by applying transportation models and the QSB program to minimize cost, adopting the optimal solution to choose the proposed site for a new market that achieves the lowest cost of transporting goods from the factories (ALRasheed, ALAmeen, AlMaamun) to the points of sale. The second section compares transportation methods (the least cost method, Vogel's method, the results approximation method, and the total method) on the transportation tableau that includes the market selected in the first section, choosing the method whose initial solution is best suited in terms of ...
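As a sketch of one of the methods named above, the least cost method builds an initial feasible solution by repeatedly allocating as much as possible to the cheapest open route. The cost matrix, supplies, and demands below are invented and balanced; they are not the study's figures for the three factories.

```python
import numpy as np

def least_cost_method(cost, supply, demand):
    """Greedy initial solution to a balanced transportation problem:
    repeatedly allocate as much as possible to the cheapest open cell."""
    cost = np.asarray(cost, dtype=float)
    supply, demand = list(supply), list(demand)
    alloc = np.zeros_like(cost)
    open_cells = cost.copy()
    while sum(supply) > 0:
        i, j = np.unravel_index(np.argmin(open_cells), open_cells.shape)
        qty = min(supply[i], demand[j])
        alloc[i, j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            open_cells[i, :] = np.inf  # row exhausted
        if demand[j] == 0:
            open_cells[:, j] = np.inf  # column exhausted
    return alloc, float((alloc * cost).sum())

# Invented unit costs from three factories to four points of sale;
# supplies and demands are balanced. Values are illustrative only.
cost = [[4, 6, 8, 5],
        [7, 3, 5, 6],
        [6, 5, 4, 7]]
alloc, total = least_cost_method(cost, supply=[100, 120, 80],
                                 demand=[70, 90, 60, 80])
print(alloc)
print("total transport cost:", total)
```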
Urban land uses of all kinds are the constituent elements of the urban spatial structure. Under the influence of economic and social factors, cities are generally characterized by the dynamic state of their elements over time, and urban functions occur with different spatial patterns. Urban planners and the relevant urban management teams should therefore understand the future spatial pattern of these changes by resorting to quantitative models in spatial planning, ensuring that future predictions are made with a high level of accuracy so that appropriate strategies can be used to address the problems arising from such changes. The Markov chain method is one of the quantitative models used in spatial planning to analyze ...
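The projection arithmetic behind a land-use Markov chain is compact: a transition matrix is estimated from observed changes, and the current land-use shares are repeatedly multiplied by it. The three classes and the probabilities below are invented placeholders, not observed transitions.

```python
import numpy as np

# Hypothetical annual transition matrix between three land-use classes
# (rows: from, columns: to). Each row sums to 1.
P = np.array([
    [0.90, 0.08, 0.02],   # agricultural -> agricultural / residential / commercial
    [0.01, 0.95, 0.04],   # residential
    [0.00, 0.02, 0.98],   # commercial
])

state = np.array([0.55, 0.35, 0.10])   # current shares of the study area

# Project the land-use distribution n steps ahead: state_n = state @ P^n.
for _ in range(10):
    state = state @ P
print("projected shares after 10 steps:", state.round(3))
```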
The bit record is a part of the daily drilling report that contains information about the type and number of each bit used to drill the well, along with the applied weight on bit (WOB), revolutions per minute (RPM), rate of penetration (ROP), pump pressure, footage drilled, and bit dull grade. In short, the bit record is a rich summary of the bit's life in the hole. The main purpose of this research is to select the most suitable bit for drilling the next oil wells, because the right bit selection spares us several problems, while the wrong selection causes them. Many methods relate to bit selection; this research works with four of those ...
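The bit record fields listed above are exactly what the classic cost-per-foot criterion consumes, so a sketch of that criterion may help; note that the four methods the research uses are not specified here, and the formula below is the standard textbook metric, not necessarily one of them. The bit-record entries are invented.

```python
def cost_per_foot(bit_cost, rig_rate, rotating_hours, trip_hours, footage):
    """Classic drilling-economics metric for comparing candidate bits:
    C = (B + R * (T + t)) / F
    where B is bit cost, R the hourly rig rate, T rotating time, t trip
    time, and F the footage drilled by the bit."""
    return (bit_cost + rig_rate * (rotating_hours + trip_hours)) / footage

# Invented bit-record entries: (bit cost $, rig rate $/hr, rotating hr,
# trip hr, footage ft). The bit with the lower cost per foot is preferred.
records = {
    "bit_A": (12_000, 900, 35, 8, 1_450),
    "bit_B": (18_000, 900, 50, 8, 2_600),
}
for name, rec in records.items():
    print(name, "cost/ft =", round(cost_per_foot(*rec), 2))
```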
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets with which to train DL frameworks. Labeled data usually has to be produced by manual labeling, which typically involves human annotators with a vast background of knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must be fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data generally produces a better DL model, although performance is also application-dependent. This issue is the main barrier for ...
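The stated relationship between data volume and model quality can be made concrete with a learning curve. The sketch below uses a small public dataset and a simple classifier as a stand-in for a DL framework; the dataset and model are illustrative choices, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Measure validation accuracy as the amount of labeled training data grows,
# illustrating that more labeled data generally yields a better model.
X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=2000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
)
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} labeled examples -> mean CV accuracy {score:.3f}")
```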