In the last two decades, arid and semi-arid regions of China have undergone rapid land use/cover change (LUCC) driven by the growing food demand of an expanding population. In this study, we established a land use/cover classification based on remote sensing characteristics and analyzed the dynamics of LUCC in the Zhengzhou area over the period 1988-2006. A laminar extraction technique was applied to identify the typical attributes of the land use/cover types. A prominent result is steady urbanization, driven by Zhengzhou's growing economy, accompanied by a corresponding reduction in cropland area. The results also reflect degradation of land quality, inferred from declining yield capacity and significant degeneration. Expanding land types are barren land and urban areas (increases of 8.02% and 246.65%, respectively); shrinking land types are water, forest, crop, and grass areas (decreases of 5.98%, 11.52%, 7.09%, and 20.02%, respectively). These changes result from both physical and anthropogenic factors, and the findings are expected to provide useful information for the local government in its future planning.
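The class areas behind these percentages are not given in the abstract; as a minimal sketch of the underlying arithmetic, with hypothetical 1988/2006 areas chosen only for illustration:

```python
# Hypothetical class areas (km^2) for 1988 and 2006; only the relative-change
# formula reflects the abstract, the numbers themselves are illustrative.
area_1988 = {"urban": 120.0, "crop": 2100.0}
area_2006 = {"urban": 416.0, "crop": 1951.1}

for cls in area_1988:
    change = (area_2006[cls] - area_1988[cls]) / area_1988[cls] * 100
    print(f"{cls}: {change:+.2f}%")
```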
Poverty is a substantial phenomenon that shapes the future of societies and governments and the way they deal with education, health, and the economy; it often takes multidimensional forms through education and health. This research studies multidimensional poverty in Iraq using penalized regression methods to analyze big data sets from demographic surveys collected by the Central Statistical Organization in Iraq. We chose a classical penalized regression method, ridge regression, together with another penalized method, the Smooth Integration of Counting and Absolute Deviation (SICA), to analyze big data sets related to the different forms of poverty in Iraq. Euclidian Distanc
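Neither the survey matrix nor the model specification is reproduced in this excerpt; a minimal sketch of the ridge step using scikit-learn on hypothetical deprivation indicators (SICA has no mainstream Python implementation, so only the ridge part is shown):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical survey matrix: rows = households, columns = deprivation
# indicators (e.g., schooling, health access); y = a poverty score.
X = rng.normal(size=(1000, 20))
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ridge adds an L2 penalty alpha*||w||^2 to least squares, shrinking the
# coefficients to stabilise estimates when indicators are many and correlated.
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"Test R^2 = {model.score(X_te, y_te):.3f}")
```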
Communication is one of the vast and rapidly growing fields of engineering, where increasing communication efficiency by overcoming external electromagnetic sources and noise is a challenging task. To achieve confidentiality for color image transmission over noisy communication channels, an algorithm is proposed for image encryption using the AES algorithm, combined with error detection using a Cyclic Redundancy Check (CRC) to preserve the integrity of the encrypted data. The CRC value can be generated by two methods: a serial or a parallel CRC implementation. The proposed algorithm for the
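The paper's exact CRC polynomial and its coupling to AES are not given in this excerpt; a minimal sketch of the serial (bit-at-a-time) method, assuming the common CRC-32 polynomial and checking the result against Python's zlib:

```python
import zlib

def crc32_serial(data: bytes) -> int:
    """Bit-serial CRC-32 (reflected polynomial 0xEDB88320), one bit per step."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320  # divide by the generator polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

payload = b"encrypted image block"  # stand-in for an AES ciphertext block
assert crc32_serial(payload) == zlib.crc32(payload)
print(hex(crc32_serial(payload)))
```

A parallel implementation computes the same remainder a byte (or word) at a time via a precomputed lookup table, trading memory for speed.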
Unconfined Compressive Strength (UCS) is considered the most important rock strength parameter affecting rock failure criteria. Various studies have developed rock strength correlations for specific lithologies to estimate accurate values without core samples; however, previous analyses did not account for a formation's numerous lithologies and interbedded layers. The main aim of the present study is to select a suitable correlation to predict the UCS over the whole drilled depth of a formation without separating the lithology. A second aim is to identify an adequate input parameter among a set of wireline logs for determining the UCS, using data from three wells across ten formations (Tanuma, Khasib, Mishrif, Rumaila, Ahmady, Maudud, Nahr Um
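The correlation ultimately selected is not named in this excerpt; a minimal sketch of calibrating one common form, UCS = a·exp(b·Δt) against sonic transit time, on hypothetical core data:

```python
import numpy as np

# Hypothetical core-calibrated pairs: sonic transit time (us/ft) vs. UCS (MPa).
dt  = np.array([55.0, 62.0, 70.0, 78.0, 85.0, 95.0])
ucs = np.array([110.0, 82.0, 60.0, 41.0, 30.0, 19.0])

# Fit ln(UCS) = ln(a) + b*dt, i.e. the exponential form UCS = a * exp(b * dt).
b, ln_a = np.polyfit(dt, np.log(ucs), 1)
a = np.exp(ln_a)

predict = lambda dt_log: a * np.exp(b * dt_log)
print(f"UCS ~ {a:.1f} * exp({b:.4f} * dt); UCS at 65 us/ft = {predict(65.0):.1f} MPa")
```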
In this research, qualitative seismic processing and interpretation were carried out using 3D seismic reflection data from the East Baghdad oil field in the central part of Iraq. We applied a direct hydrocarbon indicator (DHI) technique called Amplitude Versus Offset, or Angle (AVO/AVA). For this purpose, a cube of pre-stack 3D seismic data was chosen, in addition to the available data of wells Z-2 and Z-24. These data were processed and interpreted using the HRS-9 software, with which we studied and analyzed the AVO response within the Zubair Formation. Many AVO processing operations were carried out, including pre-conditioning of the gathe
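The HRS-9 workflow itself is proprietary and not shown here; a minimal sketch of the core AVO computation such tools perform, fitting the two-term Shuey approximation R(θ) ≈ A + B·sin²θ to hypothetical amplitude picks:

```python
import numpy as np

# Hypothetical amplitude picks from a pre-stack gather at one horizon.
theta_deg = np.array([5, 10, 15, 20, 25, 30])                 # incidence angles
r_obs = np.array([0.080, 0.075, 0.065, 0.052, 0.036, 0.018])  # reflection amplitudes

# Two-term Shuey approximation: R(theta) ~= A + B * sin^2(theta).
x = np.sin(np.radians(theta_deg)) ** 2
B, A = np.polyfit(x, r_obs, 1)  # slope = gradient B, intercept = A

print(f"AVO intercept A = {A:.3f}, gradient B = {B:.3f}")
# The sign and magnitude of (A, B) place the response in an AVO class for DHI screening.
```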
The physical and elastic characteristics of rocks determine rock strength in general. Rock strength is frequently assessed using porosity well logs such as neutron and sonic logs. The essential criteria for estimating rock mechanics parameters in petroleum engineering research are uniaxial compressive strength and elastic modulus, and indirect estimation from well-log data is necessary to obtain these variables. This study attempts to create a single regression model that can accurately forecast rock mechanics characteristics for the Hartha Carbonate Formation in the Fauqi oil field. According to the findings of this study, petrophysical parameters are reliable indexes for determining rock mechanical properties, having good performance p
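The study's regression model is not reproduced in this excerpt; as background, a minimal sketch of the standard dynamic elastic moduli computed from sonic velocities and bulk density, the usual log-derived inputs to such models (values hypothetical):

```python
def dynamic_moduli(vp: float, vs: float, rho: float):
    """Standard dynamic elastic moduli from sonic velocities (m/s) and density (kg/m^3)."""
    nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))              # dynamic Poisson's ratio
    e = rho * vs**2 * (3 * vp**2 - 4 * vs**2) / (vp**2 - vs**2)   # dynamic Young's modulus, Pa
    return nu, e

# Hypothetical log sample for a carbonate interval.
nu, e = dynamic_moduli(vp=4500.0, vs=2500.0, rho=2600.0)
print(f"Poisson's ratio = {nu:.3f}, Young's modulus = {e / 1e9:.1f} GPa")
```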
Today, there are large amounts of geospatial data available on the web, such as Google Maps (GM), OpenStreetMap (OSM), the Flickr service, Wikimapia, and others; all of these services are called open-source geospatial data. Geospatial data from different sources often have variable accuracy due to different data collection methods, so the accuracy may not meet user requirements across organizations. This paper aims to develop a tool to assess the quality of GM data by comparing it with formal data, such as spatial data from the Mayoralty of Baghdad (MB). The tool was developed in the Visual Basic language and validated on two study areas in Baghdad, Iraq (Al-Karada and Al-Kadhumiyah). The positional accuracy was asses
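The tool itself is written in Visual Basic and not shown here; a minimal Python sketch of the usual positional-accuracy measure, horizontal RMSE over matched check points (coordinates hypothetical):

```python
import numpy as np

# Hypothetical matched pairs: GM coordinates vs. reference MB coordinates (m, projected).
gm  = np.array([[445010.2, 3685020.5], [445120.8, 3684890.1], [444987.4, 3685110.9]])
ref = np.array([[445008.9, 3685022.1], [445119.5, 3684888.7], [444989.0, 3685109.2]])

# Horizontal positional RMSE: root mean square of point-to-point distances.
d = np.linalg.norm(gm - ref, axis=1)
rmse = np.sqrt(np.mean(d**2))
print(f"Horizontal RMSE = {rmse:.2f} m over {len(d)} check points")
```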
Data scarcity is a major challenge when training deep learning (DL) models, which demand large amounts of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate data sets for training DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data to learn representations automatically. Ultimately, more data generally produce a better DL model, although performance is also application-dependent. This issue is the main barrier for