In this research, several estimators of the hazard function are introduced. These estimators are based on a nonparametric method, namely kernel estimation for censored data with varying bandwidth and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic, and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used in two experiments to compare these estimators. In most cases, the results show that the local bandwidth is the best for all types of boundary kernel functions.
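To make the setting concrete, below is a minimal sketch of a kernel hazard estimator for right-censored data: it smooths the Nelson-Aalen increments with an Epanechnikov kernel, and the bandwidth argument b may be a scalar (global) or one value per observation (local). This illustrates the general technique only, not the paper's exact estimators, and it omits the boundary correction the paper studies.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel with support [-1, 1]."""
    return np.where(np.abs(u) <= 1, 0.75 * (1.0 - u**2), 0.0)

def hazard_estimate(t_grid, times, events, b):
    """Kernel-smoothed Nelson-Aalen hazard estimate for right-censored data.

    times  : observed times (event or censoring), shape (n,)
    events : 1 for an event, 0 for a censored observation
    b      : scalar bandwidth (global) or one bandwidth per observation (local)
    """
    times = np.asarray(times, float)
    events = np.asarray(events, float)
    order = np.argsort(times)
    n = len(times)
    times, events = times[order], events[order]
    b = np.broadcast_to(np.asarray(b, float), (n,))[order]
    at_risk = n - np.arange(n)            # risk-set size at each ordered time
    increments = events / at_risk         # Nelson-Aalen jumps dLambda(t_i)
    # hazard(t) = sum_i (1/b_i) * K((t - t_i)/b_i) * dLambda(t_i)
    u = (np.asarray(t_grid, float)[:, None] - times[None, :]) / b[None, :]
    return (epanechnikov(u) / b[None, :] * increments[None, :]).sum(axis=1)
```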
The behavior of incompressible two-phase flow in a T-junction is analyzed using a computational fluid dynamics (CFD) model, an approach applied in many different industries. The level set method was implemented within the finite element method. In this work the behavior of two-phase flow (oil and water) was studied, and the flow was simulated using the COMSOL 4.3 software. Several variables were studied, such as the velocity distribution, shear rate, pressure, and volume fraction at various times. Inlet velocities of 0.2633, 0.1316, 0.0547, and 0.0283 m/s were applied for water and 0.1316 m/s for oil, and in addition the pressure was set at the outlet as a boundary condition. It was observed through the program …
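For reference, the conservative level set formulation that COMSOL's two-phase flow interface is based on transports a smoothed volume-fraction function φ ∈ [0, 1]; the notation below is the standard one for this method, not necessarily the paper's own:

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  = \gamma\,\nabla\cdot\!\left(\varepsilon_{ls}\,\nabla\phi
      - \phi(1-\phi)\,\frac{\nabla\phi}{\lvert\nabla\phi\rvert}\right),
\qquad
\rho = \rho_{\mathrm{oil}} + \left(\rho_{\mathrm{water}} - \rho_{\mathrm{oil}}\right)\phi
```

Here γ is the reinitialization parameter and ε_ls controls the interface thickness; the density (and, analogously, the viscosity) is interpolated across the interface through φ.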
Land Use / Land Cover (LULC) classification is considered one of the basic tasks that decision makers and map makers rely on to evaluate infrastructure, using different types of satellite data. It remains challenging because of large spectral differences, or overlap of spectra within the same land cover class, in addition to image aberration and tilt, which may negatively affect classification performance. The main objective of this study is to develop a working method for classifying land cover from high-resolution satellite images using an object-based method. Pixel-based supervised maximum likelihood classification as well as object-based approaches were examined on a QuickBird satellite image of Karbala, Iraq. This study illustrated that …
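The pixel-based baseline here is standard Gaussian maximum likelihood classification: each class is modeled by the mean vector and covariance matrix of its training pixels, and each pixel is assigned to the class with the highest discriminant score. A minimal sketch, with equal class priors assumed and hypothetical function and variable names:

```python
import numpy as np

def fit_ml_classifier(training):
    """training: dict mapping class name -> (n_pixels, n_bands) array."""
    stats = {}
    for name, X in training.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        _, logdet = np.linalg.slogdet(cov)
        stats[name] = (mu, np.linalg.inv(cov), logdet)
    return stats

def classify(pixels, stats):
    """pixels: (n, n_bands) array; returns the highest-scoring class per pixel."""
    names = list(stats)
    scores = []
    for name in names:
        mu, cov_inv, logdet = stats[name]
        d = pixels - mu
        # g_k(x) = -ln|Sigma_k| - (x - mu_k)^T Sigma_k^{-1} (x - mu_k)
        scores.append(-logdet - np.einsum('ij,jk,ik->i', d, cov_inv, d))
    return np.array(names)[np.argmax(np.stack(scores), axis=0)]
```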
Long memory analysis is one of the most active areas in econometrics and time series analysis, where various methods have been introduced to identify and estimate the long memory parameter in fractionally integrated time series. One of the most common models used to represent time series with long memory is the ARFIMA model (Autoregressive Fractionally Integrated Moving Average), in which the differencing order is a fractional number called the fractional parameter. To analyze and fit the ARFIMA model, the fractional parameter must be estimated, and there are many methods for estimating it. In this research, the estimation methods were divided into indirect methods, where the Hurst parameter is estimated first …
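In the indirect route, a stationary long-memory series satisfies d = H − 1/2, so an estimate of the Hurst exponent immediately yields the fractional parameter. The fractional difference operator (1 − B)^d itself expands as a binomial series; the following is a minimal sketch of applying it to a series, with the weights truncated at the series length:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference (1 - B)^d to a series x.

    Binomial weights: w_0 = 1,  w_k = w_{k-1} * (k - 1 - d) / k.
    """
    x = np.asarray(x, float)
    n = len(x)
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # y_t = sum_{k=0..t} w_k * x_{t-k}
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)])

# Indirect estimation: given a Hurst estimate H_hat (e.g. from R/S analysis),
# the fractional parameter is d_hat = H_hat - 0.5.
```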
Today, problems of spatial data integration have been further complicated by the rapid development of communication technologies and the increasing number of data sources available on the World Wide Web. Web-based geospatial data sources can be managed by different communities, and the data themselves can vary with respect to quality, coverage, and purpose. Integrating such multiple geospatial datasets remains a challenge for geospatial data consumers. This paper concentrates on the integration of geometric and classification schemes for official data, such as Ordnance Survey (OS) national mapping data, with volunteered geographic information (VGI), such as the data derived from the OpenStreetMap (OSM) project. Useful descriptions of …
This research includes the design and implementation of an Iraqi cities database using a spatial data structure, called a k-d tree, that stores data in two or more dimensions. The proposed system allows records to be inserted, deleted, and searched by name or coordinate. All the programming of the proposed system was written in Delphi 7 and run on a personal computer (Intel Core i3).
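The original system is implemented in Delphi; purely for illustration, a minimal 2-d tree supporting insertion and exact coordinate search could look like the following sketch (record fields and names are hypothetical):

```python
class KDNode:
    def __init__(self, name, point):
        self.name, self.point = name, point   # point = (x, y)
        self.left = self.right = None

def insert(root, name, point, depth=0):
    """Insert a city record, alternating the split axis by depth (x, y, x, ...)."""
    if root is None:
        return KDNode(name, point)
    axis = depth % 2
    if point[axis] < root.point[axis]:
        root.left = insert(root.left, name, point, depth + 1)
    else:
        root.right = insert(root.right, name, point, depth + 1)
    return root

def search(root, point, depth=0):
    """Exact-coordinate search; returns the stored record name or None."""
    if root is None:
        return None
    if root.point == point:
        return root.name
    axis = depth % 2
    branch = root.left if point[axis] < root.point[axis] else root.right
    return search(branch, point, depth + 1)

root = None
for name, pt in [("Baghdad", (44.36, 33.31)), ("Basra", (47.78, 30.51))]:
    root = insert(root, name, pt)
print(search(root, (47.78, 30.51)))   # -> "Basra"
```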
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional algorithms in data mining and machine learning do not scale well with data size. Mining and learning from big data need time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the choice of resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining and learning algorithms.
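The abstract does not spell out the structure's internals. As a hedged illustration of the idea (per-cell summaries kept at several resolutions, folded in incrementally, and queried at whichever level fits the efficiency/accuracy trade-off), consider the following sketch; the grid cells and the count/sum summary are assumptions, not the authors' design:

```python
from collections import defaultdict

class MultiResolutionAggregator:
    """Keeps per-cell (count, sum) summaries at several grid resolutions.

    Finer cells are more accurate but cost more memory; a query picks
    whichever level matches its efficiency/accuracy needs.
    """
    def __init__(self, cell_sizes=(1.0, 4.0, 16.0)):
        self.levels = {s: defaultdict(lambda: [0, 0.0]) for s in cell_sizes}

    def add(self, x, y, value):
        """Incrementally fold one instance into every resolution level."""
        for size, cells in self.levels.items():
            key = (int(x // size), int(y // size))
            cells[key][0] += 1
            cells[key][1] += value

    def mean(self, x, y, size):
        """Approximate mean of values near (x, y) at the chosen cell size."""
        cell = self.levels[size].get((int(x // size), int(y // size)))
        return cell[1] / cell[0] if cell else None
```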