Remote sensing techniques have been used in many studies to classify and measure wildfires. The present work uses Landsat 8 (OLI) satellite imagery. The satellite follows a near-polar orbit and offers high multispectral resolution, sufficient to cover Wollemi National Park in Australia. The work aims to study and measure the park's natural resources before and during the fire outbreak that began in October 2019, and to analyse the damage such wildfires inflict on the land and the environment by interpreting satellite images of the study region before and during the fires. Methods for computing the affected area of each class, and for reducing or limiting the damage of fast-spreading wildfires, are discussed. The paper proposes a two-phase technique: training and classification. In the training phase, the number of clusters is computed using a program written in C#, and the extracted features are clustered into groups and stored in a dataset. The classification phase applies moment features with the K-Means approach to the remote-sensing (RS) imagery. Classification revealed five distinct classes (trees, rivers, bare earth, buildings without trees, and buildings with trees), indicating how much of the region each class occupies before and during the wildfires, together with the changed pixels for every class. The experimental results show excellent classification precision and support a sound analysis of the damage caused by the fires in the study area.
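The K-Means step of the classification phase can be sketched as follows. This is a minimal illustration, not the paper's implementation: the five spectral "signatures" and two-feature pixels below are synthetic placeholders, not Landsat 8 bands or the moment features the paper computes.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-Means: assign each sample to its nearest centroid,
    then move each centroid to the mean of its assigned samples."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distance of every sample to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic "pixels" drawn around five hypothetical spectral signatures,
# standing in for the paper's five classes (trees, rivers, bare earth,
# buildings without trees, buildings with trees).
rng = np.random.default_rng(1)
signatures = np.array([[0.1, 0.8], [0.2, 0.1], [0.8, 0.5],
                       [0.6, 0.2], [0.4, 0.6]])
pixels = np.vstack([s + 0.02 * rng.standard_normal((50, 2)) for s in signatures])
labels, cents = kmeans(pixels, k=5)
print("cluster sizes:", np.bincount(labels, minlength=5))
```

Counting pixels per cluster before and after a fire is what allows the per-class area change to be reported.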
Recently, the development and application of hydrological models based on Geographic Information Systems (GIS) has increased worldwide. One of the most important applications of GIS is mapping the Curve Number (CN) of a catchment. In this research, three software packages, ArcView GIS 9.3 with ArcInfo, the Arc Hydro Tool, and the Geospatial Hydrologic Modeling Extension (HEC-GeoHMS) for ArcView GIS 9.3, were used to calculate the CN of the 19,210 ha Salt Creek (SC) watershed located in Osage County, Oklahoma, USA. Multiple layers were combined and examined using Environmental Systems Research Institute (ESRI) ArcMap 2009. These layers are a soil layer (Soil Survey Geographic database, SSURGO) and a 30 m x 30 m resolution Digital Elevation
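The composite CN that such layer overlays yield is an area-weighted average, which then feeds the standard SCS-CN runoff estimate. A minimal sketch, with hypothetical CN values and polygon areas rather than the paper's SSURGO-derived ones:

```python
def composite_cn(polygons):
    """Area-weighted curve number: CN = sum(CN_i * A_i) / sum(A_i)."""
    total = sum(area for _, area in polygons)
    return sum(cn * area for cn, area in polygons) / total

def scs_runoff(p_in, cn):
    """SCS-CN direct runoff (inches): S = 1000/CN - 10,
    Q = (P - 0.2*S)**2 / (P + 0.8*S) when P > 0.2*S, else 0."""
    s = 1000.0 / cn - 10.0
    if p_in <= 0.2 * s:
        return 0.0
    return (p_in - 0.2 * s) ** 2 / (p_in + 0.8 * s)

# Hypothetical land-use/soil-group polygons: (CN, area in ha),
# summing to the watershed's 19,210 ha.
polygons = [(58, 9000), (71, 6000), (85, 4210)]
cn = composite_cn(polygons)
print(round(cn, 1), round(scs_runoff(3.0, cn), 2))
```

In the GIS workflow this weighted average is computed per sub-basin after intersecting the land-use and soil-group layers.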
In this study, a fast block-matching search algorithm based on block descriptors and multilevel block filtering is introduced. The descriptors used are the mean and a set of centralized low-order moments. Hierarchical filtering with the MAE similarity measure is adopted to nominate the best-matching blocks within the pool of neighbouring blocks. After nomination, the similarity of the mean and moments is used to classify the nominated blocks into one of three sub-pools, each representing a nomination priority level (i.e., most, less, and least likely). The main motivation for the nomination and classification steps is a significant reduction in the number of matching operations on the pixels belonging to the c
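The core idea, cheap descriptor comparison first, exact MAE only on the survivors, can be sketched as below. This is a simplified single-level filter (the paper uses three priority sub-pools); block sizes and the candidate pool are synthetic.

```python
import numpy as np

def descriptors(block):
    """Mean plus two centralized low-order moments of a block."""
    m = block.mean()
    c = block - m
    return np.array([m, (c ** 2).mean(), (np.abs(c) ** 3).mean()])

def best_match(target, candidates, keep=3):
    """Two-stage search: rank candidates by descriptor distance,
    then run the exact MAE only on the top 'keep' nominees."""
    d_t = descriptors(target)
    order = sorted(range(len(candidates)),
                   key=lambda i: np.abs(descriptors(candidates[i]) - d_t).sum())
    nominees = order[:keep]
    maes = [(np.abs(candidates[i] - target).mean(), i) for i in nominees]
    return min(maes)[1]

rng = np.random.default_rng(0)
pool = [rng.integers(0, 256, (8, 8)).astype(float) for _ in range(20)]
target = pool[7] + rng.normal(0, 1.0, (8, 8))   # a noisy copy of block 7
print(best_match(target, pool))
```

With `keep=3` of 20 candidates, only 15% of the blocks ever reach the pixel-wise MAE stage, which is where the speed-up comes from.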
Plagiarism is described as using someone else's ideas or work without their permission. Using lexical and semantic text-similarity notions, this paper presents a plagiarism-detection system for examining suspicious texts against available sources on the Web. The user can upload suspicious files in PDF or DOCX format. The system searches three popular search engines (Google, Bing, and Yahoo) for the source text and identifies the top five results from each engine on the first retrieved page. The corpus is made up of the downloaded files and the scraped web-page text of the search-engine results. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system will
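A common way to compare the vector encodings for lexical similarity is cosine similarity over term-frequency vectors; a minimal sketch, assuming simple word-level tokenisation (the paper's exact vectorisation is not specified here):

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

suspicious = "the quick brown fox jumps over the lazy dog"
source = "a quick brown fox jumped over a lazy dog"
print(round(cosine(tf_vector(suspicious), tf_vector(source)), 2))
```

A score near 1 flags near-verbatim reuse; scores above a tuned threshold would be reported per source document.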
The goal of this research is to develop a numerical model that can simulate the sedimentation process under two scenarios: first, with the flocculation unit in service, and second, with the flocculation unit out of service. The general equations of flow and sediment transport were solved using the finite-difference method and coded in Matlab. The difference in removal efficiency between the coded model and the operational model for each particle-size dataset was very small, at +3.01%, indicating that the model can be used to predict the removal efficiency of a rectangular sedimentation basin. The study also revealed
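The finite-difference approach can be illustrated on a much-reduced problem: a 1-D settling column solved with an explicit upwind scheme. This is not the paper's 2-D basin model; velocity, depth, and grid below are hypothetical.

```python
import numpy as np

def settle(c0, ws, depth, dt, steps):
    """Explicit upwind finite differences for dc/dt = -ws * dc/dz:
    sediment settles downward at velocity ws through a column."""
    nz = len(c0)
    dz = depth / nz
    lam = ws * dt / dz
    assert lam <= 1.0, "CFL condition violated"
    c = c0.copy()
    for _ in range(steps):
        c[1:] = c[1:] - lam * (c[1:] - c[:-1])   # upwind advection
        c[0] = c[0] - lam * c[0]                 # clear water enters at top
    return c

c0 = np.ones(50)                                 # uniform initial concentration
c = settle(c0, ws=0.001, depth=2.0, dt=20.0, steps=200)
removal = 1 - c.mean() / c0.mean()
print(f"{removal:.0%} of the initial mass has settled out")
```

Removal efficiency is then the fraction of mass that has left the water column, which is the quantity compared against the operational basin data.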
The aim of this research is to compare traditional and modern methods of obtaining the optimal solution, using dynamic programming and intelligent algorithms to solve project-management problems.
It shows the possible ways in which these problems can be addressed, drawing on a schedule of interrelated, sequential activities. It clarifies the relationships between activities so as to determine the start and end of each one, the duration and cost of the total project, and the time consumed by each activity, and it identifies the objectives the project pursues through planning, implementation, and monitoring to stay within the assessed budget.
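Determining each activity's start and end from a schedule of interrelated activities is the classic critical-path computation: a forward pass for earliest times and a backward pass for latest times. A minimal sketch with a hypothetical four-activity schedule (not from the paper):

```python
def cpm(activities):
    """activities: {name: (duration, [predecessors])}, listed so that
    every predecessor appears before its successors.
    Returns total project duration and the critical path."""
    es, ef = {}, {}
    for a, (d, preds) in activities.items():            # forward pass
        es[a] = max((ef[p] for p in preds), default=0)  # earliest start
        ef[a] = es[a] + d                               # earliest finish
    project = max(ef.values())
    ls, lf = {}, {}
    for a, (d, preds) in reversed(list(activities.items())):  # backward pass
        succs = [b for b, (_, ps) in activities.items() if a in ps]
        lf[a] = min((ls[s] for s in succs), default=project)  # latest finish
        ls[a] = lf[a] - d                                     # latest start
    critical = [a for a in activities if es[a] == ls[a]]      # zero slack
    return project, critical

# Hypothetical schedule: name -> (duration, predecessors)
acts = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
print(cpm(acts))
```

Activities with zero slack (earliest start equals latest start) form the critical path; any delay on them delays the whole project, which is what the planning and monitoring described above must protect.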
Digital audio requires transmitting large volumes of audio information through the most common communication systems, which in turn raises challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It relies on a combined transform-coding scheme consisting of: i) a bi-orthogonal (tap 9/7) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to de-correlate the signal; iii) progressive hierarchical quantization of the combined-transform output, followed by traditional run-length encoding (RLE); and iv) LZW coding to generate the output bitstream.
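The wavelet-then-quantize-then-RLE pipeline can be sketched in miniature. This substitutes a one-level Haar transform for the paper's tap-9/7 bi-orthogonal wavelet and omits the DCT and LZW stages; the signal and quantization step are illustrative only.

```python
import numpy as np

def haar_1level(x):
    """One level of the Haar wavelet: pairwise averages (low band)
    and pairwise differences (high band). Length must be even."""
    x = x.reshape(-1, 2)
    low = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    high = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return low, high

def rle(symbols):
    """Run-length encode a 1-D sequence as [value, count] pairs."""
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return out

t = np.linspace(0, 1, 64, endpoint=False)
signal = np.sin(2 * np.pi * 4 * t)               # smooth, tone-like input
low, high = haar_1level(signal)
q_high = np.round(high / 0.5).astype(int)         # coarse quantization, step 0.5
encoded = rle(q_high.tolist())
print(len(q_high), "coefficients ->", len(encoded), "RLE pairs")
```

For smooth audio the high band quantizes to long runs of small integers, which is exactly what makes the subsequent RLE (and then LZW) stages effective.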
Silver nanoparticles (Ag NPs) were synthesized using an argon-gas plasma jet. The prepared Ag NPs were characterized by Atomic Absorption Spectroscopy (AAS); the measurement was performed for exposure times of 15, 30, 45, and 60 sec. The results show the lowest nano-silver concentration at the shortest exposure time (15 sec) and the highest at 60 sec. UV-VIS spectrometry of nano-silver at the different plasma exposure times shows the Surface Plasmon Resonance (SPR) peak at around 419 nm, with an energy gap of 4.1 eV for the 15 second exposure and 1.6 eV for the 60 second exposure. A Scanning Probe Microscope (SPM) was used to characterize the silver nanoparticles; the average diameter of the nano-silver for the 15 second exp
This work deals with the separation of benzene and toluene from a BTX fraction. The separation was carried out by adsorption on molecular sieve zeolite 13X in a fixed bed. The concentrations of benzene and toluene in the effluent streams were measured using gas chromatography. The effect of flow rate in the range 0.77 – 2.0 cm3/min on benzene and toluene extraction from the BTX fraction was studied: increasing the flow rate decreases the breakthrough and saturation times. The effect of bed height in the range 31.6 – 63.3 cm on benzene and toluene adsorption was also studied: increasing the bed height increases the break-point values. The effect of the benzene concentration in the range 0.0559 – 0.2625 g/
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample, and are very small compared to the original signals. The compression ratio is calculated from the size of th
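The Levinson-Durbin recursion named above solves the LPC normal equations in O(p^2) and yields the reflection coefficients and prediction error as by-products. A minimal sketch, verified on an AR(1) autocorrelation sequence chosen for illustration (not the paper's speech data):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the LPC normal equations via the Levinson-Durbin recursion.
    r: autocorrelation r[0..order]. Returns prediction coefficients a
    (with a[0] = 1), reflection coefficients k, and final prediction error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        ki = -acc / err                    # i-th reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + ki * a_prev[i - j]
        a[i] = ki
        k[i - 1] = ki
        err *= (1.0 - ki * ki)             # shrinking prediction error
    return a, k, err

# An AR(1) process with pole 0.9 has autocorrelation r[m] = 0.9**m;
# order-2 LPC should recover the predictor [1, -0.9, 0].
r = np.array([0.9 ** m for m in range(3)])
a, k, err = levinson_durbin(r, order=2)
print(np.round(a, 3), np.round(err, 3))
```

Only `a` (and enough state to restart prediction, such as the previous sample) needs to be stored, which is why the compressed files are so much smaller than the original signal.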