Geomorphology is concerned with the topographic units that make up the Earth's surface. These take many forms, such as mountains and rivers, and involve many hazards, such as landslides and erosion. Many studies have appeared in this field to analyze its effects and the risks arising from them, including urban studies that determine the optimal directions of urban expansion and their geomorphological interactions. The results showed that the city of Kut originated and expanded near the course of the Tigris River and its branches, and that it suffers from unbalanced urban expansion due to the high rate of population growth and overcrowding of housing units alongside the growth of urban land uses, which has pushed the city to extend horizontally and vertically and to use new land at the expense of the lands and areas surrounding it. Accordingly, this research set out to determine the directions of current and future urban expansion of the city of Kut and to detect the geomorphological controls that govern that expansion through the geographical characteristics of the city. The research relied on descriptive, analytical, historical and quantitative methods, collecting data through field studies and relevant government institutions, and using satellite imagery and GIS techniques to analyze the data and draw conclusions. The research makes clear that there are natural determinants (rivers, marshes, sabkha, natural resources) and that they outweigh the effect of human determinants (orchards and agricultural lands, industrial areas, government and military structures, landfills, quarries and brick factories), a situation attributable to poor planning and to encroachment on the city's base-map scheme; these determinants therefore restrict expansion or increase its cost, dictate its direction and reduce the city's absorptive capacity. The best available directions for future urban expansion are towards the northwest along the Kut-Baghdad road and towards the southeast along the Kut-Nasiriya road, because no geomorphic or human determinants impede the spatial expansion of the city in those directions. Thus, defining and measuring the direction of urban expansion will confront the various natural-geomorphological determinants, which must be considered among the priorities of any strategic plan for developing urban areas and protecting them from geomorphological hazards.
A superpixel can be defined as a group of pixels that share similar characteristics, which makes superpixels very helpful for image segmentation. Segmentation is generally based on color, as well as on other features such as texture and statistics. Many algorithms are available to segment superpixels, such as Simple Linear Iterative Clustering (SLIC) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The SLIC algorithm essentially relies on choosing N random or regular seed points covering the image to be segmented. In this paper, a split-and-merge algorithm was used instead, to avoid having to determine the locations and number of the seed points as well as the other required parameters. The overall results were better than those of SLIC
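As a point of reference for the seed-based approach the paper compares against, the following is a minimal sketch of SLIC superpixel segmentation using scikit-image; the sample image and the n_segments/compactness values are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch: SLIC superpixel segmentation with scikit-image.
import numpy as np
from skimage import data, segmentation, color

image = data.astronaut()  # sample RGB image standing in for the test data

# SLIC places roughly n_segments regular seed points and iteratively
# clusters pixels in a combined colour + spatial space.
labels = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)

# Replace each superpixel by its mean colour to visualise the segmentation.
mean_image = color.label2rgb(labels, image, kind='avg')
print("number of superpixels:", len(np.unique(labels)))
```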
This paper focuses on reducing the time of text-processing operations by taking advantage of enumerating each string using a multi-hashing methodology. Text analysis is an important subject for any system that deals with strings (sequences of characters from an alphabet) and text processing (e.g., word processors, text editors and other text-manipulation systems). Many problems arise when dealing with string operations over an unfixed number of characters (e.g., long execution times), owing to the overhead of embedded operations (such as symbol matching and conversion operations). The execution time largely depends on the string characteristics, especially its length (i.e., the number of characters it consists of)
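To illustrate the general idea of enumerating strings so that later comparisons become fixed-cost integer operations, here is a minimal sketch in Python; the particular pair of hash functions is an illustrative assumption, not the paper's exact multi-hashing scheme.

```python
# Minimal sketch: map each string to fixed-size integer codes once, then
# compare by those codes instead of re-scanning the characters every time.
import hashlib

def encode(word: str) -> tuple[int, int]:
    """Return a pair of hash codes for a word (a simple 'multi hash')."""
    h1 = hash(word)                                                    # fast, process-local
    h2 = int(hashlib.md5(word.encode("utf-8")).hexdigest()[:16], 16)   # stable across runs
    return (h1, h2)

words = ["text", "processing", "text", "editor"]
codes = {w: encode(w) for w in set(words)}          # enumerate each distinct string once

# Equality testing is now two integer comparisons, independent of string length.
print(codes["text"] == codes["processing"])   # False
print(codes["text"] == codes["text"])         # True
```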
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). Compression is based on selecting a small number of approximation coefficients obtained from the wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients and prediction error. The compressed files contain the LP coefficients and the previous sample, and they are very small compared to the original signals. The compression ratio is calculated from the size of the
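A minimal sketch of the two main stages described above, assuming PyWavelets for the db4 decomposition and a textbook Levinson-Durbin recursion; the stand-in signal, predictor order and decomposition level are illustrative assumptions, and the rectangular-windowing step is omitted.

```python
# Minimal sketch: wavelet approximation coefficients -> Levinson-Durbin LPC.
import numpy as np
import pywt

def levinson_durbin(r, order):
    """Solve for LPC coefficients a[1..order] from autocorrelation r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    refl = []
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err                     # reflection coefficient
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a[m] = k
        err *= (1.0 - k * k)               # prediction error update
        refl.append(k)
    return a, np.array(refl), err

# Stand-in "speech" signal (a noisy tone), not real speech data.
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
speech = np.sin(2 * np.pi * 300 * t) + 0.3 * np.random.randn(t.size)

# Keep only the level-3 db4 approximation coefficients; discard the details.
coeffs = pywt.wavedec(speech, 'db4', level=3)
approx = coeffs[0]

order = 10
r = np.correlate(approx, approx, mode='full')[approx.size - 1:approx.size + order]
lpc, reflection, pred_err = levinson_durbin(r, order)
print("LPC coefficients:", np.round(lpc[1:], 3))
print("prediction error:", pred_err)
```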
In this paper, combined source coding with a Multi-Carrier Code Division Multiple Access (MC-CDMA) system is proposed, in which the compressed image produced by the source coder is transmitted through an Additive White Gaussian Noise (AWGN) channel for a single user and for multiple users. The MC-CDMA system removes Inter-Symbol Interference (ISI) and Inter-Carrier Interference (ICI). A hybrid compression scheme integrating the Discrete Cosine Transform (DCT) and predictive coding (PC) is used as the source coder. The simulation results indicate that transmission for a single user performs much better than transmission for multiple users; as the number of users increases, the Bit Error Rate (BER) increases. For a
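The sketch below illustrates only the BER-measurement step over an AWGN channel, using plain BPSK symbols rather than the full DCT/predictive source coder and MC-CDMA spreading described above; the Eb/N0 value is an arbitrary assumption.

```python
# Minimal sketch: estimating bit error rate (BER) for BPSK over an AWGN channel.
import numpy as np

rng = np.random.default_rng(0)
n_bits = 100_000
ebno_db = 6.0                                   # illustrative Eb/N0, not from the paper

bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                          # BPSK mapping: 0 -> -1, 1 -> +1

# Unit symbol energy, so noise std per real dimension is sqrt(N0/2).
noise_std = np.sqrt(1.0 / (2 * 10 ** (ebno_db / 10)))
received = symbols + noise_std * rng.normal(size=n_bits)

decoded = (received > 0).astype(int)
ber = np.mean(decoded != bits)
print(f"Eb/N0 = {ebno_db} dB -> BER = {ber:.4e}")
```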
This work deals with the separation of benzene and toluene from a BTX fraction. The separation was carried out by adsorption on molecular sieve zeolite 13X in a fixed bed. The concentrations of benzene and toluene in the influent streams were measured using gas chromatography. The effect of flow rate, in the range 0.77 - 2.0 cm3/min, on benzene and toluene extraction from the BTX fraction was studied: increasing the flow rate decreases the breakthrough and saturation times. The effect of bed height, in the range 31.6 - 63.3 cm, on benzene and toluene adsorption was also studied: increasing the bed height increases the break-point values. The effect of the benzene concentration in the range 0.0559 - 0.2625 g/
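As a small illustration of how breakthrough and saturation times can be read off an effluent concentration curve, the sketch below uses a synthetic S-shaped C/C0 profile and 5%/95% thresholds; none of these values or curves are taken from the study.

```python
# Minimal sketch: locating breakthrough and saturation times on a C(t)/C0 curve.
import numpy as np

t = np.linspace(0, 300, 601)                       # time, min
c_ratio = 1.0 / (1.0 + np.exp(-(t - 150) / 20))    # synthetic breakthrough curve C/C0

t_break = t[np.argmax(c_ratio >= 0.05)]            # first time C/C0 reaches 5%
t_sat   = t[np.argmax(c_ratio >= 0.95)]            # first time C/C0 reaches 95%
print(f"breakthrough time = {t_break:.0f} min, saturation time = {t_sat:.0f} min")
```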
When images are processed to identify changes that have occurred, techniques such as the spectral signature, which can be used to extract features, can be of great value. In this paper, the spectral signature is used to extract information from satellite images, which are then classified into four categories. The work is based on a dataset from the Kaggle satellite imagery website representing different classes: cloudy, desert, water and green areas. After preprocessing the images, the data are transformed into a spectral signature using the Fast Fourier Transform (FFT) algorithm. The data of each image are then reduced by selecting the top 20 features and transforming them from a two-dimensional
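A minimal sketch of that pipeline follows, with random arrays standing in for the Kaggle images and SelectKBest plus a random forest standing in for whatever feature-selection and classification choices the paper actually made.

```python
# Minimal sketch: FFT "spectral signature" features -> top-20 selection -> 4-class classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_images, size = 200, 32
images = rng.random((n_images, size, size))        # stand-in for the satellite images
labels = rng.integers(0, 4, n_images)              # cloudy / desert / water / green

# Spectral signature: magnitude of the 2-D FFT, flattened to one vector per image.
spectra = np.abs(np.fft.fft2(images)).reshape(n_images, -1)

# Keep only the 20 most discriminative spectral features.
selector = SelectKBest(f_classif, k=20)
features = selector.fit_transform(spectra, labels)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy on the synthetic stand-in data:", clf.score(X_te, y_te))
```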
In this study, a fast block-matching search algorithm based on block descriptors and multilevel block filtering is introduced. The descriptors used are the mean and a set of centralized low-order moments. Hierarchical filtering and the MAE similarity measure were adopted to nominate the best similar blocks lying within the pool of neighboring blocks. As a next step after block nomination, the similarity of the mean and moments is used to classify the nominated blocks into one of three sub-pools, each representing a certain nomination priority level (i.e., most, less and least). The main reason for introducing the nomination and classification steps is a significant reduction in the number of matching instances of the pixels belonging to the c
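The sketch below illustrates the descriptor-then-MAE idea with a single filtering level; the 8x8 block size, the thresholds and the use of only the variance as the low-order moment are simplifying assumptions, not the paper's multilevel scheme.

```python
# Minimal sketch: filter candidate blocks by cheap descriptors (mean + second
# central moment), then run pixel-wise MAE matching only on the shortlist.
import numpy as np

def descriptors(block):
    m = block.mean()
    mu2 = ((block - m) ** 2).mean()        # second-order central moment (variance)
    return np.array([m, mu2])

def mae(a, b):
    return np.abs(a.astype(float) - b.astype(float)).mean()

rng = np.random.default_rng(2)
query = rng.integers(0, 256, (8, 8))
pool = [rng.integers(0, 256, (8, 8)) for _ in range(500)]

q_desc = descriptors(query)
# Level-1 filter: keep blocks whose mean and moment are close to the query's.
shortlist = [b for b in pool if np.all(np.abs(descriptors(b) - q_desc) < [10.0, 500.0])]

# Full MAE matching only on the (much smaller) shortlist.
best = min(shortlist, key=lambda b: mae(query, b)) if shortlist else None
print("pool:", len(pool), "shortlist:", len(shortlist),
      "best MAE:", None if best is None else round(mae(query, best), 2))
```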
Plagiarism is described as using someone else's ideas or work without their permission. Using lexical and semantic notions of text similarity, this paper presents a plagiarism detection system for examining suspicious texts against available sources on the Web. The user can upload suspicious files in PDF or DOCX format. The system searches three popular search engines (Google, Bing, and Yahoo) for the source text and tries to identify the top five results from each engine on the first retrieved page. The corpus is made up of the downloaded files and the scraped web-page text of the search engines' results. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system will
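A minimal sketch of the lexical-similarity step, assuming TF-IDF vectors compared with cosine similarity; the search-engine querying, PDF/DOCX parsing and web-scraping stages described above are omitted, and the toy strings are placeholders.

```python
# Minimal sketch: encode documents as TF-IDF vectors and rank sources by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

suspicious = "Plagiarism is using someone else's ideas or work without permission."
corpus = [
    "Plagiarism means using another person's work or ideas without their permission.",
    "Rainfall totals in the region were above average this spring.",
]

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([suspicious] + corpus)

# Similarity of the suspicious document to each candidate source text.
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
for text, score in zip(corpus, scores):
    print(f"{score:.2f}  {text[:50]}...")
```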
The goal of this research is to develop a numerical model that can be used to simulate the sedimentation process under two scenarios: first, with the flocculation unit in service, and second, with the flocculation unit out of service. The general equations of flow and sediment transport were solved using the finite difference method and then coded in Matlab. The study found that the difference in removal efficiency between the coded model and the operational model for each particle-size dataset was very small, with a value of +3.01%, indicating that the model can be used to predict the removal efficiency of a rectangular sedimentation basin. The study also revealed
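For context only, and not the finite-difference model of the study: the sketch below gives the classical ideal-basin (Hazen) estimate of per-size removal efficiency against which such a model is commonly checked, using Stokes' law for the settling velocities; the basin dimensions, flow rate and particle sizes are illustrative assumptions.

```python
# Minimal sketch: ideal-settling removal efficiency per particle size (Hazen approach).
import numpy as np

rho_p, rho_w, mu, g = 2650.0, 998.0, 1.0e-3, 9.81   # particle/water density, viscosity, gravity
Q = 0.05                      # basin flow rate, m^3/s
A = 10.0 * 3.0                # basin plan (surface) area, m^2
overflow_rate = Q / A         # surface overflow rate, m/s

diameters = np.array([10e-6, 20e-6, 40e-6, 80e-6])  # particle diameters, m

# Stokes settling velocity for each particle size.
v_settling = (rho_p - rho_w) * g * diameters**2 / (18.0 * mu)

# Particles settling faster than the overflow rate are fully removed;
# slower particles are removed in the ratio v_s / (Q/A).
removal = np.minimum(1.0, v_settling / overflow_rate)
for d, r in zip(diameters, removal):
    print(f"d = {d*1e6:4.0f} um -> removal = {100*r:5.1f} %")
```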