With the rapid progress of information technology and computer networks, geospatial data have become very easy to reproduce and share because of their digital form. Consequently, the use of geospatial data suffers from various problems, such as data authentication, ownership proof, and illegal copying, which pose a serious challenge to its future use. This paper introduces a new watermarking scheme to ensure copyright protection of digital vector maps. The main idea of the proposed scheme is to transform the digital map into the frequency domain using the Singular Value Decomposition (SVD) in order to determine suitable areas in which to insert the watermark data. The digital map is separated into isolated parts, and the watermark data are embedded within nominated magnitudes in each part whenever definite criteria are satisfied. The efficiency of the proposed watermarking scheme is assessed with statistical measures based on two factors: fidelity and robustness. Experimental results demonstrate that the proposed scheme offers an ideal trade-off between the conflicting goals of low distortion and high robustness, and that it resists many kinds of attacks.
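To make the embedding idea concrete, here is a minimal sketch assuming an SVD-then-quantize approach on blocks of vertex coordinates; the block size, quantization step, and parity criterion are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def embed_watermark(coords, bits, q=0.01, block=8):
    """Hypothetical SVD-based embedding: coords is an (N, 2) array of
    vertex coordinates; bits is an iterable of watermark bits."""
    out = np.asarray(coords, dtype=float).copy()
    bit_iter = iter(bits)
    for start in range(0, len(out) - block + 1, block):
        chunk = out[start:start + block]              # one isolated part of the map
        U, s, Vt = np.linalg.svd(chunk, full_matrices=False)
        try:
            b = next(bit_iter)
        except StopIteration:
            break
        # Quantize the largest singular value so its parity encodes the bit
        # (quantization-index modulation; an assumed criterion, not the paper's).
        s[0] = (np.floor(s[0] / q) // 2 * 2 + b) * q + q / 2
        out[start:start + block] = U @ np.diag(s) @ Vt
    return out
```

Extraction would recompute the SVD per block and read the parity of `floor(s[0] / q)`; because the singular values change only by fractions of `q`, the geometric distortion stays small while the bit survives value-preserving transformations.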
High Power Amplifiers (HPAs), which are used in wireless communication, are distinctly characterized by nonlinear behavior. An HPA can be made to operate linearly by backing it off into its linear region, at the cost of power performance. Because the Orthogonal Frequency Division Multiplexing (OFDM) signal has large envelope fluctuations, a large back-off into the linear operating region is required, which leads to a severe loss in power efficiency; back-off is therefore not a practical solution. Digital predistorters based on the Simplicial Canonical Piecewise-Linear (SCPWL) model are widely employed to compensate for the nonlinear distortion introduced by the HPA component in OFDM technology. In this paper, the genetic algorithm…
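For reference, a minimal sketch of the SCPWL input-output map that such predistorters implement (Chua's canonical piecewise-linear form); the coefficients and breakpoints below are assumed values, not a fitted model:

```python
import numpy as np

def scpwl(x, c0, c1, c, beta):
    """Simplicial canonical piecewise-linear map:
    y = c0 + c1*x + sum_k c[k] * |x - beta[k]|.
    In a predistorter the coefficients are fitted so that
    scpwl(.) approximately inverts the HPA's AM/AM curve."""
    x = np.asarray(x, dtype=float)
    y = c0 + c1 * x
    for ck, bk in zip(c, beta):
        y = y + ck * np.abs(x - bk)
    return y

# Illustrative use: predistort input amplitudes before the HPA
# (breakpoints and coefficients are made-up values).
amps = np.linspace(0.0, 1.0, 5)
print(scpwl(amps, c0=0.0, c1=1.2, c=[-0.15, -0.25], beta=[0.5, 0.8]))
```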
Assessing the actual accuracy of laboratory devices prior to first use is very important for knowing the capabilities of such devices and employing them in multiple domains. Since the device manual provides accuracy values obtained under laboratory conditions, an actual evaluation process is necessary.
In this paper, the accuracy of the cameras of the Stonex X-300 laser scanner was evaluated; these cameras are attached to the device and play a supporting role in it. This is particularly important because the device manual does not contain sufficient information about them.
To determine the accuracy of these cameras when used in close range photogrammetry, the laser scanning (Stonex X-300) device…
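A generic sketch of how such camera accuracy is typically quantified in close range photogrammetry (not the paper's exact procedure): coordinates measured from the camera imagery are compared against higher-accuracy reference coordinates and summarized as a root-mean-square error per axis.

```python
import numpy as np

def rmse_per_axis(measured, reference):
    """Root-mean-square error per coordinate axis between device
    measurements and higher-accuracy reference points (both (N, 3))."""
    diff = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    return np.sqrt(np.mean(diff ** 2, axis=0))

# Illustrative check-point comparison (all values are made up).
measured  = np.array([[10.003, 5.001, 2.002], [12.001, 6.004, 2.998]])
reference = np.array([[10.000, 5.000, 2.000], [12.000, 6.000, 3.000]])
print(rmse_per_axis(measured, reference))   # RMSE in X, Y, Z
```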
The emergence of new communication sites, a hallmark of the development of communication technology, offering services such as Facebook, Twitter, Corapora, YouTube, MySpace, Friendster, Flickr, and LinkedIn, in addition to direct messaging services such as Viber, WhatsApp, Telegram, and ChatON, plays an important role in changing the infrastructure of Arab societies, which have been considered closed and resistant to change for a long period. The significance of this study stems from the importance of this subject, which is considered a new trend of the age in the field of media and of public response and acceptance, despite what is known about Arab society not accepting change; this occurrence is associated with terms…
Today the Genetic Algorithm (GA) surpasses the standard algorithms in solving complex nonlinear equations by drawing on the laws of nature. However, premature convergence is considered one of the most significant drawbacks of GA, manifesting as an increase in the number of iterations needed to reach the global optimum. To address this shortcoming, this paper proposes a new GA based on chaotic systems. In the GA's processes, the logistic map and the Linear Feedback Shift Register (LFSR) are used to generate chaotic values in place of the random values each step requires. The Chaos Genetic Algorithm (CGA) avoids local convergence more often than the traditional GA thanks to its diversity. The concept is to use chaotic sequences with the LFSR to generate…
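A minimal sketch of the chaotic-value idea (illustrative; the paper also mixes in an LFSR, which is omitted here): the logistic map x_{n+1} = r·x_n·(1 − x_n) with r = 4 yields a sequence in (0, 1) that can stand in for uniform random draws in GA operators such as mutation.

```python
def logistic_map(x0=0.31, r=4.0):
    """Generator of chaotic values in (0, 1) from the logistic map
    x_{n+1} = r*x*(1-x); x0 must avoid fixed points such as 0, 0.5, 0.75."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

chaos = logistic_map()

def chaotic_mutation(genome, rate=0.05):
    """Bit-flip mutation driven by chaotic values instead of random();
    a hypothetical GA operator, not the paper's exact one."""
    return [1 - g if next(chaos) < rate else g for g in genome]

print(chaotic_mutation([0, 1, 1, 0, 1]))
```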
In this paper, certain types of regularity of topological spaces are highlighted, falling within the study of generalizations of separation axioms. One of the important separation axioms is what is called regularity, and the spaces possessing this property are not few, the most important among them being the Euclidean spaces. Confining this important concept to ordinary topology therefore places it within a narrow framework, which necessitates the use of generalized open sets to obtain further good characteristics and to preserve the properties achieved in general topology. The reader will find throughout the paper that our generalization preserves most of these characteristics, the most important of which is the hereditary property. Two t…
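For context, the classical regularity axiom whose generalizations are studied (the standard definition, stated here for reference):

```latex
% A space (X, \tau) is regular iff points and closed sets can be
% separated by disjoint open sets:
\forall F \subseteq X \text{ closed},\ \forall x \in X \setminus F,\
\exists\, U, V \in \tau:\quad x \in U,\ \ F \subseteq V,\ \ U \cap V = \varnothing .
```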
This research aims to choose the appropriate probability distribution for the reliability analysis of an item, based on data collected on the operating and stoppage times of the case study.
The appropriate probability distribution is the one for which the data lie on, or close to, the fitted line of the probability plot and pass a goodness-of-fit test.
Minitab 17 software was used for this purpose after the collected data were arranged and entered into the program.
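The same selection step can be sketched outside Minitab; an illustrative equivalent using SciPy (the candidate distributions, data values, and Kolmogorov-Smirnov criterion are assumptions, not the paper's workflow):

```python
import numpy as np
from scipy import stats

# Hypothetical operating times between failures, in hours (made-up data).
times = np.array([120.0, 95.0, 210.0, 150.0, 80.0, 300.0, 175.0, 130.0])

# Fit candidate lifetime distributions and score each with the K-S test:
# the smallest statistic (largest p-value) indicates the best fit.
for dist in (stats.expon, stats.weibull_min, stats.lognorm):
    params = dist.fit(times)
    ks = stats.kstest(times, dist.name, args=params)
    print(f"{dist.name:12s} KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```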
In this paper we present the theoretical foundation of forward error analysis of numerical algorithms under:
• approximations in "built-in" functions;
• rounding errors in floating-point arithmetic operations;
• perturbations of the data.
The error analysis is based on the linearization method. The fundamental tools of forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations, together with the associated condition numbers, which constitute optimal bounds on the possible cumulative round-off errors. The condition numbers enable simple, general, quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination, and have proved to be a very effective means of both a priori…
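As a reminder of the linearization underlying such an analysis, the standard first-order forward error propagation for a scalar function reads:

```latex
% First-order (linearized) forward error propagation for y = f(x):
\frac{\Delta y}{y} \;\approx\;
\underbrace{\frac{x\, f'(x)}{f(x)}}_{\kappa_{\mathrm{rel}}(x)}
\cdot \frac{\Delta x}{x},
\qquad \kappa_{\mathrm{rel}} \text{ being the relative condition number.}
```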
With the freedom offered by the Deep Web, people have the opportunity to express themselves freely and discreetly, and sadly, this is one of the reasons why people carry out illicit activities there. In this work, a novel dataset of active Dark Web domains, known as crawler-DB, is presented. To build crawler-DB, the Onion Routing network (Tor) was sampled, and a web crawler capable of following links was built. The link addresses gathered by the crawler are then classified automatically into five classes. The algorithm built in this study demonstrated good performance, achieving an accuracy of 85%. A popular text representation method was used with the proposed crawler-DB, crossed with two different supervised…
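The abstract does not name the representation or the classifiers, so the following sketch assumes TF-IDF with a linear SVM as one plausible instantiation of supervised classification over crawled page text (all data are made up):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny stand-in corpus of crawled page texts and class labels (invented;
# the real crawler-DB assigns onion domains to five classes).
pages  = ["marketplace listing ...", "forum discussion ...",
          "hosting service ...", "news mirror ...", "directory of links ..."]
labels = ["market", "forum", "hosting", "news", "directory"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(pages, labels)
print(clf.predict(["another forum discussion ..."]))
```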
Due to the increasing amount of information on the World Wide Web (WWW), the question of how to extract new and useful knowledge from log files has gained great interest among researchers in data mining and knowledge discovery.
Web mining, a subset of data mining, is divided into three particular branches: web content mining, web structure mining, and web usage mining. This paper is concerned with the server log file, which belongs to the third category (web usage mining). This file is analyzed according to the suggested algorithm to extract the behavior of the user, where knowing the behavior comes from knowing the complete path taken by a specific user.
Extracting these types of knowledge requires many of the KDD…
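A minimal sketch of the path-extraction idea (illustrative; it does not reproduce the suggested algorithm): group server-log entries by visitor and order them by time to recover each user's complete navigation path.

```python
from collections import defaultdict

# Simplified log records: (visitor_id, timestamp, requested_page).
# A real server log would first be parsed, e.g. from the Common Log Format.
log = [
    ("10.0.0.1", 1, "/index"), ("10.0.0.2", 2, "/index"),
    ("10.0.0.1", 3, "/products"), ("10.0.0.1", 4, "/cart"),
    ("10.0.0.2", 5, "/about"),
]

paths = defaultdict(list)
for visitor, ts, page in sorted(log, key=lambda rec: rec[1]):
    paths[visitor].append(page)        # pages in visit order = user's path

for visitor, path in paths.items():
    print(visitor, " -> ".join(path))
```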
Steganography is a technique for concealing secret data within other everyday files of the same or a different type. Hiding data has become essential to digital information security. This work aims to design a stego method that can effectively hide a message inside the images of a video file. A video steganography model is proposed in which a model is trained to hide a video (or images) within another video using convolutional neural networks (CNNs). Using a CNN in this approach achieves two main goals of any steganographic method; the first is increased security (it is hard for a steganalysis program to observe and break the hidden data), achieved in this work because the weights and architecture are randomized. Thus,…
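A minimal sketch of the hide/reveal pattern common to CNN-based video steganography, assuming PyTorch and a toy architecture; the paper's actual layer counts, losses, and randomization scheme are not given in the abstract:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

# Hiding network: concatenated cover+secret frames (6 channels) -> stego frame.
hide = nn.Sequential(conv_block(6, 32), conv_block(32, 32),
                     nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
# Reveal network: stego frame -> reconstructed secret frame.
reveal = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                       nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

cover  = torch.rand(1, 3, 64, 64)   # one cover video frame (random data)
secret = torch.rand(1, 3, 64, 64)   # one secret frame to hide

stego     = hide(torch.cat([cover, secret], dim=1))
recovered = reveal(stego)

# Training would minimize both distortion terms jointly, e.g.:
loss = (nn.functional.mse_loss(stego, cover)
        + nn.functional.mse_loss(recovered, secret))
print(stego.shape, recovered.shape, float(loss))
```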