Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal processing operations. This paper introduces a scheme for hiding a signature image that can be as large as 25% of the host image data and can therefore be used both for digital watermarking and for image/data hiding. The proposed algorithm applies the discrete slantlet transform, an orthogonal discrete wavelet transform with two zero moments and improved time localization, to both the host and the signature images. A scaling factor in the frequency domain controls the quality of the watermarked images. Experimental results on signature image recovery after applying JPEG coding to the watermarked image are included.
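The embedding idea can be illustrated with a short, hedged sketch. PyWavelets does not implement the slantlet transform, so a standard 2-D DWT (`db2`, which also has two vanishing moments) stands in for it, the signature is assumed to be the same size as the host for simplicity, and `alpha` plays the role of the frequency-domain scaling factor. This is a generic additive transform-domain embedding, not the paper's exact algorithm.

```python
# Minimal sketch of transform-domain signature embedding; 'db2' stands in for the
# slantlet transform, and equal-sized host/signature images are assumed.
import numpy as np
import pywt

def embed(host, signature, alpha=0.1, wavelet="db2"):
    """Additively embed signature coefficients into host coefficients."""
    hA, (hH, hV, hD) = pywt.dwt2(host.astype(float), wavelet)
    sA, (sH, sV, sD) = pywt.dwt2(signature.astype(float), wavelet)
    # alpha trades imperceptibility (small alpha) against robustness (large alpha)
    coeffs = (hA + alpha * sA,
              (hH + alpha * sH, hV + alpha * sV, hD + alpha * sD))
    return pywt.idwt2(coeffs, wavelet)

def extract(watermarked, host, alpha=0.1, wavelet="db2"):
    """Recover an estimate of the signature, assuming the original host is available."""
    wA, (wH, wV, wD) = pywt.dwt2(watermarked.astype(float), wavelet)
    hA, (hH, hV, hD) = pywt.dwt2(host.astype(float), wavelet)
    coeffs = ((wA - hA) / alpha,
              ((wH - hH) / alpha, (wV - hV) / alpha, (wD - hD) / alpha))
    return pywt.idwt2(coeffs, wavelet)
```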
In real situations, observations and measurements are often not exact numbers but more or less imprecise, also called fuzzy. In this paper, we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimators. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and the Expectation-Maximization algorithms. In addition, the obtained estimates of the parameters and reliability function are compared numerically through a Monte-Carlo simulation study …
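A hedged sketch of the numerical maximum likelihood step is given below for crisp (non-fuzzy) data under the common parameterization f(x; β, θ) = βθx^(−β−1)·exp(−θx^(−β)), with reliability R(t) = 1 − exp(−θt^(−β)). The paper's fuzzy-data likelihood and the EM variant are not reproduced; this only shows the kind of iterative numerical optimization that the Newton-Raphson approach performs.

```python
# Hedged sketch: crisp-data MLE for the inverse Weibull,
# f(x; beta, theta) = beta * theta * x**-(beta + 1) * exp(-theta * x**-beta).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x):
    beta, theta = params
    if beta <= 0 or theta <= 0:
        return np.inf
    return -(len(x) * (np.log(beta) + np.log(theta))
             - (beta + 1) * np.sum(np.log(x))
             - theta * np.sum(x ** -beta))

def fit_inverse_weibull(x):
    """Numerically maximize the likelihood (illustrative starting point (1, 1))."""
    res = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]),
                   args=(np.asarray(x, dtype=float),), method="Nelder-Mead")
    return res.x  # (beta_hat, theta_hat)

def reliability(t, beta, theta):
    """R(t) = 1 - exp(-theta * t**-beta) under this parameterization."""
    return 1.0 - np.exp(-theta * t ** -beta)
```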
In this paper, a method to determine whether an image is forged (spliced) or not is presented. The proposed method is based on a classification model that determines the authenticity of a tested image. Image splicing introduces many sharp edges (high frequencies) and discontinuities into the spliced image. Capturing these high frequencies in the wavelet domain rather than in the spatial domain is investigated in this paper. The correlation between the high-frequency sub-band coefficients of the Discrete Wavelet Transform (DWT) is also described using a co-occurrence matrix, which serves as the input feature vector to a classifier. Best accuracies of 92.79% and 94.56% were achieved on the CASIA v1.0 and CASIA v2.0 datasets, respectively. This pe…
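A hedged sketch of the described pipeline (DWT high-frequency sub-bands → co-occurrence features → classifier) is shown below. The quantization levels, GLCM offsets, and SVM settings are illustrative choices, not the paper's exact parameters.

```python
# Sketch: extract co-occurrence features from DWT high-frequency sub-bands.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def splice_features(gray_image, wavelet="haar", levels=32):
    _, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), wavelet)
    feats = []
    for band in (cH, cV, cD):
        # quantize coefficients to integer levels so a co-occurrence matrix can be built
        q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
        glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                            angles=[0, np.pi / 2], levels=levels, normed=True)
        for prop in ("contrast", "correlation", "energy", "homogeneity"):
            feats.extend(graycoprops(glcm, prop).ravel())
    return np.array(feats)

# usage sketch:
# X = np.stack([splice_features(img) for img in images])
# clf = SVC(kernel="rbf").fit(X, labels)   # labels: 0 = authentic, 1 = spliced
```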
The Elzaki Transform Adomian Decomposition Method (ETADM), an elegant combination of the Elzaki transform and the Adomian decomposition method, has been employed in this work to solve non-linear Riccati matrix differential equations. Solutions are presented to demonstrate the relevance of the current approach, and the results of the proposed strategy are displayed and evaluated with the use of figures. It is demonstrated that the suggested approach is effective, dependable, and simple to apply to a range of related scientific and technical problems.
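A hedged outline of the ETADM steps for a scalar Riccati equation is sketched below; the matrix case follows the same pattern with matrix-valued coefficients. The Elzaki transform definition and its derivative rule are the standard ones, while the equation form and recursion notation are illustrative and not taken from the paper.

```latex
% Sketch: ETADM for u'(t) = a(t) + b(t)u + c(t)u^2, with initial value u(0).
\begin{align*}
E[f(t)](v) &= v \int_0^{\infty} f(t)\, e^{-t/v}\, dt, \qquad
E[f'(t)](v) = \frac{1}{v}\, E[f(t)](v) - v f(0).\\[4pt]
\intertext{Applying $E$ to the Riccati equation and solving for $E[u]$ gives}
E[u](v) &= v^2 u(0) + v\, E\!\left[a(t) + b(t)u + c(t)u^2\right](v).\\[4pt]
\intertext{With the Adomian decomposition $u=\sum_{n\ge 0} u_n$ and Adomian
polynomials $A_n$ for the nonlinearity $c(t)u^2$, the recursion reads}
u_0(t) &= E^{-1}\!\left[v^2 u(0) + v\, E[a(t)]\right], \qquad
u_{n+1}(t) = E^{-1}\!\left[v\, E\!\left[b(t)\,u_n + A_n\right]\right].
\end{align*}
```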
A great number of image processing systems are being used and developed on a daily basis. These systems rely on basic operations such as detecting regions of interest, describing their properties, and matching those regions. These operations play a significant role in the decision making required by subsequent processing steps, depending on the assigned task. To accomplish these tasks, various algorithms have been introduced over the years, one of the most popular being the Scale Invariant Feature Transform (SIFT). The efficiency of this algorithm lies in its performance in detection and property description, and that is due to the fact that …
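The detect/describe/match workflow can be sketched with OpenCV's SIFT implementation. The function names (`cv2.SIFT_create`, `detectAndCompute`, `BFMatcher`) are OpenCV's; the ratio-test threshold of 0.75 is a common illustrative choice, not a value taken from the abstract.

```python
# Minimal SIFT detect/describe/match sketch using OpenCV.
import cv2

def match_sift(img1_path, img2_path, ratio=0.75):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
    kp2, des2 = sift.detectAndCompute(img2, None)

    # brute-force matching with Lowe's ratio test to discard ambiguous matches
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```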
Today, the use of iris recognition is expanding globally as the most accurate and reliable biometric feature in terms of uniqueness and robustness. Reducing or compressing the large databases of iris images has therefore become an urgent requirement. In general, image compression is the process of removing insignificant or redundant information from the image details, which implicitly makes efficient use of the redundancy embedded within the image itself. In addition, it may exploit the limitations of human vision or perception to discard imperceptible information.
This paper deals with reducing the size of the image, namely reducing the number of bits required to represent the …
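The "number of bits required" idea can be quantified with a small hedged sketch using plain lossy JPEG re-encoding in OpenCV. The truncated abstract does not specify the paper's actual compression scheme, so this only illustrates how a bit budget and compression ratio would be measured; the quality value is arbitrary.

```python
# Sketch: measure raw vs. compressed bit counts for a grayscale iris image.
import cv2

def jpeg_size_bits(image_path, quality=50):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
    assert ok
    raw_bits = img.size * 8              # 8 bits per pixel before compression
    compressed_bits = buf.size * 8       # encoded size in bits
    return raw_bits, compressed_bits, raw_bits / compressed_bits  # compression ratio
```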
Twitter data analysis is an emerging field of research that utilizes data collected from Twitter to address many issues such as disaster response, sentiment analysis, and demographic studies. The success of the analysis relies on collecting accurate data that are representative of the studied group or phenomenon. Various Twitter analysis applications rely on collecting the locations of the users sending the tweets, but this information is not always available. There have been several attempts at estimating location-based aspects of a tweet; however, little work has investigated data collection methods that focus on location. In this paper, we investigate the two methods for obtaining location-based dat…
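Once tweets have been collected, the two kinds of location information can be separated as sketched below. This assumes standard v1.1-style tweet JSON objects, where `coordinates` holds an explicit GeoJSON point (when the user enabled geotagging) and `user` → `location` is the free-text profile location; no particular API client, endpoint, or query is implied.

```python
# Hedged sketch: split collected tweets by the source of their location information.
def split_by_location_source(tweets):
    geotagged, profile_only, no_location = [], [], []
    for tw in tweets:
        if tw.get("coordinates"):                      # precise GeoJSON point
            geotagged.append(tw)
        elif (tw.get("user") or {}).get("location"):   # free-text, possibly unreliable
            profile_only.append(tw)
        else:
            no_location.append(tw)
    return geotagged, profile_only, no_location
```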