Background: Image processing of medical images is a major method for increasing the reliability of cancer diagnosis.
Methods: The proposed system proceeded in two stages. First, an enhancement stage was performed using a median filter to reduce the noise and artifacts present in a CT image of a human lung with cancer. Second, the k-means clustering algorithm was applied to the enhanced image.
Results: The image produced by the k-means algorithm was compared with the image produced by the fuzzy c-means (FCM) algorithm.
Conclusion: We found that the time required to run the k-means algorithm is less than that of the FCM algorithm. The MATLAB package (version 7.3) was used to write the programming code for this work.
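As an illustration of the two-stage pipeline described above, the following is a minimal sketch in Python; the original work was written in MATLAB 7.3, and the file name, filter size, and cluster count here are assumptions rather than the authors' settings.

```python
# Minimal sketch of the two-stage pipeline, assuming a PNG slice and k = 3 clusters
# (the original implementation was written in MATLAB 7.3, not Python).
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans
from skimage import io

image = io.imread("lung_ct_slice.png", as_gray=True)   # hypothetical input CT slice

# Stage 1: enhancement with a 3x3 median filter to suppress noise and artifacts
denoised = median_filter(image, size=3)

# Stage 2: k-means clustering on the pixel intensities
pixels = denoised.reshape(-1, 1)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
segmented = kmeans.labels_.reshape(denoised.shape)       # cluster label per pixel
```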
Image enhancement is one of the most significant topics in digital image processing. The basic problem in enhancement is how to remove noise and improve the details of a digital image. In the current research, a method for digital image de-noising and detail sharpening is proposed. The proposed approach uses a fuzzy logic technique to process each pixel of the entire image and then decides whether it is noisy or needs further processing for highlighting. This decision is made by examining the degree of association with neighbouring pixels based on a fuzzy algorithm. The proposed de-noising approach was evaluated on standard images after corrupting them with impulse …
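The abstract does not give the membership functions, so the following is only an assumed sketch of the neighbourhood-based fuzzy decision it describes: each pixel's deviation from its local median is graded, and the grade controls how strongly the pixel is corrected.

```python
# Assumed sketch of a neighbourhood-based fuzzy noise decision; the thresholds
# `low` and `high` and the correction rule are illustrative, not the paper's.
import numpy as np
from scipy.ndimage import median_filter

def fuzzy_denoise(image, low=10.0, high=60.0):
    """Grade (0..1) how strongly each pixel deviates from its 3x3 median and
    blend the pixel with the median value according to that grade."""
    img = image.astype(float)
    med = median_filter(img, size=3)
    deviation = np.abs(img - med)
    # simple ramp membership: 0 below `low`, 1 above `high`, linear in between
    degree = np.clip((deviation - low) / (high - low), 0.0, 1.0)
    return (1.0 - degree) * img + degree * med
```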
Various document types play an influential role in many of our daily activities today; hence, preserving their integrity is an important matter. Such documents take various forms, including texts, videos, sounds, and images. The authentication of the latter type is our concern in this paper. Images can be handled spatially, by modifying their pixel values directly, or spectrally, by adjusting selected transform coefficients. Owing to the flexibility of the spectral (frequency) domain in handling data, its coefficients are utilized for watermark embedding. The integer wavelet transform (IWT), a wavelet transform based on the lifting scheme, …
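The abstract breaks off before the embedding details, so the sketch below only illustrates the lifting idea it refers to: a one-level integer Haar transform (the S-transform), which maps integers to integers and is exactly invertible, which is what makes coefficient-domain embedding attractive. The paper's specific filter and embedding rule are not reproduced here.

```python
# Illustration of an integer wavelet transform via lifting (integer Haar / S-transform);
# assumes an even-length 1-D integer signal. This is not the paper's embedding scheme.
import numpy as np

def iwt_haar_forward(x):
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    d = odd - even               # predict step: detail coefficients
    s = even + (d >> 1)          # update step: approximation coefficients
    return s, d

def iwt_haar_inverse(s, d):
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(s.size + d.size, dtype=int)
    x[0::2], x[1::2] = even, odd
    return x
```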
Recently, digital communication has become a critical necessity, and the Internet has become the most widely used and most efficient medium for it. At the same time, data transmitted through the Internet are becoming more vulnerable. Therefore, maintaining the secrecy of data is very important, especially if the data are personal or confidential. Steganography provides a reliable method for solving such problems. It is an effective technique for secret communication in the digital world, where data sharing and transfer are increasing through the Internet, e-mail, and other channels. The main challenges of steganography methods are the undetectability and imperceptibility of con…
In this paper, an algorithm is introduced through which more data can be embedded than with regular spatial-domain methods. The secret data are compressed using Huffman coding, and the compressed data are then embedded using the Laplacian sharpening method. Laplacian filters are used to determine effective hiding places; based on a threshold value, the locations with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding the data in the places with the highest edge values, where changes are less noticeable.
The perform…
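The following sketch illustrates the described idea only; selecting the strongest responses directly (rather than applying the paper's threshold), the LSB replacement rule, and the payload handling are assumptions, not the authors' implementation.

```python
# Assumed illustration (not the authors' implementation): Huffman-compress a message,
# then hide the resulting bits in the least-significant bits of the pixels with the
# strongest Laplacian (edge) response.
import heapq
from collections import Counter
import numpy as np
from scipy.ndimage import laplace

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table {byte: bitstring}."""
    heap = [[freq, idx, [sym, ""]] for idx, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    idx = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], idx] + lo[2:] + hi[2:])
        idx += 1
    return {sym: code for sym, code in heap[0][2:]}

def embed(cover: np.ndarray, secret: bytes) -> np.ndarray:
    codes = huffman_codes(secret)
    bits = "".join(codes[b] for b in secret)
    edges = np.abs(laplace(cover.astype(float)))             # Laplacian edge strength
    order = np.argsort(edges, axis=None)[::-1][:len(bits)]   # strongest responses first
    stego = cover.copy().astype(np.uint8)
    flat = stego.reshape(-1)
    for bit, pos in zip(bits, order):
        flat[pos] = (flat[pos] & 0xFE) | int(bit)            # replace the LSB
    return stego
```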
With the rapid development of computer and network technologies, the security of information on the Internet is increasingly compromised, and many threats may affect the integrity of such information. Many researchers have focused their work on providing solutions to this threat. Machine learning and data mining are widely used in anomaly-detection schemes to decide whether or not a malicious activity is taking place on a network. In this paper, a hierarchical classification scheme for an anomaly-based intrusion detection system is proposed. Two levels of feature selection and classification are used. In the first level, a global feature vector is selected for detecting the basic attack categories (DoS, U2R, R2L, and Probe). In the second level, four local feature vect…
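Since the abstract is cut off before the feature lists and classifiers are named, the sketch below only illustrates the two-level structure, with placeholder classifiers and feature-index sets.

```python
# Assumed sketch of a two-level hierarchical intrusion detector (the paper's actual
# feature subsets and classifiers are not given here; all names are placeholders).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class HierarchicalIDS:
    """Level 1: one classifier on a global feature vector decides the broad category
    (normal, DoS, U2R, R2L, Probe). Level 2: one classifier per attack category,
    each on its own local feature subset, refines the decision."""

    def __init__(self, global_features, local_features):
        self.global_features = global_features          # column indices used by level 1
        self.local_features = local_features            # {category: column indices} for level 2
        self.level1 = DecisionTreeClassifier()
        self.level2 = {c: DecisionTreeClassifier() for c in local_features}

    def fit(self, X, y_category, y_attack):
        self.level1.fit(X[:, self.global_features], y_category)
        for c, cols in self.local_features.items():
            mask = y_category == c
            if mask.any():
                self.level2[c].fit(X[mask][:, cols], y_attack[mask])
        return self

    def predict(self, X):
        cats = self.level1.predict(X[:, self.global_features])
        out = cats.astype(object)
        for c, cols in self.local_features.items():
            mask = cats == c
            if mask.any():
                out[mask] = self.level2[c].predict(X[mask][:, cols])
        return out
```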
Clustering algorithms are used widely in science, technology, and many other fields. This paper presents a new hybrid algorithm, called KDBSCAN, that improves the k-means algorithm and solves two of its problems. The first problem is the number of clusters, which normally must be entered by the user; it is solved by using the DBSCAN algorithm to estimate the number of clusters. The second problem is the random choice of initial centroids, which is dealt with by choosing the centroids in a deterministic way and removing the random selection, for better results. This work used the DUC 2002 dataset to obtain the results of the KDBSCAN algorithm, which can be applied in many fields such as electronic libraries, …
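A minimal sketch of the hybrid idea, assuming scikit-learn's DBSCAN and KMeans and placeholder eps/min_samples values; this is not the published KDBSCAN code.

```python
# Assumed sketch: DBSCAN first estimates the number of clusters and supplies their
# centroids as deterministic initial centres for k-means.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def kdbscan(X, eps=0.5, min_samples=5):
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
    labels = db.labels_
    cluster_ids = [c for c in np.unique(labels) if c != -1]   # -1 marks noise points
    if not cluster_ids:
        raise ValueError("DBSCAN found no clusters; adjust eps/min_samples")
    # Centroid of each DBSCAN cluster becomes a fixed initial centre for k-means.
    centres = np.array([X[labels == c].mean(axis=0) for c in cluster_ids])
    km = KMeans(n_clusters=len(centres), init=centres, n_init=1).fit(X)
    return km.labels_
```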
In recent years, images have been used widely by online social network providers and numerous organizations such as governments, police departments, colleges, universities, and private companies, and they are held in vast databases. Thus, efficient storage of such images is advantageous, and their compression is an appealing application. Image compression represents the significant image information compactly in a smaller number of bytes, with the insignificant image information (redundancy) removed; for this reason, image compression plays an important role in data transfer and storage, especially given the data explosion that is increasing significantly. It is a challenging task, since there are highly complex unknown correlat…
Iris recognition occupies an important rank among biometric approaches as a result of its accuracy and efficiency. The aim of this paper is to suggest a developed system for iris identification based on the fusion of scale-invariant feature transform (SIFT) and local binary pattern features. Several steps are applied. First, the input image, whatever its type, is converted to grayscale. Second, the iris is localized using the circular Hough transform. Third, the iris region is normalized with Daugman's rubber-sheet model, which maps each polar coordinate pair to its Cartesian sampling point, followed by histogram equalization to enhance the iris region. Finally, the features are extracted by utilizing the scale-invariant feature …
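The sketch below follows the described steps with OpenCV and scikit-image, but the Hough parameters and LBP settings are placeholders, and it crops the detected circle instead of applying the full rubber-sheet normalization.

```python
# Assumed sketch of the pipeline: grayscale -> circular Hough localization ->
# histogram equalization -> SIFT + LBP feature extraction. Parameters are placeholders.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def iris_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                    # step 1: grayscale

    # Step 2: localize the iris with a circular Hough transform
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=20, maxRadius=120)
    x, y, r = np.round(circles[0, 0]).astype(int)
    iris = gray[max(y - r, 0):y + r, max(x - r, 0):x + r]

    # Step 3: enhance the (here: cropped, not rubber-sheet-normalized) iris region
    iris = cv2.equalizeHist(iris)

    # Step 4: SIFT keypoint descriptors fused with an LBP histogram
    sift = cv2.SIFT_create()
    _, sift_desc = sift.detectAndCompute(iris, None)
    lbp = local_binary_pattern(iris, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return sift_desc, lbp_hist
```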