Medical image segmentation has been one of the most actively studied fields of the past few decades. With the development of modern imaging modalities such as magnetic resonance imaging (MRI) and computed tomography (CT), physicians and technicians now have to process a growing number of increasingly large medical images, so efficient and accurate computational segmentation algorithms are needed to extract the desired information from these large data sets. Sophisticated segmentation algorithms can also help physicians better delineate the anatomical structures present in the input images, improve the accuracy of medical diagnosis, and support better treatment planning; many of the proposed algorithms perform well in specific medical imaging applications. The aim of this paper is to transform a medical image into a representation that is more meaningful and easier to analyze, highlighting features that help physicians diagnose disease. The paper reviews selected medical images and presents a segmentation method, suitable for processing medical images, based on a modification of the traditional interactive thresholding technique. The method gave good results, which were evaluated using the peak signal-to-noise ratio (PSNR) quality measure.
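The interactive-threshold idea and the PSNR measure mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the iterative mean-split rule (ISODATA-style threshold selection) and the function names are assumptions made for the example.

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    """Iterative (ISODATA-style) threshold selection: split the
    intensities at T, then reset T to the mean of the two class
    means, until T stops moving."""
    t = img.mean()
    while True:
        lo = img[img <= t]
        hi = img[img > t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

On a synthetic two-class image (half at intensity 50, half at 200) the iteration settles on a threshold between the two class means, and identical images give an infinite PSNR.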
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images composed of various objects and text, which makes automatic analysis complicated. Optical character recognition (OCR) is one of the image processing techniques used to identify text automatically. Existing image processing techniques must manage many parameters in order to recognize the text in such pictures reliably, and segmentation is regarded as one of the most essential of these parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images were first filtered using the Wiener filter, and then the active contour algorithm could b
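The filtering stage described above can be sketched with SciPy's built-in Wiener filter. This is a hedged sketch under assumptions: the function name and window size are illustrative, and the subsequent active-contour segmentation stage is omitted.

```python
import numpy as np
from scipy.signal import wiener

def preprocess_for_ocr(img, window=5):
    """Wiener-filter a noisy grayscale image as the first stage of
    the pipeline; the filtered result would then be handed to an
    active-contour (snake) stage for text-region segmentation."""
    return wiener(img.astype(float), mysize=window)
```

On an image corrupted with additive Gaussian noise, the filtered output has visibly lower noise variance than the input.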
A new technique for embedding image data into another BMP image is presented. The image data to be embedded is referred to as the signature image, while the image into which the signature image is embedded is referred to as the host image. The host and signature images are first partitioned into 8x8 blocks and discrete cosine transformed (DCT); only the significant coefficients are retained, and the retained coefficients are then inserted into the transformed block in forward and backward zigzag scan directions. The result is then inversely transformed and presented as a BMP image file. The peak signal-to-noise ratio (PSNR) is used to evaluate the objective visual quality of the host image compared with the original image.
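The coefficient-retention step can be illustrated on a single 8x8 block. This is a sketch of keeping only the significant DCT coefficients, under assumptions: `retain_significant` is an illustrative helper name, "significant" is taken to mean largest in magnitude, and the zigzag insertion of signature coefficients into the host block is not shown.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT of a block (separable, orthonormal)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coef):
    """Inverse 2-D DCT (type-III, orthonormal)."""
    return idct(idct(coef, axis=0, norm='ortho'), axis=1, norm='ortho')

def retain_significant(block, k):
    """Keep the k largest-magnitude DCT coefficients of an 8x8
    block, zero the rest, and reconstruct (ties at the threshold
    are all kept)."""
    c = dct2(block)
    thresh = np.sort(np.abs(c), axis=None)[-k]
    c[np.abs(c) < thresh] = 0.0
    return idct2(c)
```

A constant block survives with only its DC coefficient (k=1), and keeping all 64 coefficients reproduces any block exactly, which confirms the transform pair is lossless.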
Heat transfer around a flat-plate fin integrated with a piezoelectric actuator, used as an oscillating fin in laminar flow, has been studied experimentally using a thermal imaging camera. The study covers fixed and oscillating single and triple fins. Different substrate-fin models have been tested, using fins of 35 mm and 50 mm height, two sets of triple fins with 3 mm and 6 mm spacing, and three frequencies applied to the piezoelectric actuator (5, 30, and 50 Hz). All tests were carried out at air velocities of 0.5 m/s and 3 m/s in an open-type subsonic wind tunnel to evaluate the temperature distribution and the local and average Nusselt number (Nu) along the fin. It is observed that the heat transfer enhancement with oscillation is significant compared to that without oscillation.
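For context, the average Nusselt number reported in such studies is typically obtained by integrating the local values over the fin height. The sketch below is generic, not the paper's own data reduction; the symbols (h for the local heat-transfer coefficient, k for air conductivity, fin height as the characteristic length) are assumptions for illustration.

```python
import numpy as np

def local_nusselt(h_local, fin_height, k_air):
    """Local Nusselt number Nu_x = h_x * L / k, taking the fin
    height L as the characteristic length."""
    return np.asarray(h_local, float) * fin_height / k_air

def average_nusselt(x, nu_local):
    """Average Nu along the fin: trapezoidal integration of the
    local values over positions x, divided by the span."""
    x = np.asarray(x, float)
    nu = np.asarray(nu_local, float)
    area = np.sum(0.5 * (nu[1:] + nu[:-1]) * np.diff(x))
    return area / (x[-1] - x[0])
```

For a uniform local profile the average reduces to the local value, a quick sanity check on the integration.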
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five enhancement techniques: normalization, histogram equalization, binarization, skeletonization, and fusion. The normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. The histogram equalization technique then increased the contrast of the images. Furthermore, the binarization and skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one
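The first three stages of the pipeline can be sketched with NumPy. This is a simplified illustration under assumptions: the target mean/variance, function names, and global-mean binarization rule are placeholders, and the skeletonization and fusion stages (which need morphological machinery) are omitted.

```python
import numpy as np

def normalize(img, target_mean=100.0, target_var=100.0):
    """Mean/variance normalization: map the image so it has the
    prescribed mean and variance (assumes non-constant input)."""
    m, v = img.mean(), img.var()
    return target_mean + np.sqrt(target_var / v) * (img - m)

def hist_equalize(img):
    """Histogram equalization of an 8-bit image via the normalized
    cumulative histogram."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[img] * 255).astype(np.uint8)

def binarize(img):
    """Global-mean binarization: valleys (above mean) map to 1,
    ridges (at or below mean) map to 0."""
    return (img > img.mean()).astype(np.uint8)
```

Normalization is an affine map, so the target mean and variance are hit exactly; equalization stretches any non-constant image to the full 0-255 range.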
A frequently used approach to denoising is shrinkage of the coefficients of the noisy signal's representation in a transform domain. This paper proposes an algorithm based on a hybrid transform (a stationary wavelet transform followed by a slantlet transform): the slantlet transform is applied to the approximation subband of the stationary wavelet transform. The BlockShrink thresholding technique is then applied to the hybrid-transform coefficients; this technique selects the optimal block size and threshold for every wavelet subband by minimizing Stein's unbiased risk estimate (SURE). The proposed algorithm was implemented in MATLAB R2010a with natural images contaminated by white Gaussian noise. Numerical results show that our algorithm co
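A minimal stand-in for the shrinkage idea is shown below, with stated substitutions: a single-level Haar transform replaces the stationary-wavelet/slantlet hybrid, and simple soft thresholding with the universal threshold replaces the BlockShrink/SURE rule (both of which need dedicated implementations).

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar transform:
    returns (approximation, detail) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_1d(a, d):
    """Exact inverse of haar_1d."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, sigma):
    """Soft-threshold the detail subband with the universal
    threshold sigma*sqrt(2 ln n), then reconstruct."""
    a, d = haar_1d(x)
    t = sigma * np.sqrt(2.0 * np.log(x.size))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    return inv_haar_1d(a, d)
```

Because the detail coefficients of a smooth signal are almost pure noise, thresholding them cuts the mean squared error roughly in half in this one-level sketch.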
The television image dominates cognitive and artistic motivations: it formulates ideas and visions alongside its documentary capacity. It is the main element of television work, since a television story is narrated in pictures. Attention to image construction is therefore a major center of gravity in the structure of the work as a whole, for the image carries all the aesthetic and expressive values, from direct news and information to the hints that stimulate the recipient's imagination, evoking mental images that supplement the visual ones and deepen the meanings.
All visual arts carry elements and components arranged in a particular pattern to give special meanings and specific connotations. However,
Semantic segmentation and scene understanding are demanding tasks not only in computer vision but also in the earth sciences. Semantic segmentation decomposes compound architectures into single elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information for every object. It is a method for labeling and clustering point clouds automatically. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data format, and working with 3D data raises many challenges, such as the small number, low resolution, and limited accuracy of available three-dimensional datasets. Deep learning now is the po