Kidney tumors come in several types with distinct characteristics and remain a challenging problem in biomedicine. Detecting and classifying a tumor at an early stage is critical so that appropriate treatment can be planned, and accurate estimation of kidney tumor volume is essential for clinical diagnosis and therapeutic decisions in renal disease. The main objective of this research is to use Computer-Aided Diagnosis (CAD) algorithms to support early detection of kidney tumors, addressing the difficulty of accurate tumor volume estimation caused by extensive variation in kidney shape, size, and orientation across subjects.
In this paper, we implement an automated segmentation method for gray-level CT images. Segmentation is performed with Fuzzy C-Means (FCM) clustering to detect and delineate the kidney region in CT images. The proposed method starts with pre-processing of the kidney CT image to separate the kidney from the abdominal CT, enhance its contrast, and remove undesired noise, making the image suitable for further processing. The segmented CT images are then used to extract the tumor region from the kidney image. Defining the tumor volume (size) is not straightforward, because the 2D tumor shapes in the CT slices are irregular. To overcome the problem of calculating the area of the convex hull of the tumor in each slice, we used the frustum model on the fragmented data.
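The frustum model approximates the volume between two consecutive tumor cross-sections as a conical frustum and sums these volumes over all slice pairs. A minimal sketch of that idea (function names and the per-slice area inputs are illustrative, not taken from the paper):

```python
import math

def frustum_volume(area_top, area_bottom, slice_thickness):
    """Volume of the frustum between two parallel cross-sections:
    V = h/3 * (A1 + A2 + sqrt(A1 * A2))."""
    return slice_thickness / 3.0 * (
        area_top + area_bottom + math.sqrt(area_top * area_bottom))

def tumor_volume(slice_areas, slice_thickness):
    """Sum frustum volumes over consecutive CT slices
    (areas in mm^2, thickness in mm -> volume in mm^3)."""
    return sum(frustum_volume(a1, a2, slice_thickness)
               for a1, a2 in zip(slice_areas, slice_areas[1:]))
```

For two identical 100 mm² sections 2 mm apart this reduces to a cylinder of volume 200 mm³, which is a quick sanity check on the formula.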
In image processing and computer vision, it is important to represent an image by its information. That information comes from image features extracted with feature detection/extraction techniques and described with feature descriptors. In computer vision, features denote informative data. The human eye extracts information from a raw image effortlessly, but a computer cannot recognize image content directly; this is why various feature extraction techniques have been proposed and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
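As a concrete illustration of one of the simplest global features, the sketch below computes a normalized intensity histogram of an 8-bit grayscale image; this is a generic textbook example, not a technique claimed by the paper:

```python
def intensity_histogram(gray, bins=8):
    """Global feature descriptor: normalized intensity histogram of an
    8-bit grayscale image supplied as a 2-D list of pixel values."""
    hist = [0] * bins
    width = 256 // bins          # intensity range covered by each bin
    n = 0
    for row in gray:
        for p in row:
            hist[min(p // width, bins - 1)] += 1
            n += 1
    return [h / n for h in hist]  # normalize so the bins sum to 1
```

Because the descriptor is normalized, images of different sizes can be compared directly.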
Image compression is a serious issue in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself and may also exploit limitations of human vision or perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it consists of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the second stage incorporates near-lossless compression.
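The model-plus-residual idea behind predictive coding can be sketched with a first-order predictor: each pixel is predicted from its left neighbor and only the prediction error (residual) is kept, which concentrates the data near zero and makes it cheaper to code. This toy lossless sketch over one image row illustrates the concept, not the paper's actual polynomial model:

```python
def predict_residuals(row):
    """First-order predictive model: predict each pixel from its left
    neighbor and store the residual (prediction error)."""
    residuals = [row[0]]                 # first pixel stored as-is
    for i in range(1, len(row)):
        residuals.append(row[i] - row[i - 1])
    return residuals

def reconstruct(residuals):
    """Invert the predictor: accumulate residuals back into pixels."""
    row = [residuals[0]]
    for r in residuals[1:]:
        row.append(row[-1] + r)
    return row
```

A lossy variant would quantize the residuals before coding, trading exact reconstruction for a smaller bit stream.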
Steganography is the art of secret communication. Its purpose is to hide the presence of information, using, for example, images as covers. The frequency domain is well suited for embedding in images, since hiding information in frequency-domain coefficients is robust to many attacks. This paper proposes hiding a secret image whose size equals a quarter of the cover image. The Set Partitioning in Hierarchical Trees (SPIHT) codec is used to code the secret image to achieve security. The proposed method applies the Discrete Multiwavelet Transform (DMWT) to the cover image, and the coded bit stream of the secret image is embedded in the high-frequency subbands of the transformed cover. Scaling factors α and β in the frequency domain control the quality of the stego image.
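The role of a scaling factor in coefficient embedding can be shown with a toy additive scheme: each secret bit nudges one high-frequency coefficient up or down by a strength parameter. This non-blind sketch (extraction needs the original coefficients) is far simpler than the paper's DMWT/SPIHT pipeline and is only meant to illustrate the idea; the names `alpha`, `embed_bits`, and `extract_bits` are illustrative:

```python
def embed_bits(coeffs, bits, alpha=4.0):
    """Embed secret bits into transform coefficients: each bit shifts
    a coefficient by +alpha (bit 1) or -alpha (bit 0)."""
    stego = list(coeffs)
    for i, b in enumerate(bits):
        stego[i] = coeffs[i] + (alpha if b else -alpha)
    return stego

def extract_bits(stego, coeffs, n):
    """Recover bits by comparing stego coefficients to the originals."""
    return [1 if stego[i] > coeffs[i] else 0 for i in range(n)]
```

A larger `alpha` makes extraction more robust to attacks but degrades the stego image more, which is exactly the quality trade-off the scaling factors control.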
Enhanced-quality image fusion is proposed using new algorithms for auto-focus image fusion. The first algorithm combines two images based on the standard deviation. The second algorithm uses the contrast at edge points and a correlation method as the criterion for the quality of the resulting image. This algorithm considers three blocks of different sizes in a homogeneous region and moves each block 10 pixels within that region; it examines the statistical properties of the block and automatically decides the next step. The resulting combined image has better contrast.
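The standard-deviation criterion can be sketched block-wise: for each pair of corresponding blocks from the two source images, keep the one with the higher standard deviation, since the in-focus block has more local variation. A minimal sketch under that assumption (block handling here is illustrative, not the paper's exact procedure):

```python
import math

def std_dev(block):
    """Population standard deviation of a 2-D block of pixel values."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return math.sqrt(sum((p - mean) ** 2 for p in flat) / len(flat))

def fuse_blocks(block_a, block_b):
    """Activity-based fusion rule: keep the sharper block, i.e. the one
    with the higher standard deviation."""
    return block_a if std_dev(block_a) >= std_dev(block_b) else block_b
```

Tiling both images, applying this rule per tile, and stitching the winners back together yields a fused all-in-focus result.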
In the present work, pattern recognition is carried out using the contrast and relative variance of clouds. K-mean clustering is then applied to classify the cloud type, and texture analysis is adopted to extract textural features for use in the cloud classification process. The test image used in the classification is the Meteosat-7 image of the D3 region. The K-mean method is adopted as an unsupervised classification; it depends on the initially chosen cluster seeds, and since these seeds are chosen randomly, the user supplies a set of means, or cluster centers, in the n-dimensional space. K-mean clustering was applied to two bands, the IR2 band and the water vapour band.
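The dependence of K-mean clustering on user-supplied seed means can be sketched on scalar brightness values: each value is assigned to the nearest mean, the means are recomputed, and the two steps repeat. This is a generic illustration of the algorithm, not the study's implementation:

```python
def kmeans_1d(values, seeds, iters=10):
    """Minimal K-mean clustering on scalar values, starting from
    user-supplied seed means (cluster centers)."""
    means = list(seeds)
    for _ in range(iters):
        clusters = [[] for _ in means]
        for v in values:
            # assign each value to the nearest current mean
            idx = min(range(len(means)), key=lambda k: abs(v - means[k]))
            clusters[idx].append(v)
        # recompute each mean; keep the old mean if a cluster emptied
        means = [sum(c) / len(c) if c else means[i]
                 for i, c in enumerate(clusters)]
    return means
```

Different seeds can converge to different partitions, which is why the choice of initial cluster centers matters for the resulting cloud classes.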
Background: COVID-19 is a recent worldwide pandemic disease, and study of the pulmonary findings in survivors of the disease has only lately commenced.
Objective: We aimed to estimate the cumulative percentage of complete radiological resolution 3 months after recovery, to define the residual chest CT findings, and to explore the relevant affecting factors.
Subjects and Methods: Patients previously diagnosed with COVID-19 pneumonia confirmed by RT-PCR test, with radiological evidence of pulmonary involvement on chest CT during the acute illness, were included in the present study.
This research deals with the use of a number of statistical methods, such as the kernel method, watershed, histogram, and cubic spline, to improve the contrast of digital images. The results, evaluated with the RMSE and NCC measures, show that the spline method is the most accurate compared with the other statistical methods.
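Among the compared methods, the histogram-based one is the easiest to sketch: histogram equalization spreads the cumulative intensity distribution across the full 8-bit range. A minimal generic sketch (not the study's exact variant):

```python
def equalize(gray):
    """Histogram equalization of an 8-bit grayscale image given as a
    2-D list: remap intensities through the normalized CDF."""
    flat = [p for row in gray for p in row]
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:                       # cumulative distribution
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)
    # build the lookup table; a constant image maps to 0 everywhere
    lut = [round((c - cdf_min) / (n - cdf_min) * 255) if n > cdf_min else 0
           for c in cdf]
    return [[lut[p] for p in row] for row in gray]
```

Contrast metrics such as RMSE and NCC against a reference then quantify how well each method performs.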
Images are an important form of digital information used in many Internet of Things (IoT) applications such as transport, healthcare, agriculture, the military, vehicles, and wildlife. An image also has characteristics such as large size, strong correlation, and high redundancy; therefore, encrypting it with a single-key Advanced Encryption Standard (AES) over IoT communication technologies leaves it vulnerable to many threats, because pixels with the same values are encrypted to identical values whenever the same key is used. The contribution of this work is to increase the security of the transferred image: this paper proposes a multiple-key AES algorithm (MECCAES) to improve the security of the transferred image.
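The weakness being addressed can be shown with a toy cipher: under one fixed key, identical pixel blocks encrypt identically and the image's patterns leak through, while a rotating key schedule breaks that regularity. XOR stands in for AES here purely for demonstration; `toy_encrypt` is an illustrative name, not part of MECCAES:

```python
def toy_encrypt(blocks, keys):
    """Toy block cipher: XOR each block with a key drawn from a
    rotating key list. With len(keys) == 1 this mimics single-key
    ECB-style encryption; with several keys, identical plaintext
    blocks no longer produce identical ciphertext blocks."""
    return [b ^ keys[i % len(keys)] for i, b in enumerate(blocks)]
```

Running it on a run of identical pixel values shows the leak: one key yields a constant ciphertext run, whereas multiple keys yield distinct ciphertext values.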
Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes, and it concerns celebrities and everyone else because such fakes are easy to manufacture. Deepfakes are hard to recognize both by people and by current approaches, especially high-quality ones. As a defense against deepfake techniques, various methods to detect deepfakes in images have been suggested. Most have limitations, such as working only with one face per image, or requiring the face to be forward-facing with both eyes and the mouth open, depending on the part of the face they operate on. Moreover, few studies focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect.
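A typical pre-processing step of the kind whose impact such a study would measure is resampling a face crop to the fixed input size a detector expects. The sketch below uses nearest-neighbour resampling as one hypothetical choice; it is a generic example, not the paper's pipeline:

```python
def resize_nearest(gray, out_h, out_w):
    """Pre-processing sketch: nearest-neighbour resize of a grayscale
    face crop (2-D list) to a fixed detector input size."""
    in_h, in_w = len(gray), len(gray[0])
    return [[gray[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]
```

Swapping this step for bilinear or bicubic interpolation (or changing the target size) is exactly the kind of variation whose effect on detection accuracy can then be compared.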
Content-based image retrieval (CBIR) is a technique used to retrieve images from an image database. However, the CBIR process suffers from low accuracy when retrieving images from an extensive image database, and it does not ensure the privacy of images. This paper addresses the accuracy issue using deep learning techniques, namely a CNN, and provides the necessary privacy for images using the fully homomorphic encryption scheme of Cheon, Kim, Kim, and Song (CKKS). To achieve these aims, a system named RCNN_CKKS is proposed that includes two parts. The first part (offline processing) extracts automated high-level features based on a flattening layer in a convolutional neural network (CNN) and then stores these features in a database.
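Once CNN feature vectors are stored, retrieval reduces to ranking database images by similarity to the query's feature vector. A common choice is cosine similarity, sketched below on plain lists of floats; the function names and the in-memory dictionary "database" are illustrative, not the paper's components:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, database, top_k=3):
    """Rank stored images by similarity to the query features and
    return the names of the top_k matches."""
    ranked = sorted(database.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

In a privacy-preserving setting, the same dot products would be evaluated over CKKS-encrypted vectors rather than plaintext lists.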