Skull image separation is one of the initial procedures used to detect brain abnormalities. In an MRI image of the brain, this process involves distinguishing brain tissue from non-brain tissue. Even for experienced radiologists, separating the brain from the skull is a difficult task, and the accuracy of the results can vary considerably from one individual to the next. Skull stripping of brain magnetic resonance volumes has therefore become increasingly popular, driven by the need for a dependable, accurate, and thorough method for processing brain datasets. Furthermore, skull stripping must be performed accurately in neuroimaging diagnostic systems, since neither residual non-brain tissue nor wrongly removed brain sections can be corrected in subsequent steps, leaving an unrecoverable error in further analysis. This paper proposes a system based on deep learning and image processing: an innovative method for converting a pre-trained model into another type of pre-trained model using pre-processing operations, with the CLAHE filter as a critical phase. The global IBSR dataset was used for both training and testing. To assess the system's efficacy, experiments were performed on the three anatomical sections of three-dimensional MR volumes as well as on two-dimensional images, and the results were 99.9% accurate.
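To make the CLAHE phase concrete, here is a minimal sketch of applying Contrast Limited Adaptive Histogram Equalization to a single MR slice, assuming OpenCV; the file name, clip limit, and tile size are illustrative assumptions, not the paper's settings.

```python
import cv2

# Hypothetical input: one grayscale MR slice.
slice_gray = cv2.imread("mr_slice.png", cv2.IMREAD_GRAYSCALE)

# CLAHE equalizes contrast per tile and clips the histogram to limit
# noise amplification; clipLimit and tileGridSize are assumed values.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(slice_gray)

cv2.imwrite("mr_slice_clahe.png", enhanced)
```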
In today's world, digital image storage and transmission play an essential role, as images are the main carriers in data transfer. Digital images usually require large storage space and bandwidth for transmission, so image compression is important in data communication. This paper discusses a unique and novel lossy image compression approach. Exactly 50% of image pixels are encoded, and the other 50% are excluded. The method uses a block approach: pixels of each block are transformed with a novel transform, and pixel nibbles are mapped to a single bit in a transform table, generating more zeros, which helps achieve compression. Later, the inverse transform is applied in reconstruction, and a single bit value from the table is remapped.
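The paper's nibble-to-bit transform table is not reproduced in this excerpt, so the sketch below only illustrates the encode-50%/exclude-50% idea, using a checkerboard pattern and a naive neighbor fill at reconstruction, assuming NumPy.

```python
import numpy as np

def encode_half(img: np.ndarray) -> np.ndarray:
    """Keep the checkerboard half of the pixels; the other half is excluded."""
    mask = (np.indices(img.shape).sum(axis=0) % 2) == 0
    return img[mask]

def decode_half(kept: np.ndarray, shape: tuple) -> np.ndarray:
    """Restore the kept pixels and fill each excluded pixel from its left
    neighbor -- a stand-in for the paper's (unspecified) inverse transform.
    Column-0 wraparound is ignored in this sketch."""
    out = np.zeros(shape, dtype=kept.dtype)
    mask = (np.indices(shape).sum(axis=0) % 2) == 0
    out[mask] = kept
    out[~mask] = np.roll(out, 1, axis=1)[~mask]
    return out
```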
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the embedded image is scrambled using the Arnold transform for higher security, and then the embedding process is applied in the transform domain of the host image. The experimental results show that this algorithm is invisible and has good robustness against some common image processing operations.
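As a sketch of the scrambling step only (the Berkeley Wavelet Transform embedding itself is not shown here), the standard Arnold cat map can be applied to a square watermark image, with the iteration count acting as part of the key; this is an assumed NumPy implementation, not the paper's code.

```python
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Arnold transform on an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

Because the map is a bijection on the N x N grid, descrambling amounts to applying the inverse map the same number of times (or iterating until the map's period for that N is reached).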
Improving the performance of visual computing systems is achieved by removing unwanted reflections from a picture captured through glass. In reflected photographs, the reflection and transmission layers are superimposed in a linear form. Decomposing an image into these layers is often a difficult task. Plentiful classical separation methods are available in the literature, which either work on a single image or require multiple images. The major step in reflection removal is the detection of reflection and background edges, as separation of the background and reflection layers depends on the edge categorization results. In this paper, a wavelet transform is used as a prior estimation of background edges to separate the two layers.
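One plausible reading of the wavelet prior is to take the detail coefficients of a single-level 2-D DWT as an edge-strength map; the sketch below assumes PyWavelets and a Haar wavelet, which are illustrative choices rather than the paper's.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_edge_map(gray: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Half-resolution edge-strength prior: magnitude of the horizontal,
    vertical, and diagonal detail coefficients of a one-level 2-D DWT."""
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
    return np.sqrt(cH**2 + cV**2 + cD**2)
```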
This paper presents a new and effective procedure to extract shadow regions from high-resolution color images. The method modulates the equations of the C1-C2-C3 color-space components, derived from RGB color, to discriminate shadow regions, using detection equations in two ways: the first by applying a Laplace filter, the second by using a kernel Laplace filter, and then comparing the two results with each other. The proposed method has been successfully tested on many Google Earth, Ikonos, and Quickbird images acquired under different lighting conditions and covering both urban areas and roads. Experimental results show that this simple algorithm is effective at extracting shadow regions.
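For reference, the C1-C2-C3 components are the angular color invariants Ci = arctan(channel / max of the other two channels). The sketch below computes them and applies a Laplace filter to the C3 band, assuming NumPy and SciPy; singling out C3 (often associated with shadows) is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import laplace

def c1c2c3(rgb: np.ndarray) -> np.ndarray:
    """C1-C2-C3 color space: Ci = arctan(channel / max(other two))."""
    r, g, b = (rgb[..., i].astype(float) + 1e-6 for i in range(3))
    return np.stack([np.arctan(r / np.maximum(g, b)),
                     np.arctan(g / np.maximum(r, b)),
                     np.arctan(b / np.maximum(r, g))], axis=-1)

# A Laplace filter on the C3 band marks candidate shadow boundaries
# (one of the paper's two detection routes).
def shadow_edges(rgb: np.ndarray) -> np.ndarray:
    return laplace(c1c2c3(rgb)[..., 2])
```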
A new approach is presented in this study to determine the optimal edge detection threshold value. This approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks, a small image with known edges is generated (the edges represent the lines between the adjoining blocks), so these simulated edges can be assumed to be true edges. The true simulated edges are compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing mean square errors between the simulated edge image and the edge image produced by edge detector methods. The mean square error is computed for the total edge image (Er) and for edge regions.
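A minimal sketch of the threshold search, assuming a gradient-magnitude detector and NumPy (the block-generation step that produces the simulated true edges is not shown):

```python
import numpy as np

def best_threshold(grad_mag: np.ndarray, true_edges: np.ndarray,
                   thresholds=range(1, 256)) -> int:
    """Return the threshold minimizing the mean square error (Er) between
    the simulated true edge image and the thresholded detected edges."""
    best_t, best_err = None, np.inf
    for t in thresholds:
        detected = (grad_mag >= t).astype(float)
        err = np.mean((true_edges.astype(float) - detected) ** 2)
        if err < best_err:
            best_t, best_err = t, err
    return best_t
```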
... Show MoreIn this study, a chaotic method is proposed that generates S-boxes similar to AES S-boxes with the help of a private key belonging to
In this study, dynamic encryption techniques are explored as an image cipher method to generate S-boxes similar to AES S-boxes with the help of a private key belonging to the user, enabling images to be encrypted or decrypted using those S-boxes. The study consists of two stages: the dynamic S-box generation method and the encryption-decryption method. S-boxes should have a non-linear structure, and for this reason the Knuth-Durstenfeld Shuffle Algorithm (K/DSA), one of the pseudo-random techniques, is used to generate S-boxes dynamically. The biggest advantage of this approach is that the produced S-boxes depend on the user's private key.
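A minimal sketch of a key-dependent S-box built with the Knuth-Durstenfeld (Fisher-Yates) shuffle; seeding the shuffle from a SHA-256 hash of the key and using Python's Random are assumptions here, and, unlike the AES S-box, a shuffled permutation carries no nonlinearity guarantee by itself.

```python
import hashlib
import random

def generate_sbox(private_key: bytes) -> list:
    """256-entry S-box derived from the user's key via Knuth-Durstenfeld."""
    seed = int.from_bytes(hashlib.sha256(private_key).digest(), "big")
    rng = random.Random(seed)  # stand-in PRNG; the paper's generator may differ
    sbox = list(range(256))
    for i in range(255, 0, -1):  # Durstenfeld's in-place variant
        j = rng.randint(0, i)
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

sbox = generate_sbox(b"user private key")
inv_sbox = [0] * 256
for i, v in enumerate(sbox):
    inv_sbox[v] = i  # inverse S-box, used for decryption
```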
Digital images are open to several manipulations due to robust image editing tools and the dropping cost of compact cameras and mobile phones. Image credibility has therefore become doubtful, particularly where photos carry weight, for instance in news reports and insurance claims in a criminal court. Image forensic methods therefore measure the integrity of an image by applying different highly technical methods established in the literature. The present work deals with copy-move forgery images from the Media Integration and Communication Center Forgery (MICC-F2000) dataset, detecting and revealing the areas of the image that have been tampered with; the image is sectioned into non-overlapping blocks using Simple Linear Iterative Clustering (SLIC).
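As a toy illustration of block-based copy-move detection, the sketch below flags exactly duplicated blocks only, assuming NumPy; real copy-move detectors match block features so that duplicates survive compression and other post-processing.

```python
import numpy as np

def copy_move_candidates(gray: np.ndarray, block: int = 16):
    """Report pairs of identical non-overlapping blocks as tamper candidates."""
    h, w = gray.shape
    seen, matches = {}, []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = gray[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))  # duplicated region pair
            else:
                seen[key] = (y, x)
    return matches
```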
Experimental programs based on test results have been used as a means to find out the response of individual structural elements. The present study investigates the behavior of five reinforced concrete deep beams of dimensions (length 1200 x height 300 x width 150 mm) under two-point concentrated load with a shear span-to-depth ratio of 1.52. Four of these beams have a hollow core and were retrofitted with carbon fiber reinforced polymer (CFRP) strips (single, double, or side strips). Two hollow shapes (circular and square sections) are investigated to evaluate the experimental response of the beams. Tests on the simply supported beams were performed in the laboratory, and the load-deflection response, concrete strain data, and crack pattern of each beam were recorded.
Malaria is a curable disease, with therapeutics available for patients, such as drugs that can prevent future malaria infections in countries vulnerable to malaria. However, there is no effective malaria vaccine yet, although it is an interesting research area in medicine. Local descriptors of blood smear images are exploited in this paper to solve the parasitized malaria infection detection problem. Swarm intelligence is used to separate the red blood cells from the background of the blood slide image in an adaptive manner. After that, the effective corner points are detected and localized using the Harris corner detection method. Two types of local descriptors are generated from the local regions of the effective corners, the first of which is a set of Gabor-based features.
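The corner-localization step can be sketched with OpenCV's Harris detector; the file name and the blockSize, ksize, and k parameters below are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale blood smear image.
gray = cv2.imread("blood_smear.png", cv2.IMREAD_GRAYSCALE)

# Harris corner response map; threshold it to keep the effective corners.
response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) points
```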
In this paper, we use four classification methods to classify objects and compare among these methods: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for classifying and detecting the objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, then enhanced using the histogram equalization method and resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods were applied.
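A minimal scikit-learn sketch of the pipeline's final stages (PCA, then the four classifiers, with the 7:3 split); the digits dataset stands in for the paper's pre-processed 20 x 20 images, and the PCA component count is an assumption.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # stand-in for the paper's dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=30)  # assumed component count
X_tr_p, X_te_p = pca.fit_transform(X_tr), pca.transform(X_te)

for model in (KNeighborsClassifier(), SGDClassifier(),
              LogisticRegression(max_iter=1000), MLPClassifier(max_iter=500)):
    model.fit(X_tr_p, y_tr)
    print(type(model).__name__, model.score(X_te_p, y_te))
```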