Skull Stripping Based on the Segmentation Models

Skull stripping is one of the initial procedures used to detect brain abnormalities. In a brain MRI image, this process distinguishes brain tissue from non-brain tissue. Even for experienced radiologists, separating the brain from the skull is a difficult task, and the accuracy of the results can vary considerably from one individual to the next. Skull stripping of brain magnetic resonance volumes has therefore attracted increasing attention, driven by the need for a dependable, accurate, and thorough method for processing brain datasets. Moreover, skull stripping must be performed accurately in neuroimaging diagnostic systems, since neither retained non-brain tissue nor the removal of brain sections can be corrected in subsequent steps, leaving an unrecoverable error in further analysis. This paper proposes a system based on deep learning and image processing: an innovative method for converting a pre-trained model into another type of pre-trained model using pre-processing operations, with the CLAHE filter as a critical phase. The global IBSR dataset was used for training and testing. To assess the system's efficacy, experiments were performed on the three dimensions and three sections of the MR images as well as on two-dimensional images, and the results were 99.9% accurate.
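The abstract names CLAHE (contrast-limited adaptive histogram equalization) as the critical pre-processing phase. As a rough sketch of the underlying idea, the snippet below implements plain global histogram equalization in NumPy; CLAHE refines this by equalizing per tile and clipping the histogram to limit noise amplification. The synthetic low-contrast slice is illustrative, not IBSR data.

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization on an 8-bit grayscale image.

    CLAHE, as used in the paper, is a tile-based, clip-limited
    refinement of this same CDF-remapping idea.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each intensity through the normalized cumulative histogram.
    lut = np.clip(np.round(255 * (cdf - cdf_min) / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast synthetic slice: intensities crowded into [100, 140].
slice_ = np.random.default_rng(0).integers(100, 141, (64, 64)).astype(np.uint8)
enhanced = equalize_hist(slice_)
```

In practice, OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` provides the full tile-based variant.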

Publication Date
Sun Jun 01 2014
Journal Name
Baghdad Science Journal
Multifocus Images Fusion Based On Homogenity and Edges Measures

Image fusion is one of the most important techniques in digital image processing. It involves developing software to integrate multiple sets of data for the same location, and it is one of the newer approaches adopted to solve problems in digital imaging, producing high-quality images that contain more information for the purposes of interpretation, classification, segmentation, compression, and so on. This research addresses problems faced by digital images, such as multi-focus images, through a simulation process using a camera to fuse various digital images based on previously adopted fusion techniques such as arithmetic techniques (BT, CNT, and MLT) and statistical techniques (LMM, …
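The abstract's specific techniques (BT, CNT, MLT, LMM) are cut off, so the sketch below only illustrates the general multifocus idea with a common sharpness heuristic: for each block, keep the pixels from whichever source has higher local variance. The function name and block size are illustrative, not the paper's method.

```python
import numpy as np

def fuse_by_focus(a, b, block=8):
    """Block-wise multifocus fusion sketch: per block, keep pixels from
    the source image with the higher local variance (a sharpness proxy).
    A simplified stand-in; the paper's arithmetic/statistical techniques
    are not reproduced here."""
    out = np.empty_like(a)
    h, w = a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            pa = a[y:y + block, x:x + block]
            pb = b[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = pa if pa.var() >= pb.var() else pb
    return out

rng = np.random.default_rng(1)
sharp = rng.integers(0, 256, (32, 32)).astype(float)   # high-detail source
blurred = np.full((32, 32), 128.0)                     # flat, out-of-focus source
fused = fuse_by_focus(sharp, blurred)
```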

Publication Date
Sat Aug 01 2015
Journal Name
International Journal Of Computer Science And Mobile Computing
Image Compression based on Non-Linear Polynomial Prediction Model

Publication Date
Wed Mar 20 2019
Journal Name
Journal Of Legal Sciences
Sale Based on an Open Price – A Comparative Study

The ultimate goal of any sale contract is to maximize the combined returns of the parties, bearing in mind that these returns are not realized (in long-term contracts) until the final stages of the contract. This requires the parties to leave some elements of the contract open, including the price, because adopting a fixed and inflexible price will not meet their wishes at the time of contracting, especially given their ignorance of matters beyond their control that may affect market conditions, and because the possibility of modifying a fixed price through the courts is very limited, particularly when the contracting parties are equal in economic strength. Hence, in order to respond to market uncertainties, the …

Publication Date
Mon Sep 01 2025
Journal Name
Journal Of Information Hiding And Multimedia Signal Processing
Steganography Based on Image Compression Using a Hybrid Technique

Information security is a crucial factor when communicating sensitive information between two parties, and steganography is one of the techniques most commonly used for this purpose. This paper aims to enhance the capacity and robustness of information hiding by compressing image data to a small size while maintaining high quality, so that the secret information remains invisible and only the sender and recipient can recognize the transmission. Three techniques are employed to conceal color and gray images: the Wavelet Color Process Technique (WCPT), the Wavelet Gray Process Technique (WGPT), and the Hybrid Gray Process Technique (HGPT). A comparison between the first and second techniques according to quality metrics, Root-Mean-Square Error (RMSE), Compression-…
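The paper's wavelet compression stages (WCPT/WGPT/HGPT) are not detailed in the visible abstract, so the sketch below only illustrates the generic hiding step that follows compression: least-significant-bit embedding of a bit stream into cover pixels. The helper names and the toy payload are assumptions for illustration.

```python
import numpy as np

def embed_lsb(cover, payload_bits):
    """LSB embedding sketch: hide one payload bit per cover pixel.
    The compressed secret image is assumed to have already been
    serialized into `payload_bits`."""
    stego = cover.copy().ravel()
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite the lowest bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n):
    """Recover the first n hidden bits from the stego image."""
    return [int(p) & 1 for p in stego.ravel()[:n]]

cover = np.arange(16, dtype=np.uint8).reshape(4, 4)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, bits)
```

Because only the lowest bit of each pixel changes, the stego image differs from the cover by at most one intensity level per pixel.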

Publication Date
Wed May 06 2015
Journal Name
16th Conference In Natural Science And Mathematics
Efficient digital Image filtering method based on fuzzy algorithm

Recently, image enhancement techniques have become one of the most significant topics in the field of digital image processing. The basic problem in enhancement is how to remove noise or improve digital image details. In the current research, a method for digital image de-noising and detail sharpening/highlighting is proposed. The proposed approach uses fuzzy logic to process each pixel in the entire image and then decides whether it is noisy or needs further processing for highlighting. This decision is made by examining the degree of association with neighboring elements based on a fuzzy algorithm. The proposed de-noising approach was evaluated on some standard images after corrupting them with impulse …
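One way the per-pixel fuzzy decision described above can be sketched is with a membership function over the pixel's deviation from its neighborhood median: small deviation means "clean" (keep the pixel), large deviation means "impulse noise" (replace with the median), with a smooth blend in between. The thresholds and window size here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fuzzy_denoise(img, lo=20, hi=60):
    """Fuzzy-style impulse filter sketch: the degree to which a pixel is
    'noisy' rises linearly with its deviation from the 3x3 neighborhood
    median (membership 0 below `lo`, 1 above `hi`); the output blends
    the pixel with the median by that degree."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            med = np.median(win)
            dev = abs(out[y, x] - med)
            mu = np.clip((dev - lo) / (hi - lo), 0.0, 1.0)  # fuzzy membership
            out[y, x] = (1 - mu) * out[y, x] + mu * med
    return out

img = np.full((8, 8), 100.0)
img[4, 4] = 255.0            # a single salt (impulse) pixel
clean = fuzzy_denoise(img)
```

The impulse pixel is fully replaced by its neighborhood median, while unambiguous pixels (zero deviation, membership 0) pass through unchanged.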

Publication Date
Wed Jan 01 2020
Journal Name
Aip Conference Proceedings
Developing a lightweight cryptographic algorithm based on DNA computing

This work aims to develop a secure lightweight cipher algorithm for constrained devices. Secure communication among constrained devices is a critical issue during data transmission from client to server devices. Lightweight cipher algorithms are defined as a secure solution for constrained devices that requires low computational effort and small memory. However, most lightweight algorithms suffer from a trade-off between complexity and speed when trying to produce a robust cipher. The PRESENT cipher has been successfully employed as a lightweight cryptography algorithm, surpassing other ciphers in terms of computational processing by requiring only low-complexity operations. The mathematical model of …
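To give a concrete feel for why PRESENT is cheap on constrained hardware, the snippet below implements just its substitution layer: sixteen parallel applications of a single 4-bit S-box (taken from the published PRESENT specification) across a 64-bit state. The cipher's bit-permutation layer, key schedule, and 31-round structure, and any modification this paper makes to them, are omitted.

```python
# PRESENT's 4-bit S-box, as published in the cipher's specification.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    """Apply the S-box to each of the 16 nibbles of a 64-bit state."""
    out = 0
    for i in range(16):
        nibble = (state >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

substituted = sbox_layer(0x0000000000000000)  # every zero nibble maps to 0xC
```

A single 16-entry 4-bit table like this is what makes PRESENT attractive for devices with very small memory, compared with the 256-byte 8-bit S-box of AES.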

Publication Date
Sat Aug 01 2015
Journal Name
International Journal Of Advanced Research In Computer Science And Software Engineering
Partial Encryption for Colored Images Based on Face Detection

Publication Date
Sat Jul 01 2023
Journal Name
International Journal Of Computing And Digital Systems
Human Identification Based on SIFT Features of Hand Image

Publication Date
Mon Feb 04 2019
Journal Name
Journal Of The College Of Education For Women
Image Watermarking based on Huffman Coding and Laplace Sharpening

In this paper, an algorithm is introduced through which we can embed more data than regular spatial-domain methods allow. We compress the secret data using Huffman coding, and this compressed data is then embedded using a Laplacian sharpening method. Laplace filters are used to determine effective hiding places; based on a threshold value, the positions with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding, while at the same time increasing the security of the algorithm by hiding the data in the places with the strongest edge values, where it is less noticeable. The perform…
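The edge-selection step above can be sketched as follows: compute a 4-neighbour Laplacian response and keep the positions whose response exceeds a threshold as candidate embedding sites. The threshold value and toy image here are illustrative, not the paper's settings.

```python
import numpy as np

def laplace_response(img):
    """Absolute 4-neighbour Laplacian: strong responses mark edges,
    which the scheme uses (via a threshold) as less-noticeable
    hiding places for the watermark bits."""
    p = np.pad(img.astype(float), 1, mode='edge')
    return np.abs(p[:-2, 1:-1] + p[2:, 1:-1]      # up + down
                  + p[1:-1, :-2] + p[1:-1, 2:]    # left + right
                  - 4 * p[1:-1, 1:-1])            # minus 4 * center

img = np.zeros((8, 8))
img[:, 4:] = 200.0                  # vertical step edge between columns 3 and 4
resp = laplace_response(img)
sites = np.argwhere(resp > 100)     # candidate embedding positions (illustrative threshold)
```

On this toy image the filter responds only along the two columns straddling the step edge, so exactly those positions are selected.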

Publication Date
Fri May 17 2013
Journal Name
International Journal Of Computer Applications
Fast Lossless Compression of Medical Images based on Polynomial

In this paper, a fast lossless image compression method for medical images is introduced. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Finally, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can lead to promising performance.
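The decomposition the abstract describes can be sketched in one dimension: fit a low-degree polynomial to a block, keep the integer residue (the prediction error), and run-length encode it. Smooth blocks yield long runs of small residues, which is what makes the subsequent Huffman stage effective. The block data, degree, and helper names are illustrative; the Huffman stage is omitted.

```python
import numpy as np

def poly_residual(block, deg=1):
    """Fit a degree-`deg` polynomial to a 1-D block of samples and return
    (coefficients, integer residue). Losslessness holds because
    prediction + residue reconstructs the block exactly."""
    x = np.arange(len(block))
    coeffs = np.polyfit(x, block, deg)
    pred = np.rint(np.polyval(coeffs, x)).astype(int)
    return coeffs, block - pred

def run_length(residue):
    """Run-length encode the residue as (value, count) pairs."""
    runs, prev, count = [], residue[0], 1
    for v in residue[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

block = np.array([10, 12, 14, 16, 18, 20, 22, 25])  # nearly linear ramp
coeffs, residue = poly_residual(block)
runs = run_length(residue.tolist())
```

The nearly linear block collapses to two coefficients plus a residue that is zero everywhere except the last sample, so the run-length stream is very short.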
