Implementation of Digital Image Processing in Calculating Normal Approach for Spherical Indenter Considering Elastic/Plastic Contact

In this work, the normal approach between two bodies, a spherical indenter and a rough flat surface, was studied and calculated with the aid of an image processing technique. Four metals with different work-hardening indices were used as surface specimens, and by capturing images at a resolution of 0.006565 mm/pixel a good estimate of the normal approach could be obtained. The compression tests were carried out in the strength of materials laboratory of the mechanical engineering department, and a Monsanto tensometer was used to conduct the indentation tests.
A light-section measuring microscope (BK 70x50) was used to calculate the surface texture parameters, such as the standard deviation of asperity peak heights, the centre line average, the asperity density, and the radius of asperities.
A Gaussian distribution of asperity peak heights was assumed in calculating the theoretical value of the normal approach in the elastic and plastic regions, and the theoretical values were compared with those obtained experimentally to verify the results.
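
The excerpt does not give the full contact model, but the two quantities it mentions can be illustrated with a minimal Python sketch: converting a pixel-measured displacement to millimetres using the stated 0.006565 mm/pixel resolution, and evaluating the fraction of Gaussian-distributed asperity peaks taller than a given mean-plane separation (a quantity that appears in classical rough-surface contact models). The value of `SIGMA_MM` is a hypothetical placeholder, not a figure from the paper.

```python
import math

# Image resolution quoted in the abstract; SIGMA_MM is a hypothetical
# standard deviation of asperity peak heights, used only for illustration.
MM_PER_PIXEL = 0.006565
SIGMA_MM = 0.004

def approach_from_pixels(pixel_displacement: float) -> float:
    """Convert a displacement measured on the image (in pixels) to millimetres."""
    return pixel_displacement * MM_PER_PIXEL

def fraction_of_peaks_above(separation_mm: float, sigma_mm: float = SIGMA_MM) -> float:
    """P(z > d) for asperity peak heights z ~ N(0, sigma^2): the fraction of
    peaks tall enough to make contact at a mean-plane separation d."""
    return 0.5 * (1.0 - math.erf(separation_mm / (sigma_mm * math.sqrt(2.0))))

print(f"normal approach for 12 px ≈ {approach_from_pixels(12):.4f} mm")
print(f"contact fraction at d = 0.002 mm ≈ {fraction_of_peaks_above(0.002):.3f}")
```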

Publication Date
Sat Aug 01 2015
Journal Name
International Journal Of Computer Science And Mobile Computing
Image Compression based on Non-Linear Polynomial Prediction Model

Publication Date
Fri Apr 30 2010
Journal Name
Journal Of Applied Computer Science & Mathematics
Image Hiding Using Magnitude Modulation on the DCT Coefficients

In this paper, we introduce a DCT-based steganographic method for grayscale images. The embedding approach is designed to reach an efficient trade-off among three conflicting goals: maximizing the amount of hidden message, minimizing distortion between the cover image and the stego-image, and maximizing the robustness of embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency region of the DCT domain using a magnitude modulation technique. The magnitude modulation is applied using uniform quantization with magnitude adder/subtractor modules. The conducted tests indicated that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the steg…
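
The excerpt does not spell out the exact adder/subtractor rule, so the sketch below uses a standard parity (quantization-index-modulation style) rule on one assumed mid-frequency DCT coefficient of an 8x8 block; the step size `Q` and coefficient position `POS` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

Q = 16          # hypothetical uniform quantization step for magnitude modulation
POS = (3, 4)    # hypothetical mid-frequency coefficient chosen for embedding

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Embed one bit in an 8x8 block: quantize the magnitude of one
    mid-frequency DCT coefficient to a multiple of Q whose parity equals the bit."""
    c = dctn(block.astype(float), norm="ortho")
    sign = 1.0 if c[POS] >= 0 else -1.0
    q = int(round(abs(c[POS]) / Q))
    if q % 2 != bit:
        q += 1                      # "adder" step: flip parity to carry the bit
    c[POS] = sign * q * Q
    return idctn(c, norm="ortho")

def extract_bit(block: np.ndarray) -> int:
    """Recover the bit from the parity of the quantized coefficient magnitude."""
    c = dctn(block.astype(float), norm="ortho")
    return int(round(abs(c[POS]) / Q)) % 2
```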

Publication Date
Mon Sep 01 2025
Journal Name
Journal Of Information Hiding And Multimedia Signal Processing
Steganography Based on Image Compression Using a Hybrid Technique

Information security is a crucial factor when communicating sensitive information between two parties, and steganography is one of the techniques most commonly used for this purpose. This paper aims to enhance the capacity and robustness of information hiding by compressing image data to a small size while maintaining high quality, so that the secret information remains invisible and only the sender and recipient can recognize the transmission. Three techniques are employed to conceal color and gray images: the Wavelet Color Process Technique (WCPT), the Wavelet Gray Process Technique (WGPT), and the Hybrid Gray Process Technique (HGPT). A comparison between the first and second techniques according to quality metrics, Root-Mean-Square Error (RMSE), Compression-…
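
The exact steps of WCPT, WGPT, and HGPT are not given in the excerpt, so the following is only a rough Python sketch of two building blocks the abstract names: wavelet-based compression of a gray image (keeping only the largest coefficients) and the RMSE quality metric. The wavelet, decomposition level, and `keep` fraction are assumptions.

```python
import numpy as np
import pywt

def wavelet_compress(gray: np.ndarray, wavelet: str = "haar",
                     level: int = 2, keep: float = 0.1) -> np.ndarray:
    """Crude wavelet compression: zero all but the largest `keep` fraction
    of coefficients, then reconstruct the image."""
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < threshold] = 0.0
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    return rec[:gray.shape[0], :gray.shape[1]]   # crop padding from odd-sized inputs

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root-Mean-Square Error between two images of the same shape."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))
```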

Publication Date
Mon Feb 04 2019
Journal Name
Journal Of The College Of Education For Women
Image Watermarking based on Huffman Coding and Laplace Sharpening

In this paper, an algorithm is introduced through which we can embed more data than regular spatial-domain methods allow. The secret data are compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method. Laplace filters are used to determine the effective hiding places; based on a threshold value, the positions with the highest filter responses are selected for embedding the watermark. Our aim in this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in the places that have the highest edge values and are the least noticeable. The perform…
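
As a rough illustration of the edge-selection step described above, the sketch below uses a Laplace filter to rank candidate pixels and picks the strongest responses above a threshold. The payload is assumed to have been Huffman-compressed beforehand, and a simple LSB write-in stands in for the paper's Laplacian-sharpening embedding, whose exact rule is not given in the excerpt; the threshold value is an assumption.

```python
import numpy as np
from scipy.ndimage import laplace

def embedding_sites(gray: np.ndarray, n_bits: int, threshold: float = 20.0):
    """Return (row, col) positions of the strongest Laplacian responses,
    i.e. edge pixels where embedding is least noticeable."""
    response = np.abs(laplace(gray.astype(float)))
    rows, cols = np.where(response > threshold)        # candidates above threshold
    order = np.argsort(response[rows, cols])[::-1]     # strongest edges first
    return list(zip(rows[order][:n_bits], cols[order][:n_bits]))

def embed_lsb(gray: np.ndarray, bits, sites):
    """Write one payload bit into the LSB of each selected pixel."""
    stego = gray.copy()
    for bit, (r, c) in zip(bits, sites):
        stego[r, c] = (stego[r, c] & 0xFE) | bit
    return stego
```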

Publication Date
Sun Dec 03 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Science
Studying Hueckel edge detector using binary step edge image

Publication Date
Sat Oct 01 2011
Journal Name
Journal Of Engineering
IMPROVED IMAGE COMPRESSION BASED WAVELET TRANSFORM AND THRESHOLD ENTROPY

In this paper, a method is proposed to increase the compression ratio of color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block depending on the importance of the information it contains. In regions that contain important information the compression ratio is reduced to prevent loss of information, while in smooth regions that carry no important information a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
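
A minimal sketch of the idea, assuming an 8x8 block size and using block variance as the "importance" test (the paper's actual importance measure and coding stage are not specified in the excerpt): detailed blocks keep a larger square of low-frequency DCT coefficients than smooth blocks.

```python
import numpy as np
from scipy.fft import dctn, idctn

def adaptive_block_compress(gray: np.ndarray, block: int = 8,
                            var_threshold: float = 100.0) -> np.ndarray:
    """Keep more DCT coefficients in high-variance (detailed) blocks and
    fewer in smooth blocks, mimicking a per-block compression ratio."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=float)
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = gray[r:r+block, c:c+block].astype(float)
            keep = 6 if tile.var() > var_threshold else 2   # size of kept low-frequency square
            coeffs = dctn(tile, norm="ortho")
            mask = np.zeros_like(coeffs)
            mask[:keep, :keep] = 1.0
            out[r:r+block, c:c+block] = idctn(coeffs * mask, norm="ortho")
    return out
```

For a color image the same routine would be applied to each channel separately.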

Publication Date
Tue May 20 2008
Journal Name
Journal Of Planner And Development
Estimating Water Quality from Satellite Image and Reflectance Data

The usefulness of remote sensing techniques in environmental engineering and other sciences lies in saving time, cost, and effort, and in collecting more accurate information under a monitoring mechanism. In this research, a number of statistical models were used to determine the best relationships between each water quality parameter and the mean reflectance values generated for different channels of a radiometer operated to simulate the Thematic Mapper satellite image. Among these models are regression models, which enable us to ascertain and utilize a relation between a variable of interest, called the dependent variable, and one or more independent variables.
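
As an illustration of the regression step, a minimal least-squares fit between a band's mean reflectance values and a measured water-quality parameter might look like the following; the function name is hypothetical and no data from the study are included.

```python
import numpy as np

def fit_quality_model(reflectance: np.ndarray, parameter: np.ndarray):
    """Least-squares linear model: water-quality parameter = a * reflectance + b.
    Returns the coefficients and the coefficient of determination R^2."""
    a, b = np.polyfit(reflectance, parameter, deg=1)
    predicted = a * reflectance + b
    ss_res = np.sum((parameter - predicted) ** 2)
    ss_tot = np.sum((parameter - parameter.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```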

Publication Date
Wed Mar 23 2011
Journal Name
Ibn Al-Haitham J. For Pure & Appl. Sci.
Image Compression Using Proposed Enhanced Run Length Encoding Algorithm

In this paper, we present a proposed enhancement of image compression using the RLE algorithm. The proposal decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], mostly increases the size of the original image when applied to color images. The enhanced algorithm was tested on a sample of ten BMP 24-bit true color images, and an application was built using Visual Basic 6.0 to show the size before and after the compression process and to compute the compression ratio for the RLE and enhanced RLE algorithms.
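
The enhancement itself is not detailed in the excerpt, but a plain run-length encoder/decoder of the kind being improved, applied per channel of a 24-bit image, can be sketched as follows.

```python
import numpy as np

def rle_encode(channel: np.ndarray):
    """Run-length encode one flattened image channel as (value, run) pairs."""
    flat = channel.ravel()
    runs, start = [], 0
    for i in range(1, len(flat) + 1):
        if i == len(flat) or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs

def rle_decode(runs, shape):
    """Rebuild the channel from (value, run) pairs."""
    flat = np.concatenate([np.full(n, v, dtype=np.uint8) for v, n in runs])
    return flat.reshape(shape)

def compression_ratio(channel: np.ndarray, runs) -> float:
    """Original bytes divided by encoded bytes (roughly 2 bytes per pair)."""
    return channel.size / (2 * len(runs))
```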

Publication Date
Wed Mar 10 2021
Journal Name
Baghdad Science Journal
Merge Operation Effect On Image Compression Using Fractal Technique

Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but it suffers from a long encoding time. Fractal image compression requires a partitioning of the image into ranges. In this work, we introduce an improved partitioning process by means of a merge approach, since some ranges are connected to others. This paper presents a method to reduce the encoding time of the technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while the quality of the results remains visually acceptable.
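
A minimal sketch of the merge idea, under the assumption that 8x8 range blocks are compared by their mean and standard deviation and that horizontally adjacent blocks with close statistics are grouped; the tolerances and the grouping rule are illustrative, not the paper's exact criteria.

```python
import numpy as np

def partition_ranges(gray: np.ndarray, size: int = 8):
    """Split the image into non-overlapping range blocks with their statistics."""
    blocks = []
    h, w = gray.shape
    for r in range(0, h - h % size, size):
        for c in range(0, w - w % size, size):
            tile = gray[r:r+size, c:c+size].astype(float)
            blocks.append({"pos": (r, c), "mean": tile.mean(), "std": tile.std()})
    return blocks

def merge_similar(blocks, mean_tol: float = 4.0, std_tol: float = 4.0):
    """Group horizontally adjacent range blocks whose statistics are close,
    so only one representative per group needs a domain-block search."""
    groups, current = [], [blocks[0]]
    for prev, blk in zip(blocks, blocks[1:]):
        same_row = prev["pos"][0] == blk["pos"][0]
        similar = (abs(prev["mean"] - blk["mean"]) < mean_tol and
                   abs(prev["std"] - blk["std"]) < std_tol)
        if same_row and similar:
            current.append(blk)
        else:
            groups.append(current)
            current = [blk]
    groups.append(current)
    return groups
```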

Publication Date
Tue Dec 31 2024
Journal Name
Journal Of Soft Computing And Computer Applications
Enhancing Image Classification Using a Convolutional Neural Network Model

In recent years, with the rapid development of digital content identification, the automatic classification of images has become one of the most challenging tasks in computer vision; it is considerably harder for a system to automatically understand and analyze images than it is for human vision. Some research has addressed low-level classification, but the output was restricted to basic image features, and such approaches fail to classify images accurately. To obtain the results expected in computer vision, this study proposes a deep learning approach that utilizes a deep learning algorithm.
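
As a generic illustration of the kind of convolutional model such a study might employ (the paper's actual architecture is not given in the excerpt), a small PyTorch CNN for 32x32 RGB images could look like this; the layer sizes and class count are assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two convolution/pooling stages followed by a fully connected classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)   # assumes 32x32 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example: a batch of four 32x32 RGB images
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)   # torch.Size([4, 10])
```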
