Background: Determination of sex and estimation of stature from the skeleton are vital to medicolegal investigations. The skull is composed of hard tissue and is the best-preserved part of the skeleton after death; hence, in many cases it is the only part available for forensic examination. A lateral cephalogram is ideal for skull examination because it shows details of various anatomical points in a single radiograph. This study was undertaken to evaluate the accuracy of a digital cephalometric system as a quick, easy, and reproducible supplementary tool for sex determination in an Iraqi sample across a range of ages, using certain linear and angular craniofacial measurements to predict sex. Materials and Method: The sample consisted of 113 true lateral cephalometric radiographs of adults aged 22-43 years (51 males, 62 females); certain linear and angular craniofacial measurements were taken with the aid of the computer program “AutoCAD 2007”. Results: The eleven parameters measured differed significantly between males and females. All cranio-cephalometric measurements together gave an overall predictive accuracy of sex determination by discriminant analysis of 86.7%. The stepwise selection method gave an overall predictive accuracy of 85.8%. Age showed no statistical difference across the studied range except for the distance from the Mastoid to the Frankfort plane. Conclusion: Lateral cephalometric measurements of craniofacial bones are useful for supporting sex determination in the Iraqi population in forensic radiographic medicine.
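As an illustration of the discriminant-analysis step only, the sketch below predicts sex from a few craniofacial measurements with a standard linear discriminant classifier; the measurement values are synthetic and the feature set is an assumption, not the study's data.

```python
# A minimal sketch of sex classification from craniofacial measurements via
# linear discriminant analysis; the synthetic data below are illustrative only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical linear/angular measurements (e.g. mm / degrees) for 113 subjects.
n_male, n_female = 51, 62
males = rng.normal(loc=[120.0, 95.0, 130.0], scale=5.0, size=(n_male, 3))
females = rng.normal(loc=[113.0, 90.0, 127.0], scale=5.0, size=(n_female, 3))

X = np.vstack([males, females])
y = np.array([1] * n_male + [0] * n_female)   # 1 = male, 0 = female

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated predictive accuracy: {accuracy:.1%}")
```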
In this paper, an image compression technique based on the zonal transform method is presented. The DCT, Walsh, and Hadamard transform techniques are also implemented. These transforms are applied to SAR images using different block sizes, and the effects of the different transforms are investigated. The main shortcoming of this radar imagery system is the presence of speckle noise, which affects the compression results.
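The sketch below illustrates zonal coding with a block DCT: each block is transformed and only a low-frequency zone of coefficients is retained before the inverse transform. The 8x8 block size, the triangular zone shape, and the random test image are assumptions, not the paper's settings.

```python
# A minimal sketch of block-wise zonal DCT coding (assumed parameters).
import numpy as np
from scipy.fft import dctn, idctn

def zonal_compress(image, block=8, keep=4):
    """Keep only coefficients with u + v < `keep` in each block x block DCT block."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    zone = (u + v) < keep                      # triangular low-frequency zone
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = dctn(image[r:r+block, c:c+block], norm="ortho")
            out[r:r+block, c:c+block] = idctn(coeffs * zone, norm="ortho")
    return out

# toy example on a random image standing in for a SAR scene
img = np.random.rand(64, 64)
rec = zonal_compress(img, block=8, keep=4)
print("MSE after zonal DCT coding:", np.mean((img - rec) ** 2))
```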
The searching process using a binary codebook in a combined Block Truncation Coding (BTC) and Vector Quantization (VQ) scheme, i.e. a full codebook search for each input image vector to find the best-matched code word, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method in which each binary code word in the codebook is also rotated from 90° to 270° in steps of 90°. Each code word is then systematized according to its angle into four types of binary codebooks (Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
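The rotation idea can be illustrated with a toy binary codebook: each code word is also compared in its 90°, 180° and 270° rotations, and the closest match in Hamming distance is returned. The 4x4 block size and the code words below are assumptions, not the codebook designed in the paper.

```python
# A minimal sketch of matching BTC bit planes against a small binary codebook
# whose code words are also tried in rotated forms (toy code words, assumed 4x4 blocks).
import numpy as np

codebook = np.array([
    [[1,1,1,1],[1,1,1,1],[0,0,0,0],[0,0,0,0]],   # flat-edge code word
    [[1,1,0,0],[1,1,0,0],[1,1,0,0],[1,1,0,0]],   # vertical-edge code word
    [[1,0,0,0],[1,1,0,0],[1,1,1,0],[1,1,1,1]],   # diagonal ("zigzag") code word
], dtype=np.uint8)

def best_match(bit_block, codebook):
    """Return (index, rotation in degrees) of the code word closest in Hamming distance."""
    best = (None, None, np.inf)
    for i, cw in enumerate(codebook):
        for k in range(4):                        # 0, 90, 180, 270 degrees
            d = np.count_nonzero(np.rot90(cw, k) != bit_block)
            if d < best[2]:
                best = (i, 90 * k, d)
    return best[:2]

block = (np.random.rand(4, 4) > 0.5).astype(np.uint8)   # BTC bit plane of one input block
print("matched code word, rotation:", best_match(block, codebook))
```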
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images. It is a challenging task to abstract the visual features of images with the support of textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shapes. In this paper, an effective CBIC technique is presented, which uses texture and statistical features of the images. The statistical features, or color moments (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering.
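A sketch of the colour-moment feature extraction follows; the toy image set is random, and plain k-means is used here only as a stand-in for the GA-driven clustering described in the paper.

```python
# A minimal sketch: per-channel colour moments flattened into one 1-D feature
# vector per image, then clustered (k-means substitutes for the GA step here).
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

def color_moments(image):
    """Collect per-channel mean, skewness, std, kurtosis and variance in one 1-D array."""
    feats = []
    for ch in range(image.shape[2]):
        x = image[..., ch].ravel().astype(float)
        feats += [x.mean(), skew(x), x.std(), kurtosis(x), x.var()]
    return np.array(feats)

# toy "database" of random RGB images
images = [np.random.rand(32, 32, 3) for _ in range(10)]
features = np.vstack([color_moments(im) for im in images])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("cluster labels:", labels)
```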
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital med
An oil spill is a leakage from pipelines, vessels, oil rigs, or tankers that releases petroleum products into the marine environment or onto land, whether it happens naturally or through human action, and it results in severe damage and financial loss. Satellite imagery is one of the powerful tools currently utilized for capturing vital information from the Earth's surface, but the complexity and the vast amount of data make it challenging and time-consuming for humans to process. With the advancement of deep learning techniques, however, these processes are now computerized to extract vital information from real-time satellite images. This paper applied three deep-learning algorithms for satellite image classification
The conventional fuzzy c-means (FCM) algorithm does not fully utilize the spatial information in the image. In this research, we use an FCM algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership functions in the neighborhood of each pixel under consideration. The advantages of the method are that it is less sensitive to noise than other techniques and that it yields regions more homogeneous than those of other methods, which makes it a powerful technique for noisy image segmentation.
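A compact sketch of spatially constrained FCM in the spirit described above is given below. It is my own simplification: the weighting exponents are fixed at 1, the neighborhood is a small square window, and the synthetic test image is an assumption.

```python
# A minimal sketch of FCM clustering with a spatial membership function:
# the neighbourhood sum of memberships re-weights the ordinary FCM membership.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(image, n_clusters=3, m=2.0, iters=20, win=3):
    x = image.astype(float)
    centers = np.linspace(x.min(), x.max(), n_clusters)      # initial cluster centres
    for _ in range(iters):
        # ordinary FCM memberships from distances to the centres
        d = np.abs(x[..., None] - centers) + 1e-9
        u = 1.0 / np.sum((d[..., :, None] / d[..., None, :]) ** (2.0 / (m - 1)), axis=-1)
        # spatial function: window mean of memberships (proportional to the neighbourhood sum)
        h = np.stack([uniform_filter(u[..., k], size=win) for k in range(n_clusters)], axis=-1)
        u = (u * h) / np.sum(u * h, axis=-1, keepdims=True)   # spatially weighted membership
        um = u ** m
        centers = np.sum(um * x[..., None], axis=(0, 1)) / np.sum(um, axis=(0, 1))
    return np.argmax(u, axis=-1), centers

# noisy synthetic image with three vertical intensity bands
noisy = np.random.normal(loc=np.repeat([0.2, 0.5, 0.8], 20)[None, :], scale=0.05, size=(60, 60))
labels, centers = spatial_fcm(noisy)
print("cluster centres:", np.round(centers, 3))
```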
In this paper, a discussion of the principles of stereoscopy is presented, along with the phases of 3D image production, which are based on the Waterfall model. The results are based on one of the 3D technologies, the anaglyph, which is known to use two colors (red and cyan). A 3D anaglyph image and its visualization technologies appear three-dimensional by using the red/cyan color classes, as in other technologies used and implemented for the production of 3D videos (movies). Using a model to produce software that processes anaglyph video is therefore very important; for that reason, our proposed work implements the anaglyph technique within the Waterfall model to produce a 3D image extracted from a video.
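The red/cyan composition step can be sketched as below. The channel assignment shown is the conventional anaglyph convention (red from the left view, green/blue from the right view) and is assumed here; it is not taken from the paper's software.

```python
# A minimal sketch of building a red/cyan anaglyph frame from a stereo pair
# extracted from a video (conventional channel assignment assumed).
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine left/right RGB frames into a single red/cyan anaglyph frame."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]        # red channel from the left-eye image
    anaglyph[..., 1:] = right_rgb[..., 1:]     # green and blue (cyan) from the right-eye image
    return anaglyph

left = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
frame_3d = make_anaglyph(left, right)
print(frame_3d.shape)
```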
The denoising of a natural image corrupted by Gaussian noise is a problem in signal and image processing. Much work has been done in the field of wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for the suppression of noise in images that fuses the stationary wavelet denoising technique with an adaptive Wiener filter. The Wiener filter is applied to the reconstructed image for the approximation coefficients only, while the thresholding technique is applied to the detail coefficients of the transform, and the final denoised image is obtained by combining the two results. The proposed method was applied by usin
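The fusion idea can be sketched as follows, under my own assumptions about the wavelet, decomposition level, and threshold: detail coefficients of a stationary wavelet transform are soft-thresholded, the approximation side is passed through an adaptive Wiener filter, and the two are recombined on reconstruction.

```python
# A minimal sketch of fusing stationary-wavelet thresholding with a Wiener filter
# (parameters below are assumptions, not the paper's settings).
import numpy as np
import pywt
from scipy.signal import wiener

def denoise(noisy, wavelet="db4", level=2, threshold=0.1):
    coeffs = pywt.swt2(noisy, wavelet, level=level)           # stationary wavelet transform
    fused = []
    for cA, (cH, cV, cD) in coeffs:
        cA = wiener(cA, mysize=3)                              # adaptive Wiener filter on the approximation
        cH, cV, cD = (pywt.threshold(c, threshold, mode="soft") for c in (cH, cV, cD))
        fused.append((cA, (cH, cV, cD)))
    return pywt.iswt2(fused, wavelet)

clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean + np.random.normal(scale=0.1, size=clean.shape)
print("noisy MSE:", np.mean((noisy - clean) ** 2),
      "denoised MSE:", np.mean((denoise(noisy) - clean) ** 2))
```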