With the rapid development of smart devices, people's lives have become easier, especially for visually impaired and special-needs people. Recent achievements in the fields of machine learning and deep learning enable people to identify and recognise the surrounding environment. In this study, the efficiency and high performance of deep learning architectures are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts with collecting two datasets (indoor and outdoor) from different sources. In the second step, the collected data are split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are trained using the indoor and outdoor sets…
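As a rough illustration of the transfer-learning step this abstract describes, the sketch below fine-tunes a pre-trained MobileNet-V2 on an indoor/outdoor image folder. It is a minimal sketch assuming PyTorch/torchvision; the dataset path, batch size, and learning rate are illustrative, not the authors' actual configuration.

```python
# Minimal transfer-learning sketch (assumed: PyTorch/torchvision; paths and
# hyperparameters are illustrative, not the authors' actual configuration).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),          # MobileNet-V2 input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.mobilenet_v2(weights="IMAGENET1K_V1")  # pre-trained backbone
model.classifier[1] = nn.Linear(model.last_channel, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:               # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```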
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images, as it is challenging to abstract the visual features of images from the textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, colors, boundaries, and shapes. In this paper, an effective CBIC technique is presented, which uses texture and statistical features of the images. The statistical features, or color moments (mean, standard deviation, variance, skewness, and kurtosis), are extracted from the images. These features are collected in a one-dimensional array, and a genetic algorithm (GA) is then applied for image clustering.
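The color-moment extraction lends itself to a short sketch. The following is a minimal version assuming NumPy and SciPy; it produces the one-dimensional feature array the abstract describes, while the GA clustering stage is omitted for brevity.

```python
# Sketch of the statistical (color-moment) feature extraction described above,
# assuming NumPy/SciPy; the GA clustering itself is omitted for brevity.
import numpy as np
from scipy.stats import skew, kurtosis

def color_moments(image):
    """Return a 1-D feature vector of per-channel color moments.

    image: H x W x 3 array (e.g., RGB). Moments per channel:
    mean, standard deviation, variance, skewness, kurtosis.
    """
    feats = []
    for c in range(image.shape[2]):
        channel = image[..., c].ravel().astype(np.float64)
        feats.extend([
            channel.mean(),
            channel.std(),
            channel.var(),
            skew(channel),
            kurtosis(channel),
        ])
    return np.array(feats)   # 15 values for a 3-channel image
```

Each image's 15-value vector (five moments times three channels) would then be handed to the GA, whose chromosomes would encode cluster assignments or centroids.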
Upscaling grayscale images plays a central role in various fields such as medicine, satellite imagery, and photography. This paper presents a technique for improving grayscale image upscaling using a new mixed wavelet generated by the tensor product. The proposed technique employs the multi-resolution analysis provided by a new mixed wavelet transform algorithm to decompose the input image into different frequency components. After processing, the low-resolution input image is effectively transformed into a higher-resolution representation by adding a matrix of zeroes. A discrete wavelet transform (Daubechies and Haar wavelets) is used as a 2D matrix and mixed, via the tensor product, with another wavelet matrix of a different size. MATLAB R2021…
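The "adding a matrix of zeroes" step can be pictured with a small sketch: treat the low-resolution image as the approximation sub-band and reconstruct with zeroed detail sub-bands. This assumes PyWavelets and substitutes a plain Haar wavelet for the paper's mixed tensor-product wavelet.

```python
# Sketch of wavelet-based upscaling by zero-filling detail sub-bands,
# assuming PyWavelets; the paper's mixed (tensor-product) wavelet is
# replaced here by plain Haar for illustration.
import numpy as np
import pywt

def wavelet_upscale(low_res):
    """Treat the low-res image as the approximation band and reconstruct
    a 2x-larger image with zeroed detail (H/V/D) bands."""
    cA = low_res.astype(np.float64)
    zeros = np.zeros_like(cA)
    # factor 2 compensates for the orthonormal 2-D Haar scaling
    return pywt.idwt2((2.0 * cA, (zeros, zeros, zeros)), "haar")
```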
Objective: To evaluate the influence of monolithic zirconia brand, thickness, and substrate color on color-matching accuracy when optically coupled to abutment substrates. Methods: A total of 180 samples of two brands of monolithic zirconia [Prettau Anterior (PA) and Ceramill Zolid FX Multicolor (CZ)] were prepared in three thicknesses (0.8 mm, 1.5 mm, and 2 mm) with a standardized 10 mm diameter. Color properties of the samples were assessed by spectrophotometry at baseline and after coupling to three substrate types: standard dentin, discolored dentin, and titanium. Color differences (ΔE) were calculated and statistically analyzed by three-way ANOVA and pairwise comparisons (α = 0.05). Results: The brand and material thickness, at…
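For reference, a minimal sketch of the color-difference computation is shown below: the classic ΔE*ab formula over CIELAB triples. The choice of ΔE*ab rather than CIEDE2000 is an assumption, since the abstract does not say which formula was used, and the sample values are illustrative.

```python
# Minimal sketch of the CIELAB color-difference (Delta E*ab) calculation used
# to quantify color mismatch; L*a*b* values would come from the spectrophotometer.
import math

def delta_e(lab1, lab2):
    """Euclidean Delta E*ab between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# e.g., baseline vs. coupled-to-substrate readings (illustrative values)
print(delta_e((72.1, 1.8, 18.4), (70.3, 2.2, 20.1)))
```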
Image compression is a serious issue in computer storage and transmission; it makes efficient use of the redundancy embedded within an image itself and may additionally exploit the limitations of human visual perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with multiresolution and thresholding techniques, and the second incorporates near-lossless com…
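The modelling half of polynomial coding can be sketched in a few lines: fit a low-order 2-D polynomial to each block and keep the residual for later (near-lossless) coding. This assumes NumPy; the block-wise first-order model is an illustrative choice, not necessarily the paper's exact predictor.

```python
# Sketch of the modelling idea behind polynomial coding, assuming NumPy:
# fit a first-order 2-D polynomial to a block and keep the residual.
import numpy as np

def poly_model_residual(block):
    """Fit z = a0 + a1*x + a2*y to a block by least squares;
    return (coefficients, residual)."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    model = (A @ coeffs).reshape(h, w)
    return coeffs, block - model   # the residual carries what the model misses
```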
Computer-aided diagnosis (CAD) has proved over the years to be an effective and accurate method for diagnostic prediction. This article focuses on the development of an automated CAD system intended to perform diagnosis as accurately as possible. Deep learning methods have produced impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or auto-encoders are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm, which helps to search for the best…
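A heavily simplified, pheromone-guided feature-selection loop in the spirit of ACO is sketched below, assuming NumPy and scikit-learn. The feature matrix and labels are random placeholders standing in for the extracted CNN features, and the update rule is a bare-bones illustration, not the paper's algorithm.

```python
# Simplified sketch of pheromone-guided (ACO-style) feature selection,
# assuming NumPy and scikit-learn; X and y are placeholders for CNN features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))               # placeholder CNN feature matrix
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # placeholder labels

n_feats = X.shape[1]
pheromone = np.full(n_feats, 0.5)            # selection probability per feature
for _ in range(20):                          # iterations
    best_score, best_mask = -1.0, None
    for _ant in range(10):                   # ants per iteration
        mask = rng.random(n_feats) < pheromone
        if not mask.any():
            continue
        score = cross_val_score(LogisticRegression(max_iter=500),
                                X[:, mask], y, cv=3).mean()
        if score > best_score:
            best_score, best_mask = score, mask
    pheromone *= 0.9                         # evaporation
    if best_mask is not None:
        pheromone[best_mask] += 0.1          # reinforce the best ant's subset
print("selected features:", np.flatnonzero(pheromone > 0.5))
```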
Background: Obesity is increasingly prevalent in modern societies and constitutes a significant public health problem, with an increased risk of cardiovascular disease.
Objective: This study aims to determine the agreement between actual and perceived body image in the general population.
Methods: A descriptive cross-sectional study was conducted with a sample size of 300. Data were collected from eight heavily populated areas of the Northern district of Karachi, Sindh, over a period of six months (10 January 2020 to 21 June 2020). The Figure Rating Scale (FRS) questionnaire was used to collect demographic data and perceptions of body weight. Body mass index (BMI) was used to assess…
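The "actual" side of the agreement comparison presumably reduces to BMI and its categories; a minimal sketch follows, using the standard WHO cut-offs as an assumption. Agreement with the FRS-perceived category could then be quantified, for example with Cohen's kappa.

```python
# Sketch of the BMI computation and the standard WHO categories (an assumption;
# the study's exact cut-offs are not stated in the abstract).
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

print(bmi_category(bmi(82, 1.70)))   # BMI of about 28.4 -> "overweight"
```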
... Show Moresignificant bits either in the spatial domain or frequency domain sequentially or pseudo
randomly through the cover media (Based on this fact) statistical Steganalysis use different
techniques to detect the hidden message, A proposed method is suggested of a stenographic
scheme a hidden message is embedded through the second least significant bits in the
frequency domain of the cover media to avoid detection of the hidden message through the
known statistical Steganalysis techniques.
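The embedding idea can be sketched as follows, assuming SciPy's DCT over 8x8 blocks: round a mid-frequency coefficient to an integer and overwrite its second least significant bit. The coefficient position and rounding policy are illustrative simplifications of the proposed scheme.

```python
# Sketch of second-least-significant-bit embedding in the frequency domain,
# assuming SciPy's DCT on 8x8 blocks; coefficient choice and rounding are
# illustrative simplifications of the proposed scheme.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block.T, norm="ortho").T, norm="ortho")
def idct2(block): return idct(idct(block.T, norm="ortho").T, norm="ortho")

def embed_bit(coeff, bit):
    """Set the 2nd LSB of an integer-rounded coefficient to `bit`."""
    v = int(round(coeff))
    return (v & ~2) | (bit << 1)

block = np.arange(64, dtype=np.float64).reshape(8, 8)   # toy cover block
coeffs = dct2(block)
coeffs[4, 3] = embed_bit(coeffs[4, 3], 1)   # hide one message bit
stego_block = idct2(coeffs)                 # back to the spatial domain
```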
DeepFakes are a concern for celebrities and the public alike because they are simple to create. DeepFake images, especially high-quality ones, are difficult to detect for human observers, local descriptors, and current automated approaches. On the other hand, video manipulation detection, which many state-of-the-art systems offer, is more tractable than image detection; moreover, detecting video manipulation depends entirely on detecting it in individual frames. Many studies have addressed DeepFake detection in images, but they involve complex mathematical calculations in their preprocessing steps and suffer many limitations, including that the face must be frontal, the eyes must be open, and the mouth must be open with visible teeth. Also, the accuracy of their counterfeit detection…
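Since the abstract is cut off before describing the proposed model, only a generic face-extraction front end for an image-level detector is sketched here, assuming OpenCV; the frontal-face cascade shown is used purely for illustration (it shares the frontal-pose limitation the abstract criticizes). The cropped face would then be passed to a real/fake classifier.

```python
# Generic face-extraction front end for an image-level DeepFake detector,
# assuming OpenCV; the downstream real/fake classifier is a placeholder
# since the abstract does not describe the proposed model.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(path, size=(128, 128)):
    """Detect the largest face in an image and return it resized, or None."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest bounding box
    return cv2.resize(gray[y:y + h, x:x + w], size)

# face = extract_face("suspect.jpg")  # then feed to a real/fake classifier
```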