Reflections are ubiquitous in photos taken through transparent glass and severely degrade the performance of computer vision algorithms. With the prevalence of camera-equipped smartphones, reflection removal is widely needed in daily life, yet it remains a hard problem. This paper addresses reflection separation from two images taken from different viewpoints in front of a transparent glass medium, and proposes an algorithm that exploits a natural image prior (the gradient sparsity prior) together with a robust regression method to remove reflections. The proposed algorithm is tested on real-world images, and quantitative and visual quality comparisons show that it outperforms a state-of-the-art algorithm by an average of 0.3% on the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) metric.
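A minimal sketch of the robust-regression idea, not the paper's exact formulation: assuming two registered views whose background gradients agree while reflection gradients act as sparse outliers, a Huber loss can down-weight those outliers when relating the two gradient maps.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def robust_background_gradient(g1, g2):
    """Robustly fit g2 ~ a * g1 over flattened gradient maps; pixels with a
    large fit residual are likely reflection gradients (sparse outliers)."""
    x = g1.reshape(-1, 1)
    y = g2.ravel()
    model = HuberRegressor(epsilon=1.35).fit(x, y)  # Huber = robust regression
    residual = np.abs(y - model.predict(x)).reshape(g1.shape)
    return model.coef_[0], residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bg = rng.normal(size=(64, 64))       # shared background gradients
    refl = np.zeros((64, 64))
    refl[10:20, 10:20] = 3.0             # sparse reflection gradients (toy)
    scale, residual = robust_background_gradient(bg, bg + refl)
    print("fitted scale:", round(scale, 3))
    print("mean residual inside reflection:", residual[10:20, 10:20].mean())
```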
Most of today's techniques encrypt all of the image data, which consumes a tremendous amount of time and computation. This work introduces a selective image encryption technique that encrypts predetermined blocks of the original image data in order to reduce the encryption/decryption time and the computational complexity of processing large image data. The technique applies a compression algorithm based on the Discrete Cosine Transform (DCT). Two approaches are implemented, using color space conversion (YCbCr and RGB) as a preprocessing step for the compression phase, and the resulting compressed sequence is selectively encrypted using a randomly generated combined secret key.
The results showed a significant reduction in encryption/decryption time and computational complexity.
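A hedged sketch of the selective idea, not the paper's exact scheme: compress 8x8 blocks with the DCT, then encrypt only the low-frequency coefficients with a key-seeded XOR stream. The block size, coefficient count per block, and keystream construction are all assumptions.

```python
import numpy as np
from scipy.fft import dctn

BLOCK, KEEP = 8, 10  # coefficients per block to encrypt (assumed values)

def selective_encrypt(gray, key):
    """Encrypt only KEEP low-frequency DCT coefficients per 8x8 block.
    Assumes image dimensions are multiples of BLOCK; row-major order
    approximates the usual zigzag scan. Illustrative only, not secure."""
    h, w = gray.shape
    coeffs = np.empty((h, w), dtype=np.float64)
    rng = np.random.default_rng(key)  # keystream seeded by the secret key
    for r in range(0, h, BLOCK):
        for c in range(0, w, BLOCK):
            blk = dctn(gray[r:r+BLOCK, c:c+BLOCK].astype(float), norm="ortho")
            flat = blk.ravel()
            ks = rng.integers(0, 2**15, size=KEEP).astype(np.int16)
            # Quantize the selected coefficients and XOR them with the keystream.
            flat[:KEEP] = flat[:KEEP].astype(np.int16) ^ ks
            coeffs[r:r+BLOCK, c:c+BLOCK] = flat.reshape(BLOCK, BLOCK)
    return coeffs
```

Decryption would mirror this pass, regenerating the same keystream from the shared key and XOR-ing it out before the inverse DCT.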
In this paper, an adaptive polynomial compression technique based on hard and soft thresholding of the transformed residual image is introduced; it efficiently exploits both the spatial and frequency domains. The technique starts by applying polynomial coding in the spatial domain, then uses the discrete wavelet transform (DWT) in the frequency domain to decompose the residual image, to which hard and soft thresholding are applied. The results showed that the adaptive technique improves on the traditional polynomial coding technique.
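A sketch of the residual-thresholding stage, assuming PyWavelets; the wavelet family, decomposition level, and threshold value here are illustrative choices, not the paper's.

```python
import numpy as np
import pywt

def threshold_residual(residual, thresh=10.0, mode="soft", wavelet="haar"):
    """One-level 2-D DWT of the polynomial-coding residual, then hard or
    soft thresholding of the detail sub-bands to make them cheaper to code."""
    cA, (cH, cV, cD) = pywt.dwt2(residual, wavelet)
    cH, cV, cD = (pywt.threshold(c, thresh, mode=mode) for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

residual = np.random.default_rng(1).normal(scale=5.0, size=(128, 128))
for mode in ("hard", "soft"):
    rec = threshold_residual(residual, mode=mode)
    print(mode, "reconstruction MSE:", round(float(np.mean((rec - residual) ** 2)), 3))
```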
Identifying people by their ears has recently received significant attention in the literature. Accurate segmentation of the ear region is vital in order to make successful person identification decisions. This paper presents an effective approach for ear region segmentation from color ear images. Firstly, the RGB color model was converted to the HSV color model. Secondly, thresholding was utilized to segment the ear region. Finally, morphological operations were applied to remove small islands and fill the gaps. The proposed method was tested on a database consisting of 105 ear images taken from the right sides of 105 subjects. The experimental results of the proposed approach on a variety of ear images revealed that this approach achieves accurate ear region segmentation.
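A minimal OpenCV sketch of the three described steps; the HSV threshold range and kernel size are assumptions, since skin-tone values vary by dataset.

```python
import cv2
import numpy as np

def segment_ear(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)             # step 1: RGB -> HSV
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))   # step 2: threshold (assumed skin range)
    kernel = np.ones((7, 7), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # step 3: remove small islands...
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # ...and fill the gaps
    return mask

# Usage: mask = segment_ear(cv2.imread("ear.png"))
```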
This paper presents a new and effective procedure to extract shadow regions from high-resolution color images. The method modulates the equations of the C1-C2-C3 band-space components, which are derived from RGB color, to discriminate shadow regions, applying detection equations in two ways: first with a Laplace filter, and second with a Laplace kernel filter, and then comparing the two results with each other. The proposed method has been successfully tested on many Google Earth, Ikonos, and Quickbird images acquired under different lighting conditions and covering both urban areas and roads. Experimental results show that this algorithm is simple and effective at extracting shadow regions.
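A sketch of shadow discrimination in the c1c2c3 color space followed by a Laplace-kernel pass; the shadow threshold is an assumed value, and the paper's exact modulated equations may differ.

```python
import cv2
import numpy as np

def c3_channel(bgr):
    b, g, r = [c.astype(np.float64) + 1e-6 for c in cv2.split(bgr)]
    return np.arctan(b / np.maximum(r, g))  # c3 = arctan(B / max(R, G))

def shadow_edges(bgr, c3_thresh=1.0):
    c3 = c3_channel(bgr)
    shadow = (c3 > c3_thresh).astype(np.uint8) * 255  # shadow pixels have high c3
    lap_kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], np.float64)
    edges = cv2.filter2D(shadow.astype(np.float64), -1, lap_kernel)
    return shadow, np.abs(edges)
```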
Merging images is one of the most important technologies in remote sensing applications and geographic information systems. In this study, a camera-based simulation process fuses images by resizing them with interpolation methods (nearest, bilinear, and bicubic). Statistical techniques are used as an efficient merging technique in the image integration process, employing different models, namely Local Mean Matching (LMM) and Regression Variable Substitution (RVS), together with spatial frequency techniques including the high pass filter additive (HPFA) method. Statistical measures have then been used to check the quality of the merged images, carried out by calculating the correlation and related statistical measures.
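An illustrative sketch of the high pass filter additive (HPFA) step: resize the low-resolution band with one of the named interpolators, then add the panchromatic image's high-frequency detail. The Gaussian blur size used to isolate the high frequencies is an assumption.

```python
import cv2

def hpfa_fuse(ms_band, pan, interp=cv2.INTER_CUBIC):
    """Fuse one multispectral band with a higher-resolution pan image."""
    up = cv2.resize(ms_band, pan.shape[::-1], interpolation=interp)
    high_pass = pan.astype(float) - cv2.GaussianBlur(pan, (9, 9), 0).astype(float)
    return up.astype(float) + high_pass  # inject spatial detail into the band

# interp can be cv2.INTER_NEAREST, cv2.INTER_LINEAR, or cv2.INTER_CUBIC,
# matching the three interpolation methods compared in the study.
```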
With the rapid development of smart devices, people's lives have become easier, especially for visually disabled or special-needs people. New achievements in the fields of machine learning and deep learning let people identify and recognise the surrounding environment. In this study, the efficiency and high performance of deep learning architectures are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts with collecting two datasets (indoor and outdoor) from different source datasets. In the second step, the collected dataset is split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are then trained using the indoor and outdoor sets.
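A minimal PyTorch transfer-learning sketch for one of the two models; the two-class head (indoor vs. outdoor), learning rate, and dataset layout are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained MobileNet-V2 and replace its classification head.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, 2)  # indoor vs. outdoor

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch from the training split."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```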
A new approach is presented in this study to determine the optimal edge detection threshold value. This approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks, a small image with known edges is generated (the edges are the lines between adjacent blocks), so these simulated edges can be taken as true edges. The true simulated edges are compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing the mean square error between the simulated edge image and the edge image produced by the edge detector. The mean square error is computed for the total edge image (Er) and for the edge region.
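A sketch of the threshold search on a synthetic two-block image with a known boundary; a thresholded Sobel magnitude stands in for whichever edge detector the study evaluates.

```python
import numpy as np
import cv2

img = np.zeros((64, 64), np.uint8)
img[:, 32:] = 120                         # two homogeneous blocks, unequal means
true_edges = np.zeros_like(img)
true_edges[:, 31:33] = 255                # known boundary = simulated true edge

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
mag = cv2.magnitude(gx, gy)

# Scan candidate thresholds and keep the one with the lowest MSE (Er)
# between the detected edge image and the simulated true-edge image.
best = min(
    (float(np.mean((np.where(mag > t, 255, 0) - true_edges.astype(float)) ** 2)), t)
    for t in range(10, 250, 10)
)
print("lowest MSE %.1f at threshold %d" % best)
```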
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital media.
Objective of the research: this study aimed to manufacture an innovative device that enables the player to walk after the operation and improves functional efficiency through improvement in the range of motion as well as in the size of the muscles working on the knee joint. Research hypotheses: there are statistically significant differences between the pre- and post-tests of the experimental and control groups, and there are statistically significant differences between the post-tests of the experimental and control groups in favor of the experimental group of the research sample. The researchers used the experimental approach, designing control and experimental groups with pre- and post-tests, for its suitability to the nature of the research.
This study aims to develop a recommendation engine methodology to enhance the model's effectiveness and efficiency. The proposed model is used to assign or propose a limited number of developers with the required skills and expertise to address and resolve a bug report. Managing collections within bug repositories is the responsibility of software engineers addressing specific defects. Identifying the optimal allocation of personnel to activities is challenging when dealing with software defects, which can require a substantial workforce of developers. Analyzing new scientific methodologies to enhance comprehension of the results is the purpose of this analysis. Additionally, developer priorities were discussed, especially the …
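A hedged sketch of one common bug-triage recommendation approach (TF-IDF over bug-report text plus a nearest-neighbor match against developers' historical assignments); the paper's actual methodology may differ, and the data below is a toy example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

reports = ["crash when saving file", "ui button misaligned", "save dialog hangs"]
developers = ["alice", "bob", "alice"]   # historical assignments (toy data)

# Vectorize report text, then recommend the developer whose past bugs
# are most similar to the new report.
vec = TfidfVectorizer()
X = vec.fit_transform(reports)
clf = KNeighborsClassifier(n_neighbors=1).fit(X, developers)

new_bug = vec.transform(["application crashes on save"])
print("suggested developer:", clf.predict(new_bug)[0])
```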