The principal goal guiding any designed encryption algorithm must be security against unauthorized attackers. Within the last decade, there has been a vast increase in the communication of digital computer data in both the private and public sectors. Much of this information has significant value and therefore requires protection by a strong encryption algorithm. Such an algorithm defines the mathematical steps required to transform data into a cryptographic cipher and to transform the cipher back into its original form. Performance and security level are the main characteristics that differentiate one encryption algorithm from another. This paper suggests a new technique to enhance the performance of the Data Encryption Standard (DES) algorithm by generating its key from random bitmap images, exploiting the randomness of the pixel colours. This leads to a (clipped) key with very high randomness according to the known randomness tests and adds a new level of protection strength and more robustness against breaking methods.
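As a rough illustration of the key-generation idea, the sketch below derives a 64-bit DES key from bitmap pixel data by keeping bits from high-variance regions and applying a crude monobit balance check; the window size, threshold, and test are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch (not the authors' exact procedure): derive a 64-bit DES key from
# bitmap pixel data, preferring high-variance regions, then run a crude monobit test.
import random
import statistics

def derive_des_key(pixels, window=16, threshold=20.0):
    """pixels: flat list of 0-255 colour values read from a bitmap image."""
    bits = []
    for i in range(0, len(pixels) - window, window):
        block = pixels[i:i + window]
        # Keep only regions whose colour values vary strongly (higher randomness).
        if statistics.pstdev(block) > threshold:
            bits.extend(v & 1 for v in block)            # least-significant bits
        if len(bits) >= 64:
            break
    if len(bits) < 64:
        raise ValueError("image did not yield enough random bits")
    bits = bits[:64]                                     # the "clipped" 64-bit key
    balanced = abs(sum(bits) - 32) <= 8                  # simple monobit check
    key = int("".join(map(str, bits)), 2).to_bytes(8, "big")
    return key, balanced

# Synthetic pixel data standing in for a random bitmap image:
key, ok = derive_des_key([random.randrange(256) for _ in range(4096)])
print(key.hex(), ok)
```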
Machine learning-based techniques are widely used for classifying images into various categories. The advancement of the Convolutional Neural Network (CNN) has affected the field of computer vision on a large scale, and CNNs have been applied to classify and localize objects in images. Among their fields of application, CNNs have been used to make sense of the huge volumes of unstructured astronomical data being collected every second. Galaxies have diverse and complex shapes, and their morphology carries fundamental information about the whole universe. Studying these galaxies has been a tremendous task for researchers around the world. Researchers have already applied some basic CNN models to predict the morphological classes
Digital images are widely used in computer applications. This paper introduces a proposed method of image zooming based upon the inverse slantlet transform and image scaling. The slantlet transform (SLT) is based on the principle of designing different filters for different scales.
First, we apply the SLT to the colour image, transforming it into the slantlet domain, where large coefficients mainly represent the signal and smaller ones represent noise. These coefficients are suitably modified, the image is scaled up by a factor of 2x2 using box and Bartlett filters, and the inverse slantlet transform is then applied to the modified coefficients to reconstruct the image.
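The sketch below illustrates only the 2x scaling step with box and Bartlett filters; the slantlet transform and its inverse are assumed to be provided elsewhere and are not reproduced here.

```python
# Minimal sketch of the 2x scaling step only (an slt()/islt() pair is assumed
# to exist elsewhere and is not shown here).
import numpy as np

def scale2x_box(img):
    """Box filter upscaling: each pixel is replicated into a 2x2 block."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def scale2x_bartlett(img):
    """Bartlett (triangular) upscaling: insert zeros, then average neighbours."""
    h, w = img.shape[:2]
    up = np.zeros((2 * h, 2 * w) + img.shape[2:], dtype=float)
    up[::2, ::2] = img
    kernel = np.array([0.5, 1.0, 0.5])
    # Separable triangular filter applied along rows, then along columns.
    for axis in (0, 1):
        up = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), axis, up)
    return up

small = np.random.randint(0, 256, (4, 4)).astype(float)
print(scale2x_box(small).shape, scale2x_bartlett(small).shape)  # (8, 8) (8, 8)
```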
Even though image retrieval has been considered one of the most important research areas over the last two decades, there is still room for improvement since it does not yet satisfy many users. Two of the major problems which need to be improved are the accuracy and the speed of the image retrieval system, in order to achieve user satisfaction and to make the image retrieval system suitable for all platforms. In this work, the proposed retrieval system uses features with spatial information to analyze the visual content of the image. The feature extraction process is then followed by applying the fuzzy c-means (FCM) clustering algorithm to reduce the search space and speed up the retrieval process. The experimental results show t
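A minimal sketch of the clustering idea, assuming generic feature vectors rather than the paper's exact spatial descriptors: fuzzy c-means groups the database features so that a query only searches the images of its nearest cluster.

```python
# Minimal sketch: cluster image features with fuzzy c-means so a query searches
# only one cluster instead of the whole database. Feature vectors are stand-ins.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100):
    n = len(X)
    U = np.random.dirichlet(np.ones(c), size=n)            # membership matrix (n, c)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)            # standard FCM update
    return centers, U

features = np.random.rand(200, 8)                           # stand-in image features
centers, U = fuzzy_cmeans(features)
query = np.random.rand(8)
best = np.argmin(np.linalg.norm(centers - query, axis=1))
candidates = np.where(U.argmax(axis=1) == best)[0]          # reduced search space
print(len(candidates), "of", len(features), "images searched")
```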
... Show MoreSteganography is a mean of hiding information within a more obvious form of
communication. It exploits the use of host data to hide a piece of information in such a way
that it is imperceptible to human observer. The major goals of effective Steganography are
High Embedding Capacity, Imperceptibility and Robustness. This paper introduces a scheme
for hiding secret images that could be as much as 25% of the host image data. The proposed
algorithm uses orthogonal discrete cosine transform for host image. A scaling factor (a) in
frequency domain controls the quality of the stego images. Experimented results of secret
image recovery after applying JPEG coding to the stego-images are included.
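A minimal sketch of an additive DCT-domain embedding rule, not necessarily the paper's exact scheme, showing how the scaling factor a trades stego quality against secret recovery:

```python
# Minimal sketch (assumed embedding rule): add a scaled secret image to the DCT
# coefficients of the host; the factor `a` controls stego quality vs. robustness.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    D = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    D[0, :] = np.sqrt(1 / n)
    return D

def embed(host, secret, a=0.05):
    D = dct_matrix(host.shape[0])
    C = D @ host @ D.T                 # forward 2-D DCT of the host
    C += a * secret                    # additive embedding in the frequency domain
    return D.T @ C @ D                 # inverse DCT -> stego image

def extract(stego, host, a=0.05):
    D = dct_matrix(host.shape[0])
    return ((D @ stego @ D.T) - (D @ host @ D.T)) / a

host = np.random.rand(64, 64) * 255    # stand-in host and secret images
secret = np.random.rand(64, 64) * 64
stego = embed(host, secret)
print(np.allclose(extract(stego, host), secret))
```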
The aim of robot path planning is to search for a safe path for the mobile robot. Although various path planning algorithms exist for mobile robots, only a few are optimized. Among the optimized algorithms is Particle Swarm Optimization (PSO), which finds the optimal path with respect to avoiding obstacles while ensuring safety. In PSO, sub-optimal solutions occur frequently while solving the optimal path problem. This paper proposes an enhanced PSO algorithm with an improved particle velocity update. Experimental results show that the proposed Enhanced PSO performs better than the standard PSO in terms of solution quality. Hence, a mobile robot implementing the proposed algorithm opera
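For reference, the following sketch shows the standard PSO velocity and position update that the enhanced algorithm builds on; the improved velocity rule and the obstacle-avoidance cost are not reproduced, so the fitness function here is only a toy stand-in.

```python
# Minimal sketch of standard PSO (not the paper's enhanced velocity rule).
import random

def pso(fitness, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull to pbest + social pull to gbest.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if fitness(X[i]) < fitness(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=fitness)
    return gbest

# Toy cost: distance of a single waypoint to the goal at (5, 5).
print(pso(lambda p: (p[0] - 5) ** 2 + (p[1] - 5) ** 2))
```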
The aim of this paper is to construct an analytical mathematical model for stream cipher cryptosystems so that they can be cryptanalysed using cryptanalysis tools based on a plaintext attack (or part of one) or a ciphertext-only attack, choosing the Brüer generator as a case study of a nonlinear stream cipher system.
The construction process includes building the linear (or non-linear) equation system of the attacked nonlinear generator. Attacking the stream cipher cryptosystem means solving this equation system, which in turn means finding the initial key values of each combined LFSR.
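The sketch below covers only the linear case: it builds the GF(2) equation system relating known keystream bits to a single LFSR's initial state and solves it by Gaussian elimination; the nonlinear combining used by the Brüer generator would enlarge this system.

```python
# Minimal sketch of the linear case only: recover one LFSR's initial state from
# known keystream bits via a GF(2) linear system.
def lfsr_state_matrix(taps, n, steps):
    """Row t expresses keystream bit t as a GF(2) combination of the initial state."""
    state = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    rows = []
    for _ in range(steps):
        rows.append(state[0][:])                                 # output = leftmost cell
        feedback = [sum(state[t][j] for t in taps) % 2 for j in range(n)]
        state = state[1:] + [feedback]
    return rows

def solve_gf2(A, b):
    """Gaussian elimination over GF(2); assumes the system has full column rank."""
    n = len(A[0])
    rows = [row[:] + [bit] for row, bit in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[col])]
    return [rows[i][n] for i in range(n)]

taps, init = [0, 2], [1, 0, 1]                                   # toy 3-bit LFSR
A = lfsr_state_matrix(taps, 3, 6)
keystream = [sum(a * s for a, s in zip(row, init)) % 2 for row in A]
print(solve_gf2(A, keystream), "recovers", init)
```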
The paper aims to propose the Teaching Learning based Optimization (TLBO) algorithm to solve the 3-D packing problem in containers. The objective, which can be presented in a mathematical model, is optimizing the space usage in a container. Besides the interaction between students and teacher, this algorithm also models the learning process between students in the classroom, and it does not need any control parameters. Thus, TLBO uses the teacher phase and the student phase as its main updating processes to find the best solution. More precisely, to validate the algorithm's effectiveness, it was implemented on three sample cases. There was small data which had 5 size-types of items with 12 units, medium data which had 10 size-types of items w
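A minimal sketch of the two TLBO update phases on a toy continuous objective; the packing-specific solution encoding and container constraints are omitted.

```python
# Minimal sketch of TLBO's teacher and learner phases (toy objective, not the
# paper's 3-D packing model).
import random

def tlbo(fitness, dim=3, learners=10, iters=50, lo=0.0, hi=10.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(learners)]
    for _ in range(iters):
        # Teacher phase: move learners toward the best learner, away from the mean.
        teacher = min(X, key=fitness)
        mean = [sum(x[d] for x in X) / learners for d in range(dim)]
        for i in range(learners):
            tf = random.choice((1, 2))                      # teaching factor
            cand = [X[i][d] + random.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if fitness(cand) < fitness(X[i]):
                X[i] = cand
        # Learner phase: each learner moves toward a better random classmate.
        for i in range(learners):
            j = random.randrange(learners)
            if j == i:
                continue
            sign = 1 if fitness(X[j]) < fitness(X[i]) else -1
            cand = [X[i][d] + sign * random.random() * (X[j][d] - X[i][d])
                    for d in range(dim)]
            if fitness(cand) < fitness(X[i]):
                X[i] = cand
    return min(X, key=fitness)

# Toy objective standing in for wasted container space.
print(tlbo(lambda x: sum((v - 4.0) ** 2 for v in x)))
```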
In this study, an analysis of re-using the JPEG lossy algorithm on the quality of satellite imagery is presented. The standard JPEG compression algorithm is adopted and applied using the IrfanView program, with JPEG quality factors in the range 50-100. Depending on the calculated variation in satellite image quality, the maximum number of re-uses of the JPEG lossy algorithm adopted in this study is 50. The degradation of image quality with respect to the JPEG quality factor and the number of times the JPEG algorithm is re-used to store the satellite image is analysed.
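The re-use experiment can be reproduced roughly as follows, assuming a Pillow-based re-encoding loop in place of IrfanView and PSNR as the quality measure:

```python
# Minimal sketch of the re-use experiment: repeatedly re-encode the same image
# as JPEG at a fixed quality and track PSNR against the original.
import io
import numpy as np
from PIL import Image

def recompress_psnr(original, quality=75, times=50):
    img = Image.fromarray(original)
    psnrs = []
    for _ in range(times):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)     # one more lossy pass
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
        mse = np.mean((np.asarray(img, float) - original.astype(float)) ** 2)
        psnrs.append(10 * np.log10(255 ** 2 / mse))
    return psnrs

# Random data stands in for a satellite image tile.
original = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
curve = recompress_psnr(original)
print(f"PSNR after 1 pass: {curve[0]:.1f} dB, after 50 passes: {curve[-1]:.1f} dB")
```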
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content, and extracting edge information is the essential process in most pictorial pattern recognition problems. A new edge detection technique for detecting boundaries is introduced in this research.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The concentration is devoted to the Block-Truncation coding technique and the Discrete Cosine Transform (DCT) coding technique. In order to reduce the volume of pictorial data which one may need to store or transmit,
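As an illustration of Block-Truncation coding, the sketch below encodes one 4x4 block by its mean, standard deviation and an above-mean bitmap, and reconstructs the two moment-preserving levels:

```python
# Minimal sketch of Block-Truncation Coding for a single 4x4 block.
import numpy as np

def btc_encode(block):
    mean, std = block.mean(), block.std()
    bitmap = block >= mean                 # 1 bit per pixel: above or below the mean
    return mean, std, bitmap

def btc_decode(mean, std, bitmap):
    q = bitmap.sum()                       # number of pixels above the mean
    m = bitmap.size
    if q in (0, m):
        return np.full(bitmap.shape, mean)
    a = mean - std * np.sqrt(q / (m - q))  # reconstruction level for 0-bits
    b = mean + std * np.sqrt((m - q) / q)  # reconstruction level for 1-bits
    return np.where(bitmap, b, a)

block = np.random.randint(0, 256, (4, 4)).astype(float)
print(np.round(btc_decode(*btc_encode(block))))
```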
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing image content. Data may be compressed by reducing the redundancy in the original data, but this introduces errors into the data. In this paper, image compression is based on a new method created for image compression called the Five Modulus Method (FMM). The new method consists of converting each pixel value in a (4×4, 8×8, 16×16) block into a multiple of 5 for each of the R, G and B arrays. After that, the new values can be divided by 5 to obtain values that are 6 bits long for each pixel, which takes less storage space than the original 8-bit value.
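A minimal sketch of the FMM rounding step on a single channel, assuming nearest-multiple-of-5 rounding; the block partitioning and bit packing are omitted:

```python
# Minimal sketch of the FMM idea on one channel: round each value to the nearest
# multiple of 5, then store value/5, which fits in 6 bits (0..51) instead of 8.
import numpy as np

def fmm_encode(channel):
    rounded = 5 * np.round(channel / 5.0)          # nearest multiple of 5
    return (rounded / 5).astype(np.uint8)          # codes 0..51 -> 6 bits each

def fmm_decode(codes):
    return (codes.astype(np.uint16) * 5).clip(0, 255).astype(np.uint8)

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
codes = fmm_encode(block)
restored = fmm_decode(codes)
print(np.abs(block.astype(int) - restored.astype(int)).max())  # rounding error <= 2
```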