A great many systems dealing with image processing are in daily use and under development. Such systems depend on a few basic operations: detecting regions of interest, describing their properties, and matching those regions. These operations play a significant role in the decision making required by whatever subsequent operations the assigned task involves. Various algorithms have been introduced over the years to accomplish these tasks; one of the most popular is the Scale Invariant Feature Transform (SIFT). The strength of this algorithm lies in its detection and property-description performance, which comes from the fact that it operates on a large number of key-points; its only drawback is that it is rather time consuming. In the suggested approach, the system deploys SIFT for its basic tasks of description and matching while focusing on minimizing the number of key-points, which is done by applying the Fast Approximate Nearest Neighbor algorithm; this reduces redundant matching and speeds up the process. The proposed application has been evaluated in terms of two criteria, time and accuracy, and achieved accuracy of up to 100% while speeding up the processes of matching and description.
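The abstract does not include an implementation, but a minimal sketch of the idea, SIFT detection and description combined with FLANN-based approximate nearest-neighbour matching in OpenCV, is shown below; the image paths and the 0.7 ratio threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: SIFT key-points + FLANN approximate matching (OpenCV).
# Image paths and the 0.7 ratio threshold are illustrative assumptions.
import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # key-points + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters: KD-tree index suited to SIFT's float descriptors.
index_params = dict(algorithm=1, trees=5)        # 1 = FLANN_INDEX_KDTREE
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# k-NN matching followed by Lowe's ratio test to drop ambiguous matches.
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches out of {len(matches)}")
```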
The current research aimed to analyze the importance, correlation and effect of the independent variables, represented by marketing variables, on the dependent variable, represented by the local brand, taking ENIEM as the model for the study since it represents a sensitive sector for the Algerian consumer. The results showed that the Algerian consumer holds a positive image of the ENIEM brand with respect to the marketing variables, which have acquired considerable importance for this consumer. The results also revealed a statistically significant correlation between the marketing variables and a favourable perception of the ENIEM brand, as well as a statistically significant effect of each of these variables on the consumer's perception of the local brand.
Compression of color images has become necessary for transmission and for storage in databases, since color gives objects a pleasing and natural appearance. Three composite color-image compression techniques are therefore implemented to achieve high compression, no loss of the original image, better performance and good image quality: the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform.
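The abstract does not give implementation details; as a minimal sketch of the underlying operation, a three-level wavelet decomposition of each color channel with PyWavelets, followed by naive coefficient thresholding, could look like the following. The wavelet name, threshold and image path are illustrative assumptions, not the paper's composite transforms.

```python
# Minimal sketch: 3-level wavelet decomposition of each color channel
# with PyWavelets, followed by crude coefficient thresholding.
# The wavelet ('haar'), threshold and image path are illustrative assumptions.
import numpy as np
import pywt
import cv2

img = cv2.imread("color_image.png")            # BGR, shape (H, W, 3)
channels = []
for c in range(3):
    coeffs = pywt.wavedec2(img[:, :, c].astype(float), "haar", level=3)
    thresh = 20.0                              # zero out small detail coefficients
    coeffs = [coeffs[0]] + [
        tuple(np.where(np.abs(d) < thresh, 0.0, d) for d in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(coeffs, "haar")[:img.shape[0], :img.shape[1]]
    channels.append(rec)

reconstructed = np.clip(np.stack(channels, axis=-1), 0, 255).astype(np.uint8)
```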
Content-based image retrieval (CBIR) is a technique for retrieving images from an image database. However, CBIR suffers from low accuracy when retrieving images from an extensive database and does not ensure the privacy of images. This paper addresses the accuracy issue using deep learning, specifically a convolutional neural network (CNN), and provides the necessary privacy for images using the fully homomorphic encryption scheme of Cheon, Kim, Kim, and Song (CKKS). To achieve these aims, a system named RCNN_CKKS is proposed, consisting of two parts. The first part (offline processing) extracts high-level features automatically from a flatten layer of a CNN and then stores these features in a database.
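The RCNN_CKKS architecture itself is not listed in the abstract; a minimal sketch of the offline step, extracting a flattened CNN feature vector and encrypting it with CKKS via TenSEAL, is shown below. The pretrained ResNet-18 backbone and the CKKS parameters are illustrative assumptions, not the paper's design.

```python
# Minimal sketch: flattened CNN features encrypted with CKKS (TenSEAL).
# The ResNet-18 backbone and CKKS parameters are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
import tenseal as ts

# 1) Feature extraction from the flattened layer just before the classifier.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # expose the flattened 512-D features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
img = preprocess(Image.open("query.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = backbone(img).squeeze(0)    # shape (512,)

# 2) CKKS encryption of the feature vector before storing it.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()
enc_features = ts.ckks_vector(ctx, features.tolist())
serialized = enc_features.serialize()      # bytes suitable for a database record
```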
The active contour model has been extensively used in the segmentation and analysis of images. It has been effectively employed to identify contours in object recognition, computer graphics and vision, and biomedical image processing, for both ordinary images and medical images such as Magnetic Resonance Imaging (MRI), X-ray and ultrasound images. Kass, Witkin and Terzopoulos introduced this energy-minimizing "Active Contour Model" (also known as the snake) in 1987. A snake is a curve defined within the image domain that can be set in motion by external forces derived from the image data and internal forces arising from the curve itself. The present study
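As a minimal sketch of snake evolution (not this paper's method), scikit-image's active_contour can be driven by internal elasticity/rigidity terms and an external image force; the circular initial contour and the alpha/beta/gamma weights below are illustrative assumptions.

```python
# Minimal sketch: snake (active contour) evolution with scikit-image.
# The circular initial contour and the alpha/beta/gamma weights are
# illustrative assumptions.
import numpy as np
from skimage import data, filters
from skimage.color import rgb2gray
from skimage.segmentation import active_contour

img = rgb2gray(data.astronaut())
smoothed = filters.gaussian(img, sigma=3)     # external energy from a smoothed image

# Initial contour: a circle around the region of interest (row, col coordinates).
s = np.linspace(0, 2 * np.pi, 400)
init = np.array([100 + 100 * np.sin(s), 220 + 100 * np.cos(s)]).T

snake = active_contour(smoothed, init,
                       alpha=0.015,   # internal energy: elasticity (continuity)
                       beta=10,       # internal energy: rigidity (curvature)
                       gamma=0.001)   # step size of the iterative update
print(snake.shape)                    # (400, 2) final contour coordinates
```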
With cyber-attacks continuing to grow, stronger methods are needed to protect images. This paper presents DGEN, a Dynamic Generative Encryption Network that combines Generative Adversarial Networks with a context-adaptive key system. Unlike a fixed scheme such as AES, the method can potentially adjust itself as new threats appear, and it is designed to resist brute-force, statistical and quantum attacks. The design injects randomness, uses learning, and generates keys that depend on each individual image, aiming for strong security and flexibility at low computational cost. Experiments were run on several public image datasets, and the results show DGEN outperforming AES, chaos-based schemes and other GAN-based approaches, with entropy reaching 7.99 bits per pixel.
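The abstract reports entropy as its headline metric; a minimal sketch of how the Shannon entropy of an 8-bit cipher-image is typically computed (the file name is an illustrative assumption) is:

```python
# Minimal sketch: Shannon entropy of an 8-bit grayscale cipher-image.
# A value close to 8 bits/pixel indicates a near-uniform histogram.
import numpy as np
import cv2

cipher = cv2.imread("cipher_image.png", cv2.IMREAD_GRAYSCALE)  # path is illustrative
hist = np.bincount(cipher.ravel(), minlength=256).astype(float)
p = hist / hist.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"entropy = {entropy:.4f} bits/pixel")
```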
Localization is an essential requirement in wireless sensor networks (WSNs) and relies on several types of measurements. This paper focuses on positioning in 3-D space using time-of-arrival (TOA) based distance measurements between the target node and a number of anchor nodes. Centralized localization is assumed, with RF, acoustic or UWB signals used for the distance measurements. The problem is treated using iterative gradient descent (GD), and a GD-based algorithm for localizing moving sensors in a WSN is proposed. To localize a node in 3-D space at least four anchors are needed; in this work, however, five anchors are used to obtain better accuracy. In GD localization of a moving sensor, the algorithm
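As a minimal sketch of the idea (not the paper's algorithm), gradient descent over the squared range-residual cost with five anchors in 3-D can be written as follows; the anchor coordinates, step size and noise level are illustrative assumptions.

```python
# Minimal sketch: 3-D TOA localization by gradient descent over
# f(x) = sum_i (||x - a_i|| - d_i)^2 with five anchors.
# Anchor positions, step size and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                    [0, 0, 10], [10, 10, 10]], dtype=float)
target = np.array([3.0, 4.0, 5.0])                 # unknown position (ground truth)
d = np.linalg.norm(anchors - target, axis=1) + 0.05 * rng.standard_normal(5)

x = np.array([5.0, 5.0, 5.0])                      # initial guess
step = 0.05
for _ in range(500):
    diff = x - anchors                             # (5, 3)
    ranges = np.linalg.norm(diff, axis=1)          # current distances to anchors
    grad = 2 * ((ranges - d) / ranges) @ diff      # gradient of the cost
    x -= step * grad

print("estimate:", np.round(x, 3), " error:", np.linalg.norm(x - target))
```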
Metaheuristics of the swarm intelligence (SI) class have proven efficient and have become popular methods for solving different optimization problems. Based on their use of memory, metaheuristics can be classified into algorithms with memory and memory-less algorithms. The absence of memory causes the information gained in previous iterations to be lost, so such metaheuristics tend to drift away from promising areas of the search space and settle on non-optimal solutions. This paper reviews memory usage and its effect on the performance of the main SI-based metaheuristics. The investigation covers SI metaheuristics, memory usage versus memory-less designs, and memory characteristics
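A concrete illustration of "memory" in an SI metaheuristic (not taken from the paper) is the personal-best and global-best bookkeeping in particle swarm optimization; a minimal sketch on the sphere function, with standard but assumed coefficient values, follows.

```python
# Minimal sketch: memory in PSO = personal-best (pbest) and global-best (gbest)
# archives carried across iterations. Coefficients are standard assumed values.
import numpy as np

def sphere(x):                      # toy objective to minimize
    return np.sum(x * x, axis=-1)

rng = np.random.default_rng(1)
n, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), sphere(pos)        # memory of each particle
gbest = pbest[np.argmin(pbest_val)].copy()        # memory of the whole swarm

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = sphere(pos)
    improved = val < pbest_val                    # update the memory
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", pbest_val.min())
```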
The combination of wavelet theory and neural networks has led to the development of wavelet networks. Wavelet networks are feed-forward neural networks that use wavelets as activation functions, and they have been applied to classification and identification problems with some success.
In this work, we propose a fuzzy wavenet network (FWN) that learns by the common back-propagation algorithm to classify medical images. The library of medical images is analyzed first. Second, two experimental tables of rules provide an excellent opportunity to test the ability of the fuzzy wavenet network, owing to the high level of information variability often experienced with this type of image.
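A wavelet network as described here, a feed-forward network whose hidden units apply a wavelet as activation, can be sketched in a few lines; the architecture, wavelet choice (Mexican hat) and sizes below are illustrative assumptions rather than the FWN of this work.

```python
# Minimal sketch: a feed-forward "wavelet network" whose hidden units apply a
# Mexican-hat wavelet psi(u) = (1 - u^2) * exp(-u^2 / 2) with a learned
# translation and dilation per unit. Architecture and sizes are assumptions.
import torch
import torch.nn as nn

class WaveletLayer(nn.Module):
    def __init__(self, in_dim, n_units):
        super().__init__()
        self.linear = nn.Linear(in_dim, n_units)       # projection before the wavelet
        self.translation = nn.Parameter(torch.zeros(n_units))
        self.log_dilation = nn.Parameter(torch.zeros(n_units))

    def forward(self, x):
        u = (self.linear(x) - self.translation) / torch.exp(self.log_dilation)
        return (1 - u ** 2) * torch.exp(-0.5 * u ** 2)  # Mexican-hat wavelet

model = nn.Sequential(WaveletLayer(in_dim=64, n_units=32), nn.Linear(32, 4))
x = torch.randn(8, 64)                                  # 8 feature vectors
logits = model(x)                                       # 4-class scores per sample
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
loss.backward()                                         # trained by back-propagation
```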
The optimization of artificial gas lift techniques plays a crucial role in advancing oil field development. This study investigates the impact of gas lift design and optimization on production outcomes in the Mishrif formation of the Halfaya oil field. A comprehensive production-network nodal analysis model was formulated using a PIPESIM Optimizer-based Genetic Algorithm and calibrated against field data collected from a network of seven wells: three directional wells currently on gas lift and four naturally producing vertical wells. To increase productivity and optimize network performance, a novel gas lift design strategy was proposed. The optimization of
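The study relies on PIPESIM's built-in optimizer, so no code accompanies it; purely as an illustration of the genetic-algorithm idea, a toy lift-gas allocation GA under a total-injection constraint might look like the sketch below. The well performance curves, bounds and GA settings are all invented for illustration and are not the field model from the study.

```python
# Minimal sketch: toy genetic algorithm allocating lift gas to 3 wells under a
# total-gas constraint. Performance curves, bounds and GA settings are invented.
import numpy as np

rng = np.random.default_rng(2)
n_wells, pop_size, gens, total_gas = 3, 40, 100, 6.0   # gas budget (assumed units)

def production(q):                       # diminishing-returns lift-performance proxy
    a = np.array([800.0, 650.0, 500.0])  # per-well plateau rates (assumed)
    b = np.array([1.5, 1.0, 0.8])
    return np.sum(a * (1 - np.exp(-b * q)), axis=-1)

def repair(pop):                         # scale each candidate onto the gas budget
    return pop * (total_gas / pop.sum(axis=1, keepdims=True))

pop = repair(rng.uniform(0.1, total_gas, (pop_size, n_wells)))
for _ in range(gens):
    fitness = production(pop)
    parents = pop[np.argsort(fitness)[-pop_size // 2:]]          # truncation selection
    mates = parents[rng.integers(0, len(parents), (pop_size, 2))]
    alpha = rng.random((pop_size, n_wells))
    children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]    # blend crossover
    children += rng.normal(0, 0.1, children.shape)                # Gaussian mutation
    pop = repair(np.clip(children, 0.01, total_gas))

best = pop[np.argmax(production(pop))]
print("gas split:", np.round(best, 2), " total oil rate:", round(production(best), 1))
```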
This paper presents a hybrid approach to the null-values problem that combines rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model: a large set of complete data, called the learning data, is used to derive decision rule sets that are then applied to the incomplete data. The intelligent swarm algorithm is used for feature selection, with the bees algorithm acting as the heuristic search procedure and rough set theory providing the evaluation function. Another feature selection algorithm, ID3, is also presented; it works as a statistical algorithm rather than an intelligent one. The two approaches are compared in terms of their performance on null-value estimation.
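As a minimal sketch of the combination described here (not the paper's implementation), the rough-set dependency degree gamma(B, D) = |POS_B(D)| / |U| can serve as the fitness of a feature subset explored by a small bees-algorithm-style loop; the toy dataset and search settings below are invented for illustration.

```python
# Minimal sketch: rough-set dependency degree gamma(B, D) as the fitness of a
# feature subset, searched with a tiny bees-algorithm-style loop.
# The dataset and search settings are invented for illustration only.
import numpy as np
from itertools import groupby

def dependency(X, y, subset):
    """gamma(B, D) = |POS_B(D)| / |U| for the attribute subset B."""
    if not subset:
        return 0.0
    rows = [tuple(r) for r in X[:, subset]]
    order = sorted(range(len(rows)), key=lambda i: rows[i])
    pos = 0
    for _, grp in groupby(order, key=lambda i: rows[i]):
        idx = list(grp)
        if len(set(y[idx])) == 1:        # equivalence class consistent on the decision
            pos += len(idx)
    return pos / len(y)

rng = np.random.default_rng(3)
X = rng.integers(0, 3, (200, 8))                   # 8 discrete attributes (toy data)
y = (X[:, 1] + X[:, 4]) % 3                        # decision depends on attrs 1 and 4

n_sites, n_iter = 5, 30
best_subset, best_fit = list(range(8)), dependency(X, y, list(range(8)))
sites = [sorted(rng.choice(8, size=4, replace=False)) for _ in range(n_sites)]
for _ in range(n_iter):
    for s in range(n_sites):                       # neighbourhood (local) search
        cand = sorted(set(sites[s]) ^ {int(rng.integers(0, 8))})  # flip one attribute
        if cand and dependency(X, y, cand) >= dependency(X, y, sites[s]):
            sites[s] = cand
    for s in sites:                                # keep the best, smallest subset
        fit = dependency(X, y, s)
        if fit > best_fit or (fit == best_fit and len(s) < len(best_subset)):
            best_subset, best_fit = s, fit

print("selected attributes:", best_subset, " gamma:", round(best_fit, 3))
```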