Background/Objectives: This research aims to develop a modified image representation framework for Content-Based Image Retrieval (CBIR) based on gray-scale input images, Zernike Moments (ZMs), the Local Binary Pattern (LBP), the Y color space, the Slantlet Transform (SLT), and the Discrete Wavelet Transform (DWT). Methods/Statistical analysis: This study surveyed and analysed three standard datasets: WANG V1.0, WANG V2.0, and Caltech 101. Caltech 101 contains images of objects belonging to 101 classes, with approximately 40-800 images per category. The proposed infrastructure describes and operationalizes the CBIR system through an automated feature extraction scheme based on a CNN architecture. Findings: The results obtained by the investigated CBIR system, compared against the benchmarked results, clearly indicate that the proposed technique achieved the best performance, with an overall accuracy of 88.29% across the datasets adopted in the experiments. These results indicate that the proposed method was effective on all datasets. Improvements/Applications: The study revealed that multiple image representations were redundant for extraction accuracy, and the findings indicate that automatically extracted features can reliably generate accurate outcomes.
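The LBP component of the feature pipeline above can be illustrated with a minimal sketch. This is a generic 8-neighbour LBP in NumPy under our own assumptions (the function name `lbp8` is ours), not the authors' implementation:

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour Local Binary Pattern over a gray-scale image."""
    c = img[1:-1, 1:-1]                      # centre pixels (image border skipped)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the same-size neighbourhood
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << bit)  # set bit if neighbour >= centre
    return code

# A 256-bin histogram of the LBP codes serves as the texture descriptor.
img = np.full((5, 5), 7, dtype=np.uint8)     # toy flat patch
feat = np.bincount(lbp8(img).ravel(), minlength=256)
```

On a flat patch every neighbour equals its centre, so all codes come out as 255 and the histogram concentrates in one bin.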
Milling is a common machining operation used in the manufacturing of complex surfaces. Machining-induced residual stresses (RS) have a great impact on the performance of machined components and on surface quality in face milling operations under varying cutting parameters. The properties of engineering materials and structural components (specifically fatigue life, deformation, impact resistance, corrosion resistance, and brittle fracture) can all be significantly influenced by residual stresses. Accordingly, controlling the distribution of residual stresses is important to protect the workpiece and avoid failure. Most previous works inspected the material properties, tool parameters, or cutting parameters, but
Given the importance of conveyor systems in various industrial and service lines, it is highly desirable to make these systems operate as efficiently as possible. In this paper, the speed of a conveyor belt (which in our study is part of an integrated training robotic system) is controlled using one of the artificial intelligence methods, the Artificial Neural Network (ANN). A vision sensor is responsible for gathering information about the status of the conveyor belt and the parts on it; according to this information, an intelligent decision about the belt speed is taken by the ANN controller. The ANN controls the alteration in speed in a way that yields optimized energy efficiency through
Image compression plays an important role in reducing the size and storage of data while significantly increasing the speed of its transmission over the Internet. Image compression has been an important research topic for several decades; recently, deep learning has achieved great success in many areas of image processing, and its use in image compression is increasing gradually. Deep neural networks have also achieved great success in processing and compressing various images of different sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of the human eye
order to increase the level of security, as this system encrypts the secret image before sending it through the internet to the recipient (by the Blowfish method). The Blowfish method is known for its efficient security; nevertheless, its encryption time is long. In this research we apply a smoothing filter to the secret image, which decreases its size and consequently decreases the encryption and decryption time. After encryption, the secret image is hidden in another image, called the cover image, using one of two methods: "Two-LSB" or "Hiding most bits in blue pixels". Finally, we compare the results of the two methods to determine which one is better to use, according to the PSNR measure.
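A Two-LSB scheme of the kind named above can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions (two secret bits packed into the two least-significant bits of each cover byte; the function names are ours), not the authors' implementation:

```python
import numpy as np

def embed_2lsb(cover, secret_bits):
    """Hide secret_bits (0/1 values, even count) in the 2 LSBs of cover bytes."""
    flat = cover.ravel().copy()
    bits = np.asarray(secret_bits, dtype=np.uint8).reshape(-1, 2)
    vals = (bits[:, 0] << 1) | bits[:, 1]          # pack bit pairs into 2-bit values
    n = vals.size
    flat[:n] = (flat[:n] & 0b11111100) | vals      # overwrite the two LSBs only
    return flat.reshape(cover.shape)

def extract_2lsb(stego, n_bits):
    """Recover n_bits secret bits from the 2 LSBs of the stego image."""
    vals = stego.ravel()[:n_bits // 2] & 0b11
    return np.stack([(vals >> 1) & 1, vals & 1], axis=1).ravel()
```

Because only the two low bits of each byte change, each pixel value moves by at most 3, which is what keeps the PSNR of the stego image high.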
In this paper, we focus on designing a feed-forward neural network (FFNN) for solving Mixed Volterra–Fredholm Integral Equations (MVFIEs) of the second kind in two dimensions. In our method, we present a multi-layer model consisting of a hidden layer with five hidden units (neurons) and one linear output unit. The log-sigmoid transfer function is used as the activation of each hidden unit, and the Levenberg–Marquardt algorithm is used for training. A comparison between the results of numerical experiments and the analytic solutions of some examples has been carried out in order to demonstrate the efficiency and accuracy of our method.
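The forward pass of the network described above (two inputs, five log-sigmoid hidden units, one linear output) can be sketched as follows; this is a minimal NumPy sketch of the architecture only, with training (Levenberg–Marquardt) omitted, and the names are ours:

```python
import numpy as np

def logsig(x):
    """Log-sigmoid transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

def ffnn(xy, W1, b1, w2, b2):
    """2-input / 5-hidden / 1-linear-output network.

    xy: (n, 2) sample points; W1: (2, 5); b1: (5,); w2: (5,); b2: scalar.
    """
    h = logsig(xy @ W1 + b1)   # hidden layer, log-sigmoid activations
    return h @ w2 + b2         # single linear output unit
```

With all hidden weights and biases zero, every hidden unit outputs logsig(0) = 0.5, so the network output is 0.5 times the sum of the output weights plus the output bias, which gives a quick sanity check.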
In this paper, we have investigated some of the most recent energy-efficient routing protocols for wireless body area networks. This technology has seen advances in recent times, where wireless sensors are implanted in the human body to sense and measure body parameters such as temperature, heartbeat, and glucose level. These tiny wireless sensors gather body data and send it over a wireless network to the base station. The measurements are examined by a doctor or physician, and a suitable cure is suggested. All communication is carried out through routing protocols in a network environment. A routing protocol consumes energy while supporting non-stop communication
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have been previously achieved with simple metrics, requiring complex optimization, often with many unintuitive parameters that require careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm, with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is eff
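One Levenberg-Marquardt iteration of the kind used for the warp refinement above can be sketched in NumPy. This is a generic LM update on a residual vector, under our own assumptions (the function name and interface are ours), not the chapter's point-update formulation:

```python
import numpy as np

def lm_step(residual, jacobian, params, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam*I) d = -J^T r.

    lam = 0 reduces to a Gauss-Newton step; large lam approaches
    a small gradient-descent step, which stabilises poor iterates.
    """
    r = residual(params)
    J = jacobian(params)
    A = J.T @ J + lam * np.eye(params.size)
    d = np.linalg.solve(A, -J.T @ r)
    return params + d
```

For a linear least-squares residual the lam = 0 step lands on the exact minimiser in a single iteration, which makes the update easy to check.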