The consensus algorithm is the core mechanism of blockchain; it ensures data consistency among blockchain nodes. The PBFT (Practical Byzantine Fault Tolerance) consensus algorithm is widely used in consortium chains because it tolerates Byzantine faults. However, PBFT still suffers from random master-node selection and high communication complexity. This study proposes IBFT, an improved consensus algorithm based on node trust values and the BLS (Boneh-Lynn-Shacham) aggregate signature. In IBFT, multi-level indicators are used to calculate a trust value for each node, and a subset of nodes is selected on this basis to take part in network consensus. The master node is then chosen as the node with the highest trust value, and the BLS signature process is embedded in the information exchange between nodes. Consequently, communication complexity is reduced while node-to-node information exchange remains secure. Simulation results show that, compared with the PBFT algorithm, IBFT improves transaction throughput by 61% and reduces latency by 13%.
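As a rough illustration of the trust-value stage, the sketch below scores nodes by a weighted sum of multi-level indicators and picks the top-scoring node as master. The indicator names (uptime, valid_votes, latency_score) and the weights are illustrative assumptions, not the paper's exact formula.

```python
# Hypothetical sketch of IBFT-style trust scoring and master selection.
# Indicator names and weights are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    uptime: float          # fraction of rounds the node was responsive, 0..1
    valid_votes: float     # fraction of consensus votes judged correct, 0..1
    latency_score: float   # normalized responsiveness, 0..1 (higher is better)

def trust_value(n: Node, w=(0.4, 0.4, 0.2)) -> float:
    """Weighted sum of multi-level indicators (weights are assumptions)."""
    return w[0] * n.uptime + w[1] * n.valid_votes + w[2] * n.latency_score

def select_consensus_set(nodes, k):
    """Keep the k most trusted nodes; the top one acts as the master node."""
    ranked = sorted(nodes, key=trust_value, reverse=True)
    consensus_nodes = ranked[:k]
    return consensus_nodes[0], consensus_nodes

nodes = [
    Node("n1", 0.99, 0.97, 0.8),
    Node("n2", 0.90, 0.99, 0.9),
    Node("n3", 0.70, 0.80, 0.6),
    Node("n4", 0.95, 0.60, 0.7),
]
master, committee = select_consensus_set(nodes, k=3)
print(master.node_id, [n.node_id for n in committee])
```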
In many video and image processing applications, frames are partitioned into blocks, which are extracted and processed sequentially. In this paper, we propose a fast algorithm for calculating features of overlapping image blocks. We assume the features are projections of the block onto separable 2D basis functions (usually orthogonal polynomials), where we benefit from the symmetry with respect to the spatial variables. The main idea is to construct auxiliary matrices that virtually extend the original image, making it possible to avoid time-consuming computation in loops. These matrices can be pre-calculated, stored, and used repeatedly, since they are independent of the image itself. We validated the proposed algorithm experimentally.
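To make the separability idea concrete, the sketch below computes projections of every overlapping block onto a separable 2D basis: the vertical 1D projection of each horizontal band is computed once and reused by all blocks in that band. The orthonormalized-monomial basis is an assumption; any orthogonal polynomials work, and this does not reproduce the paper's auxiliary-matrix construction itself.

```python
import numpy as np

def basis_1d(block_size, order):
    """Illustrative 1D basis: monomials orthonormalized by QR (an assumption)."""
    x = np.linspace(-1, 1, block_size)
    V = np.vander(x, order, increasing=True)   # columns 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                     # orthonormal columns
    return Q                                   # shape (block_size, order)

def block_features(image, block_size, order):
    B = basis_1d(block_size, order)            # same basis in x and y
    H, W = image.shape
    n_rows, n_cols = H - block_size + 1, W - block_size + 1
    feats = np.empty((n_rows, n_cols, order, order))
    for i in range(n_rows):
        # Vertical projection of the whole band, computed once per row offset.
        band = B.T @ image[i:i + block_size, :]          # (order, W)
        for j in range(n_cols):
            # Only the horizontal projection remains per block.
            feats[i, j] = band[:, j:j + block_size] @ B  # (order, order)
    return feats

img = np.random.rand(32, 32)
f = block_features(img, block_size=8, order=3)
print(f.shape)  # (25, 25, 3, 3)
```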
In this study, a genetic algorithm (GA) is used to detect damage in a curved beam model. The stiffness and mass matrices of the curved beam elements are formulated using Hamilton's principle. Each node of the curved beam element possesses seven degrees of freedom, including the warping degree of freedom. The curved beam element is derived based on Kang and Yoo's thin-walled curved beam theory. Damage identification is formulated as an optimization problem, and binary and continuous genetic algorithms (BGA, CGA) are used to detect and locate the damage using two objective functions (the change in natural frequencies and the Modal Assurance Criterion, MAC). The results show that the objective function based on the change in natural frequencies i
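A toy sketch of the binary-GA search with the frequency-change objective is given below. The function natural_frequencies() is a hypothetical surrogate standing in for the curved-beam finite-element model; in the study the frequencies would come from the Hamilton-principle stiffness and mass matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ELEM, POP, GENS = 20, 40, 150
S = 0.02 * rng.random((5, N_ELEM))           # assumed per-mode damage sensitivities

def natural_frequencies(damage):
    """Surrogate model: damage in an element lowers each mode's frequency."""
    base = np.linspace(10.0, 50.0, 5)        # "healthy" frequencies (toy values)
    return base * (1.0 - S @ damage)

true_damage = np.zeros(N_ELEM)
true_damage[[4, 11]] = 1.0                   # two damaged elements (toy case)
f_measured = natural_frequencies(true_damage)

def fitness(bits):
    """Negative squared error between predicted and measured frequencies."""
    return -np.sum((natural_frequencies(bits) - f_measured) ** 2)

pop = rng.integers(0, 2, size=(POP, N_ELEM)).astype(float)
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    a, b = rng.integers(0, POP, (2, POP))    # binary tournament selection
    parents = np.where((scores[a] > scores[b])[:, None], pop[a], pop[b])
    children = parents.copy()
    for i in range(0, POP - 1, 2):           # single-point crossover
        c = rng.integers(1, N_ELEM)
        children[i, c:] = parents[i + 1, c:]
        children[i + 1, c:] = parents[i, c:]
    flip = rng.random(children.shape) < 0.01 # bit-flip mutation
    children[flip] = 1.0 - children[flip]
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("estimated damaged elements:", np.flatnonzero(best))
```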
In this study, an analysis of re-using the JPEG lossy algorithm on the quality of satellite imagery is presented. The standard JPEG compression algorithm is adopted and applied using the IrfanView program, with JPEG quality factors in the range 50-100. Based on the measured variation in satellite image quality, the JPEG lossy algorithm is re-applied up to 50 times in this study. The degradation of image quality as a function of the JPEG quality factor and of the number of times the JPEG algorithm is re-used to store the satellite image is analyzed.
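The re-compression experiment can be reproduced programmatically rather than through IrfanView; the sketch below repeatedly encodes and decodes an image as JPEG at a fixed quality factor and tracks PSNR against the original. The filename "satellite.png" and quality 75 are placeholder assumptions within the study's 50-100 range.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("satellite.png").convert("RGB")  # placeholder file
ref = np.asarray(original)

img = original
for cycle in range(1, 51):                   # up to 50 re-uses, as in the study
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75) # quality factor in the 50-100 range
    buf.seek(0)
    img = Image.open(buf).convert("RGB")
    print(cycle, round(psnr(ref, np.asarray(img)), 2))
```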
In this paper, an algorithm for binary codebook design is used in the vector quantization technique to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. Vector quantization (VQ) is used to compress the bitmap, which is the output of the first stage (AMBTC). The binary codebook is generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress the bitmaps of all these images. The bitmap of an image is compressed with this codebook according to the average bitmap replacement error (ABPRE) criterion. The proposed method is suitable for reducing bit rates.
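For reference, a minimal sketch of the AMBTC stage is shown below: each block is reduced to a high mean, a low mean, and a binary bitmap, where the bitmap is the part that VQ with the binary codebook would further compress. The 4x4 block size is a common convention assumed here.

```python
import numpy as np

def ambtc_block(block):
    """AMBTC encoding of one block: (high mean, low mean, bitmap)."""
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return hi, lo, bitmap

def ambtc_reconstruct(hi, lo, bitmap):
    """Decoder: high mean where the bitmap is 1, low mean where it is 0."""
    return np.where(bitmap, hi, lo)

img = np.random.randint(0, 256, (8, 8)).astype(float)  # stand-in image
out = np.empty_like(img)
for i in range(0, 8, 4):
    for j in range(0, 8, 4):
        hi, lo, bm = ambtc_block(img[i:i + 4, j:j + 4])
        out[i:i + 4, j:j + 4] = ambtc_reconstruct(hi, lo, bm)
print("mean absolute error:", np.abs(img - out).mean())
```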
In the lifetime process of some systems, the data often cannot be attributed to a single population; rather, they represent several subpopulations. In such a case, a single known distribution cannot be used to model the data. Instead, a mixture of distributions is used to model the data and classify them into several subgroups. A mixture of Rayleigh distributions is well suited to lifetime processes. This paper aims to infer the model parameters with the expectation-maximization (EM) algorithm through the maximum likelihood function. The technique is applied to simulated data following several scenarios. The accuracy of estimation is examined by the average mean square error (AMSE) and the average classification success rate (ACSR).
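A minimal sketch of EM for a two-component Rayleigh mixture is shown below, assuming the pdf f(x; s) = (x / s^2) exp(-x^2 / (2 s^2)); the M-step for s^2 is then the responsibility-weighted mean of x^2 / 2. The component count, sample sizes, and initial guesses are illustrative, not the paper's scenarios.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_pdf(x, s):
    """Rayleigh density with scale s."""
    return (x / s**2) * np.exp(-x**2 / (2 * s**2))

# Simulated data from two subpopulations (one illustrative scenario).
x = np.concatenate([rng.rayleigh(1.0, 400), rng.rayleigh(3.0, 600)])

pi = np.array([0.5, 0.5])                       # initial mixing weights
s = np.array([0.5, 2.0])                        # initial scale guesses
for _ in range(200):
    # E-step: posterior responsibilities of each component for each point.
    dens = np.stack([p * rayleigh_pdf(x, si) for p, si in zip(pi, s)])
    r = dens / dens.sum(axis=0)
    # M-step: update weights and scales from the weighted likelihood.
    pi = r.mean(axis=1)
    s = np.sqrt((r * x**2).sum(axis=1) / (2 * r.sum(axis=1)))

labels = r.argmax(axis=0)   # classify each point into a subgroup
print("weights:", pi.round(3), "scales:", s.round(3))
```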
Background and Aim: Due to the rapid growth of data communication and multimedia system applications, security has become a critical issue in the communication and storage of images. This study aims to improve encryption and decryption for various types of images by decreasing time consumption and strengthening security. Methodology: An algorithm is proposed for encrypting images based on the Carlisle Adams and Stafford Tavares (CAST) block cipher algorithm with 3D and 2D logistic maps. A chaotic function that increases the randomness in the encrypted data and images, thereby breaking the relation sequence through the encryption procedure, is introduced. The time is decreased by using three secure and private S-boxes rather than using si
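To illustrate the chaos-based diffusion idea only, the sketch below uses a simplified 1D logistic map x_{n+1} = r x_n (1 - x_n) to generate a pseudo-random byte stream that is XORed with the image. This is an assumption-laden simplification: the paper combines CAST with 3D and 2D logistic maps, which are not reproduced here, and the seed x0 and parameter r are arbitrary.

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.61803, r=3.99):
    """Byte stream from iterating the 1D logistic map (illustrative only)."""
    x, out = x0, np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

img = np.random.randint(0, 256, (16, 16), dtype=np.uint8)  # stand-in image
ks = logistic_keystream(img.size).reshape(img.shape)
cipher = img ^ ks                  # chaotic diffusion step
plain = cipher ^ ks                # XOR with the same keystream decrypts
assert np.array_equal(plain, img)
```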
The aim of this paper is to present a method for solving high-order ordinary differential equations with two-point boundary conditions. We propose a semi-analytic technique using two-point osculatory interpolation to construct a polynomial solution. The original problem is treated using two-point osculatory interpolation, fitting equal numbers of derivatives at the endpoints of the interval [0, 1]. Many examples are presented to demonstrate the applicability, accuracy, and efficiency of the method by comparison with conventional methods.
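The interpolation building block can be sketched directly: given prescribed values and the first m derivatives at both endpoints of [0, 1], the matching polynomial of degree 2m - 1 follows from a small linear system in monomial coefficients. This is a generic two-point osculatory (Hermite-type) construction, not the paper's full semi-analytic solver.

```python
import numpy as np
from math import factorial

def osculatory_poly(left, right):
    """left, right: lists [f, f', ..., f^(m-1)] prescribed at x=0 and x=1.
    Returns monomial coefficients c of the matching polynomial."""
    m = len(left)
    n = 2 * m                              # number of coefficients
    A, b = np.zeros((n, n)), np.zeros(n)
    for j in range(m):
        A[j, j] = factorial(j)             # d^j/dx^j x^k at 0 is j! iff k == j
        b[j] = left[j]
        for k in range(j, n):              # d^j/dx^j x^k at 1 is k!/(k-j)!
            A[m + j, k] = factorial(k) / factorial(k - j)
        b[m + j] = right[j]
    return np.linalg.solve(A, b)

# Example: match f(0)=0, f'(0)=1 and f(1)=sin(1), f'(1)=cos(1), i.e. f = sin x.
c = osculatory_poly([0.0, 1.0], [np.sin(1.0), np.cos(1.0)])
xs = np.linspace(0, 1, 5)
approx = sum(ck * xs**k for k, ck in enumerate(c))
print(np.max(np.abs(approx - np.sin(xs))))   # small interpolation error
```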
The performance evaluation process requires a set of criteria for measuring the level of performance achieved by the unit and the actual level of development of its activities, particularly in view of the rapid and continuous changes in its surroundings. Performance is a reflection of the unit's ability to achieve its objectives, since these units are designed to achieve their objectives by exploiting the range of economic resources available to them. The performance evaluation process is thus a form of oversight, focusing on the analysis of the results obtained from all of the unit's activities with a view to determining the extent to which the unit has achieved its objectives using the resources available to it.
Various human biometrics are in use nowadays, and one of the most important of these is the face. Many techniques have been suggested for face recognition, but they still face a variety of challenges when recognizing faces in images captured in uncontrolled environments and in real-life applications. Some of these challenges are pose variation, occlusion, facial expression, illumination, bad lighting, and image quality. New techniques are being developed continuously. In this paper, singular value decomposition is used to extract the feature matrix for face recognition and classification. The input color image is converted into a grayscale image and then transformed into a local ternary pattern before splitting the image into
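A sketch of this feature pipeline is given below: grayscale image, then a local ternary pattern (LTP), then singular values from the SVD as a compact feature vector. The LTP threshold t, the summed ternary encoding, and the number of singular values kept are simplifying assumptions, not the paper's exact settings.

```python
import numpy as np

def local_ternary_pattern(gray, t=5):
    """Simplified LTP: each 8-neighbor votes +1 / 0 / -1 against the center
    pixel with threshold t; votes are summed into one code per pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    code = np.zeros_like(c)
    for di, dj in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        nb = g[1 + di:g.shape[0] - 1 + di, 1 + dj:g.shape[1] - 1 + dj]
        code += np.where(nb > c + t, 1, np.where(nb < c - t, -1, 0))
    return code

def svd_features(ltp, k=10):
    """Top-k singular values as the feature vector for classification."""
    s = np.linalg.svd(ltp.astype(float), compute_uv=False)
    return s[:k]

gray = np.random.randint(0, 256, (64, 64))   # stand-in for a face image
features = svd_features(local_ternary_pattern(gray))
print(features.shape)   # (10,)
```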