Microservice architecture offers many advantages, especially for business applications, thanks to its flexibility, extensibility, and loosely coupled structure, which eases maintenance. However, several disadvantages stem from those same features: the independent nature of microservices can hinder meaningful communication between services and makes data synchronization more challenging. This paper addresses these issues by proposing containerized microservices in an asynchronous event-driven architecture. The architecture encloses each microservice in a container and implements an event manager that records every event in an event log, reducing errors in the application. Experimental results show a reduction in response time compared with two benchmark architectures, as well as a lower error rate.
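A minimal sketch of the event-manager idea described above: an asynchronous bus that appends every event to a log before fanning it out to subscribers. All names here (`EventManager`, the `"order.created"` topic) are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: asynchronous event manager with an append-only event log.
import asyncio
from typing import Any, Callable, Dict, List

class EventManager:
    def __init__(self) -> None:
        self.event_log: List[Dict[str, Any]] = []    # append-only event log
        self.subscribers: Dict[str, List[Callable]] = {}

    def subscribe(self, topic: str, handler: Callable) -> None:
        self.subscribers.setdefault(topic, []).append(handler)

    async def publish(self, topic: str, payload: Dict[str, Any]) -> None:
        event = {"topic": topic, "payload": payload}
        self.event_log.append(event)                 # log first, then deliver
        for handler in self.subscribers.get(topic, []):
            await handler(event)                     # asynchronous delivery

async def main() -> None:
    bus = EventManager()
    async def on_order(event):                       # stand-in subscriber service
        print("inventory service saw", event)
    bus.subscribe("order.created", on_order)
    await bus.publish("order.created", {"id": 42})

asyncio.run(main())
```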
This paper considers the reliability of a multi-component stress-strength system R(s,k) when the stress and strength variables are independent and non-identically distributed, following the Exponentiated Family Distribution (FED) with unknown shape parameter α, known scale parameter λ equal to two, and parameter θ equal to three. Different estimation methods for R(s,k) are introduced, corresponding to maximum likelihood and shrinkage estimators. Comparisons among the suggested estimators are carried out through simulation, based on the mean squared error (MSE) criterion.
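A hedged Monte Carlo sketch of the quantity being estimated: R(s,k) is the probability that at least s of k strength components exceed a common stress. The exponentiated-exponential form of the distribution and the parameter values below are assumptions for illustration, not the paper's exact setup.

```python
# Sketch: empirical R(s,k) for a multi-component stress-strength system.
import numpy as np

rng = np.random.default_rng(0)

def rvs_exp_family(alpha, lam, size):
    # Inverse-CDF sampling for the assumed form F(x) = (1 - exp(-lam*x))**alpha
    u = rng.uniform(size=size)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

def r_s_k_empirical(alpha_x, alpha_y, lam, s, k, n=100_000):
    stress = rvs_exp_family(alpha_y, lam, n)             # one stress per system
    strength = rvs_exp_family(alpha_x, lam, (n, k))      # k strength components
    return np.mean((strength > stress[:, None]).sum(axis=1) >= s)

# An MSE study would compare estimators of R(s,k) against a value like this one.
print(f"empirical R(2,4) ~ {r_s_k_empirical(2.0, 1.0, lam=2.0, s=2, k=4):.4f}")
```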
This paper deals with constructing a mixed probability distribution from an Exponential distribution with scale parameter (β) and a Gamma distribution with parameters (2, β), with given mixing proportions. First, the probability density function (pdf), the cumulative distribution function (cdf), and the reliability function are obtained. The parameters of the mixed distribution, the mixing proportion and β, are estimated by three different methods: maximum likelihood, the method of moments, and a proposed method, the Differential Least Square Method (DLSM). The comparison is carried out using a simulation procedure, and all results are reported in tables.
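A small sketch of the mixed density and reliability function described above, as a convex combination of the Exponential(β) and Gamma(2, β) components. The mixing proportion p below is an illustrative placeholder, since its value is lost in the abstract.

```python
# Sketch: pdf and reliability of the Exponential/Gamma(2, beta) mixture.
import numpy as np

def mixed_pdf(x, beta, p):
    exp_part = beta * np.exp(-beta * x)              # Exponential(beta) pdf
    gamma_part = beta**2 * x * np.exp(-beta * x)     # Gamma(shape=2, rate=beta) pdf
    return p * exp_part + (1.0 - p) * gamma_part

def mixed_reliability(x, beta, p):
    # R(x) = 1 - F(x); the Gamma(2, beta) cdf is 1 - (1 + beta*x)*exp(-beta*x)
    exp_cdf = 1.0 - np.exp(-beta * x)
    gamma_cdf = 1.0 - (1.0 + beta * x) * np.exp(-beta * x)
    return 1.0 - (p * exp_cdf + (1.0 - p) * gamma_cdf)

x = np.linspace(0.01, 5.0, 5)
print(mixed_pdf(x, beta=1.5, p=0.4))
print(mixed_reliability(x, beta=1.5, p=0.4))
```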
This paper is concerned with preliminary test single-stage shrinkage estimators for the mean (θ) of a normal distribution with known variance σ² when a prior estimate (θ0) of the actual value (θ) is available, using a specified shrinkage weight factor ψ(·) as well as a pre-test region (R). Expressions for the bias, mean squared error [MSE(·)], and relative efficiency [R.Eff.(·)] of the proposed estimators are derived. Numerical results and conclusions are drawn about the selection of the different constants included in these expressions. Comparisons between the suggested estimators and the usual estimators, in the sense of relative efficiency, are given. Furthermore, comparisons with earlier existing…
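A hedged sketch of an estimator of the kind this abstract studies: shrink the sample mean toward the prior guess θ0 only when the mean falls inside the pre-test region R. The constant weight ψ and the region half-width below are illustrative choices, not the paper's.

```python
# Sketch: preliminary-test single-stage shrinkage estimator for a normal mean.
import numpy as np

def pretest_shrinkage_mean(x, theta0, sigma, psi=0.5, z=1.96):
    xbar = x.mean()
    half_width = z * sigma / np.sqrt(len(x))        # pre-test acceptance region R
    if abs(xbar - theta0) <= half_width:
        return psi * xbar + (1.0 - psi) * theta0    # shrink toward prior estimate
    return xbar                                     # otherwise keep the usual estimator

rng = np.random.default_rng(1)
sample = rng.normal(loc=10.2, scale=2.0, size=25)
print(pretest_shrinkage_mean(sample, theta0=10.0, sigma=2.0))
```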
Estimation of the parameters of linear regression is usually based on the ordinary least squares method, which rests on several basic assumptions, so the accuracy of the estimated model parameters depends on the validity of these hypotheses, among them homogeneity of the variance and normal distribution of the errors. These assumptions are not achievable when the problem under study involves complex data arising from more than one model, and the usual model then becomes unrealistic. The most successful technique for this purpose has been the robust estimation method of minimizing a maximum-likelihood-type objective (the MM-estimator), which has proved its efficiency. To…
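The MM-estimator itself is a two-stage robust fit (a high-breakdown S-estimate of scale followed by an efficient M-estimate) and is not shipped by statsmodels; as a runnable stand-in illustrating the robust-fitting idea, the sketch below uses the M-estimation that statsmodels does provide (RLM with a Tukey biweight norm). It is an illustration, not the paper's procedure.

```python
# Sketch: robust regression vs. OLS on data with injected outliers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 100)
y = 3.0 + 0.8 * x + rng.normal(0, 1, 100)
y[:5] += 25                                   # outliers that violate normal errors

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
robust = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit()
print("OLS   :", ols.params)                  # pulled toward the outliers
print("Robust:", robust.params)               # close to the true (3.0, 0.8)
```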
In this paper, an algorithm for binary codebook design is used in a vector quantization (VQ) technique to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. VQ is used to compress the bitmap output of the first stage (AMBTC). The binary codebook is generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress the bitmaps of all these images. Which image bitmaps to compress with this codebook is chosen based on the criterion of the average bitmap replacement error (ABPRE). This approach is suitable for reducing bit rates…
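A hedged sketch of the binary-codebook idea: pick code vectors at random from a pool of bitmap blocks, then replace each AMBTC bitmap block by its nearest code vector in Hamming distance, scoring the result by the average replacement error. Block size, codebook size, and the random data are all illustrative.

```python
# Sketch: random binary codebook + Hamming-nearest vector quantization of bitmaps.
import numpy as np

rng = np.random.default_rng(3)

def build_codebook(blocks, size):
    idx = rng.choice(len(blocks), size=size, replace=False)  # random code vectors
    return blocks[idx]

def quantize(blocks, codebook):
    # Hamming distance between every block and every code vector
    dists = (blocks[:, None, :] != codebook[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)                              # codebook index per block

bitmaps = rng.integers(0, 2, size=(500, 16))   # 500 flattened 4x4 binary blocks
codebook = build_codebook(bitmaps, size=32)
indices = quantize(bitmaps, codebook)
# average bitmap replacement error over all blocks (an ABPRE-style criterion)
abpre = (bitmaps != codebook[indices]).mean()
print(f"codebook of 32 vectors, average replacement error = {abpre:.3f}")
```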
Content-based image retrieval (CBIR) is a technique used to retrieve images from an image database. However, the CBIR process suffers from low accuracy when retrieving images from an extensive image database, and from difficulty ensuring the privacy of images. This paper aims to address the accuracy issue using deep learning techniques, namely the CNN method, and to provide the necessary privacy for images using the fully homomorphic encryption scheme of Cheon, Kim, Kim, and Song (CKKS). To achieve these aims, a system named RCNN_CKKS is proposed, consisting of two parts. The first part (offline processing) extracts automated high-level features based on a flattening layer in a convolutional neural network (CNN) and then stores these features in a…
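A sketch of the privacy step under stated assumptions: feature vectors (here random stand-ins for flatten-layer CNN output) are encrypted with the CKKS scheme via the TenSEAL library, and similarity is computed on ciphertexts as an encrypted dot product. The parameters below are common TenSEAL examples, not the paper's settings.

```python
# Sketch: CKKS-encrypted similarity between two feature vectors (TenSEAL).
import numpy as np
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()           # needed for the rotations inside dot()

query_features = np.random.rand(128)     # stand-in for flatten-layer CNN features
db_features = np.random.rand(128)

enc_query = ts.ckks_vector(context, query_features.tolist())
enc_db = ts.ckks_vector(context, db_features.tolist())
enc_score = enc_query.dot(enc_db)        # similarity computed while encrypted
print("decrypted score:", enc_score.decrypt()[0])
print("plaintext score:", float(query_features @ db_features))
```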
The contemporary business environment is witnessing increasing calls for modifications to the traditional cost system and a trend toward adopting cost management techniques that provide appropriate financial and non-financial information to senior and executive management, among them the Resource Consumption Accounting (RCA) technique in question, which classifies costs into fixed and variable to support the decision-making process. Moreover, RCA combines two approaches to cost estimation: the first based on activity-based costing (ABC) and the second on the German cost accounting method (GPK). The research aims to provide a conceptual vision of resource consumption accounting, considering it an accounting technique…
The main aim of image compression is to reduce image size for transmission and storage, and many methods have appeared to compress images; one of these is the multilayer perceptron (MLP). The MLP method is an artificial neural network based on the back-propagation algorithm for compressing images. Since this algorithm depends only on the number of neurons in the hidden layer, that alone is not enough to reach the desired results, so the standards on which the compression process depends must also be taken into consideration to obtain the best results. In our research, we trained a group of TIFF images of size 256×256 and compressed them using an MLP for each…
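A minimal sketch of the MLP compression idea: an autoencoder-style network whose hidden layer is smaller than the input block, trained with back-propagation so the hidden activations serve as the compressed code. The block size and layer widths are illustrative assumptions, not the paper's settings.

```python
# Sketch: bottleneck MLP trained by back-propagation to compress image blocks.
import torch
import torch.nn as nn

blocks = torch.rand(512, 64)             # 512 flattened 8x8 image blocks in [0, 1]

model = nn.Sequential(
    nn.Linear(64, 16), nn.Sigmoid(),     # encoder: 64 -> 16 hidden neurons (the code)
    nn.Linear(16, 64), nn.Sigmoid(),     # decoder: reconstruct the original block
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(500):                  # back-propagation training loop
    opt.zero_grad()
    loss = loss_fn(model(blocks), blocks)
    loss.backward()
    opt.step()

print(f"reconstruction MSE after training: {loss.item():.5f}")
```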
The problem of frequency estimation of a single sinusoid observed in colored noise is addressed. Our estimator is based on the operation of the sinusoidal digital phase-locked loop (SDPLL), which carries the frequency information in its phase error after the noisy sinusoid has been acquired. We show by computer simulations that this frequency estimator beats the Cramer-Rao bound (CRB) on the frequency error variance for moderate and high SNRs when the colored noise has a general low-pass-filtered (LPF) characteristic, thereby outperforming, in terms of frequency error variance, several existing techniques, some of which are, in addition, computationally demanding. Moreover, the present approach generalizes existing work that…
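A hedged numpy sketch of the SDPLL mechanism: lock a local oscillator to a noisy sinusoid, then read the frequency estimate off the loop's tracked phase increment. The loop gains, signal parameters, and first-order loop filter below are illustrative choices, not the paper's design.

```python
# Sketch: second-order sinusoidal DPLL used as a frequency estimator.
import numpy as np

fs, f0, n = 8000.0, 440.0, 4000
t = np.arange(n) / fs
rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(n)  # noisy sinusoid

phase = 0.0
freq = 2 * np.pi * 420.0 / fs        # start near, not at, the true frequency
k_p, k_f = 0.05, 0.002               # proportional / integrator loop gains
history = []
for sample in x:
    err = sample * np.cos(phase)     # sinusoidal phase detector output
    freq += k_f * err                # integrate phase error into frequency
    phase += freq + k_p * err        # advance the local oscillator
    history.append(freq)

est_hz = np.mean(history[-1000:]) * fs / (2 * np.pi)  # average out detector ripple
print(f"estimated frequency ~ {est_hz:.1f} Hz (true {f0} Hz)")
```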
A watermark is a pattern or image embedded in paper that appears as different shades of light and dark when viewed in transmitted light, and it is used to improve robustness and security. There are many ways to create a watermark, including adding an image or text to the original image; in this paper, however, another type of watermark is proposed: adding curves, lines, or shapes drawn by interpolation, which produces a watermark that is difficult to falsify or manipulate. Our work suggests a new watermarking technique that embeds a cubic-spline interpolation inside the image using bit-plane slicing. The peak signal-to-noise ratio (PSNR) and mean square error (MSE) are calculated so that the quality of the original image…
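A sketch of the embedding described above: draw a cubic-spline curve through a few control points, rasterize it, and write it into the least-significant bit plane of the host image, then report MSE and PSNR. The control points, the chosen bit plane, and the random host image are illustrative assumptions.

```python
# Sketch: embed a cubic-spline curve into the LSB plane and measure PSNR/MSE.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(5)
host = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in host image

xs = np.array([10, 60, 120, 180, 240])       # illustrative spline control points
ys = np.array([200, 80, 150, 40, 120])
spline = CubicSpline(xs, ys)

cols = np.arange(10, 241)
rows = np.clip(np.round(spline(cols)).astype(int), 0, 255)   # rasterized curve

watermarked = host.copy()
watermarked[rows, cols] = (watermarked[rows, cols] & 0xFE) | 1  # set LSB on the curve

mse = np.mean((host.astype(float) - watermarked.astype(float)) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"MSE = {mse:.5f}, PSNR = {psnr:.2f} dB")
```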