Blockchain has garnered attention as one of the most important new technologies supporting digital transactions in e-government. The most critical challenge for public e-government systems is reducing bureaucracy and increasing the efficiency and performance of administrative processes; blockchain technology can help meet this challenge because it operates in a decentralized environment and executes transactions with a high level of security and transparency. The main objectives of this work are therefore to survey different proposed models for blockchain-based e-government system architectures and to examine how these models are validated. This work studies and analyzes research trends focused on blockchain and the future of e-governance for connecting citizens and smart cities, such as using blockchain in the notarial office (NO), using blockchain technology in the management of police complaints, exploring blockchain-enabled end-to-end secure e-voting, using blockchain and smart-contract technology for trusted and safe property registration and record management, and using a blockchain-enabled Vehicle Certification (BVC) framework. Finally, this work concludes that the transformation of governments from traditional systems to blockchain-based electronic government is only a matter of time.
Copula modeling is widely used in modern statistics. Boundary bias is one of the problems encountered in nonparametric estimation, and kernel estimators, the most common nonparametric estimators, suffer from it. In this paper, the copula density function is estimated using the probit-transformation nonparametric method in order to remove the boundary bias from which kernel estimators suffer. A simulation study compares three nonparametric methods for estimating the copula density function, and a new method is proposed that outperforms the others across five copula families, different sample sizes, different levels of correlation between the copula variables, and different parameters of the function. …
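As a rough illustration of the probit-transformation approach described above, the following Python sketch (an assumption of this summary, not the paper's exact estimator) maps pseudo-observations through the standard normal quantile function, applies an ordinary Gaussian kernel density estimate on the unbounded scale, and transforms the density back to the unit square, which is how the boundary bias of direct kernel estimation is avoided.

```python
# Minimal sketch of probit-transformation copula density estimation.
import numpy as np
from scipy.stats import norm, gaussian_kde

def probit_copula_density(u_obs, v_obs):
    """Return a function c(u, v) estimating the copula density.

    u_obs, v_obs : 1-D arrays of pseudo-observations in (0, 1),
    e.g. ranks / (n + 1).
    """
    z = np.vstack([norm.ppf(u_obs), norm.ppf(v_obs)])  # probit transform to R^2
    kde = gaussian_kde(z)                               # KDE on R^2: no boundary

    def density(u, v):
        zu, zv = norm.ppf(u), norm.ppf(v)
        # Back-transform: divide by the normal densities (Jacobian of the probit map).
        return kde(np.vstack([zu, zv])) / (norm.pdf(zu) * norm.pdf(zv))

    return density

# Illustrative example with simulated Gaussian dependence (not the paper's design).
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
ranks = (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (len(x) + 1)
c_hat = probit_copula_density(ranks[:, 0], ranks[:, 1])
print(c_hat(np.array([0.5]), np.array([0.5])))
```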
Face identification systems have been an active research area in recent years. However, their accuracy and dependability in real-life systems are still questionable. Earlier research on face identification demonstrated that LBP-based face recognition systems are preferred over others and give adequate accuracy. LBP is robust against illumination changes and is considered a high-speed algorithm. Performance metrics for such systems are calculated from time delay and accuracy. This paper introduces an improved face recognition system built using the C++ programming language with the help of the OpenCV library. Accuracy can be increased if a filter or combination of filters is applied to the images. The accuracy increases from 95.5% (without applying filters) …
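The following Python sketch (the paper itself uses C++ with OpenCV) shows how an LBP-based recognizer of this kind can be assembled with OpenCV's LBPH implementation; the preprocessing filter, parameter values, and function names are illustrative assumptions, not the paper's configuration, and the opencv-contrib-python package is required.

```python
import cv2
import numpy as np

def preprocess(gray_img):
    # Example filter only; the paper's filter combination may differ.
    return cv2.equalizeHist(gray_img)

def train_lbph(images, labels):
    """images: list of equally sized grayscale face crops; labels: list of ints."""
    recognizer = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8,
                                                    grid_x=8, grid_y=8)
    recognizer.train([preprocess(img) for img in images],
                     np.array(labels, dtype=np.int32))
    return recognizer

def identify(recognizer, face_img):
    label, distance = recognizer.predict(preprocess(face_img))
    return label, distance  # lower distance means a more confident match
```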
Software testing is a vital part of the software development life cycle. In many cases, the system under test has more than one input, making exhaustive testing of every combination impractical (i.e., the execution time of the test suite can be outrageously long). Combinatorial testing offers an alternative to exhaustive testing by considering the interaction of input values for every t-way combination of parameters. Combinatorial testing can be divided into three types: uniform-strength interaction, variable-strength interaction, and input-output-based relations (IOR). IOR combinatorial testing only tests the important combinations selected by the tester. Most of the research in combinatorial testing appli…
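As a rough sketch of the IOR idea described above, the snippet below enumerates value combinations only for the parameter interactions a tester marks as important, rather than for every t-way combination; the parameters, values, and relations are hypothetical, and this is not the covering-array construction used in the surveyed work.

```python
from itertools import product

parameters = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux", "macOS"],
    "payment": ["card", "cash"],
    "currency": ["USD", "EUR"],
}

# Tester-selected relations: only these interactions are assumed to affect the outputs.
relations = [("browser", "os"), ("payment", "currency")]

def ior_combinations(parameters, relations):
    """Yield the value combinations needed to cover each selected relation."""
    for relation in relations:
        for values in product(*(parameters[p] for p in relation)):
            yield dict(zip(relation, values))

for combo in ior_combinations(parameters, relations):
    print(combo)
```

For the hypothetical relations above this yields 6 + 4 = 10 combinations instead of the 24 exhaustive ones, which is the kind of reduction IOR testing aims for.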
With the rapid development of smart devices, people's lives have become easier, especially for visually disabled or special-needs people. Recent achievements in the fields of machine learning and deep learning allow people to identify and recognise the surrounding environment. In this study, the efficiency and high performance of deep learning architectures are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts with collecting two datasets (indoor and outdoor) drawn from different separate datasets. In the second step, the collected dataset is split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are trained using the indoor and outdoor sets …
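A minimal transfer-learning sketch of the step described above is given below; PyTorch/torchvision is assumed (the study does not state its framework here), and the two-class head, optimizer, and learning rate are illustrative choices rather than the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical: e.g. indoor vs. outdoor

def build_googlenet(num_classes=NUM_CLASSES):
    model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the head
    return model

def build_mobilenet_v2(num_classes=NUM_CLASSES):
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)
    return model

# Typical fine-tuning setup (loss and optimizer choices are assumptions).
model = build_mobilenet_v2()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```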
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images. It is a challenging task to capture the visual features of images with the support of only the textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shape. In this paper, an effective CBIC technique is presented, which uses texture and statistical features of the images. The statistical features, or moments of the colors (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering.
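A short sketch of the colour-moment feature extraction described above is shown below; the GA clustering step, which would operate on these one-dimensional feature vectors, is omitted.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def color_moments(image):
    """image: H x W x 3 array (e.g. RGB). Returns a 1-D feature vector."""
    features = []
    for channel in range(image.shape[2]):
        values = image[:, :, channel].astype(np.float64).ravel()
        features.extend([
            values.mean(),        # mean
            values.std(),         # standard deviation
            skew(values),         # skewness
            kurtosis(values),     # (excess) kurtosis
            values.var(),         # variance
        ])
    return np.array(features)
```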
Nowadays, still images are used everywhere in the digital world. The shortages of storage capacity and transmission bandwidth make efficient compression solutions essential. The wavelet transform, a revolutionary mathematical tool, has already shown its power in image processing. The major topic of this paper is improving the compression of still images with the multiwavelet transform by estimating the multiwavelet coefficients of the high-frequency sub-bands through interpolation instead of sending all multiwavelet coefficients. When the proposed approach is compared with other compression methods, good results are obtained.
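To convey the idea in runnable form, the sketch below uses an ordinary scalar wavelet via PyWavelets rather than a true multiwavelet transform, and crudely stands in zeros for the interpolation-based estimate of the high-frequency coefficients; it illustrates only the principle of transmitting the approximation sub-band alone.

```python
import numpy as np
import pywt

def encode(image, wavelet="db2"):
    # Keep/transmit only the low-frequency (approximation) sub-band.
    ll, (lh, hl, hh) = pywt.dwt2(image.astype(np.float64), wavelet)
    return ll

def decode(ll, wavelet="db2"):
    # Stand-in for the paper's interpolation-based estimate of the detail bands.
    zeros = np.zeros_like(ll)
    return pywt.idwt2((ll, (zeros, zeros, zeros)), wavelet)
```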
Image quality plays a vital role in improving and assessing image compression performance. Image compression represents large image data as a new image with a smaller size suitable for storage and transmission. This paper aims to evaluate the implementation of hybrid techniques based on the tensor-product mixed transform. Compression and quality metrics such as compression ratio (CR), rate distortion (RD), peak signal-to-noise ratio (PSNR), and structural content (SC) are utilized for evaluating the hybrid techniques. A comparison between the techniques is then carried out according to these metrics to identify the best technique. The main contribution is to improve the hybrid techniques. The proposed hybrid techniques consist of discrete wavelet …
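For reference, the snippet below computes three of the named metrics using their common textbook definitions (the paper's exact conventions may differ); rate distortion, usually reported as a curve of PSNR against bit rate, is omitted.

```python
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def structural_content(original, reconstructed):
    # SC = sum(original^2) / sum(reconstructed^2); values near 1 indicate
    # the reconstruction preserves the original image's energy.
    return (np.sum(original.astype(np.float64) ** 2)
            / np.sum(reconstructed.astype(np.float64) ** 2))
```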
With the rapid development of computer and network technologies, the security of information on the internet has become compromised, and many threats may affect the integrity of such information. Many researchers have focused their work on providing solutions to these threats. Machine learning and data mining are widely used in anomaly-detection schemes to decide whether or not a malicious activity is taking place on a network. In this paper, a hierarchical classification scheme for an anomaly-based intrusion detection system is proposed. Two levels of feature selection and classification are used. In the first level, the global feature vector for detecting the basic attacks (DoS, U2R, R2L, and Probe) is selected. In the second level, four local feature vectors …
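The following scikit-learn sketch illustrates the two-level structure described above: a first-level classifier on a global feature subset assigns the coarse class, and per-class second-level classifiers on local feature subsets refine the decision. The feature indices and the choice of decision trees are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

GLOBAL_FEATURES = [0, 1, 4, 5, 22, 23]                   # hypothetical indices
LOCAL_FEATURES = {"DoS": [2, 3, 7], "Probe": [28, 29],
                  "R2L": [10, 11], "U2R": [13, 14]}      # hypothetical indices

class HierarchicalIDS:
    def __init__(self):
        self.level1 = DecisionTreeClassifier()
        self.level2 = {}

    def fit(self, X, coarse_labels, fine_labels):
        """X: 2-D numpy array; coarse_labels/fine_labels: 1-D numpy arrays."""
        self.level1.fit(X[:, GLOBAL_FEATURES], coarse_labels)
        for cls, cols in LOCAL_FEATURES.items():
            mask = coarse_labels == cls
            if mask.any():
                clf = DecisionTreeClassifier()
                clf.fit(X[mask][:, cols], fine_labels[mask])
                self.level2[cls] = clf

    def predict(self, x):
        coarse = self.level1.predict(x[GLOBAL_FEATURES].reshape(1, -1))[0]
        if coarse in self.level2:
            return self.level2[coarse].predict(
                x[LOCAL_FEATURES[coarse]].reshape(1, -1))[0]
        return coarse
```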
An artificial pancreas is simulated to treat Type I diabetic patients under intensive care by automatically controlling the insulin infusion rate. A backstepping technique is used to apply the effect of a PID controller to the blood glucose level, since there is no direct relation between insulin infusion (the manipulated variable) and glucose level in Bergman's system model, which is subjected to an oral glucose tolerance test by applying a meal translated into a disturbance. The backstepping technique is usually recommended to stabilize and control the states of Bergman's class of nonlinear systems. The results showed a very satisfactory behavior of the glucose deviation in response to a sudden rise represented by the meal that increases the blood glucose …
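A simplified simulation sketch of the setup is given below: Bergman's minimal model driven by a meal disturbance, with a plain PID loop acting on the glucose deviation (the paper's backstepping construction is not reproduced here); all parameter values, gains, and the meal profile are illustrative assumptions.

```python
import numpy as np

p1, p2, p3, n, VI = 0.028, 0.025, 1.3e-5, 0.09, 12.0   # illustrative model values
Gb, Ib = 80.0, 7.0                                      # basal glucose / insulin
Kp, Ki, Kd = 0.5, 0.01, 0.1                             # illustrative PID gains

def meal(t):
    # Meal modeled as a rectangular glucose-rate disturbance (illustrative).
    return 10.0 if 100 <= t <= 130 else 0.0

def simulate(t_end=600.0, dt=0.1):
    G, X, I = Gb, 0.0, Ib
    integral, prev_err = 0.0, 0.0
    history = []
    for t in np.arange(0.0, t_end, dt):
        err = G - Gb                                     # glucose deviation
        integral += err * dt
        u = max(0.0, Kp * err + Ki * integral + Kd * (err - prev_err) / dt)
        prev_err = err
        # Bergman minimal model (Euler step).
        dG = -p1 * (G - Gb) - X * G + meal(t)
        dX = -p2 * X + p3 * (I - Ib)
        dI = -n * (I - Ib) + u / VI
        G, X, I = G + dG * dt, X + dX * dt, I + dI * dt
        history.append((t, G, u))
    return history
```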