Face recognition is required in various applications, and major progress has been made in this area. Many face recognition algorithms have been proposed; however, achieving high recognition accuracy with low execution time remains a challenge. In this work, a new face recognition scheme is presented that extracts features using hybrid orthogonal polynomials. The embedded image kernel technique is used to decrease the complexity of feature extraction, and a support vector machine is then adopted to classify these features. Moreover, a fast overlapping block processing algorithm for feature extraction is used to reduce the computation time. Extensive evaluation of the proposed method was carried out on two face image datasets, ORL and FEI, and its accuracy was compared against state-of-the-art face recognition methods. The proposed method achieves the highest recognition rate in the different scenarios considered. Based on the obtained results, the proposed method is robust against noise and significantly outperforms previous approaches in terms of speed.
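As a point of reference only, here is a minimal sketch of the pipeline this abstract describes: per-block orthogonal-polynomial moment features fed to an SVM. It assumes NumPy and scikit-learn; the Chebyshev basis, block size, and polynomial order below are stand-ins for the paper's hybrid polynomials and embedded kernel, which are not reproduced here.

# Hedged sketch: orthogonal-polynomial features per overlapping image
# block, classified with an SVM. The paper's hybrid polynomial and
# embedded-kernel trick are NOT reproduced; NumPy's Chebyshev basis is
# used only as a stand-in.
import numpy as np
from numpy.polynomial import chebyshev
from sklearn.svm import SVC

def poly_basis(n, order):
    # n x order matrix of Chebyshev polynomials sampled on [-1, 1]
    x = np.linspace(-1.0, 1.0, n)
    return np.stack([chebyshev.Chebyshev.basis(k)(x) for k in range(order)], axis=1)

def block_features(img, block=8, order=4):
    # Project each overlapping block onto the 2D polynomial basis:
    # M = P^T B P gives an order x order moment matrix per block.
    P = poly_basis(block, order)
    h, w = img.shape
    feats = []
    step = block // 2  # 50% overlap between neighbouring blocks
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            B = img[i:i+block, j:j+block]
            feats.append((P.T @ B @ P).ravel())
    return np.concatenate(feats)

# Usage on a toy dataset (replace with ORL/FEI images in practice):
rng = np.random.default_rng(0)
X = np.array([block_features(rng.random((32, 32))) for _ in range(20)])
y = np.arange(20) % 4  # four fake subject labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))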
Television white spaces (TVWSs) refer to the unused parts of the spectrum in the very high frequency (VHF) and ultra-high frequency (UHF) bands. TVWSs are frequencies licensed to primary users (PUs) that are not currently in use and are therefore available to secondary users (SUs). There are several ways of implementing TVWS in communications, one of which is the use of a TVWS database (TVWSDB). The primary purpose of the TVWSDB is to protect PUs from interference caused by SUs. Several geolocation databases are available for this purpose. However, it is unclear whether those databases have the prediction feature that would give the TVWSDB the capability of decreasing the number of inquiries from SUs. With this in mind, the authors present a reinforcement learning-based …
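The abstract is cut off just as the learning approach is introduced, so the following is only a hypothetical illustration of how a database-side reinforcement learner could reduce SU inquiries: a tabular, epsilon-greedy learner that learns which channels are usually free. All names, rewards, and parameters here are assumptions, not the paper's method.

# Hypothetical sketch: tabular learning over TV channels, where the
# database "agent" learns which channels are usually free so SU
# inquiries can be answered from the learned table instead of a fresh
# lookup. Illustration only, not the paper's algorithm.
import random

N_CHANNELS = 8
ALPHA, EPS = 0.1, 0.1
q = [0.0] * N_CHANNELS  # expected reward of recommending each channel

def channel_is_free(c):
    # Stand-in for the real geolocation-database check of PU activity.
    return random.random() > 0.3 + 0.05 * c  # lower channels freer, for demo

for step in range(5000):
    # epsilon-greedy choice of which channel to recommend to an SU
    if random.random() < EPS:
        c = random.randrange(N_CHANNELS)
    else:
        c = max(range(N_CHANNELS), key=q.__getitem__)
    reward = 1.0 if channel_is_free(c) else -1.0  # PU interference penalized
    q[c] += ALPHA * (reward - q[c])  # incremental value update

print("learned channel preferences:", [round(v, 2) for v in q])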
Advances in digital technology and the World Wide Web have led to an increase in digital documents used for various purposes such as publishing and digital libraries. This phenomenon raises the need for effective techniques to help in the search and retrieval of text. One of the most needed tasks is clustering, which categorizes documents automatically into meaningful groups. Clustering is an important task in data mining and machine learning, and its accuracy depends tightly on the selection of the text representation method. Traditional methods of text representation model documents as bags of words using term frequency-inverse document frequency (TF-IDF). This method ignores the relationship an…
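For concreteness, here is a minimal sketch of the TF-IDF baseline the abstract describes, using scikit-learn's vectorizer and k-means; the paper's own representation (which the truncated text begins to contrast with TF-IDF) is not shown.

# Minimal sketch of the bag-of-words baseline: TF-IDF vectors clustered
# with k-means (scikit-learn). The documents below are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "digital library search and retrieval of text",
    "text clustering groups documents automatically",
    "stock prices and market trading volume",
    "portfolio returns on the stock exchange",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: text-mining docs vs. finance docs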
The feature extraction step plays a major role in proper object classification and recognition, and it depends mainly on correct object detection in the given scene. Object detection algorithms may produce noise that affects the final object shape. A novel approach is introduced in this paper for filling the holes in a detected object, leading to better object detection and correct feature extraction. The method is based on the definition of a hole as a black pixel surrounded by a connected boundary region; it therefore tries to find a connected contour region that surrounds a background pixel using a roadmap racing algorithm. The method shows good results on 2D objects.
Keywords: object filling, object detection, objec…
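The paper's roadmap racing algorithm is not reproduced here; as a hedged point of comparison, the sketch below fills holes using the standard definition quoted in the abstract (a background pixel not reachable from the image border), via a plain breadth-first flood fill in NumPy.

# Hedged sketch: hole filling by border flood fill. A hole is a
# background (0) pixel not reachable from the image border; everything
# unreachable is filled with foreground (1).
import numpy as np
from collections import deque

def fill_holes(mask):
    h, w = mask.shape
    reachable = np.zeros_like(mask, dtype=bool)
    # seed the search with every background pixel on the border
    dq = deque((i, j) for i in range(h) for j in range(w)
               if (i in (0, h - 1) or j in (0, w - 1)) and mask[i, j] == 0)
    for i, j in dq:
        reachable[i, j] = True
    while dq:  # 4-connected breadth-first flood fill
        i, j = dq.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] == 0 and not reachable[ni, nj]:
                reachable[ni, nj] = True
                dq.append((ni, nj))
    return np.where(reachable, mask, 1)  # unreachable background -> filled

obj = np.array([[0,0,0,0,0],
                [0,1,1,1,0],
                [0,1,0,1,0],   # centre pixel is a hole
                [0,1,1,1,0],
                [0,0,0,0,0]])
print(fill_holes(obj))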
Agent-based modeling is currently used extensively to analyze complex systems. This growth has been supported by its ability to convey distinct levels of interaction in a complex, detailed environment. Meanwhile, agent-based models tend to become progressively more complex, so powerful modeling and simulation techniques are needed to address this rise in complexity. In recent years, a number of platforms for developing agent-based models have been developed. In practice, most models present a discrete representation of the environment and a single level of interaction; two or three levels are rarely considered in agent-based models. The key issue is that modellers' work in these areas is not assisted by simulation platforms …
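To make the "levels of interaction" point concrete, here is a toy, platform-independent sketch with two levels: agent-environment (consuming resources from a grid) and agent-agent (exchanging energy with co-located agents). Everything in it is illustrative and not tied to any platform the abstract discusses.

# Toy agent-based model with two interaction levels. Illustrative only.
import random

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.energy = 10

GRID = 10
food = [[random.randint(0, 3) for _ in range(GRID)] for _ in range(GRID)]
agents = [Agent(random.randrange(GRID), random.randrange(GRID)) for _ in range(5)]

for tick in range(20):
    for a in agents:
        # level 1: agent-environment interaction (consume food at cell)
        eaten = min(food[a.x][a.y], 2)
        food[a.x][a.y] -= eaten
        a.energy += eaten - 1  # metabolism cost
        # level 2: agent-agent interaction (share energy when co-located)
        for b in agents:
            if b is not a and (b.x, b.y) == (a.x, a.y) and a.energy > b.energy + 2:
                a.energy -= 1
                b.energy += 1
        # random walk on the discrete environment
        a.x = (a.x + random.choice((-1, 0, 1))) % GRID
        a.y = (a.y + random.choice((-1, 0, 1))) % GRID

print([a.energy for a in agents])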
The consensus algorithm is the core mechanism of blockchain and is used to ensure data consistency among blockchain nodes. The PBFT (Practical Byzantine Fault Tolerance) consensus algorithm is widely used in alliance chains because it is resistant to Byzantine faults. However, PBFT still has issues with random master-node selection and complicated communication. This study proposes an enhanced consensus technique, IBFT, based on node trust values and BLS (Boneh-Lynn-Shacham) aggregate signatures. In IBFT, multi-level indicators are used to calculate the trust value of each node, and a subset of nodes is selected to take part in network consensus based on this calculation. The master node is chosen …
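A minimal sketch of the node-selection idea follows: a weighted multi-indicator trust score per node, with the top-k nodes joining consensus and the highest-trust node acting as master. The indicator names and weights are illustrative assumptions; the paper's exact trust formula and the BLS aggregate-signature layer are not reproduced.

# Hedged sketch: multi-indicator trust score per node; top-k trusted
# nodes join consensus. Indicators and weights are assumptions.
WEIGHTS = {"uptime": 0.4, "valid_votes": 0.4, "latency": 0.2}

def trust(node):
    # each indicator normalised to [0, 1]; latency inverted (lower = better)
    return (WEIGHTS["uptime"] * node["uptime"]
            + WEIGHTS["valid_votes"] * node["valid_votes"]
            + WEIGHTS["latency"] * (1.0 - node["latency"]))

nodes = {
    "n1": {"uptime": 0.99, "valid_votes": 0.97, "latency": 0.2},
    "n2": {"uptime": 0.90, "valid_votes": 0.70, "latency": 0.5},
    "n3": {"uptime": 0.99, "valid_votes": 0.99, "latency": 0.1},
    "n4": {"uptime": 0.60, "valid_votes": 0.80, "latency": 0.9},
}
ranked = sorted(nodes, key=lambda n: trust(nodes[n]), reverse=True)
consensus_set = ranked[:3]   # top-k nodes take part in consensus
master = consensus_set[0]    # highest-trust node as master (illustrative)
print(consensus_set, "master:", master)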
The backpropagation (BP) algorithm is the most widely used supervised training algorithm for multi-layered feedforward neural networks. However, BP takes a long time to converge and is quite sensitive to the initial weights of a network. In this paper, a modified cuckoo search algorithm is used to obtain the optimal set of initial weights for the BP algorithm, and the value of the BP learning rate is changed to improve the error convergence. The performance of the proposed hybrid algorithm is compared with standard BP using simple data sets. The simulation results show that the proposed algorithm improves BP training in terms of quick convergence of the solution, depending on the slope of the error graph.
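As a hedged sketch of this hybrid, the code below uses a simple cuckoo search with Lévy flights to choose initial weights for a tiny network on XOR, then refines them with plain backpropagation. The network size, data set, and hyperparameters are illustrative assumptions, not the paper's configuration.

# Hedged sketch: cuckoo search (Levy flights) picks initial weights,
# then backpropagation refines them. Illustrative settings only.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0,0],[0,1],[1,0],[1,1]], float)
y = np.array([[0],[1],[1],[0]], float)           # XOR
H = 4                                            # hidden units
DIM = 2*H + H + H + 1                            # total weight count

def unpack(w):
    i = 0
    W1 = w[i:i+2*H].reshape(2, H); i += 2*H
    b1 = w[i:i+H]; i += H
    W2 = w[i:i+H].reshape(H, 1); i += H
    return W1, b1, W2, w[i:i+1]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return h, 1/(1 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return float(np.mean((forward(w, X)[1] - y)**2))

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths
    from math import gamma, sin, pi
    sigma = (gamma(1+beta)*sin(pi*beta/2) / (gamma((1+beta)/2)*beta*2**((beta-1)/2)))**(1/beta)
    u = rng.normal(0, sigma, size); v = rng.normal(0, 1, size)
    return u / np.abs(v)**(1/beta)

# --- cuckoo search for initial weights ---
nests = rng.normal(0, 1, (15, DIM))
for _ in range(200):
    cand = nests[rng.integers(15)] + 0.01 * levy(DIM)  # Levy flight
    j = rng.integers(15)
    if loss(cand) < loss(nests[j]):
        nests[j] = cand
    if rng.random() < 0.25:                      # abandon the worst nest
        worst = max(range(15), key=lambda k: loss(nests[k]))
        nests[worst] = rng.normal(0, 1, DIM)
w = min(nests, key=loss).copy()

# --- backpropagation from the searched starting point ---
lr = 0.5
for _ in range(2000):
    h, out = forward(w, X)
    W1, b1, W2, b2 = unpack(w)
    d_out = (out - y) * out * (1 - out)          # MSE + sigmoid derivative
    d_h = (d_out @ W2.T) * (1 - h**2)            # tanh derivative
    grad = np.concatenate([(X.T @ d_h).ravel(), d_h.sum(0),
                           (h.T @ d_out).ravel(), d_out.sum(0)])
    w -= lr * grad / len(X)
print("final loss:", round(loss(w), 4))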
The main objective of this research is to build an optimal investment portfolio of stocks listed on the Iraqi Stock Exchange by employing a multi-objective genetic algorithm over the period from 1/1/2006 to 1/6/2018, in light of the closing prices of 43 companies whose data were complete and met the inspection conditions. The literature review supported the diagnosis of the knowledge gap and the identification of deficiencies at the level of experimentation; the current direction of the research was therefore to address aspects unseen and untreated by other researchers, in particular missing data and pieces that do not reflect the reality of trading at the level of compani…
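As a hedged sketch of the method class named here, the code below runs a small multi-objective genetic algorithm over portfolio weights, trading expected return against variance and keeping non-dominated portfolios each generation. The returns data, covariance, and GA settings are synthetic stand-ins, not the study's data or exact formulation.

# Hedged sketch: multi-objective GA over portfolio weights, maximizing
# return while minimizing variance. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(7)
N_ASSETS, POP, GENS = 6, 40, 100
mu = rng.uniform(0.02, 0.15, N_ASSETS)          # stand-in expected returns
A = rng.normal(0, 0.1, (N_ASSETS, N_ASSETS))
cov = A @ A.T                                   # stand-in covariance matrix

def normalise(w):
    w = np.clip(w, 0, None)
    return w / w.sum() if w.sum() > 0 else np.full(N_ASSETS, 1/N_ASSETS)

def objectives(w):
    return w @ mu, w @ cov @ w                  # (return, risk)

pop = [normalise(rng.random(N_ASSETS)) for _ in range(POP)]
for _ in range(GENS):
    kids = []
    for _ in range(POP):
        a, b = rng.choice(POP, 2, replace=False)
        # blend crossover plus Gaussian mutation, re-normalised to sum 1
        kids.append(normalise(0.5*(pop[a] + pop[b]) + rng.normal(0, 0.05, N_ASSETS)))
    scored = [(objectives(w), w) for w in pop + kids]
    def dominated(i):
        (ri, vi), _ = scored[i]
        return any(rj >= ri and vj <= vi and (rj > ri or vj < vi)
                   for k, ((rj, vj), _) in enumerate(scored) if k != i)
    # keep the non-dominated (Pareto) portfolios first
    front = [w for i, (_, w) in enumerate(scored) if not dominated(i)]
    rest = [w for i, (_, w) in enumerate(scored) if dominated(i)]
    pop = (front + rest)[:POP]

for w in pop[:3]:
    r, v = objectives(w)
    print(f"return={r:.3f} risk={v:.4f} weights={np.round(w, 2)}")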