Merging biometrics with cryptography has become increasingly common, and a rich scientific field has grown up around it. Biometrics adds a distinctive property to security systems, since biometric features are unique to each individual. In this study, a new method is presented for ciphering data based on fingerprint features. The plaintext message is embedded at the positions of minutiae extracted from a fingerprint into a generated random text file, regardless of the size of the data. The proposed method can be explained in three scenarios. In the first scenario, the message is placed directly inside the random text at the minutiae positions. In the second scenario, the message is encrypted with a chosen word before being ciphered inside the random text. In the third scenario, the encryption process ensures correct restoration of the original message. Experimental results show that the proposed cryptosystem works well and is secure, owing to the huge number of fingerprints an attacker would have to try in order to extract the message: all fingerprints but the correct one yield incorrect results, and the recovered message will not match the original plaintext. The method also ensures that any intentional tampering or simple damage is discovered through a failure to extract the proper message, even if the correct fingerprint is used.
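A minimal sketch of the first scenario, assuming minutiae are given as (x, y) coordinates and mapped to indices in the cover text; the mapping function and the sample minutiae below are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of scenario one: embed a message into random cover text at
# fingerprint-minutiae positions. Mapping and minutiae are hypothetical.
import random
import string

def minutiae_to_positions(minutiae, cover_len):
    # Assumed mapping: flatten each (x, y) coordinate and wrap around
    # the cover length to get an index into the cover text.
    return [(x * 256 + y) % cover_len for (x, y) in minutiae]

def embed(message, minutiae, cover_len=4096):
    cover = [random.choice(string.ascii_letters) for _ in range(cover_len)]
    positions = minutiae_to_positions(minutiae, cover_len)
    for ch, pos in zip(message, positions):
        cover[pos] = ch            # write message characters at minutiae positions
    return "".join(cover)

def extract(stego_text, minutiae, msg_len):
    positions = minutiae_to_positions(minutiae, len(stego_text))
    return "".join(stego_text[pos] for pos in positions[:msg_len])

minutiae = [(12, 37), (88, 140), (45, 201), (130, 66), (77, 190)]  # hypothetical
stego = embed("HELLO", minutiae)
assert extract(stego, minutiae, 5) == "HELLO"
```

Any other fingerprint yields different minutiae, hence different positions, and extraction returns random cover characters rather than the plaintext.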
In this article, we design an optimal neural network based on a new LM (Levenberg-Marquardt) training algorithm. The traditional LM algorithm requires high memory, storage, and computational overhead because it must update Hessian approximations in each iteration. The suggested design converts the original problem into a minimization problem using a feed-forward network to solve nonlinear 3D PDEs. An optimal design is also obtained by computing the learning parameters with high precision. Examples are provided to portray the efficiency and applicability of this technique. Comparisons with other designs are also conducted to demonstrate the accuracy of the proposed design.
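For context, the overhead mentioned above comes from the classical LM step, which forms and solves against a damped Gauss-Newton Hessian approximation every iteration; a minimal numpy sketch of that baseline step (not the article's modified algorithm):

```python
# Classical LM step: w_new = w - (J^T J + mu*I)^(-1) J^T e, where J is the
# Jacobian of the residuals e(w). Forming J^T J each iteration is the
# memory/computation cost the article targets.
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, mu):
    e = residual_fn(w)                  # residual vector, shape (m,)
    J = jacobian_fn(w)                  # Jacobian, shape (m, n)
    H = J.T @ J + mu * np.eye(w.size)   # damped Hessian approximation
    return w - np.linalg.solve(H, J.T @ e)

# Tiny illustrative least-squares fit: find w so that w*x ~= y.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])
w = np.array([0.5])
for _ in range(10):
    w = lm_step(w,
                residual_fn=lambda w: w[0] * x - y,
                jacobian_fn=lambda w: x.reshape(-1, 1),
                mu=1e-3)
print(w)  # converges near the least-squares slope (~2.04)
```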
Research on the automated extraction of essential data from electrocardiography (ECG) recordings has been a significant topic for a long time. The main focus of digital processing is to locate the fiducial points that mark the beginning and end of the P, QRS, and T waves based on their waveform properties. The presence of unavoidable noise during ECG data collection and inherent physiological differences among individuals make it challenging to identify these reference points accurately, resulting in suboptimal performance. This is done through several primary stages that rely on preliminary processing of the ECG electrical signal through a set of steps (preparing raw data and converting them into files that…
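The abstract is truncated here, but a common preliminary-processing stage of this kind is a Pan-Tompkins-style QRS detector; a minimal sketch, where the filter cutoffs, window length, and threshold are illustrative assumptions rather than the paper's values:

```python
# Pan-Tompkins-style QRS detection sketch: band-pass filter, differentiate,
# square, integrate over a moving window, then pick peaks above a threshold.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs):
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)          # keep the QRS energy band
    squared = np.diff(filtered) ** 2        # emphasize steep QRS slopes
    window = int(0.15 * fs)                 # ~150 ms integration window
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    peaks, _ = find_peaks(integrated,
                          height=0.5 * integrated.max(),  # illustrative threshold
                          distance=int(0.25 * fs))        # ~250 ms refractory gap
    return peaks  # sample indices of candidate QRS complexes

# Usage on a synthetic trace: sharp 1 Hz spikes sampled at 360 Hz.
fs = 360
t = np.arange(0, 5, 1 / fs)
print(detect_qrs(np.sin(2 * np.pi * t) ** 31, fs))
```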
Within the framework of big data, energy issues are highly significant. Despite the significance of energy, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligence algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligence algorithms, since this is critical to exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligence algorithms in big data analytics. This work highlights that big data analytics using computational intelligence algorithms generates a very high amount…
Carbonate reservoirs are an essential source of hydrocarbons worldwide, and their petrophysical properties play a crucial role in hydrocarbon production. The most critical petrophysical properties of carbonate reservoirs are porosity, permeability, and water saturation. A tight reservoir is a reservoir with low porosity and permeability, which means it is difficult for fluids to move through it. This study's primary goal is to evaluate the reservoir properties and lithological identification of the Sadi Formation in the Halfaya oil field, considered one of Iraq's most significant oilfields, 35 km south of Amarah. The Sadi Formation consists of four units: A, B1, B2, and B3. Sadi A was excluded as it was not filled with hydrocarbons…
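For reference, water saturation in evaluations of this kind is commonly estimated from resistivity logs with Archie's equation; a minimal sketch using generic carbonate parameters, not values taken from this study:

```python
# Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
# a, m, n below are generic carbonate defaults, not results from the study.
def archie_sw(phi, rt, rw, a=1.0, m=2.0, n=2.0):
    """Water saturation from porosity (phi, fraction), true resistivity
    (rt, ohm.m), and formation-water resistivity (rw, ohm.m)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

print(archie_sw(phi=0.12, rt=20.0, rw=0.03))  # ~0.32 for these inputs
```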
Various 2,5-disubstituted 1,3,4-oxadiazoles (Schiff bases, β-lactams, and azo compounds) were synthesized from 2,5-di(4,4′-aminophenyl)-1,3,4-oxadiazole, which was itself synthesized from a mixture of 4-aminobenzoic acid and hydrazine in the presence of polyphosphoric acid. The synthesized compounds were characterized using spectral data (UV, FT-IR, and 1H-NMR).
Four new copolymers were synthesized from the reaction of the bis-acid monomer 3-((4-carboxyphenyl)diazenyl)-5-chloro-2-hydroxybenzoic acid with five diacid hydrazides in the presence of polyphosphoric acid. The resulting monomers and copolymers were characterized by FT-IR, 1H-NMR, and 13C-NMR spectroscopy, as well as the EIMS technique. The number-average molecular weights of the copolymers are between 4822 and 9144, and their polydispersity indexes are between 1.02 and 2.15. All the copolymers show good thermal stability, losing 10% of their weight under nitrogen only at temperatures above 305.86 °C. Cyclic voltammetry (CV) measurements show that the electrochemical band gaps (Eg) of these copolymers are below 2.00 eV.
In the current worldwide health crisis produced by coronavirus disease (COVID-19), researchers and medical specialists began looking for new ways to tackle the epidemic. According to recent studies, machine learning (ML) has been effectively deployed in the health sector. Medical imaging sources (radiography and computed tomography) have aided the development of artificial intelligence (AI) strategies to tackle the coronavirus outbreak. As a result, a classical machine learning approach for coronavirus detection from computerized tomography (CT) images was developed. In this study, a convolutional neural network (CNN) model is used for feature extraction and a support vector machine (SVM) for the classification of axial…
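A minimal sketch of a CNN-features-plus-SVM pipeline of the kind described, using a ResNet-18 backbone as the feature extractor; the backbone choice and the stand-in data are assumptions, since the study's exact CNN is not specified here:

```python
# CNN feature extraction + SVM classification sketch. ResNet-18 and the
# random stand-in tensors are illustrative, not the study's configuration.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Feature extractor: ResNet-18 with its classification head removed.
# Use weights=models.ResNet18_Weights.DEFAULT for pretrained features.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_features(images):
    # images: float tensor (N, 3, 224, 224), ImageNet-normalized in practice.
    with torch.no_grad():
        return backbone(images).numpy()     # (N, 512) feature vectors

# Stand-in data: replace with preprocessed axial CT slices and labels.
images = torch.randn(16, 3, 224, 224)
labels = np.random.randint(0, 2, size=16)   # 0 = non-COVID, 1 = COVID

features = extract_features(images)
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:4]))
```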
Arab Food Security
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This condition places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Consequently, clustering is a common approach that divides an input space into several homogeneous zones; it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, which…
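A minimal numpy sketch of the FCM algorithm mentioned above, for illustration only; the study's dataset, models, and parameter choices are not reproduced here:

```python
# Fuzzy c-means: alternate between weighted center updates and soft
# membership updates until the memberships stop changing.
import numpy as np

def fcm(X, c, m=2.0, iters=100, eps=1e-5, seed=0):
    """Cluster rows of X into c fuzzy clusters; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)      # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m                         # fuzzified membership weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            break
        U = U_new
    return centers, U

# Toy usage: two well-separated blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, U = fcm(X, c=2)
print(centers)                             # near (0, 0) and (5, 5)
print(U.argmax(axis=1)[:5], U.argmax(axis=1)[-5:])
```

Unlike k-means, each sample keeps a graded membership in every cluster, which is why FCM is often preferred for overlapping gene-expression profiles.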