Merging biometrics with cryptography has become a familiar pairing, and a rich scientific field has grown around it, because biometric traits add a distinctive property to security systems: they are unique, individual features of every person. In this study, a new method is presented for enciphering data based on fingerprint features. The research works by placing the plaintext message, according to the positions of minutiae extracted from a fingerprint, inside a generated random text file, regardless of the size of the data. The proposed method can be explained in three scenarios. In the first scenario, the message is embedded directly in the random text at the minutiae positions; in the second, the message is encrypted with a chosen word before being hidden in the random text; in the third, the encryption process ensures correct restoration of the original message. Experimental results show that the proposed cryptosystem works well and is secure, because an attacker attempting to extract the message faces a huge number of candidate fingerprints, all but one of which yield incorrect results that do not reproduce the original plaintext. The method also ensures that any intentional tampering or accidental damage is discovered, since message extraction then fails even when the correct fingerprint is used.
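The abstract does not give the exact minutiae-to-position mapping, so the following is a minimal sketch of the first scenario only, assuming minutiae are already extracted as (x, y) coordinates and mapped to offsets in a random cover text by a hypothetical hash-style formula; all names here are illustrative, not the authors' implementation.

```python
import secrets
import string

def embed_message(message, minutiae, cover_len=4096):
    """Scenario-1 sketch: hide message characters at positions derived
    from fingerprint minutiae inside a freshly generated random text."""
    if len(message) > len(minutiae):
        raise ValueError("need at least one minutia per message character")
    # Random printable cover text.
    cover = [secrets.choice(string.ascii_letters + string.digits)
             for _ in range(cover_len)]
    # Assumed mapping from a minutia (x, y) to an offset in the cover text;
    # a full implementation must also guarantee the offsets are distinct.
    positions = [(x * 31 + y) % cover_len for x, y in minutiae]
    for ch, pos in zip(message, positions):
        cover[pos] = ch
    return "".join(cover)

def extract_message(cover, minutiae, length):
    """Recompute the same positions from the fingerprint and read them back."""
    positions = [(x * 31 + y) % len(cover) for x, y in minutiae]
    return "".join(cover[p] for p in positions[:length])
```

Extraction with any other fingerprint recomputes different positions and returns garbage, which is the security argument the abstract makes.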
Social networking sites are among the modern communication technologies that have contributed to expressing public opinion trends towards various events and crises, of which security crises are the most important, being characterized by their ability to influence the community life of the public. This study seeks to recognize their role in shaping the opinions of the educated class of the public, a class characterized by a high level of knowledge and culture and by experience in dealing with the media. Their advantage is an active audience that expresses its views on the situations, events, and news published on these sites, as well as its attitudes towards and sympathy with those events. Accordingly, a number of questions were included in the questionnaire.
The use of real-time machine learning to optimize passport control procedures at airports can greatly improve both the efficiency and the security of these processes. To automate and optimize them, AI algorithms such as character recognition, facial recognition, predictive algorithms, and automatic data processing can be implemented. The proposed method uses the R-CNN object detection model to detect passport objects in real-time images collected by passport control cameras. This paper describes the step-by-step process of the proposed approach, which includes pre-processing, training and testing the R-CNN model, integrating it into the passport control system, and evaluating its accuracy and speed for efficient passenger flow.
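The paper names R-CNN but not a concrete implementation; the sketch below stands in with torchvision's Faster R-CNN and assumes a model already fine-tuned on labeled passport images (the stock COCO weights contain no passport class), so the score threshold and file name are placeholders.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Detection backbone; an assumed substitute for the paper's R-CNN, not the
# authors' exact model. In practice the head would be fine-tuned on
# passport images before use.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_passports(image_path, score_threshold=0.8):
    """Run one frame from a passport-control camera through the detector
    and keep only boxes above the confidence threshold."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]  # dict with "boxes", "labels", "scores"
    keep = pred["scores"] >= score_threshold
    return pred["boxes"][keep], pred["scores"][keep]
```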
The majority of systems dealing with natural language processing (NLP) and artificial intelligence (AI) can assist in making automated and automatically supported decisions. However, these systems may face challenges, or find it confusing, when identifying the information (characterization) required to elicit a decision by extracting or summarizing relevant information from large text documents or colossal content. When such documents are obtained online, for instance from social networking or social media sites, the textual content of these sites increases remarkably. The main objective of the present study is to survey the latest developments in the implementation of text-mining techniques.
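As one concrete member of the technique family the survey covers (not a method from the study itself), the sketch below uses TF-IDF weighting to pull the most characteristic term out of each short social-media post; the posts are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for posts collected from a social media site.
posts = [
    "service outage reported across the region",
    "users report login failures after the update",
    "update rolled back, service restored",
]
vectorizer = TfidfVectorizer(stop_words="english")
scores = vectorizer.fit_transform(posts)
terms = vectorizer.get_feature_names_out()
# Top-scoring term per document as a crude one-word characterization.
for row in scores.toarray():
    print(terms[row.argmax()])
```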
This research studies fuzzy sets, one of the most modern concepts in application across various practical and theoretical areas and in various fields of life. It addresses the fuzzy random variable, whose values are not crisp real numbers but fuzzy numbers, because it expresses vague or uncertain phenomena whose measurements are not definitive. Fuzzy data are presented for a two-sample test, and an analysis-of-variance method for fuzzy random variables is examined; this method depends on a number of assumptions, which is a problem that prevents its use when those assumptions are not realized.
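The abstract does not specify which class of fuzzy numbers is used; for concreteness, a standard textbook choice is the triangular fuzzy number $\tilde{A} = (a, b, c)$, whose membership function is:

```latex
\mu_{\tilde{A}}(x) =
\begin{cases}
\dfrac{x-a}{b-a}, & a \le x \le b, \\[4pt]
\dfrac{c-x}{c-b}, & b \le x \le c, \\[4pt]
0, & \text{otherwise.}
\end{cases}
```

A fuzzy observation then records an imprecise measurement as a whole range of plausible values with graded membership, rather than a single real number.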
The dramatic decrease in the cost of genome sequencing over the last two decades has produced an abundance of genomic data, which has been used in research on the discovery of genetic diseases and the development of medicines. At the same time, the large storage footprint of a genome (2–3 GB) has made it one of the most important sources of big data, prompting research centers concerned with genetic research to take advantage of the cloud and its services for storing and managing these data. The cloud, however, is a shared storage environment, which leaves the data stored in it vulnerable to unwanted tampering or disclosure. This raises serious concerns about securing such data from tampering and unauthorized disclosure.
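The abstract does not name a protection scheme; one common baseline is client-side encryption before upload, sketched here with the Python cryptography package's Fernet recipe, an assumed stand-in rather than the paper's method, and with a placeholder file name.

```python
from cryptography.fernet import Fernet

# Minimal sketch of client-side encryption before upload: the genome file is
# encrypted locally so the cloud provider only ever stores ciphertext.
key = Fernet.generate_key()   # must be kept outside the cloud
cipher = Fernet(key)

with open("genome.fasta", "rb") as f:         # placeholder path
    ciphertext = cipher.encrypt(f.read())      # a real 2-3 GB file would be chunked
with open("genome.fasta.enc", "wb") as f:
    f.write(ciphertext)

# The HMAC inside the Fernet token also lets later tampering be detected:
# decrypt() raises InvalidToken if the stored ciphertext was modified.
```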
Abstract
The study seeks to apply a data-mining technique, logistic regression, to inherent risk, using financial ratios as a technical-analysis tool and then deriving indicators of financial fraud. Major scandals that exposed companies, together with the failure of the audit process, have shocked the community and undermined the auditor's integrity; the reason is financial fraud practiced by companies and not discovered by the auditor. Such fraud involves an intentional act, committed by management or staff, that aims at personal gain and harms the interests of others; it can be said that all frauds are carried out in the presence of motives and factors that help them occur.
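A hedged sketch of the study's idea follows: fit a logistic regression on financial ratios and read the output as a fraud-risk probability. The ratios, labels, and feature choices are made-up placeholders, not the study's data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature matrix of financial ratios per company, e.g.
# [debt ratio, current ratio, receivables growth] (assumed features).
X = np.array([
    [0.42, 1.8, 0.12],
    [0.65, 0.9, 0.48],
    [0.38, 2.1, 0.05],
    [0.71, 0.7, 0.52],
])
y = np.array([0, 1, 0, 1])  # 1 = fraud indicator flagged

model = LogisticRegression().fit(X, y)
# Estimated fraud-risk probability for a new set of ratios.
print(model.predict_proba([[0.6, 1.0, 0.4]])[:, 1])
```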
An evaluation study of seismic interpretation was conducted using two-dimensional seismic data for the Subba oil field, located in southern Iraq. The field was discovered in 1973 through the results of seismic surveys and the drilling of the first exploratory well, SU-1, in 1975 in the south of the field. The field is 35 km long and about 10 km wide, and it contains 15 wells, most of them distributed in its central part.
This study deals with the field data and how to process them for the purpose of interpretation; the processing included conversion of the field-data format, compensation for lost data, and noise removal.
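The abstract lists noise removal as one processing step without naming a filter; the sketch below shows one routine choice, a zero-phase band-pass filter applied trace by trace with SciPy, where the corner frequencies, sampling rate, and dummy traces are all assumed values rather than the survey's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(traces, fs=500.0, low=8.0, high=60.0):
    """Zero-phase band-pass filter over the time axis of each trace."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, traces, axis=-1)

# 24 dummy traces of 2000 samples each, standing in for a field record.
noisy = np.random.randn(24, 2000)
filtered = bandpass(noisy)
```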