Face recognition and identity verification are now critical components of modern security and verification technology. The main objective of this review is to identify the deep learning techniques that have contributed most to the accuracy and reliability of facial recognition systems, and to highlight open problems and promising directions for future research. An extensive literature review was conducted using leading scientific databases, including IEEE Xplore, ScienceDirect, and SpringerLink, covering studies published between 2015 and 2024. The studies of interest concerned the application of deep neural networks (CNN, Siamese, and Transformer-based models) to face recognition and identity verification systems. Cross-sectional studies have shown that deep learning-based approaches improve recognition accuracy under diverse environmental and demographic conditions. Anti-spoofing and liveness (real presence) detection features integrated into these systems have likewise strengthened security against advanced attacks such as 3D masks, forged images and videos, and deepfake technology. Future trends point to the need for deep, multi-sensory, and interpretable learning models, for learning strategies that cope with limited data, and for adherence to legal and ethical frameworks that ensure fairness and transparency.
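The verification step the review describes (Siamese and embedding-based models) can be illustrated with a minimal sketch: two face embeddings are compared, and the identity claim is accepted when their cosine similarity exceeds a threshold. The embeddings and the threshold value here are hypothetical placeholders, not taken from any specific system in the review.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_probe: np.ndarray, emb_enrolled: np.ndarray,
           threshold: float = 0.5) -> bool:
    """Accept the identity claim if the two embeddings are similar enough.
    The threshold is an illustrative value; real systems tune it on a
    validation set to balance false accepts against false rejects."""
    return cosine_similarity(emb_probe, emb_enrolled) >= threshold
```

In practice the embeddings would come from a trained network (e.g. a Siamese CNN); only the comparison logic is shown here.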
Open-hole well log data (resistivity, sonic, and gamma ray) from well X in the Euphrates subzone of the Mesopotamian Basin are used to detect the total organic carbon (TOC) of the Zubair Formation in southern Iraq. Mathematical interpretation of the log parameters helps detect TOC and assess source rock productivity. In addition, quantitative interpretation of the log data allows the organic content to be estimated and source rock intervals to be identified. The response of each log to increasing TOC can be read directly from the log parameters: TOC can be predicted from an increase in the gamma-ray, sonic, neutron, and resistivity readings, together with a decrease in the density log.
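The qualitative log signature described above can be sketched as a simple screening function that flags depth samples where all five responses point toward organic-rich rock. The cutoff values below are illustrative placeholders only, not calibrated thresholds from the study, and this sketch is not the paper's quantitative TOC model.

```python
import numpy as np

def flag_organic_rich(gr, dt, nphi, rt, rhob,
                      gr_cut=80.0, dt_cut=90.0, nphi_cut=0.25,
                      rt_cut=10.0, rhob_cut=2.45):
    """Boolean mask of samples matching the qualitative TOC signature:
    elevated gamma ray (API), sonic transit time (us/ft), neutron
    porosity (v/v), and resistivity (ohm.m), with depressed bulk
    density (g/cc). All cutoffs are hypothetical examples."""
    gr, dt, nphi, rt, rhob = map(np.asarray, (gr, dt, nphi, rt, rhob))
    return ((gr > gr_cut) & (dt > dt_cut) & (nphi > nphi_cut)
            & (rt > rt_cut) & (rhob < rhob_cut))
```

A real workflow would calibrate such cutoffs (or a quantitative relation) against core-measured TOC for the formation in question.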
Plagiarism is becoming more of a problem in academia. It is made worse by the ease with which a wide range of resources can be found on the internet and then copied and pasted. It constitutes academic theft, since the perpetrator has "taken" the work of others and presented it as his or her own. Manual detection of plagiarism by a human being is difficult, imprecise, and time-consuming, because it is impractical for anyone to compare a submission against all existing material. Plagiarism is a major problem in higher education and can occur in any subject. Plagiarism detection has been studied in many scientific articles, and recognition methods have been developed utilizing plagiarism analysis, authorship identification, and
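One common building block of automatic plagiarism analysis is measuring textual overlap between two documents. A minimal sketch, using character n-gram sets and Jaccard similarity (an illustrative technique, not necessarily the one used in the abstract's cited methods):

```python
def char_ngrams(text: str, n: int = 3) -> set:
    """Set of overlapping character n-grams of a lower-cased,
    whitespace-normalized text."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of the two n-gram sets; values near 1.0
    suggest heavily copied text."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

Production detectors combine many such signals (fingerprinting, citation analysis, stylometry) rather than a single overlap score.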
Speech is the essential medium of interaction between humans, and between humans and machines. However, it is always contaminated by various types of environmental noise. Speech enhancement algorithms (SEA) have therefore emerged as a significant approach in the speech processing field for suppressing background noise and recovering the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error (MMSE) sense. The clean signal is estimated by exploiting Laplacian models of the speech and noise based on the distribution of orthogonal transform (Discrete Krawtchouk-Tchebichef transform) coefficients. The Discrete Kra
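The general idea of MMSE-sense enhancement in a transform domain can be sketched with a classical Wiener-style gain: each transform coefficient is shrunk according to its estimated signal-to-noise ratio. This is a generic illustration under Gaussian assumptions, not the paper's Laplacian/Krawtchouk-Tchebichef estimator.

```python
import numpy as np

def wiener_gain(noisy_coeffs: np.ndarray, noise_var: float) -> np.ndarray:
    """Per-coefficient Wiener gain G = SNR / (1 + SNR), with a crude
    maximum-likelihood a-priori SNR estimate max(|Y|^2/var_n - 1, 0)."""
    snr_prio = np.maximum(np.abs(noisy_coeffs) ** 2 / noise_var - 1.0, 0.0)
    return snr_prio / (1.0 + snr_prio)

def enhance(noisy_coeffs: np.ndarray, noise_var: float) -> np.ndarray:
    """MMSE-style shrinkage: attenuate noise-dominated coefficients,
    pass signal-dominated ones almost unchanged."""
    return wiener_gain(noisy_coeffs, noise_var) * noisy_coeffs
```

In a full system the signal would first be mapped into the chosen orthogonal transform domain, enhanced coefficient-by-coefficient, and then inverse-transformed.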
Computer vision seeks to mimic the human visual system and plays an essential role in artificial intelligence. It is based on various signal preprocessing techniques; developing efficient techniques is therefore essential for fast and reliable processing. Various signal preprocessing operations have been used in computer vision, including smoothing, signal analysis, resizing, sharpening, and enhancement, in order to reduce unwanted distortions and to support segmentation and image feature improvement. For example, to reduce the noise in a disturbed signal, smoothing kernels can be effectively used. This is achieved by convolving the disturbed signal with the smoothing kernels. In addition, orthogonal moments (OMs) are a cruc
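The smoothing-by-convolution operation just described can be sketched in a few lines: the noisy signal is convolved with a normalized kernel (here a simple box kernel; a Gaussian kernel works the same way). The kernel width is an illustrative choice.

```python
import numpy as np

def smooth(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Smooth a 1-D signal by convolving it with a normalized kernel."""
    kernel = kernel / kernel.sum()          # preserve the signal's mean level
    return np.convolve(signal, kernel, mode="same")

# A 5-tap moving-average (box) smoothing kernel.
box = np.ones(5)
```

Normalizing the kernel keeps flat regions of the signal unchanged while averaging out high-frequency noise.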
Fingerprints are commonly utilized as a key technique for personal recognition and in identification systems for personal security affairs. The most widely used fingerprint systems rely on the distribution of minutiae points for fingerprint matching and representation. These techniques become unsuccessful when only partial fingerprint images are captured, or when the finger ridges suffer from many cuts, injuries, or skin disease. This paper suggests a fingerprint recognition technique that utilizes local features for fingerprint representation and matching. The adopted local features are determined using Haar wavelet subbands. The system was tested experimentally using the FVC2004 databases, which consist of four datasets, each set holds
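The Haar wavelet subband decomposition mentioned above can be sketched directly in NumPy: one transform level splits an image into LL, LH, HL, and HH subbands, from which simple local features (here, mean subband energies, an illustrative choice rather than the paper's exact descriptor) can be computed.

```python
import numpy as np

def haar_subbands(img: np.ndarray):
    """One level of the 2-D Haar wavelet transform.
    Returns the LL, LH, HL, HH subbands of an image with even sides."""
    img = img.astype(float)
    # Row transform: pairwise averages (low-pass) and differences (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform applied to both row-transform outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def subband_energy_features(img: np.ndarray) -> np.ndarray:
    """A tiny local-feature vector: the mean energy of each subband."""
    return np.array([np.mean(b ** 2) for b in haar_subbands(img)])
```

Applied block-wise over a fingerprint image, such subband energies capture local ridge texture without depending on minutiae extraction.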
Motives for public exposure to specialized sports satellite channels and the gratifications achieved from it: research presented by Dr. Laila Ali Jumaa, Imam Al-Kadhim College (peace be upon him), Department of Information, 2021.
The research aims to determine the extent of public exposure to specialized sports satellite channels and the gratifications achieved from them, and to reach scientific results that accurately describe that exposure, its motives, and the gratifications it yields. The research objectives are summarized as follows:
- Revealing the habits and patterns of public exposure to specialized sports satelli