The term deepfake (a blend of "deep learning" and "fake") emerged from advances in artificial intelligence, particularly deep learning. Deep learning algorithms, which learn to solve problems when given large data sets, are used to swap faces in digital media and produce fake media with a realistic appearance. To improve the accuracy of distinguishing a real video from a fake one, a new model has been developed based on deep learning and noise residuals. Using Steganalysis Rich Model (SRM) filters, we extract a low-level noise map that serves as input to a lightweight Convolutional Neural Network (CNN), which classifies a face as real or fake. The results of our work show that the training accuracy of the CNN model
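The noise-residual extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify which SRM filters or CNN architecture are used, so a single standard SRM kernel (the 5x5 "KV" high-pass filter) is assumed here purely as an example.

```python
import numpy as np

# One of the standard SRM high-pass kernels (the 5x5 "KV" filter, scaled by 1/12).
# Assumed for illustration; the paper's exact filter bank is not given here.
SRM_KV = (1 / 12) * np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=np.float32)

def srm_residual(gray: np.ndarray, kernel: np.ndarray = SRM_KV) -> np.ndarray:
    """Convolve a grayscale image with an SRM kernel ('valid' mode) to get a noise map."""
    windows = np.lib.stride_tricks.sliding_window_view(gray, kernel.shape)
    # Flip the kernel so this is a true convolution rather than correlation.
    return np.einsum('ijkl,kl->ij', windows, kernel[::-1, ::-1])

# Sanity check: a perfectly flat image carries no high-frequency noise,
# so its residual is (numerically) zero because the kernel sums to zero.
flat = np.full((32, 32), 128.0, dtype=np.float32)
residual = srm_residual(flat)
```

The resulting low-level noise map, rather than the raw pixels, would then be fed to the lightweight CNN classifier.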
This article attempts to analyze legal vocabulary as a product of mutual contact between languages and to identify its essential characteristics. The status of borrowed terms, as units tied to the process of contact and communication between languages, makes it possible to define a borrowing in legal terminology as a term, or foreign-language expression, transferred from one language to another as a result of linguistic interaction and language contacts. Moreover, if legal terminology constitutes a particular subsystem
The selection and assessment of single-photon detection modules is a crucial problem in satellite-based QKD systems. A system's overall efficiency, secure key rate, and quantum bit error rate are all significantly influenced by its single-photon detection modules. There is a knowledge gap regarding the practical performance of commercially available single-photon detectors, because existing research frequently relies on theoretical characteristics alone. This paper studies the effect of the parameters of three commercial single-photon detection modules from ID Quantique (the ID Qube, ID100, and ID281) on certain Bennett-Brassard 1984 (BB84) protocol parameters, such as the secure key rate, mean photon number per pulse, and quantum bit error rate
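How detector parameters feed into BB84 figures of merit can be illustrated with a back-of-envelope model. The helper `bb84_estimates` and all numbers below are hypothetical assumptions for illustration, not the paper's measured values for the ID Quantique modules.

```python
from math import exp

def bb84_estimates(mu, loss_db, det_eff, dark_rate_hz, gate_ns, rep_rate_hz,
                   e_opt=0.01):
    """Rough BB84 estimates from detector/channel parameters (all illustrative).

    mu           -- mean photon number per pulse
    loss_db      -- total channel loss in dB
    det_eff      -- detector quantum efficiency (0..1)
    dark_rate_hz -- detector dark-count rate
    gate_ns      -- detection gate width in nanoseconds
    rep_rate_hz  -- pulse repetition rate
    e_opt        -- intrinsic optical error probability
    """
    t = 10 ** (-loss_db / 10)               # channel transmittance
    p_sig = 1 - exp(-mu * t * det_eff)      # signal detection prob. per pulse
    p_dark = dark_rate_hz * gate_ns * 1e-9  # dark-count prob. per gate
    p_click = p_sig + p_dark
    # Dark counts land in a random basis/bit, so half of them are errors.
    qber = (e_opt * p_sig + 0.5 * p_dark) / p_click
    sifted_rate = 0.5 * rep_rate_hz * p_click  # basis sifting keeps ~half
    return qber, sifted_rate

# Hypothetical detector: 25% efficiency, 100 Hz dark counts, 1 ns gate,
# over a 20 dB channel at 1 MHz repetition rate.
qber, sifted = bb84_estimates(mu=0.1, loss_db=20, det_eff=0.25,
                              dark_rate_hz=100, gate_ns=1,
                              rep_rate_hz=1_000_000)
```

Under such a model, a higher-efficiency, lower-dark-count module raises the sifted key rate and suppresses the dark-count contribution to the QBER, which is the trade-off the compared modules differ on.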