Social media and news agencies are major sources for tracking news and events. With the massive amounts of data these sources carry, false or misleading information spreads easily. Given the great dangers fake news poses to societies, previous studies have paid considerable attention to detecting it and limiting its impact. This work therefore uses modern deep learning techniques to detect Arabic fake news. In the proposed system, an attention model is combined with bidirectional long short-term memory (Bi-LSTM) to identify the most informative words in a sentence. A multi-layer perceptron (MLP) then classifies news articles as fake or real. The experiments are conducted on a newly launched Arabic dataset, the Arabic Fake News Dataset (AFND). The AFND dataset contains 606,912 news articles collected from multiple sources, making it suitable for deep learning requirements. Simple recurrent neural networks (S-RNN), long short-term memory (LSTM), and gated recurrent units (GRU) are used for comparison. Under the evaluation criteria, the proposed model achieves an accuracy of 0.8127, the highest among the deep learning methods examined in this work. Moreover, the proposed model performs better than previous studies that used the AFND.
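A minimal sketch of the attention pooling described above, with toy dimensions and random weights (the names `attention_pool`, `w`, `b`, and `u` are illustrative, not the paper's notation): the Bi-LSTM's per-word hidden states are scored, the scores are normalized with a softmax, and the states are summed into a sentence vector that an MLP would then classify.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w, b, u):
    # H: (T, d) Bi-LSTM hidden states, one per word.
    # Score each timestep, normalize, then weight the states.
    scores = np.tanh(H @ w + b) @ u      # (T,) importance scores
    alpha = softmax(scores)              # attention weights over words
    return alpha @ H, alpha              # (d,) sentence vector

rng = np.random.default_rng(0)
T, d = 6, 8                              # 6 words, hidden size 8 (toy sizes)
H = rng.normal(size=(T, d))
w, b, u = rng.normal(size=(d, d)), np.zeros(d), rng.normal(size=d)
ctx, alpha = attention_pool(H, w, b, u)
# ctx would feed an MLP with a sigmoid output for the fake/real decision
```

In the full model, `H` would come from a trained Bi-LSTM over word embeddings and the attention parameters would be learned jointly with the MLP.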
The mechanism of managing religious difference
God is the Lord of the worlds, of their humans and their jinn, Arabs and non-Arabs alike; He is the Lord of Muslims and the Lord of non-Muslims. He created them male and female despite their differences in tongues and colors, and He created them diverse and distinct in beliefs and religions.
To prevent conflict arising from these differences, the Lord of the worlds set limits that must not be crossed and drew clear maps as mechanisms for managing religious difference and lifting the psychological barriers between those who differ, so that they may coexist in peace and freedom, each adhering to his faith and practicing the rituals of his religion.
Social media and networks rely heavily on images, and those images should be distributed in a private manner. Image encryption is therefore one of the most crucial components of cyber security. In the present study, an effective image encryption technique is developed that combines the Rabbit algorithm, a simple lightweight cipher, with the Aizawa attractor, a chaotic 3D dynamical system that makes the Rabbit algorithm more secure. The process separates color images into blocks by first dividing them into red, green, and blue (RGB) bands. The presented approach generates multiple keys, or sequences, based on the initial parameters and conditions, which are
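The chaotic component can be sketched as follows. This is an illustrative keystream generator built from the Aizawa equations with commonly quoted parameter values, not the paper's exact Rabbit–Aizawa construction; the quantization step (`int(abs(x) * 1e6) % 256`) is an assumption for the sketch.

```python
def aizawa_keystream(n, state=(0.1, 0.0, 0.0), dt=0.01,
                     a=0.95, b=0.7, c=0.6, d=3.5, e=0.25, f=0.1):
    """Generate n pseudo-random bytes from the Aizawa attractor via
    Euler integration; the parameters and initial state act as the key."""
    x, y, z = state
    out = bytearray()
    for _ in range(n):
        dx = (z - b) * x - d * y
        dy = d * x + (z - b) * y
        dz = c + a * z - z**3 / 3 - (x*x + y*y) * (1 + e * z) + f * z * x**3
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append(int(abs(x) * 1e6) % 256)   # quantize trajectory to a byte
    return bytes(out)

ks = aizawa_keystream(16)
# XOR one 16-byte image block with the chaotic keystream, then invert
cipher = bytes(p ^ k for p, k in zip(b"red-channel bloc", ks))
plain = bytes(cb ^ k for cb, k in zip(cipher, ks))
```

Because XOR with the same keystream is its own inverse, decryption reduces to regenerating the keystream from the shared initial conditions.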
The science of information security has become a concern of many researchers, whose efforts aim at solutions and technologies that ensure information is transferred more securely through networks, especially the Internet, without any penetration of that information, given the risk of sending digital data between two parties through an insecure channel. This paper includes two data protection techniques. The first is cryptography using the Menezes–Vanstone elliptic curve ciphering system, which depends on public key technologies. The encoded data is then randomly embedded in the frame, depending on the seed used. The experimental results, using a PSNR within avera
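A textbook-scale sketch of Menezes–Vanstone encryption, using the small classroom curve y² = x³ + x + 1 over F₂₃ rather than a production curve; the fixed `k = 5` is for reproducibility only, and a real system draws a fresh random k per message.

```python
p, a = 23, 1                    # curve y^2 = x^3 + x + 1 mod 23
G = (0, 1)                      # generator of the curve group (order 28)

def ec_add(P, Q):
    if P is None: return Q      # None represents the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    R = None                    # double-and-add scalar multiplication
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

d = 7                           # receiver's private key
Q = ec_mul(d, G)                # receiver's public key

def encrypt(m1, m2, k):
    k1, k2 = ec_mul(k, Q)       # masking pair from k * public key
    return ec_mul(k, G), (m1 * k1) % p, (m2 * k2) % p

def decrypt(R, c1, c2):
    k1, k2 = ec_mul(d, R)       # same masking pair from d * (k*G)
    return (c1 * pow(k1, -1, p)) % p, (c2 * pow(k2, -1, p)) % p

R, c1, c2 = encrypt(9, 13, k=5)
```

Unlike plain ElGamal on a curve, Menezes–Vanstone masks two message residues with the coordinates of k·Q, so the plaintext never has to be embedded as a curve point.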
A novel method for a Network Intrusion Detection System (NIDS) is proposed, based on how DNA sequences detect disease, as both domains share a similar conceptual method of detection. Three steps are proposed to apply DNA sequences to NIDS: convert the network traffic data into a DNA-sequence form using a cryptography encoding method; discover Short Tandem Repeat (STR) sequence patterns for each network traffic attack using the Teiresias algorithm; and perform classification on the STR sequences using the Horspool algorithm. The 10% KDD Cup 1999 data set is used for the training phase, and the corrected KDD Cup 1999 data set is used for the testing phase to evaluate the proposed method. The current experiment results sh
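The matching step can be illustrated with a plain Boyer–Moore–Horspool search over a DNA-style alphabet; the sequences below are invented examples, not drawn from the KDD data.

```python
def horspool(text, pattern):
    """Boyer-Moore-Horspool search: index of first match, or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m else 0
    # Bad-character shift table built from all but the last pattern char.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = m - 1
    while i < n:
        j = 0
        while j < m and text[i - j] == pattern[m - 1 - j]:
            j += 1                       # compare right-to-left
        if j == m:
            return i - m + 1             # full match found
        i += shift.get(text[i], m)       # skip ahead by the shift rule
    return -1

# e.g. locating an STR signature in a DNA-encoded traffic string
hit = horspool("ACGTACGGTACG", "GGTA")   # matches starting at index 6
```

The shift table lets the scan jump several characters at a time, which is what makes Horspool attractive for repeated signature matching.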
The concept of separation axioms plays a key role in general topology and all generalized forms of topologies. The present authors continue the study of gpα-closed sets by utilizing this concept: new separation axioms, namely gpα-regular and gpα-normal spaces, are studied and their characterizations established. New spaces, namely gpα-Tk for k = 0, 1, 2, are also studied.
Long memory analysis is one of the most active areas in econometrics and time series analysis, where various methods have been introduced to identify and estimate the long memory parameter in fractionally integrated time series. One of the most common models for time series with long memory is the ARFIMA (Autoregressive Fractionally Integrated Moving Average) model, whose differencing order is a fractional number called the fractional parameter. To analyze and fit the ARFIMA model, this fractional parameter must be estimated, and many methods exist for doing so. In this research, the estimation methods were divided into indirect methods, where the Hurst parameter is estimated fir
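One common indirect route, estimating the Hurst parameter by rescaled-range (R/S) analysis and then taking d = H − 0.5, can be sketched as follows. This is a simplified illustration of the general approach, not necessarily the estimator used in the research.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis.
    The ARFIMA fractional parameter is then d = H - 0.5."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())    # cumulative deviations
            r = dev.max() - dev.min()        # range of the deviations
            s = w.std()                      # window standard deviation
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    # H is the slope of log(R/S) against log(window size)
    h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return h

rng = np.random.default_rng(1)
h = hurst_rs(rng.normal(size=4096))   # white noise: H should be near 0.5
d = h - 0.5                           # implied ARFIMA fractional parameter
```

For a genuinely long-memory series, H would sit clearly above 0.5 and d above 0; small-sample R/S estimates carry a known upward bias, which is why refined variants exist.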
... Show MoreThis paper is an attempt to help the manager of a manufactory to
plan for the next year by a scientific approach, to maximize the profit and آ provide optimal آ monthly quantities of آ production, آ inventory,
work-force, prices and sales. The computer programming helps us to execute that huge number of calculations.
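A toy version of such a plan, with invented profit and resource coefficients (two products, one labor and one material constraint), shows the kind of calculation a program automates; the paper's actual model is larger and spans all twelve months.

```python
# Maximize monthly profit 40*x1 + 30*x2 subject to
#   labor:    2*x1 + x2 <= 100 hours
#   material:   x1 + 3*x2 <= 90 units
# Brute force over feasible integer quantities (fine at this toy scale;
# a real plan would use linear programming).
best = max(
    (40 * x1 + 30 * x2, x1, x2)
    for x1 in range(51)
    for x2 in range(31)
    if 2 * x1 + x2 <= 100 and x1 + 3 * x2 <= 90
)
profit, x1, x2 = best   # optimal profit and product quantities
```

At the optimum both constraints bind, which is typical: the profitable plan uses up exactly the scarce labor and material.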
This research aims to reveal the quality standards of press images published on the news sites of the Iraqi News Agency and Al-Mada Press for the period from 1/9/2019 to 30/9/2019. The research is descriptive, and the researcher relied on the survey methodology to achieve its objectives. The research reached a number of results, most notably the weak role of photojournalists at the websites and their reliance on the Internet as a source for the press images published with news and reports, as well as the neglect of the description/comment standard below press images, which plays an important function in explaining and interpreting them for users.
Stemming is a pre-processing step in text mining applications and is very important in most information retrieval systems. The goal of stemming is to reduce different grammatical forms of a word, and sometimes derivationally related forms, to a common base (root or stem) form, e.g., reducing a noun, adjective, verb, or adverb to its base form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if that stem is not itself a valid root. As in other languages, there is a need for an effective stemming algorithm for the indexing and retrieval of Arabic documents, yet Arabic stemming algorithms are not widely available.
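A deliberately simplified light stemmer illustrates the idea of mapping related word forms to one shared stem. The affix lists are stand-ins (English suffixes, plus transliterated placeholders for Arabic prefixes such as "al-"); a real Arabic stemmer works on Arabic script with much richer affix tables.

```python
# Hypothetical light stemmer: strip one prefix and one suffix,
# so related forms collapse to a shared stem (not necessarily a root).
PREFIXES = ("al", "wa")                  # stand-ins for Arabic prefixes
SUFFIXES = ("ation", "ing", "ed", "s")   # sample English suffixes

def light_stem(word, min_len=3):
    # Longest affix first, and never shrink below min_len characters.
    for pre in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(pre) and len(word) - len(pre) >= min_len:
            word = word[len(pre):]
            break
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= min_len:
            word = word[:-len(suf)]
            break
    return word
```

Here "connected", "connecting", and "connects" all map to the stem "connect", which is exactly the conflation an index needs even though "connect" may not be a linguistic root.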
Speech is the essential way to interact between humans or between human and machine. However, it is always contaminated with different types of environmental noise. Therefore, speech enhancement algorithms (SEA) have appeared as a significant approach in the speech processing field to suppress background noise and recover the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed based on the minimum mean square error sense. The estimation of the clean signal takes advantage of Laplacian speech and noise modeling based on the distribution of orthogonal transform (discrete Krawtchouk–Tchebichef transform) coefficients. The Discrete Kra
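The general idea, shrinking noisy coefficients in an orthogonal transform domain, can be sketched with the DCT standing in for the Krawtchouk–Tchebichef transform and a Wiener-style gain standing in for the paper's Laplacian-based MMSE estimator; everything here is an illustrative assumption, not the proposed algorithm.

```python
import numpy as np

def dct2_matrix(n):
    # Orthonormal DCT-II matrix: a stand-in for the orthogonal
    # Krawtchouk-Tchebichef transform used in the paper.
    i = np.arange(n)
    C = np.cos(np.pi * np.outer(i, 2 * i + 1) / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

rng = np.random.default_rng(0)
n = 256
clean = np.sin(2 * np.pi * 5 * np.arange(n) / n)   # toy "speech" frame
noise_var = 0.04                                   # assumed known here
noisy = clean + rng.normal(scale=np.sqrt(noise_var), size=n)

C = dct2_matrix(n)
X = C @ noisy                                      # analysis transform
# Wiener-style per-coefficient gain, floored to limit distortion.
gain = np.maximum(1 - noise_var / np.maximum(X**2, 1e-12), 0.05)
enhanced = C.T @ (gain * X)                        # shrink, then invert
```

Because the transform is orthonormal and every gain lies in (0, 1], the enhanced frame can never gain energy; coefficients dominated by noise are attenuated while strong signal coefficients pass almost unchanged.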