Social media and news agencies are major sources for tracking news and events. With the massive amounts of data these sources produce, it is easy to spread false or misleading information. Given the great dangers fake news poses to societies, previous studies have paid considerable attention to detecting it and limiting its impact. This work therefore uses modern deep learning techniques to detect Arabic fake news. In the proposed system, an attention model is combined with bidirectional long short-term memory (Bi-LSTM) to identify the most informative words in a sentence; a multi-layer perceptron (MLP) then classifies news articles as fake or real. The experiments are conducted on a newly released Arabic dataset, the Arabic Fake News Dataset (AFND). AFND contains 606,912 news articles collected from multiple sources, making it suitable for deep learning requirements. Simple recurrent neural networks (S-RNN), long short-term memory (LSTM), and gated recurrent units (GRU) are used for comparison. Under the evaluation criteria, the proposed model achieved an accuracy of 0.8127, the highest among the deep learning methods examined in this work. Moreover, the proposed model outperforms previous studies that used AFND.
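The abstract names the three architectural pieces (Bi-LSTM encoder, word-level attention, MLP head) but not their wiring. Below is a minimal sketch, not the authors' code, of one common way to combine them in TensorFlow/Keras; the vocabulary size, sequence length, and layer widths are assumed illustrative values.

```python
# Sketch: attention-augmented Bi-LSTM classifier for fake/real news.
# Hyperparameters below are assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 50_000, 256, 128  # assumed values

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)
# Bi-LSTM returns one hidden state per token so attention can weight them.
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# Additive attention: score each token, softmax-normalize, and take a
# weighted sum of the hidden states as the sentence representation.
scores = layers.Dense(1, activation="tanh")(h)          # (batch, len, 1)
weights = layers.Softmax(axis=1)(scores)                # attention weights
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])
# MLP head classifies the attended representation as fake or real.
z = layers.Dense(64, activation="relu")(context)
outputs = layers.Dense(1, activation="sigmoid")(z)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

The attention weights are per-token, so they can also be inspected afterwards to show which words the model found most informative, as the abstract describes.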
Deep learning convolutional neural networks (CNNs) have been widely used to recognize and classify voice. Various techniques have been combined with CNNs to prepare voice data before training the classification model. However, not all models produce good classification accuracy, as there are many types of voice and speech. Classifying Arabic alphabet pronunciation is one such task, and accurate pronunciation is required when learning Qur'an recitation. Processing the pronunciation data and training on it therefore require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to…
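The abstract is truncated before the method details, but the padding step it names is a standard one: recordings have different durations, while a CNN needs uniform input shapes. A minimal sketch of that idea, under assumed parameters (16 kHz audio, 2-second target length, MFCC features via librosa):

```python
# Sketch: zero-pad variable-length recordings to a fixed size, then extract
# a fixed-shape feature map a CNN can train on. Parameters are assumptions.
import numpy as np
import librosa

SR = 16_000            # assumed sample rate
TARGET_LEN = SR * 2    # assume 2 seconds per pronunciation sample

def load_padded(path: str) -> np.ndarray:
    """Load a pronunciation recording, zero-pad/truncate, return MFCCs."""
    signal, _ = librosa.load(path, sr=SR)
    if len(signal) < TARGET_LEN:
        # Pad the tail with zeros so every example has the same length.
        signal = np.pad(signal, (0, TARGET_LEN - len(signal)))
    else:
        signal = signal[:TARGET_LEN]
    # MFCCs give a fixed-size 2-D "image" suitable for a CNN.
    return librosa.feature.mfcc(y=signal, sr=SR, n_mfcc=40)
```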
Deepfakes have become possible using artificial intelligence techniques that replace one person's face with another's (primarily a public figure's), making the latter appear to do or say things they never did. Contributing a solution to video credibility has therefore become a critical goal, which we address in this paper. Our work exploits the visible artifacts (blur inconsistencies) generated by the manipulation process. We analyze focus quality and its ability to detect these artifacts. The focus measure operators in this paper belong to the image Laplacian and image gradient groups, which are very fast to compute and do not need a large dataset for training. The results showed that (i) the Laplacian…
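A widely used member of the Laplacian group of focus measures is the variance of the Laplacian, which illustrates the abstract's point about speed and training-free operation. A minimal sketch with OpenCV; the file name and threshold below are assumed for illustration, not taken from the paper:

```python
# Sketch: variance-of-Laplacian focus measure for detecting blur
# inconsistencies in a face region. Threshold is an assumed example value.
import cv2

def laplacian_focus(gray_image) -> float:
    """Higher values mean sharper (more in-focus) content."""
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

face = cv2.imread("face_crop.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if laplacian_focus(face) < 100.0:  # assumed threshold
    print("Region is blurry - possible manipulation artifact")
```

Comparing this score between the face region and the surrounding frame is one way blur inconsistencies of the kind the paper targets can be surfaced.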
International Arabic-language satellite channels target the region with their news bulletins and innovative programs, attracting Arab viewers through new ideas and methods as well as high technology in video production and compositing. They frame their coverage accordingly, under the media policy that governs them. The study's problem therefore centers on answering the main question: how do international Arabic-language satellite channels frame the phenomenon of terrorism? Which aspects do they highlight, and which do they try to hide? The study adopted a survey methodology for the main news bulletins of the Russia Today channel for the…
Printed Arabic character recognition faces numerous challenges because a character's shape changes depending on its position in the word (at the beginning, in the middle, or at the end). This paper describes recognition strategies that rely on new pre-processing steps and on extracting structural and numerical features to build databases of printed Arabic characters. The extracted features were then applied in the recognition stage. The Minimum Distance Classifier (MDC) technique was used to train and classify the character classes, and a one-against-all (OAA) procedure was used to determine the rate…
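The Minimum Distance Classifier the abstract names is simple enough to sketch in full: each class is summarized by the mean of its training feature vectors, and a test sample is assigned to the nearest mean. A minimal NumPy sketch (feature extraction itself is the paper's contribution and is not reproduced here):

```python
# Sketch: Minimum Distance Classifier (MDC) over extracted feature vectors.
import numpy as np

def train_mdc(features: np.ndarray, labels: np.ndarray) -> dict:
    """Compute one mean feature vector per character class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify_mdc(means: dict, sample: np.ndarray):
    """Return the class whose mean is closest (Euclidean) to the sample."""
    return min(means, key=lambda c: np.linalg.norm(sample - means[c]))
```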
Finding communities of connected individuals in complex networks is challenging, yet crucial for understanding real-world societies and their interactions. Attention has recently turned to discovering the dynamics of such communities; however, detecting accurate community structures that evolve over time adds further challenges. Almost all state-of-the-art algorithms are designed on seemingly the same principle, treating the problem as a coupled optimization model that simultaneously identifies community structures and their evolution over time. Unlike these studies, the current work considers three measures individually, i.e. the intra-community score, the inter-community score, and the evolution of communities over…
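The first two of those measures have standard static formulations; a minimal NetworkX sketch of one plausible reading (internal edge density for the intra-community score, a conductance-like ratio for the inter-community score). These are illustrative choices, not necessarily the paper's exact definitions:

```python
# Sketch: two static community quality measures on an undirected graph.
import networkx as nx

def intra_score(G: nx.Graph, community: set) -> float:
    """Internal edge density: realized internal edges / possible ones."""
    n = len(community)
    internal = G.subgraph(community).number_of_edges()
    return internal / (n * (n - 1) / 2) if n > 1 else 0.0

def inter_score(G: nx.Graph, community: set) -> float:
    """Fraction of in-community edge endpoints whose edge leaves the community."""
    cut = sum(1 for u, v in G.edges(community)
              if (u in community) != (v in community))
    total = sum(G.degree(u) for u in community)
    return cut / total if total else 0.0
```

A good community then scores high on `intra_score` and low on `inter_score`, with the third (temporal) measure tracking how these change between network snapshots.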
While providing important news information, news tickers (also called sliders or crawlers) have become one of the devices satellite channels use daily, because they offer almost continuous news coverage; today, it seems, they have become an addition to the news world in their own right. Satellite channels therefore need a mechanism for building news tickers in order to develop them, even though tickers are still not formally recognized and still await classification as a radio or television art, which is not easy. This research sheds light on the construction of the news ticker of a news satellite channel, which is important in our modest view, as it can be the beginning of long scientific research in a new field of study. The problem…
Extractive multi-document text summarization, which aims to remove redundant information from a document collection while preserving its salient sentences, has recently attracted considerable interest in automatic models. This paper proposes an extractive multi-document text summarization model based on a genetic algorithm (GA). First, the problem is modeled as a discrete optimization problem and a specific fitness function is designed to cope effectively with the proposed model. Then, a binary-encoded representation together with a heuristic mutation operator and a local repair operator are proposed to characterize the adopted GA. Experiments are applied to ten topics from the Document Understanding Conference DUC2002 dataset…
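The binary encoding the abstract describes is concrete enough to sketch: bit i marks whether sentence i enters the summary, and fitness trades relevance against redundancy. The fitness function and mutation below are illustrative stand-ins, not the paper's exact operators:

```python
# Sketch: binary-encoded GA representation for extractive summarization.
# Weights, limits, and operators are assumptions for illustration.
import numpy as np

def fitness(chromosome: np.ndarray, sim_to_topic: np.ndarray,
            sim_matrix: np.ndarray, max_sents: int = 5) -> float:
    """chromosome[i] == 1 means sentence i is selected for the summary."""
    selected = np.flatnonzero(chromosome)
    if len(selected) == 0 or len(selected) > max_sents:
        return 0.0  # infeasible summaries score zero
    relevance = sim_to_topic[selected].sum()
    # Redundancy: pairwise similarity among selected sentences
    # (sim_matrix has a unit diagonal, removed by the subtraction).
    redundancy = sim_matrix[np.ix_(selected, selected)].sum() - len(selected)
    return relevance - 0.5 * redundancy  # assumed trade-off weight

def mutate(chromosome: np.ndarray, rate: float = 0.05) -> np.ndarray:
    """Random bit flips - a plain stand-in for the heuristic mutation."""
    flips = np.random.rand(len(chromosome)) < rate
    return np.where(flips, 1 - chromosome, chromosome)
```

A local repair operator of the kind the paper mentions would then, for example, drop the lowest-relevance sentences from chromosomes that exceed the length limit instead of scoring them zero.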
Sentiment analysis is a research field that studies human opinions, sentiments, evaluations, and emotions toward entities such as products, services, organizations, events, and topics, and their attributes; it is also a natural language processing task. However, sentiment analysis research has mainly been carried out for the English language. Although Arabic is one of the most used languages on the Internet, only a few studies have focused on Arabic sentiment analysis.
In this paper, a review of the most important research in Arabic text sentiment analysis using deep learning algorithms is presented. This review illustrates the main steps used in these studies, which include…
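One step that recurs across Arabic sentiment pipelines, whatever the downstream model, is Arabic-specific text normalization. A minimal sketch of the commonly reported operations (the exact steps vary between the reviewed studies):

```python
# Sketch: typical Arabic text normalization before sentiment modeling.
import re

DIACRITICS = re.compile(r"[\u064B-\u0652]")  # Arabic short-vowel marks

def normalize_arabic(text: str) -> str:
    text = DIACRITICS.sub("", text)          # strip diacritics (harakat)
    text = text.replace("\u0640", "")        # remove tatweel (kashida)
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)  # alef variants
    text = text.replace("\u0629", "\u0647")  # taa marbuta -> haa
    return text
```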
Abstract:
This paper sheds light on the dimensions of scheduling the nursing service, including ease of performing the service, willingness, health factors, psychological aspects, family matters, and reducing waiting time, and on how these improve the performance of the nursing process (willingness to perform, ability to perform, and opportunity to perform). A genuine problem in Iraqi hospitals lies in the weakness of nursing staffs and the absence of a central decision to define and organize schedules, which is why the researcher chose this problem as the research topic. The research aims to develop the nursing service…
Printed Arabic document image retrieval is an important system needed by many companies, governments, and users. In this paper, a printed Arabic document image retrieval system based on spotting the header words of official Arabic documents is proposed. The proposed system uses efficient segmentation and preprocessing methods and an accurate proposed feature extraction method to prepare documents for the classification process, and a Support Vector Machine (SVM) is used for classification. The experiments show that the system achieved its best accuracy, 96.8%, using the polynomial kernel of the SVM classifier.
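The classification stage the abstract reports, an SVM with a polynomial kernel, maps directly onto scikit-learn. A minimal sketch; the random feature vectors stand in for the paper's header-word features, and the degree and C values are assumed:

```python
# Sketch: polynomial-kernel SVM classification stage, as the paper reports.
# X, y are placeholder data; real inputs would be the extracted features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # placeholder feature vectors
y = rng.integers(0, 5, size=200)        # placeholder document classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="poly", degree=3, C=1.0)  # assumed degree and C
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```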