Data Mining Methods for Extracting Rumors Using Social Analysis Tools

Rumors are typically described as statements whose truth value is unknown. A rumor on social media can spread erroneous information to a large number of people, and such false information influences decision-making in a variety of societies. Detecting rumors is critical in online social media, where enormous amounts of information are easily distributed over a large network of sources with unverified authority. This research proposes detecting rumors using Natural Language Processing (NLP) tools together with six distinct Machine Learning (ML) methods: Naïve Bayes (NB), Random Forest (RF), K-Nearest Neighbor (KNN), Logistic Regression (LR), Stochastic Gradient Descent (SGD), and Decision Tree (DT). The dataset for the experiment contained 16,865 samples. For pre-processing, tokenization was used to separate the text into tokens, normalization removed all non-word tokens, stop-word removal deleted unnecessary words, and stemming reduced each token to its stem. Before applying the six classification algorithms, Term Frequency-Inverse Document Frequency (TF-IDF) was applied as the main feature extraction approach. According to the results, the RF classifier performed best, with an accuracy of 99%.

Keywords: Machine learning, Text classification, Naïve Bayes, RF, KNN, DT, Natural language processing, SGD.
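As a rough illustration of the described pipeline, the sketch below strings together tokenization, stop-word removal, stemming, TF-IDF, and the paper's best classifier (RF) using NLTK and scikit-learn. The file name "rumors.csv" and the column names "text" and "label" are assumptions for illustration, not details from the paper.

```python
import nltk
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

nltk.download("punkt", quiet=True)       # one-time NLTK resources
nltk.download("punkt_tab", quiet=True)
nltk.download("stopwords", quiet=True)

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    # tokenization -> normalization (keep word tokens only)
    # -> stop-word removal -> stemming
    tokens = word_tokenize(text.lower())
    return " ".join(stemmer.stem(t) for t in tokens
                    if t.isalpha() and t not in stop_words)

df = pd.read_csv("rumors.csv")           # assumed columns: "text", "label"
X = TfidfVectorizer().fit_transform(df["text"].map(preprocess))
X_tr, X_te, y_tr, y_te = train_test_split(
    X, df["label"], test_size=0.2, random_state=42)

clf = RandomForestClassifier().fit(X_tr, y_tr)  # RF: the paper's top model
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```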

Publication Date
Sat Dec 31 2022
Journal Name
Journal Of Economics And Administrative Sciences
Using Some Estimation Methods for Mixed-Random Panel Data Regression Models with Serially Correlated Errors with Application

This research studies panel data models with mixed random parameters, which contain two types of parameters: one random and the other fixed. The random parameters arise from differences in the marginal slopes of the cross-sections, while the fixed parameters arise from differences in the fixed intercepts; the random errors of each cross-section exhibit heteroskedasticity of variance in addition to first-order serial correlation. The main objective of this research is to use efficient estimation methods suited to panel data in the case of small samples. To achieve this goal, the feasible generalized least squares …

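For orientation, one common way to write a panel regression with fixed cross-section intercepts, random slopes, and first-order serially correlated, heteroskedastic errors is sketched below; the notation is illustrative, not necessarily the paper's:

```latex
y_{it} = \alpha_i + \beta_i x_{it} + u_{it}, \qquad
u_{it} = \rho\, u_{i,t-1} + \varepsilon_{it}, \qquad
\operatorname{Var}(\varepsilon_{it}) = \sigma_i^2
```

Here $\alpha_i$ are the fixed cross-section intercepts, $\beta_i$ the random slopes, $\rho$ the first-order serial correlation, and $\sigma_i^2$ the section-specific error variances; feasible GLS estimates $\rho$ and $\sigma_i^2$ and uses them to transform the model before estimation.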
Publication Date
Fri Aug 01 2014
Journal Name
Journal Of Economics And Administrative Sciences
Efficiency Measurement Model for Postgraduate Programs and Undergraduate Programs by Using Data Envelopment Analysis

Measuring the efficiency of postgraduate and undergraduate programs is one of the essential elements of the educational process. In this study, the colleges of Baghdad University and data for the academic year 2011-2012 were chosen to measure the relative efficiency of postgraduate and undergraduate programs in terms of their inputs and outputs. A relevant method for analyzing such data is Data Envelopment Analysis (DEA). The effect of academic staff on the number of enrolled and alumni students in the postgraduate and undergraduate programs is the main focus of the study.
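As a hedged sketch of the underlying computation, the input-oriented CCR envelopment model of DEA can be solved with one linear program per decision-making unit (here, a college); the code below uses SciPy and invented toy data, not the study's data set.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y):
    """Input-oriented CCR efficiency score for every DMU.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]               # minimize theta
        # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # outputs: -sum_j lam_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# toy example: input = academic staff; outputs = enrolled, alumni students
X = np.array([[20.], [35.], [50.]])
Y = np.array([[200., 40.], [260., 70.], [500., 90.]])
print(dea_ccr(X, Y).round(3))   # a score of 1.0 marks a relatively efficient unit
```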

 

Publication Date
Sun Sep 15 2019
Journal Name
Al-academy
Role of the Internet in Spreading Rumors: The Social Networking Site "Facebook" as a Model, for the Period 1-7-2017 to 30-11-2017: يوسف محمد حسين

This study examines the role of the Internet in spreading rumors, especially through social networking sites, with "Facebook" as the model. The effectiveness of social networks lies in the speed of transmission of events; this characteristic is important to the public and makes the Internet a strong contender to television in its relationship with the public. That is why the Internet today has become a fertile environment for the growth and spread of rumors. The more limited the platforms and places of publication, the greater the responsibility in the search for the original source of this or that rumor, as the Internet is an easy means for the production, spreading, and re-spreading of …

Publication Date
Tue Aug 31 2021
Journal Name
Iraqi Journal Of Science
Development of a Job Applicants E-government System Based on Web Mining Classification Methods

Governmental establishments maintain historical data on job applicants for future predictive analysis, for improving benefits and profits, and for developing organizations and institutions. In e-government, a decision can be made about job seekers by mining their information, which leads to beneficial insight. This paper proposes the development and implementation of a system that predicts the job appropriate to an applicant's skills using web content classification algorithms (LogitBoost, J48, PART, Hoeffding Tree, Naive Bayes). Furthermore, the results of the classification algorithms are compared on data sets called "job classification data" sets. Experimental results indicate …
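As a loose sketch of such a comparison: the paper's classifiers are Weka implementations, so the scikit-learn models below are only rough analogues, and the synthetic data stands in for the "job classification data" sets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for the "job classification data" sets
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "Decision tree (J48-like)": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Boosted trees (LogitBoost analogue)": GradientBoostingClassifier(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()  # 10-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```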

Publication Date
Sun Jan 01 2023
Journal Name
Petroleum And Coal
Analyzing Production Data Using a Combination of Empirical Methods and Advanced Analytical Techniques

Publication Date
Fri Mar 20 2009
Journal Name
Ijcsns International Journal Of Computer Science And Network Security
Pre-processing Importance for Extracting Contours from Noisy Echocardiographic Images

Extracting contours from two-dimensional echocardiographic images has been a challenge in digital image processing, essentially due to the heavy noise and poor quality of these images and to artifacts such as papillary muscles, intra-cavity structures such as chordae, and valves that can interfere with endocardial border tracking. In this paper, we present a technique to extract the contours of heart boundaries from a sequence of echocardiographic images, starting with pre-processing to reduce noise and produce better image quality. By pre-processing the images, unclear edges are avoided and we obtain accurate detection of both the heart boundary and the movement of the heart valves.
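A minimal sketch of the pre-process-then-track idea using OpenCV is given below; the filter choices, the Otsu thresholding step, and the file name "frame.png" are illustrative assumptions, not the paper's exact procedure.

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # one echo frame
assert img is not None, "frame.png not found"

# pre-processing: suppress speckle noise before any border tracking
den = cv2.medianBlur(img, 5)
den = cv2.GaussianBlur(den, (5, 5), 0)

# binarize and extract candidate boundary contours
_, mask = cv2.threshold(den, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# keep the largest contour as a crude proxy for the cavity border
largest = max(contours, key=cv2.contourArea)
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.drawContours(out, [largest], -1, (0, 255, 0), 2)
cv2.imwrite("contour.png", out)
```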

Publication Date
Mon Jan 01 2018
Journal Name
International Journal Of Data Mining, Modelling And Management
Association rules mining using cuckoo search algorithm

Association rules mining (ARM) is a fundamental and widely used data mining technique for obtaining useful information from data. Traditional ARM algorithms degrade computational efficiency by mining too many association rules that are not appropriate for a given user. Recent research in ARM investigates metaheuristic algorithms that look for only a subset of high-quality rules. In this paper, a modified discrete cuckoo search algorithm for association rules mining (DCS-ARM) is proposed for this purpose. The effectiveness of the algorithm is tested against a set of well-known transactional databases. Results indicate that the proposed algorithm outperforms existing metaheuristic methods.
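The paper's DCS-ARM algorithm itself is not reproduced here; the sketch below is a much-simplified discrete cuckoo search over candidate rules, with a standard support-plus-confidence fitness and a Lévy-flight step mapped to a number of item reassignments. The rule encoding and all parameter values are assumptions.

```python
import math, random

def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def fitness(rule, transactions, w_sup=0.5, w_conf=0.5):
    """Weighted support + confidence, a common ARM objective."""
    ant, con = rule
    if not ant or not con:
        return 0.0
    sup_all = support(ant | con, transactions)
    sup_ant = support(ant, transactions)
    conf = sup_all / sup_ant if sup_ant > 0 else 0.0
    return w_sup * sup_all + w_conf * conf

def random_rule(items):
    """Assign each item to antecedent, consequent, or neither."""
    ant, con = set(), set()
    for it in items:
        r = random.random()
        if r < 0.2: ant.add(it)
        elif r < 0.4: con.add(it)
    return (ant, con)

def mutate(rule, n_moves, items):
    """Discrete 'Levy flight': reassign n_moves randomly chosen items."""
    ant, con = set(rule[0]), set(rule[1])
    for it in random.sample(items, min(n_moves, len(items))):
        ant.discard(it); con.discard(it)
        r = random.random()
        if r < 1/3: ant.add(it)
        elif r < 2/3: con.add(it)
    return (ant, con)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return abs(u) / abs(v) ** (1 / beta)

def cuckoo_search_arm(transactions, n_nests=20, n_iter=200, pa=0.25):
    items = sorted({it for t in transactions for it in t})
    nests = [random_rule(items) for _ in range(n_nests)]
    for _ in range(n_iter):
        # a cuckoo lays an egg: perturb a random nest by a Levy-sized jump
        i = random.randrange(n_nests)
        new = mutate(nests[i], max(1, round(levy_step())), items)
        j = random.randrange(n_nests)
        if fitness(new, transactions) > fitness(nests[j], transactions):
            nests[j] = new
        # abandon a fraction pa of the worst nests
        nests.sort(key=lambda r: fitness(r, transactions))
        for k in range(int(pa * n_nests)):
            nests[k] = random_rule(items)
    return max(nests, key=lambda r: fitness(r, transactions))

# toy usage: transactions as sets of item labels
data = [frozenset(s) for s in (["milk", "bread"], ["milk", "eggs"],
                               ["bread", "eggs", "milk"], ["bread"])]
ant, con = cuckoo_search_arm(data)
print("rule:", set(ant), "=>", set(con))
```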

Publication Date
Thu Aug 01 2019
Journal Name
Journal Of Economics And Administrative Sciences
Some Estimation methods for the two models SPSEM and SPSAR for spatially dependent data

ABSTRACT

In this paper, some semi-parametric spatial models were estimated: the semi-parametric spatial error model (SPSEM), which suffers from spatial error dependence, and the semi-parametric spatial autoregressive model (SPSAR). The maximum likelihood method was used to estimate the spatial error parameter (λ) in the SPSEM model and the spatial dependence parameter (ρ) in the SPSAR model, and non-parametric methods were used to estimate the smoothing function m(x) for these two models. These non-parametric methods are the local linear estimator (LLE), which requires finding the smoothing …
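For reference, these two semiparametric models are commonly written as below, where W is the spatial weights matrix and m(·) the unknown smooth function; the notation is assumed, not taken from the paper.

```latex
\text{SPSEM:} \quad y = m(x) + u, \qquad u = \lambda W u + \varepsilon \\
\text{SPSAR:} \quad y = \rho W y + m(x) + \varepsilon
```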

Publication Date
Sat Dec 01 2007
Journal Name
Journal Of Economics And Administrative Sciences
The Role of Data Mining in Increasing Organizational Performance (An Analytical Study in the Industrial Bank)

Introduction

Financial and banking organizations deal mainly with customers, which requires them to collect enormous amounts of data about those customers. Together with the data that arrives daily, this leaves them facing large piles of data that demand great effort to handle well and to exploit in a way that serves the organization.

Handling such data manually, without the use of modern techniques, keeps the organization away from …

Publication Date
Wed Jul 17 2019
Journal Name
Aip Conference Proceedings
The correction of the line profiles for x-ray diffraction peaks by using three analysis methods

In this study, three methods, Williamson-Hall, size-strain plot, and Halder-Wagner, were used to analyze X-ray diffraction lines in order to determine the crystallite size and lattice strain of nickel oxide nanoparticles, and the results of these methods were then compared with two other methods. The crystallite sizes calculated by these methods are 0.42554 nm, 1.04462 nm, and 3.60880 nm, and the lattice strains are 0.56603, 1.11978, and 0.64606, respectively; these were compared with the results of the Scherrer method (0.29598 nm, 0.34245) and the modified Scherrer method (0.97497). Differences in the calculated results were observed for each of these methods.
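Two of the standard relations behind such analyses are worth recalling, where K is the shape factor (≈ 0.9), λ the X-ray wavelength, β the peak's full width at half maximum in radians, θ the Bragg angle, D the crystallite size, and ε the lattice strain; the size-strain plot and Halder-Wagner methods use similar linearizations of size and strain broadening.

```latex
% Scherrer equation: size from peak broadening alone
D = \frac{K\lambda}{\beta\cos\theta}

% Williamson-Hall plot: intercept gives size, slope gives strain
\beta\cos\theta = \frac{K\lambda}{D} + 4\varepsilon\sin\theta
```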
