Assessing the accuracy of classification algorithms is paramount, as it provides insight into their reliability and effectiveness in solving real-world problems. Accuracy assessment is essential in any remote sensing-based classification exercise, given that classification maps invariably contain misclassified pixels and classification errors. In this study, two satellite images of Duhok province, Iraq, captured in 2013 and 2022, were analyzed using spatial analysis tools to produce supervised classifications. Post-processing operations, such as smoothing, were applied to refine the classification. The classification results divide Duhok province into four classes: vegetation cover, buildings, water bodies, and bare land. Between 2013 and 2022, vegetation cover increased from 63% to 66%, built-up area increased from roughly 1% to 3%, water bodies decreased from 2% to 1%, and bare land decreased from 34% to 30%. Finally, the classification accuracy was assessed by comparison with field data and was approximately 85%.
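For context, a minimal sketch of how such an accuracy assessment against field (reference) data is commonly computed is shown below; the class names and the sample labels are illustrative assumptions, not values from the study.

```python
# Minimal sketch of an accuracy assessment by comparison with field (reference) data.
# The labels below are hypothetical; they do not come from the Duhok dataset.
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

classes = ["vegetation", "buildings", "water", "bare_land"]

# Hypothetical reference (field) labels and classified-map labels for sampled pixels.
reference = ["vegetation", "buildings", "water", "bare_land", "vegetation", "bare_land"]
classified = ["vegetation", "buildings", "bare_land", "bare_land", "vegetation", "vegetation"]

cm = confusion_matrix(reference, classified, labels=classes)
print("Confusion matrix:\n", cm)
print("Overall accuracy:", accuracy_score(reference, classified))
print("Kappa coefficient:", cohen_kappa_score(reference, classified))
```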
The removal of fluoride ions from aqueous solution onto algal biomass as a biosorbent was studied in batch and continuous fluidized bed systems. The batch system was used to study the effects of process parameters such as pH (2–3.5), influent fluoride ion concentration (10–50 mg/l), and algal biomass dose (0–1.5 g per 200 ml of solution), in order to determine the best operating conditions. These conditions were pH = 2.5, influent fluoride ion concentration = 10 mg/l, and algal biomass dose = 3.5 mg/l. In the continuous fluidized bed system, different operating conditions were examined: flow rate (0.667–0.800 l/min) and bed depth (8–15 cm), corresponding to bed weights of 80–150 g. The results show that the breakthrough time increases with the inc
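As background, the quantities typically reported in batch biosorption work, percent removal and uptake capacity, follow standard formulas; the sketch below uses placeholder numbers, not results from this study.

```python
# Sketch of the standard batch-biosorption quantities often reported in such studies.
# The sample values below are placeholders, not results from this work.
def removal_efficiency(c0, ce):
    """Percent removal: R% = (C0 - Ce) / C0 * 100, concentrations in mg/l."""
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Uptake q (mg/g) = (C0 - Ce) * V / m."""
    return (c0 - ce) * volume_l / mass_g

print(removal_efficiency(10.0, 1.2))             # influent 10 mg/l, effluent 1.2 mg/l
print(adsorption_capacity(10.0, 1.2, 0.2, 0.7))  # 200 ml solution, 0.7 g biomass
```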
A resume is the first impression between you and a potential employer; its importance can never be underestimated. Selecting the right candidates for a job within a company can be a daunting task for recruiters when they have to review hundreds of resumes. To reduce time and effort, NLTK and natural language processing (NLP) techniques can be used to extract essential data from a resume. NLTK is a free, open-source, community-driven project and the leading platform for building Python programs that work with human language data. To select the best resume according to the company's requirements, an algorithm such as KNN is used. To be selected from hundreds of resumes, your resume must be one of the best. Theref
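A hedged sketch of the kind of pipeline the abstract describes might pair simple TF-IDF text features (standing in here for NLTK preprocessing) with a KNN classifier; the resumes, labels, and role categories below are invented for illustration.

```python
# Hypothetical sketch: classify resumes with KNN over bag-of-words style features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

resumes = [
    "python machine learning pandas data analysis",
    "java spring backend microservices",
    "python nlp nltk text mining",
    "accounting auditing excel reporting",
]
labels = ["data", "backend", "data", "finance"]  # hypothetical role categories

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(resumes)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, labels)

new_resume = vectorizer.transform(["python deep learning text classification"])
print(knn.predict(new_resume))  # nearest-neighbour match against the labelled resumes
```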
Structural changes in the business and financial environment, the spread of business, the diversity of transactions between economic organizations, and the breadth of commercial activity worldwide have left a clear mark on the need for accounting to keep pace with these variables, since accounting is one of the social sciences that affects and is affected by the surrounding environment through various economic, social, technical, legal, and other factors.
As a result of these variables, a new field of accounting called forensic accounting has emerged, which involves the use of expertise from multiple fields that ultimately feed into the accounting profession; forensic accounting covers a wide range of disciplines, including strengthening
Crime is an unlawful activity of all kinds and is punished by law. Crimes affect a society's quality of life and economic development. With the large rise in crime globally, there is a need to analyze crime data to bring down the crime rate. This helps the police and the public take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid in crime pattern analysis and thus support the Boston department's crime prevention efforts. The geographical location factor has been adopted in our model, since it is an influential factor in several situations, whether traveling to a specific area or livin
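One way such a model could incorporate the geographical location factor is to include latitude and longitude alongside temporal features, as in the illustrative sketch below; the records, offence labels, and choice of random forest are assumptions and do not come from the Boston data.

```python
# Illustrative sketch of a crime-category predictor that uses geographic location
# (latitude/longitude) among its features; all records below are made up.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical records: [latitude, longitude, hour_of_day, day_of_week]
X = [
    [42.36, -71.06, 23, 5],
    [42.35, -71.07, 14, 2],
    [42.33, -71.09, 2, 6],
    [42.37, -71.05, 11, 0],
]
y = ["robbery", "theft", "assault", "theft"]  # hypothetical offence labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

print(model.predict([[42.34, -71.08, 22, 5]]))  # predicted category for a new location/time
```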
Objective: Breast cancer is regarded as a deadly disease in women, causing many deaths. Early diagnosis of breast cancer with appropriate tumor biomarkers may facilitate early treatment of the disease, thus reducing the mortality rate. The purpose of the current study is to improve early diagnosis of breast cancer by proposing a two-stage classification of breast tumor biomarkers for a sample of Iraqi women.
Methods: In this study, a two-stage classification system is proposed and tested with four machine learning classifiers. In the first stage, breast features (demographic, blood and salivary-based attributes) are classified into normal or abnormal cases, while in the second stage the abnormal breast cases are
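A rough sketch of a generic two-stage scheme of this kind is given below; the feature values, the choice of logistic regression and SVM, and the second-stage labels are assumptions for illustration, not the study's actual configuration.

```python
# Sketch of a two-stage classifier: stage 1 separates normal from abnormal cases,
# stage 2 is applied only to the abnormal ones. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # demographic/blood/salivary-style features (invented)
y_stage1 = rng.integers(0, 2, size=200)  # 0 = normal, 1 = abnormal
y_stage2 = rng.integers(0, 2, size=200)  # assumed second-stage label for abnormal cases

stage1 = LogisticRegression().fit(X, y_stage1)
abnormal_mask = y_stage1 == 1
stage2 = SVC().fit(X[abnormal_mask], y_stage2[abnormal_mask])

def predict(sample):
    sample = np.asarray(sample).reshape(1, -1)
    if stage1.predict(sample)[0] == 0:
        return "normal"
    return "abnormal: class %d" % stage2.predict(sample)[0]

print(predict(rng.normal(size=6)))
```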
Support vector machine (SVM) is a popular supervised learning algorithm based on margin maximization. It has a high training cost and does not scale well to a large number of data points. We propose a multiresolution algorithm, MRH-SVM, that trains an SVM on a hierarchical data aggregation structure, which also serves as a common data input to other learning algorithms. The proposed algorithm learns SVM models using high-level data aggregates and only visits data aggregates at more detailed levels where support vectors reside. In addition to performance improvements, the algorithm has advantages such as the ability to handle data streams and datasets with imbalanced classes. Experimental results show significant performance improvements in compa
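A simplified sketch of the general multiresolution idea, training on coarse aggregates and then revisiting raw points only near the decision boundary, is shown below; it illustrates the strategy rather than the paper's MRH-SVM implementation, and uses k-means centroids as a hypothetical aggregation structure.

```python
# Simplified sketch: fit an SVM on cluster centroids (coarse level), then refit
# using only the raw points that fall near the coarse decision boundary.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# Coarse level: summarise each class by cluster centroids.
centroids, centroid_labels = [], []
for cls in np.unique(y):
    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X[y == cls])
    centroids.append(km.cluster_centers_)
    centroid_labels.append(np.full(20, cls))
C = np.vstack(centroids)
cl = np.concatenate(centroid_labels)

coarse_svm = SVC(kernel="linear").fit(C, cl)

# Fine level: keep only points close to the coarse decision boundary.
margin = np.abs(coarse_svm.decision_function(X)) < 1.0
if np.unique(y[margin]).size < 2:      # fall back to all points if the band misses a class
    margin = np.ones(len(X), dtype=bool)
fine_svm = SVC(kernel="linear").fit(X[margin], y[margin])
print("points used at fine level:", margin.sum(), "of", len(X))
```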
Diabetes is one of the deadliest diseases in the world and can lead to stroke, blindness, organ failure, and amputation of lower limbs. Research indicates that diabetes can be controlled if it is detected at an early stage. Scientists are becoming more interested in classification algorithms for diagnosing diseases. In this study, we analyzed the performance of five classification algorithms, namely naïve Bayes, support vector machine, multilayer perceptron artificial neural network, decision tree, and random forest, using a diabetes dataset that contains the information of 2000 female patients. Various metrics were applied in evaluating the performance of the classifiers, such as precision, area under the c
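An illustrative comparison of the five named classifiers under cross-validated accuracy and ROC AUC could be set up as below; the synthetic dataset merely stands in for the 2000-patient diabetes data.

```python
# Illustrative comparison of the five classifiers named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)  # synthetic stand-in

models = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(probability=True),
    "MLP": MLPClassifier(max_iter=500),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: accuracy={acc:.3f}, AUC={auc:.3f}")
```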
Linear discriminant analysis and logistic regression are the most widely used multivariate statistical methods for analyzing data with categorical outcome variables. Both are appropriate for developing linear classification models. Linear discriminant analysis assumes that the explanatory variables follow a multivariate normal distribution, while logistic regression makes no assumptions about the distribution of the explanatory data. Hence, logistic regression is assumed to be the more flexible and more robust method in case of violations of these assumptions.
In this paper, we focus on the comparison between three forms for classifying data belonging
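A minimal sketch comparing the two methods on the same data, assuming scikit-learn implementations and a synthetic stand-in dataset, is given below.

```python
# Hedged sketch: cross-validated accuracy of LDA versus logistic regression.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=5, random_state=1)  # synthetic stand-in

for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("Logistic regression", LogisticRegression(max_iter=1000))]:
    score = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: mean classification accuracy = {score:.3f}")
```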
The purpose of this work is to study the classification and construction of (k,3)-arcs in the projective plane PG(2,7). We found that there are two (5,3)-arcs, four (6,3)-arcs, six (7,3)-arcs, six (8,3)-arcs, seven (9,3)-arcs, six (10,3)-arcs, and six (11,3)-arcs. All of these arcs are incomplete. The number of distinct (12,3)-arcs is six, two of which are complete. There are four distinct (13,3)-arcs, two of them complete, and one (14,3)-arc, which is incomplete. There exists one complete (15,3)-arc.
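As an illustration of the defining property, the sketch below enumerates the 57 points of PG(2,7) and checks that no line contains more than three points of a candidate set; the sample point set is arbitrary and is not one of the arcs classified in the paper.

```python
# Check the defining property of a (k,3)-arc in PG(2,7): at most 3 points on any line.
from itertools import product

q = 7

def normalize(p):
    """Scale a homogeneous triple over GF(q) so its first nonzero entry is 1."""
    for c in p:
        if c % q:
            inv = pow(c, q - 2, q)
            return tuple((x * inv) % q for x in p)
    return None

# All points of PG(2,7): nonzero triples up to scalar multiples (q^2 + q + 1 = 57 points).
points = {normalize(p) for p in product(range(q), repeat=3) if any(p)}

def is_k3_arc(arc):
    """True if no line of the plane contains more than 3 points of `arc`."""
    for line in points:  # by duality, lines have the same coordinate form as points
        on_line = sum(1 for pt in arc if sum(a * b for a, b in zip(pt, line)) % q == 0)
        if on_line > 3:
            return False
    return True

sample = [normalize(p) for p in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (1, 2, 3)]]
print(len(points), is_k3_arc(sample))
```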