Coronavirus disease (COVID-19), which is caused by SARS-CoV-2, was declared a global pandemic by the World Health Organization (WHO) and has led to the collapse of healthcare systems in several countries around the globe. Machine learning (ML) methods are among the most widely used artificial intelligence (AI) approaches for classifying COVID-19 images. However, many ML methods have been applied to COVID-19 classification, which raises the question: which ML method performs best under a multi-criteria evaluation? Therefore, this research presents a benchmarking of COVID-19 machine learning methods, which is formulated as a multi-criteria decision-making (MCDM) problem, since several evaluation criteria are involved and some of them conflict with each other. In recent years, many MCDM approaches have been developed from different perspectives; the most recent of them, the fuzzy decision by opinion score method (FDOSM) introduced in 2020, efficiently resolves several issues that earlier methods could not handle. The methodology of this research is divided into two main stages. In the first stage, eight different ML methods were applied to chest X-ray (CXR) images and a new decision matrix was extracted in order to assess the ML methods. In the second stage, FDOSM was utilized to solve the resulting multi-criteria decision-making problem. The results of this research are as follows: (1) The individual benchmarking results of the three decision makers are nearly identical; among all the ML methods used, neural networks (NN) achieved the best results. (2) The group benchmarking results are comparable, and the neural network method is again the best among the methods used. (3) The final rank is more logical and closest to the decision makers' opinions. (4) The validation results show significant differences among the groups' scores, which indicates the authenticity of our results. Finally, this research offers many benefits, especially for hospitals and medical clinics, with a view to speeding up the diagnosis of patients suffering from COVID-19 using the best machine learning method.
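As a minimal illustration of the first stage described above (before any FDOSM ranking), the sketch below trains a few candidate classifiers and collects their scores on several evaluation criteria into a decision matrix. The classifier set, criteria, and stand-in dataset are assumptions for illustration only, not the paper's eight methods or CXR data; the FDOSM ranking step itself is not reproduced.

```python
# Sketch: build a decision matrix (methods x evaluation criteria) for benchmarking.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = load_digits(return_X_y=True)              # stand-in for CXR image features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {                                    # illustrative subset of ML methods
    "NN":  MLPClassifier(max_iter=500, random_state=0),
    "kNN": KNeighborsClassifier(),
    "DT":  DecisionTreeClassifier(random_state=0),
}

decision_matrix = {}                              # rows: methods, columns: criteria
for name, clf in candidates.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    decision_matrix[name] = {
        "accuracy":  accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred, average="macro"),
        "recall":    recall_score(y_te, pred, average="macro"),
        "f1":        f1_score(y_te, pred, average="macro"),
    }

for name, scores in decision_matrix.items():
    print(name, scores)
```

In the paper's second stage, a matrix of this shape would then be handed to FDOSM for ranking by the decision makers.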
In this research, we study the inverse Gompertz (IG) distribution and estimate its survival function. The survival function was estimated using three methods (the maximum likelihood, least squares, and percentiles estimators), and the best estimation method was then selected. The least squares method was found to be the best for estimating the survival function, because it has the lowest integrated mean squared error (IMSE) for all sample sizes.
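A minimal sketch of how the least squares estimator and an IMSE-style comparison could be set up is given below. It assumes a common IG parameterization, F(x) = exp(-(a/b)(e^{b/x} - 1)); the distribution form, parameter values, and Monte Carlo settings are illustrative assumptions, not the paper's.

```python
# Sketch: least squares estimation of the IG survival function and an IMSE estimate.
import numpy as np
from scipy.optimize import minimize

def ig_cdf(x, a, b):
    # Assumed IG CDF: F(x) = exp(-(a/b) * (exp(b/x) - 1)), x > 0.
    return np.exp(-(a / b) * (np.exp(b / x) - 1.0))

def ig_survival(x, a, b):
    return 1.0 - ig_cdf(x, a, b)

def ig_sample(n, a, b, rng):
    # Inverse-transform sampling: solve F(x) = u for x.
    u = rng.uniform(size=n)
    return b / np.log(1.0 - (b / a) * np.log(u))

def ls_estimate(x):
    # Least squares: match the fitted CDF to plotting positions i/(n+1).
    x = np.sort(x)
    pp = np.arange(1, len(x) + 1) / (len(x) + 1)
    loss = lambda t: np.sum((ig_cdf(x, np.exp(t[0]), np.exp(t[1])) - pp) ** 2)
    t = minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead").x
    return np.exp(t[0]), np.exp(t[1])

rng = np.random.default_rng(0)
a_true, b_true = 1.5, 0.8
grid = np.linspace(0.2, 5.0, 200)
ise = []
for _ in range(200):                          # Monte Carlo replications
    a_hat, b_hat = ls_estimate(ig_sample(50, a_true, b_true, rng))
    diff = ig_survival(grid, a_hat, b_hat) - ig_survival(grid, a_true, b_true)
    ise.append(np.mean(diff ** 2) * (grid[-1] - grid[0]))   # crude integral of squared error
print("Approximate IMSE of the least squares estimator (n = 50):", np.mean(ise))
```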
The ongoing coronavirus disease 2019 (COVID-19) situation in Iraq is characterized in this paper, together with the government's handling of the pandemic and the difficulties of enforcing public health measures in Iraq. Estimation over the COVID-19 data set was performed. Iraq is vulnerable to the pandemic, like the rest of the world, and in addition shares borders with the hotspot neighbouring country Iran. The government of Iraq launched proactive measures in an attempt to prevent the viral spread. Nevertheless, reports of new cases keep escalating, leaving public health officials racing to impose firmer restrictions to face the pandemic. The paper brings forth the current COVID-19 scenario in Iraq, the government measures towards the public health challenges, and
Cloth simulation and animation has been a topic of research since the mid-1980s in the field of computer graphics. Enforcing incompressibility is very important in real-time simulation. Although there have been great achievements in this regard, it still suffers from unnecessary time consumption in certain steps, which is a common problem in real-time applications. This research develops a real-time cloth simulator for a virtual human character (VHC) with wearable clothing. This research achieves success in cloth simulation on the VHC by enhancing the position-based dynamics (PBD) framework, computing a series of positional constraints that enforce constant densities. Also, the self-collision and collision wit
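For context, the sketch below shows a single generic PBD constraint-projection step (a simple distance constraint). It is not the paper's constant-density constraint, and all particle values and parameters are illustrative.

```python
# Sketch: one PBD constraint-projection step for a distance constraint.
import numpy as np

def project_distance_constraint(p1, p2, rest_length, w1=1.0, w2=1.0, stiffness=1.0):
    """Move two particles so the distance between them approaches rest_length."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return p1, p2
    c = dist - rest_length                 # constraint violation C(p1, p2)
    n = d / dist                           # direction of the constraint gradient
    p1 = p1 + stiffness * (w1 / (w1 + w2)) * c * n
    p2 = p2 - stiffness * (w2 / (w1 + w2)) * c * n
    return p1, p2

# Two particles have drifted apart during integration; projection pulls them back.
a, b = np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
for _ in range(5):                         # Gauss-Seidel style iterations
    a, b = project_distance_constraint(a, b, rest_length=1.0)
print(a, b, np.linalg.norm(b - a))
```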
In this research, the integral breadth method was used to analyze the X-ray diffraction lines and determine the crystallite size and lattice strain of zirconium oxide nanoparticles; the crystallite size was 8.2 nm and the lattice strain 0.001955. The results were then compared with three other methods, namely the Scherrer method, the Scherrer dynamical diffraction theory, and two formulas of the Scherrer and Wilson method. The results were as follows: Scherrer method, crystallite size 7.4 nm and lattice strain 0.011968; Scherrer dynamical method, crystallite size 7.5 nm; Scherrer and Wilson method, crystallite size 8.5 nm and lattice strain 0.001919. Using another formula of the Scherrer and Wilson method, we obtain the size of the c
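As a worked illustration of one of the comparison methods, the sketch below evaluates the Scherrer equation D = Kλ / (β cos θ). The wavelength, peak width, and peak position used are illustrative values rather than the paper's measurements.

```python
# Sketch: crystallite size from the Scherrer equation D = K*lambda / (beta*cos(theta)).
import math

K = 0.9                 # shape factor (dimensionless)
wavelength = 0.15406    # Cu K-alpha wavelength in nm
beta_deg = 1.0          # FWHM of the diffraction peak in degrees (illustrative)
two_theta_deg = 30.0    # peak position 2-theta in degrees (illustrative)

beta = math.radians(beta_deg)            # line broadening in radians
theta = math.radians(two_theta_deg / 2)  # Bragg angle in radians

D = K * wavelength / (beta * math.cos(theta))
print(f"Crystallite size D = {D:.1f} nm")
```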
This study relates to the estimation of a simultaneous equations system for the Tobit model, in which the dependent variables ( ) are limited, and this affects the choice of a good estimator. Since the classical methods, if used in such a case, produce biased and inconsistent estimators, we use new estimation methods, namely the Nelson-Olson method and the two-stage limited dependent variables (2SLDV) method, to obtain estimators that possess the characteristics of a good estimator.
That is, parameters will be estim
As a result of the pandemic crisis and the shift to digitization, cyber-attacks are at an all-time high in the modern day despite good technological advancement. The use of wireless sensor networks (WSNs) is an indicator of technical advancement in most industries. For the safe transfer of data, security objectives such as confidentiality, integrity, and availability must be maintained. The security features of WSNs are split into the node level and the network level. For the node level, a proactive strategy using deep learning/machine learning techniques is suggested. The primary benefit of this proactive approach is that it foresees the cyber-attack before it is launched, allowing for damage mitigation. A cryptography algorithm is put
In this article, a convolutional neural network (CNN) is used to detect damage and no-damage images from satellite imagery using different classifiers. These classifiers are well-known models that are used with the CNN to detect and classify images using a specific dataset. The dataset used belongs to the Houston hurricane that caused several damages in the nearby areas. In addition, a transfer learning property is used to store the knowledge (weights) and reuse it in the next task. Moreover, each applied classifier is used to detect the images from the dataset after it is split into training, testing, and validation sets. The Keras library is used to apply the CNN algorithm with each selected classifier to detect the images. Furthermore, the performa
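A minimal sketch of transfer learning in Keras for a binary damage / no-damage classifier is shown below. The VGG16 base, image size, and the random placeholder batch are assumptions for illustration, not the article's exact configuration or data.

```python
# Sketch: Keras transfer learning with a frozen pretrained base and a new binary head.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(128, 128, 3))
base.trainable = False                       # transfer learning: reuse stored weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # damage vs. no damage
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholder batch standing in for the hurricane imagery; the real
# training/validation splits of the dataset would be loaded here instead.
x = np.random.rand(16, 128, 128, 3).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))
model.fit(x, y, epochs=1, batch_size=8)
```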
Optic disc (OD) localization is a basic step for the screening, identification, and appreciation of the risk of diverse ophthalmic pathologies such as glaucoma and diabetic retinopathy. In fact, the fundamental step towards an exact OD segmentation process is the success of OD localization. This paper proposes a fully automatic procedure for OD localization based on two of the OD's most relevant features: its high intensity value and the convergence of the vasculature. Merging these two features renders the proposed method capable of localizing the OD within variously complicated environments such as a faint disc boundary, unbalanced shading, and the existence of retinal pathologies like cotton wool spots and exudates, which usually share the same
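As a rough illustration of the intensity cue only, the sketch below smooths the green channel of a (synthetic, placeholder) fundus image and takes the brightest location as an OD candidate. The vessel-convergence cue and the paper's actual merging scheme are not reproduced.

```python
# Sketch: bright-region OD candidate from the smoothed green channel.
import cv2
import numpy as np

# Synthetic stand-in for a fundus photograph: dark background with one bright disc.
img = np.zeros((400, 400, 3), dtype=np.uint8)
cv2.circle(img, (280, 150), 25, (200, 220, 230), -1)   # hypothetical optic disc

green = img[:, :, 1]                                   # green channel: best OD contrast
blur = cv2.GaussianBlur(green, (0, 0), sigmaX=15)      # suppress small bright structures
_, _, _, max_loc = cv2.minMaxLoc(blur)                 # (x, y) of the brightest point
print("Optic disc candidate at", max_loc)
```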
We examine 10 hypothetical patients suffering from some of the symptoms of COVID-19 (modified) using topological concepts on topological spaces created from equality and similarity relations and our information system. The outcome is determined by the degree of accuracy obtained by weighing the values of the lower and upper approximations. In practice, this approach has become clearer.
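A minimal sketch of the rough-set style accuracy measure, accuracy = |lower approximation| / |upper approximation| of a target set under an equality relation on symptom values, is given below. The tiny patient table and its attributes are hypothetical, not the paper's information system.

```python
# Sketch: lower/upper approximations and the accuracy measure over a small table.
patients = {                                  # hypothetical patient -> (fever, cough)
    "p1": ("high", "yes"), "p2": ("high", "yes"), "p3": ("low", "no"),
    "p4": ("low", "no"),   "p5": ("high", "no"),
}
positive = {"p1", "p3", "p5"}                 # hypothetical set of positive diagnoses

# Equivalence classes of the equality relation on the symptom values.
classes = {}
for p, attrs in patients.items():
    classes.setdefault(attrs, set()).add(p)

lower = {p for c in classes.values() if c <= positive for p in c}   # certainly positive
upper = {p for c in classes.values() if c & positive for p in c}    # possibly positive

accuracy = len(lower) / len(upper) if upper else 0.0
print("lower:", lower, "upper:", upper, "accuracy:", accuracy)
```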
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This condition places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabelled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones, and it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic
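A minimal sketch of the two clustering algorithms named above, scikit-learn's k-means and a small from-scratch fuzzy c-means, is shown below. The synthetic 2-D data stands in for the brain-tumor features, which are not reproduced here.

```python
# Sketch: k-means and fuzzy c-means clustering on synthetic 2-D data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# --- k-means ---
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("k-means centers:\n", km.cluster_centers_)

# --- fuzzy c-means ---
def fuzzy_c_means(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships (N x c)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

centers, U = fuzzy_c_means(X)
print("FCM centers:\n", centers)
```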