The COVID-19 pandemic has necessitated new methods for controlling the spread of the virus, and machine learning (ML) holds promise in this regard. Our study explores the latest ML algorithms used for COVID-19 prediction, focusing on their potential to optimize decision-making and resource allocation during peak periods of the pandemic. Our review stands out from others in that it concentrates primarily on ML methods for disease prediction. To conduct this scoping review, we performed a Google Scholar literature search using "COVID-19," "prediction," and "machine learning" as keywords, with a custom range from 2020 to 2022. Of the 99 articles screened for eligibility, we selected 20 for the final review. Our review demonstrates that ML-powered tools can alleviate the burden on healthcare systems: these tools can analyze large amounts of medical data and potentially improve predictive and preventive healthcare.
The security of message information has drawn increasing attention, so cryptography is used extensively. This research aims to generate secure cipher keys from retina information to increase the level of security. The proposed technique applies cryptography based on retina information. The main contribution is an original procedure that generates three types of keys in one system from the positions of the retina vessel ends, improving on the earlier approach of three separate systems, each with one key. The distances between the center of the diagonals of the retina image and the retina vessel ends (diagonal center-end, DCE) represent the first key. The distances between the center of the radius of the retina and the retina vessel's end (ra
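The first (DCE) key lends itself to a short sketch. The following is an illustrative reconstruction, not the authors' code: vessel-end coordinates, which in the paper would come from retinal vessel segmentation, are reduced to their distances from the diagonal center and hashed into a fixed-length key. The quantization step and the use of SHA-256 are assumptions made for this sketch.

```python
import hashlib
import math

def dce_key(image_size, vessel_ends):
    """Sketch of a DCE-style key: distances from the image's diagonal
    center to each vessel-end point, hashed into a 256-bit key.
    `vessel_ends` is a list of (x, y) points; obtaining them via retinal
    vessel segmentation is out of scope here (hypothetical input)."""
    w, h = image_size
    cx, cy = w / 2, h / 2  # the diagonals of a rectangle cross at its center
    dists = sorted(math.dist((cx, cy), p) for p in vessel_ends)
    # Quantize distances so tiny localization noise does not flip the key,
    # then hash the ordered sequence (assumed design choices).
    quantized = ",".join(f"{d:.0f}" for d in dists)
    return hashlib.sha256(quantized.encode()).hexdigest()

key = dce_key((512, 512), [(10, 20), (500, 480), (256, 40)])
print(len(key))  # 64 hex characters = 256 bits
```

Sorting the distances before hashing makes the key independent of the order in which vessel ends are detected, which matters if the segmentation step is not deterministic.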
Conditional logistic regression is often used to study the relationship between event outcomes and specific prognostic factors; this research applies logistic regression and its predictive capabilities to environmental studies. It seeks to demonstrate a novel approach to implementing conditional logistic regression in environmental research through inference methods predicated on longitudinal data. Statistical analysis of longitudinal data therefore requires methods that properly account for the within-subject interdependence of the response measurements. If this correlation is ignored, inferences such as statistical tests and confidence intervals can be largely invalid.
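The core idea of conditional logistic regression can be shown with a minimal sketch for 1:1 matched sets, where conditioning on exactly one event per stratum removes the stratum-specific intercept. The data and the single covariate below are illustrative, not from the study; a real analysis would use Newton-Raphson or a package such as statsmodels' `ConditionalLogit` rather than this crude grid search.

```python
import math

def cond_loglik(beta, pairs):
    """Conditional log-likelihood for 1:1 matched sets with one covariate.
    Each pair is (x_case, x_control); within a stratum, the probability
    that the observed case is the one who had the event is
    exp(beta*x1) / (exp(beta*x1) + exp(beta*x0))."""
    ll = 0.0
    for x1, x0 in pairs:
        ll += beta * x1 - math.log(math.exp(beta * x1) + math.exp(beta * x0))
    return ll

# Hypothetical matched pairs (exposure of case, exposure of control).
pairs = [(2.0, 1.0), (1.5, 1.8), (3.0, 2.2)]

# Grid search for the maximizing beta, for illustration only.
best = max((b / 100 for b in range(-300, 301)),
           key=lambda b: cond_loglik(b, pairs))
print(round(best, 2))
```

Because the stratum effects cancel inside each pair, this likelihood stays valid even when the within-subject baseline risks differ arbitrarily, which is exactly the dependence an unconditional model would mishandle.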
A new human-based heuristic optimization method, named the Snooker-Based Optimization Algorithm (SBOA), is introduced in this study. The inspiration for this method is drawn from the traits of sales elites: the qualities every salesperson aspires to possess. Typically, salespersons strive to enhance their skills through autonomous learning or by seeking guidance from others. Furthermore, they engage in regular communication with customers to gain approval for their products or services. Building on this concept, SBOA searches for the optimal solution within a given search space, traversing all positions to obtain all possible values. To assess the feasibility and effectiveness of SBOA in comparison to other algorithms, we conducte
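The two behaviors the abstract describes, autonomous learning and guidance from better performers, are the standard ingredients of population-based metaheuristics. The sketch below is a generic toy of that idea applied to minimizing the sphere function; it is not the authors' SBOA update equations, which the abstract does not give.

```python
import random

def sphere(x):
    """Classic benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def optimize(dim=5, pop=20, iters=200, seed=1):
    """Toy population search: each agent mixes a pull toward the current
    best ('learning from others') with a small Gaussian perturbation
    ('autonomous learning'), keeping a move only if it improves.
    All step sizes here are arbitrary illustration choices."""
    rng = random.Random(seed)
    agents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(agents, key=sphere)
    for _ in range(iters):
        for i, a in enumerate(agents):
            cand = [ai + rng.random() * (bi - ai) + rng.gauss(0, 0.1)
                    for ai, bi in zip(a, best)]
            if sphere(cand) < sphere(a):  # greedy selection
                agents[i] = cand
        best = min(agents, key=sphere)
    return best

best = optimize()
print(sphere(best))
```

The greedy acceptance rule makes every agent's objective value non-increasing, so the population steadily contracts toward good regions, which is the convergence behavior benchmark comparisons of such algorithms typically measure.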
This study examines the efficiency of student-centered learning (SCL) through Google Classroom in enhancing the readiness of fourth-stage female pre-service teachers. The research employs a quasi-experimental design with a control group and an experimental group to compare the teaching readiness of participants before and after the intervention. The participants were 30 fourth-stage students at the University of Baghdad - College of Education for Women, Department of English; data were collected through an observation checklist to assess their teaching experience and questionnaires to assess their perceptions of using Google Classroom. Two sections were selected, C as the control group and D as the experimental one, each with (
Recently, biometric technologies have been used widely due to their improved security, which decreases cases of deception and theft. Biometric technologies use physical features and characteristics to identify individuals. The most common biometric technologies are iris, voice, fingerprint, handwriting, and handprint recognition. In this paper, two biometric recognition technologies are analyzed and compared: iris and voice recognition. The iris recognition technique recognizes persons by analyzing the main patterns in the iris structure, while the voice recognition technique identifies individuals by their unique voice characteristics, also called a voice print. The comparison results show that the resul
Efficient sequencing techniques have significantly increased the number of genomes that are now available, including the genome of the crenarchaeon Sulfolobus solfataricus P2. The genome-scale metabolic pathways in Sulfolobus solfataricus P2 were predicted by running the Pathway Tools software with the MetaCyc database as the reference knowledge base. A Pathway/Genome Database (PGDB) specific to Sulfolobus solfataricus P2 was created. A curation approach was carried out for all the amino acid biosynthetic pathways. Experimental literature as well as homology-, orthology-, and context-based protein function prediction methods were followed in the curation process. The "PathoLogic
Background/Objectives: This research presents a modified image representation framework for Content-Based Image Retrieval (CBIR) that combines a grayscale input image, Zernike Moments (ZMs) properties, Local Binary Pattern (LBP), Y color space, Slantlet Transform (SLT), and Discrete Wavelet Transform (DWT). Methods/Statistical analysis: This study surveyed and analysed three standard datasets: WANG V1.0, WANG V2.0, and Caltech 101. The latter contains images of objects belonging to 101 classes, with approximately 40-800 images per category. The suggested infrastructure seeks to describe and operationalize the CBIR system through an automated attribute extraction system premised on CN
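Of the descriptors listed, Local Binary Pattern is the simplest to show concretely. The sketch below computes the classic 8-neighbor LBP code for the center pixel of a 3x3 patch; the clockwise neighbor ordering is one common convention, not necessarily the one used in the paper.

```python
def lbp_code(patch):
    """8-bit Local Binary Pattern for the center pixel of a 3x3 patch:
    each neighbor contributes a 1-bit if it is >= the center value.
    Neighbor order (clockwise from top-left) is a convention choice."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241: bits set where a neighbor >= the center 6
```

Sliding this over the whole image and histogramming the resulting 0-255 codes yields the texture feature vector that CBIR systems typically concatenate with color and transform-domain features.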
Deepfakes are artificial-intelligence-generated media used to create convincing image, audio, and video hoaxes; they concern celebrities and everyone else because they are easy to manufacture. Deepfakes, especially high-quality ones, are hard to recognize for both people and current detection approaches. As a defense against deepfake techniques, various methods for detecting deepfakes in images have been suggested. Most have limitations, such as working only with a single face in an image, or requiring the face to be frontal with both eyes and the mouth open, depending on which part of the face they analyze. Moreover, few focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this asp
Universal image steganalysis has become an important issue due to the curse of dimensionality of natural image features. Deep neural networks, especially deep convolutional networks, have been widely used in universal image steganalysis design. This paper describes the effect of selecting a suitable value for the number of decomposition levels during image pre-processing with the Dual-Tree Complex Wavelet Transform; this value can significantly affect the detection accuracy used to evaluate the performance of the proposed system. The proposed system is evaluated using three content-adaptive methods: Highly Undetectable steGO (HUGO), Wavelet Obtained Weights (WOW), and UNIversal WAvelet Relative Distortion (UNIWARD).
The obtain
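The "number of levels" hyper-parameter studied above can be illustrated with a much simpler transform. The sketch below uses a plain 1-D Haar DWT as a stand-in for the dual-tree complex wavelet transform the paper actually uses: each additional level re-decomposes the approximation band, halving its length and adding one more detail band.

```python
def haar_dwt(signal):
    """One level of the 1-D Haar DWT: pairwise averages (approximation)
    and pairwise differences (detail), scaled by 1/2 for simplicity."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def multilevel(signal, levels):
    """Recursively decompose the approximation band `levels` times,
    mirroring the level-count choice tuned in the paper (which uses the
    dual-tree complex wavelet transform, not this plain Haar)."""
    details = []
    approx = signal
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    return approx, details

approx, details = multilevel([4, 2, 6, 6, 5, 3, 1, 1], levels=3)
print(approx, [len(d) for d in details])  # [3.5] [4, 2, 1]
```

Too few levels keep the representation close to the raw pixels, while too many shrink the approximation band until little spatial detail remains, which is why the level count can noticeably move detection accuracy.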
The main task in creating new digital images of different skin diseases is to increase the resolution of the specific textures and colors of each disease. In this paper, the performance of generative adversarial networks is optimized to generate multicolor and histological-color digital images of a variety of skin diseases (melanoma, birthmarks, and basal cell carcinomas). Two generative adversarial network architectures were built from two models: the first generates new dermatology images through training, and the second is a discriminator whose main task is to classify the generated digital images as either real or fake. The gray wolf swarm algorithm and the whale swarm alg