COVID-19 has spread rapidly around the world due to the lack of a suitable vaccine; therefore, early prediction of those infected with this virus is extremely important for controlling it by quarantining infected people and giving them possible medical attention to limit its spread. This work proposes a model for predicting COVID-19 infection using feature selection techniques. The proposed model consists of three stages: the preprocessing stage, the feature selection stage, and the classification stage. The work uses a dataset consisting of 8571 records with forty features for patients from different countries. Two feature selection techniques are used to select the best features affecting the prediction of the proposed model: Recursive Feature Elimination (RFE) as a wrapper feature selection method and the Extra Tree Classifier (ETC) as an embedded feature selection method. Two classification methods are applied to classify the feature vectors: the Naïve Bayesian method and the Restricted Boltzmann Machine (RBM) method. The results were 56.181% and 97.906%, respectively, when classifying all features, and 66.329% and 99.924%, respectively, when classifying the best ten features selected by the feature selection techniques.
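A minimal sketch of the selection-plus-classification pipeline described above, assuming scikit-learn; the file name, label column, and estimator settings are illustrative, not the paper's:

```python
# Sketch of wrapper (RFE) and embedded (Extra Trees) feature selection
# followed by Naive Bayes classification. Names are hypothetical.
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

df = pd.read_csv("covid_records.csv")              # hypothetical dataset file
X, y = df.drop(columns="infected"), df["infected"]  # hypothetical label column

# Wrapper selection: RFE recursively drops the weakest features until ten remain.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
rfe_features = X.columns[rfe.fit(X, y).support_]

# Embedded selection: rank features by Extra Trees impurity importances.
etc = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
etc_features = X.columns[etc.feature_importances_.argsort()[::-1][:10]]

# Classify on the reduced feature set (Naive Bayes shown; the paper also uses an RBM).
score = cross_val_score(GaussianNB(), X[rfe_features], y, cv=5).mean()
print(f"Naive Bayes accuracy on the ten RFE features: {score:.3f}")
```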
The aim of the research is to examine multiple intelligence test item selection based on Howard Gardner's MI model using the Generalized Partial Credit Model. The researcher adopted a multiple intelligences scale based on Gardner's model; it consists of (102) items with eight sub-scales. The sample consisted of (550) students from Baghdad universities (the University of Technology, Al-Mustansiriyah University, and the Iraqi University) for the academic year (2019/2020). The assumptions of item response theory (unidimensionality, local independence, the item characteristic curve, the speed factor, and application conditions) were verified, and the data were analyzed according to the generalized partial credit model, and limits
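For background, the Generalized Partial Credit Model (the standard name for what the abstract appears to render as "Generalized Partial Estimation Form") gives the probability that examinee $i$ with ability $\theta_i$ responds in category $k$ of item $j$; this standard formula is included for reference, not taken from the abstract:

```latex
P(X_{ij} = k \mid \theta_i)
  = \frac{\exp\sum_{v=0}^{k} a_j\,(\theta_i - b_{jv})}
         {\sum_{c=0}^{m_j} \exp\sum_{v=0}^{c} a_j\,(\theta_i - b_{jv})},
  \qquad k = 0, 1, \dots, m_j,
```

where $a_j$ is the item discrimination, $b_{jv}$ are the step (threshold) parameters, and $b_{j0} \equiv 0$ by convention.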
Image recognition is one of the most important applications of information processing. In this paper, a comparison between 3-level transform techniques for image recognition is carried out using the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT): stationary-stationary-stationary (sss), stationary-stationary-wavelet (ssw), stationary-wavelet-stationary (sws), stationary-wavelet-wavelet (sww), wavelet-stationary-stationary (wss), wavelet-stationary-wavelet (wsw), wavelet-wavelet-stationary (wws), and wavelet-wavelet-wavelet (www). These techniques are compared according to the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), compression ratio (CR), and the coding noise e(n) of each third
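A minimal sketch of one such mixed 3-level decomposition, assuming PyWavelets (pywt); the Haar wavelet and the random test image are illustrative choices, and padding/size handling is omitted:

```python
# Apply a 3-letter scheme such as "sws": 's' = stationary (SWT, undecimated),
# 'w' = discrete (DWT, decimated). Each step further decomposes the
# approximation band from the previous level.
import numpy as np
import pywt

def decompose(image, scheme="sws", wavelet="haar"):
    approx = np.asarray(image, dtype=float)
    details = []
    for step in scheme:
        if step == "s":
            (approx, bands), = pywt.swt2(approx, wavelet, level=1)
        else:
            approx, bands = pywt.dwt2(approx, wavelet)
        details.append(bands)
    return approx, details

# Example on a random 64x64 "image" (dimensions divisible by 2**3, as SWT requires).
cA, cDs = decompose(np.random.rand(64, 64), scheme="sws")
print(cA.shape, [b[0].shape for b in cDs])
```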
Researchers in theatre studies search for experimental, creative theatrical forms, and these experiments take from the space of innovation an aesthetic dialogue with the receiver, especially since they build on and extend previous theatrical experience. Interactive theatre, in theatrical terms, assumes a mode of performance built from the depth of the relationship between the theatrical presentation and the audience, one that genuinely changes the skills of reception. To trace these differences, the research is divided into three chapters. The methodological framework (Chapter I) includes the research problem, the importance of and need for the research, the goal of the research, its limits, the definition of its terms, and the theoretical framework
Cloud computing is a mass platform serving high volumes of data from many devices and numerous technologies. Cloud tenants demand fast access to their data without any disruption. Therefore, cloud providers struggle to ensure that every individual piece of data is secured and always accessible. Hence, an appropriate replication strategy capable of selecting essential data is required in cloud replication environments. This paper proposes a Crucial File Selection Strategy (CFSS) to address poor response time in a cloud replication environment. A cloud simulator called CloudSim is used to conduct the necessary experiments, and results are presented as evidence of the enhancement in replication performance. The obtained an
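The abstract does not detail how CFSS decides which files are crucial; the toy ranking below is purely illustrative (all names and the access-per-megabyte score are assumptions, not the paper's algorithm), showing only the general shape of replica selection by file importance:

```python
# Toy illustration of "crucial file" ranking, NOT the paper's CFSS algorithm:
# score each file by observed accesses weighted against size, then replicate
# the top-k so the most-requested data gets the fastest response time.
from dataclasses import dataclass

@dataclass
class CloudFile:
    name: str
    size_mb: float
    accesses: int          # requests observed in the current window

def crucial_files(files, k=2):
    # Higher access rate and smaller size -> cheaper, more valuable replica.
    return sorted(files, key=lambda f: f.accesses / f.size_mb, reverse=True)[:k]

catalog = [CloudFile("logs.tar", 900, 12),
           CloudFile("index.db", 40, 300),
           CloudFile("users.json", 5, 150)]
for f in crucial_files(catalog):
    print(f"replicate {f.name}")
```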
Recently, biometric technologies have been widely used due to their improved security, which decreases cases of deception and theft. Biometric technologies use physical features and characteristics to identify individuals. The most common biometric technologies are iris, voice, fingerprint, handwriting, and handprint recognition. In this paper, two biometric recognition technologies, iris recognition and voice recognition, are analyzed and compared. The iris recognition technique recognizes persons by analyzing the main patterns in the iris structure, while the voice recognition technique identifies individuals by their unique voice characteristics, also called a voice print. The comparison results show that the resul
In this paper, a new variable selection method is presented to select essential variables from large datasets. The new model is a modified version of the Elastic Net model. The modified Elastic Net variable selection model is summarized in an algorithm and applied to the Leukemia dataset, which has 3051 variables (genes) and 72 samples. In practice, working with this kind of dataset is difficult due to its large size. The modified model is compared with some standard variable selection methods. Perfect classification is achieved by applying the modified Elastic Net model because it has the best performance. All the calculations that have been done for this paper are in
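As a point of reference, a standard (unmodified) Elastic Net already performs variable selection by driving coefficients to zero; a minimal scikit-learn sketch on synthetic data with the Leukemia dimensions (72 samples × 3051 genes), where the data and labels are placeholders, not the real dataset:

```python
# Baseline Elastic Net gene selection (not the paper's modified model):
# keep the genes whose fitted coefficients are nonzero.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 3051))                 # synthetic stand-in for Leukemia
y = (X[:, :5].sum(axis=1) > 0).astype(float)    # synthetic labels for illustration

enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
selected = np.flatnonzero(enet.coef_)
print(f"{selected.size} genes selected out of {X.shape[1]}")
```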
In this paper, eight methods for generating the initial value are compared, along with the impact of these methods on estimating the parameter of an autoregressive model. Three of the most popular estimation methods, those most commonly used by researchers, are applied: the maximum likelihood (ML) method, the Burg method, and the least squares method. A first-order autoregressive model is studied by simulation through the design of a number of simulation experiments with different sample sizes.
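A minimal sketch of estimating an AR(1) parameter with the three estimators named above, assuming statsmodels; the eight initial-value generation methods compared in the paper are not reproduced here:

```python
# Simulate y_t = phi * y_{t-1} + e_t and recover phi three ways.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.regression.linear_model import burg

rng = np.random.default_rng(1)
phi, n = 0.7, 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

ml_est = ARIMA(y, order=(1, 0, 0), trend="n").fit().params[0]  # maximum likelihood
ls_est = AutoReg(y, lags=1, trend="n").fit().params[0]         # conditional least squares
rho, _ = burg(y, order=1)                                      # Burg method
print(f"ML={ml_est:.3f}  LS={ls_est:.3f}  Burg={rho[0]:.3f}")
```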
Predicting the network traffic of web pages is an area that has received increased focus in recent years. Modeling traffic helps find strategies for distributing network loads, identifying user behavior and malicious traffic, and predicting future trends. Many statistical and intelligent methods have been studied for predicting web traffic from network traffic time series. In this paper, the use of machine learning algorithms to model Wikipedia traffic using Google's time series dataset is studied. Two time series datasets were used for data generalization, building a set of machine learning models (XGBoost, Logistic Regression, Linear Regression, and Random Forest), and comparing the performance of the models using (SMAPE) and
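A minimal sketch of this kind of evaluation loop, assuming numpy, scikit-learn, and the xgboost package; the traffic series here is a random placeholder rather than the Wikipedia data, the 7-day lag features are an assumption, and logistic regression is omitted because it is a classifier:

```python
# Fit several regressors on lagged traffic features and compare them by SMAPE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    return 100 * np.mean(np.abs(forecast - actual) / denom)

rng = np.random.default_rng(0)
views = rng.random(500) * 1000                 # placeholder daily page views
lags = 7                                       # predict today from the last 7 days
X = np.column_stack([views[i:len(views) - lags + i] for i in range(lags)])
y = views[lags:]

split = 400                                    # chronological train/test split
models = {"LinearRegression": LinearRegression(),
          "RandomForest": RandomForestRegressor(n_estimators=100, random_state=0),
          "XGBoost": XGBRegressor(n_estimators=100)}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    print(name, round(smape(y[split:], model.predict(X[split:])), 2))
```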
This paper introduces an algorithm for lossless compression of natural and medical images. It is based on utilizing various causal fixed predictors of one or two dimensions to remove the correlation (spatial redundancy) between image pixel values; a recursive polynomial model of a linear base is then used.
The experimental results of the proposed compression method are promising in terms of preserving the details and quality of the reconstructed images, as well as improving the compression ratio compared with the results of a traditional linear predictive coding system.
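A minimal sketch of the causal-prediction step, assuming numpy; the specific west/north mean predictor is illustrative and not necessarily one of the paper's fixed predictors:

```python
# Causal fixed prediction for lossless coding: each pixel is predicted from
# already-seen neighbours (west and north here), and only the residuals are
# encoded. The decoder has the same neighbours available, so no information
# is lost.
import numpy as np

def residuals(img):
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2   # mean of W and N pixels
    pred[0, 1:] = img[0, :-1]                            # first row: W only
    pred[1:, 0] = img[:-1, 0]                            # first column: N only
    return img - pred                                    # decorrelated residuals

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
res = residuals(img)
print("mean absolute residual (entropy proxy):", np.abs(res).mean())
```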