Semi parametric Estimators for Quantile Model via LASSO and SCAD with Missing Data

In this study, we compare the LASSO and SCAD penalties, two methods for fitting partial (semi-parametric) quantile regression models. The Nadaraya-Watson kernel estimator was used for the non-parametric part, and the rule-of-thumb method was used to choose the smoothing bandwidth (h). Both penalty methods proved efficient in estimating the regression coefficients, but after the missing data were filled in by mean imputation, the SCAD method performed best according to the mean squared error (MSE) criterion.
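A minimal sketch of the workflow described above, assuming mean imputation for the missing covariates, an L1-penalized (LASSO-type) median regression for the parametric part via scikit-learn's QuantileRegressor, and a Nadaraya-Watson smoother with a Silverman-style rule-of-thumb bandwidth for the non-parametric part; the SCAD penalty and the paper's exact estimator are not reproduced.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import QuantileRegressor

def nadaraya_watson(t_grid, t, y, h):
    """Nadaraya-Watson estimate of E[y | t] on t_grid with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))           # parametric covariates
t = rng.uniform(0, 1, size=n)         # covariate entering non-parametrically
y = X @ np.array([1.0, 0.0, -0.5]) + np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=n)
X[rng.random((n, 3)) < 0.1] = np.nan  # inject missingness

# 1) mean imputation of the missing covariate values
X_imp = SimpleImputer(strategy="mean").fit_transform(X)

# 2) LASSO-type (L1-penalized) median regression for the parametric part
qr = QuantileRegressor(quantile=0.5, alpha=0.1, solver="highs").fit(X_imp, y)

# 3) Nadaraya-Watson smoothing of the residuals with a rule-of-thumb bandwidth
resid = y - qr.predict(X_imp)
h = 1.06 * t.std() * n ** (-1 / 5)    # Silverman-style rule of thumb
g_hat = nadaraya_watson(np.linspace(0, 1, 50), t, resid, h)
```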

Publication Date: Tue Mar 01 2022
Journal Name: International Journal Of Nonlinear Analysis And Applications
Semi-parametric regression function estimation for environmental pollution with measurement error using artificial flower pollination algorithm

Artificial intelligence algorithms have been used in many scientific fields in recent years. We suggest employing the flower pollination algorithm in the environmental field to find the best estimate of the semi-parametric regression function when there are measurement errors in both the explanatory variables and the dependent variable; such errors, rather than exact measurements, appear frequently in chemistry, the biological sciences, medicine, and epidemiological studies. We estimate the regression function of the semi-parametric model by estimating its parametric and non-parametric parts separately; the parametric part is estimated using instrumental-variable methods (the Wald method, Bartlett's method, and Durbin …
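A minimal sketch of the Wald grouping estimator named above, one of the instrumental-variable methods for an error-prone regressor; the data are hypothetical, and the flower pollination step and the Bartlett and Durbin variants are not shown.

```python
import numpy as np

def wald_grouping_slope(x_obs, y):
    """Wald's grouping estimator: split the sample into two halves by the ranked
    observed regressor and estimate the slope from the group means (assumes the
    measurement error does not substantially reorder the groups)."""
    order = np.argsort(x_obs)
    lo, hi = order[: len(x_obs) // 2], order[len(x_obs) // 2:]
    slope = (y[hi].mean() - y[lo].mean()) / (x_obs[hi].mean() - x_obs[lo].mean())
    intercept = y.mean() - slope * x_obs.mean()
    return intercept, slope

rng = np.random.default_rng(1)
x_true = rng.normal(size=500)
y = 2.0 + 1.5 * x_true + rng.normal(scale=0.5, size=500)
x_obs = x_true + rng.normal(scale=0.4, size=500)   # measurement error in x

print(wald_grouping_slope(x_obs, y))               # less attenuated than naive OLS
```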

Publication Date: Tue Apr 01 2014
Journal Name: Journal Of Economics And Administrative Sciences
A Note on the Hierarchical Model and Power Prior Distribution in Bayesian Quantile Regression

In this paper, we investigate the connection between hierarchical models and the power prior distribution in quantile regression (QReg). For a given quantile, we develop an expression for the power parameter that calibrates the power prior distribution for quantile regression to a corresponding hierarchical model. In addition, we estimate the relation between the power parameter and the quantile level via the hierarchical model. The proposed methodology is illustrated with a real data example.
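A minimal sketch of the ingredients discussed above, assuming the usual asymmetric-Laplace working likelihood for Bayesian quantile regression and writing the power prior as the historical-data likelihood raised to a power a0; the paper's calibration expression linking a0 to the hierarchical model is not reproduced, and the toy data are hypothetical.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile (check) loss rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

def ald_loglik(beta, X, y, tau, sigma=1.0):
    """Working log-likelihood under the asymmetric Laplace error model."""
    return -np.sum(check_loss(y - X @ beta, tau)) / sigma

def log_power_posterior(beta, X, y, X0, y0, tau, a0):
    """Current-data log-likelihood plus the power prior: the historical-data
    likelihood (X0, y0) raised to the power a0, with a0 in [0, 1]."""
    return ald_loglik(beta, X, y, tau) + a0 * ald_loglik(beta, X0, y0, tau)

# toy illustration at the 0.75 quantile with a0 = 0.5
rng = np.random.default_rng(0)
X, X0 = rng.normal(size=(100, 2)), rng.normal(size=(80, 2))
y = X @ np.array([1.0, -1.0]) + rng.normal(size=100)
y0 = X0 @ np.array([1.0, -1.0]) + rng.normal(size=80)
print(log_power_posterior(np.array([1.0, -1.0]), X, y, X0, y0, tau=0.75, a0=0.5))
```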

Publication Date: Thu Feb 01 2018
Journal Name: Journal Of Economics And Administrative Sciences
Comparison of Slice inverse regression with the principal components in reducing high-dimensions data by using simulation

This research studies dimension-reduction methods that overcome the curse of dimensionality, which arises when traditional methods fail to provide good parameter estimates, so the problem must be dealt with directly. Two approaches were used for high-dimensional data: the non-classical sliced inverse regression (SIR) method together with a proposed weighted standard SIR (WSIR), and principal component analysis (PCA), the general method used for dimension reduction. Both SIR and PCA work with linear combinations of a subset of the original explanatory variables, which may suffer from heterogeneity and from linear …
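A minimal sketch of classical sliced inverse regression as contrasted with PCA above; the proposed weighted variant (WSIR) is not reproduced, and the number of slices, the number of directions, and the toy data are illustrative assumptions.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sliced inverse regression: leading eigenvectors of the covariance of
    slice means of the standardized predictors, mapped back to the X scale."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T   # whitening matrix
    Z = (X - mu) @ cov_inv_sqrt
    order = np.argsort(y)                                     # slice on the response
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)
    _, vecs = np.linalg.eigh(M)
    return cov_inv_sqrt @ vecs[:, ::-1][:, :n_dirs]

# toy use: y depends on X only through one linear combination
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 6))
y = np.sin(X @ np.array([1, -1, 0, 0, 0, 0.0])) + rng.normal(scale=0.1, size=500)
print(sir_directions(X, y).round(2))   # first column should align with (1, -1, 0, ...)
```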

Publication Date: Fri Mar 01 2013
Journal Name: Journal Of Economics And Administrative Sciences
Robust Two-Step Estimation and Approximation Local Polynomial Kernel For Time-Varying Coefficient Model With Balance Longitudinal Data

In this research, a nonparametric technique is presented to estimate the time-varying coefficient functions for balanced longitudinal data, characterized by observations obtained from (n) independent subjects, each measured repeatedly at a set of (m) specific time points. Although measurements are independent across subjects, they are typically correlated within each subject; the applied technique is the local linear (polynomial) kernel (LLPK) method. To avoid problems of dimensionality and heavy computation, the two-step method is used to estimate the coefficient functions with this smoothing technique. Since the two- …
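A minimal sketch of the two-step idea described above, assuming a balanced design with raw per-time-point least-squares estimates in the first step and a Gaussian-kernel local linear smoother over time in the second; the data and the bandwidth are hypothetical stand-ins for the paper's setting.

```python
import numpy as np

def local_linear_smooth(t_grid, t, raw, h):
    """Smooth raw coefficient estimates over time with a local linear kernel fit."""
    out = np.empty(len(t_grid))
    for k, t0 in enumerate(t_grid):
        d = t - t0
        w = np.exp(-0.5 * (d / h) ** 2)              # Gaussian kernel weights
        Xl = np.column_stack([np.ones_like(d), d])   # local intercept and slope
        coef = np.linalg.solve(Xl.T @ (w[:, None] * Xl), Xl.T @ (w * raw))
        out[k] = coef[0]
    return out

# balanced longitudinal toy data: n subjects, each observed at the same m time points
rng = np.random.default_rng(2)
n, m = 60, 20
tpts = np.linspace(0, 1, m)
beta_t = np.sin(2 * np.pi * tpts)                    # true time-varying coefficient
x = rng.normal(size=(n, m))
y = beta_t[None, :] * x + rng.normal(scale=0.3, size=(n, m))

# step 1: raw per-time-point least-squares estimates across subjects
raw = np.array([(x[:, j] @ y[:, j]) / (x[:, j] @ x[:, j]) for j in range(m)])

# step 2: smooth the raw estimates over time with the local linear kernel
beta_hat = local_linear_smooth(tpts, tpts, raw, h=0.1)
```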

Publication Date: Mon Aug 01 2022
Journal Name: Baghdad Science Journal
New and Existing Approaches Reviewing of Big Data Analysis with Hadoop Tools

Everybody is connected to social media platforms (Facebook, Twitter, LinkedIn, Instagram, etc.) that generate large quantities of data which traditional applications are inadequate to process. Social media are regarded as an important platform through which many subscribers share information, opinions, and knowledge. These attributes of big data also raise many issues, such as data collection, storage, movement, updating, reviewing, posting, scanning, visualization, and data protection. To deal with all these problems, there is a need for an adequate system that not only prepares the data but also provides meaningful analysis that can be exploited in difficult situations relevant to business, decision making, health, social media, sc…

Publication Date: Thu Nov 01 2012
Journal Name: 2012 International Conference On Advanced Computer Science Applications And Technologies (ACSAT)
Data Missing Solution Using Rough Set theory and Swarm Intelligence

This paper presents a hybrid approach for solving the null-values problem; it combines rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model: a large set of complete data, called learning data, is used to find the decision rule sets that are then used to solve the incomplete data problem. The intelligent swarm algorithm performs feature selection, using the bees algorithm as a heuristic search combined with rough set theory as the evaluation function. Another feature selection algorithm, ID3, is also presented; it works as a statistical algorithm rather than an intelligent one. A comparison between the two approaches is made on their performance for null-value estima…
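A minimal sketch of the rough-set evaluation function that a bees-algorithm feature search could call, assuming the standard dependency degree (the fraction of objects whose indiscernibility class is consistent on the decision attribute); the toy decision table is hypothetical and the swarm search itself is not shown.

```python
from collections import defaultdict

def dependency_degree(rows, feature_idx, decision_idx):
    """Rough-set dependency degree gamma(B): fraction of objects whose
    B-indiscernibility class is consistent on the decision attribute."""
    classes = defaultdict(set)
    for row in rows:
        key = tuple(row[i] for i in feature_idx)
        classes[key].add(row[decision_idx])
    consistent = sum(
        1 for row in rows
        if len(classes[tuple(row[i] for i in feature_idx)]) == 1
    )
    return consistent / len(rows)

# toy decision table: columns 0-2 are condition attributes, column 3 the decision
table = [
    ("a", 1, "x", "yes"),
    ("a", 1, "y", "no"),
    ("b", 0, "x", "no"),
    ("b", 1, "x", "no"),
    ("a", 0, "y", "yes"),
]
print(dependency_degree(table, feature_idx=[0, 1], decision_idx=3))  # 0.6
```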

Publication Date: Sat Jun 01 2019
Journal Name: Journal Of Economics And Administrative Sciences
Comparison of some methods for estimating the parameters of the binary logistic regression model using the genetic algorithm with practical application

Abstract

Humans, under the pressures of everyday life, are exposed to several types of heart disease resulting from different factors. Therefore, in order to model whether a case results in death or not, a binary logistic regression model is used.

This research uses one of the most important nonlinear regression models, widely applied in statistical modeling, for heart disease data: the binary logistic regression model. Its parameters are then estimated using statistical estimation methods; a further problem appears in estimating these parameters, as well as when the numbe…
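A minimal sketch of estimating the binary logistic regression coefficients with a simple genetic algorithm (truncation selection, uniform crossover, Gaussian mutation), minimizing the negative log-likelihood; the operators, population size, and toy data are assumptions, not the paper's exact configuration.

```python
import numpy as np

def neg_log_likelihood(beta, X, y):
    """Negative log-likelihood of the binary logistic regression model."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def ga_fit(X, y, pop_size=60, n_gen=200, sigma=0.3, seed=0):
    """Estimate the coefficients with a simple genetic algorithm."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    pop = rng.normal(size=(pop_size, p))
    for _ in range(n_gen):
        fitness = np.array([neg_log_likelihood(b, X, y) for b in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]       # keep the best half
        mates = parents[rng.integers(len(parents), size=(pop_size, 2))]
        mask = rng.random((pop_size, p)) < 0.5                    # uniform crossover
        children = np.where(mask, mates[:, 0], mates[:, 1])
        children += rng.normal(scale=sigma, size=children.shape)  # Gaussian mutation
        children[0] = parents[0]                                  # elitism
        pop = children
    fitness = np.array([neg_log_likelihood(b, X, y) for b in pop])
    return pop[np.argmin(fitness)]

# toy data with an intercept column
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(300), rng.normal(size=(300, 2))])
probs = 1 / (1 + np.exp(-(X @ np.array([-0.5, 1.0, -2.0]))))
y = (rng.random(300) < probs).astype(float)
print(ga_fit(X, y))   # should approach (-0.5, 1.0, -2.0)
```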

Publication Date: Tue Aug 01 2023
Journal Name: Baghdad Science Journal
Digital Data Encryption Using a Proposed W-Method Based on AES and DES Algorithms

This paper proposes a new encryption method. It combines two cipher algorithms, DES and AES, to generate hybrid keys; this combination strengthens the proposed W-method by generating highly randomized keys. Two points determine the reliability of any encryption technique. The first is key generation: the approach merges 64 bits from DES with 64 bits from AES to produce a 128-bit root key from which the remaining 15 keys are derived, and this complexity raises the level of the ciphering process; moreover, it shifts the operation by only one bit to the right. The second is the nature of the encryption process itself: it uses two keys and mixes one round of DES with one round of AES to reduce processing time. The W-method deals with…
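A minimal sketch of the key-schedule idea as described above, assuming the 128-bit root key is a simple concatenation of 64 bits taken from a DES key and 64 bits taken from an AES key, and that each of the 15 remaining keys is a one-bit right rotation of the previous one; the W-method's actual round structure and key extraction are not reproduced.

```python
import secrets

def rotate_right_128(key: int, bits: int = 1) -> int:
    """Rotate a 128-bit integer right by the given number of bits."""
    mask = (1 << 128) - 1
    return ((key >> bits) | (key << (128 - bits))) & mask

# hypothetical inputs: 64 bits taken from a DES key and 64 bits from an AES key
des_half = int.from_bytes(secrets.token_bytes(8), "big")   # 64 bits
aes_half = int.from_bytes(secrets.token_bytes(8), "big")   # 64 bits

root_key = (des_half << 64) | aes_half                      # 128-bit root key
round_keys = [root_key]
for _ in range(15):                                         # the 15 remaining keys
    round_keys.append(rotate_right_128(round_keys[-1]))
```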

Publication Date: Thu Jun 01 2023
Journal Name: Bulletin Of Electrical Engineering And Informatics
A missing data imputation method based on salp swarm algorithm for diabetes disease

Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue affects the performance of machine learning models because the values of some features will be missing, so a specific type of method is needed to impute these missing data. In this research, the salp swarm algorithm (SSA) is used to generate and impute the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, support vector machine (SVM), K-nearest neighbour (KNN), and Naïve B…
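A minimal sketch of wrapper-style imputation in the spirit of ISSA, where candidate values for the missing entries are scored by downstream classifier accuracy; a plain random search stands in for the salp swarm algorithm, and the KNN classifier, mean-imputation baseline, and search budget are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def impute_by_search(X, y, n_iter=50, seed=0):
    """Fill missing entries with candidate values and keep the candidate set that
    maximizes downstream KNN cross-validation accuracy (wrapper-style imputation)."""
    rng = np.random.default_rng(seed)
    miss = np.isnan(X)
    lo, hi = np.nanmin(X, axis=0), np.nanmax(X, axis=0)
    cols = np.where(miss)[1]                       # column of each missing entry
    best_X = SimpleImputer(strategy="mean").fit_transform(X)
    best_score = cross_val_score(KNeighborsClassifier(), best_X, y, cv=3).mean()
    for _ in range(n_iter):
        cand = X.copy()
        cand[miss] = rng.uniform(lo[cols], hi[cols])   # candidate imputed values
        score = cross_val_score(KNeighborsClassifier(), cand, y, cv=3).mean()
        if score > best_score:
            best_X, best_score = cand, score
    return best_X, best_score
```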

Publication Date: Mon Apr 24 2017
Journal Name: Ibn Al-haitham Journal For Pure And Applied Sciences
Estimate the Parameters and Related Probability Functions for Data of the Patients of Lymph Glands Cancer via Birnbaum-Saunders Model

In this paper, we estimate the parameters and the related probability functions, the survival function, cumulative distribution function, hazard function (failure rate), and failure (death) probability density function (pdf), for the two-parameter Birnbaum-Saunders distribution fitted to complete data on patients with lymph gland cancer. The parameters (shape and scale) are estimated using maximum likelihood, regression quantile, and shrinkage methods, and the values of the mentioned probability functions are then computed from a sample of real data describing survival duration for patients with lymph gland cancer, measured from diagnosis of the disease or from the patients' entry into hospital for perio…
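A minimal sketch of the maximum likelihood step, assuming SciPy's fatiguelife distribution (the Birnbaum-Saunders model) and simulated survival times standing in for the real data; the regression quantile and shrinkage estimators are not shown.

```python
import numpy as np
from scipy import stats

# simulated positive survival times as a stand-in for the real data
rng = np.random.default_rng(4)
times = stats.fatiguelife.rvs(c=0.8, scale=2.0, size=200, random_state=rng)

# maximum likelihood fit of the two-parameter model (location fixed at zero)
shape, loc, scale = stats.fatiguelife.fit(times, floc=0)

t = np.linspace(0.1, times.max(), 100)
pdf = stats.fatiguelife.pdf(t, shape, loc, scale)    # failure (death) density
cdf = stats.fatiguelife.cdf(t, shape, loc, scale)    # cumulative distribution
surv = stats.fatiguelife.sf(t, shape, loc, scale)    # survival function
hazard = pdf / surv                                  # hazard (failure rate)
```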
