This research studies panel data models with mixed random parameters, which contain two types of parameters: one random and the other fixed. The random parameter arises from differences in the marginal tendencies of the cross-sections, while the fixed parameter arises from differences in the fixed intercepts; the random errors of each cross-section exhibit heteroscedasticity in addition to first-order serial correlation. The main objective of this research is to use efficient estimation methods suited to panel data with small samples. To achieve this goal, the feasible generalized least squares (FGLS) method and the mean group (MG) method were used; the efficiency of the resulting estimators was then compared in the case of mixed random parameters, and the method giving the more efficient estimator was chosen. The methods were applied to real data on per capita consumption of electric energy (Y) for five countries, representing the cross-sections (N = 5), over nine years (T = 9), so the number of observations is n = 45; the explanatory variables are the consumer price index (X1) and per capita GDP (X2). To evaluate the performance of the FGLS and MG estimators of the general model, the mean absolute percentage error (MAPE) was used to compare estimator efficiency. The results showed that the mean group (MG) method estimates the parameters better than the FGLS method; MG also proved the better method for estimating the sub-parameters of each cross-section (country).
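As a minimal illustration of the comparison criterion used above, the MAPE of a set of fitted values can be sketched in Python as follows (the toy `actual`/`predicted` values are invented for illustration and are not taken from the study's data):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, reported in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# illustrative values only
actual = [10.0, 20.0, 30.0]
predicted = [11.0, 19.0, 33.0]
print(round(mape(actual, predicted), 2))  # → 8.33
```

A lower MAPE indicates more efficient fitted values, which is how the FGLS and MG estimators would be ranked under this criterion.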
Recurrent strokes can be devastating, often resulting in severe disability or death. However, nearly 90% of the causes of recurrent stroke are modifiable, meaning recurrent strokes can be averted by controlling risk factors, which are mainly behavioral and metabolic in nature. Previous works thus suggest that a recurrent stroke prediction model could help minimize the likelihood of a recurrent stroke. Prior studies have shown promising results in predicting first-time strokes with machine learning approaches, but there is limited work on recurrent stroke prediction using machine learning methods. Hence, this work performs an empirical analysis and investigates machine learning algorithms for predicting recurrent stroke.
This paper deals with the estimation of the shape parameter (a) of the Generalized Exponential (GE) distribution when the scale parameter (l) is known, via a preliminary-test single-stage shrinkage estimator (SSSE), when prior knowledge (a0) about the shape parameter is available as an initial value from past experience, together with a suitable region (R) for testing this prior knowledge.
The expressions for the bias, mean squared error [MSE], and relative efficiency [R.Eff(·)] of the proposed estimator are derived. Numerical results about the behavior of the proposed estimator are also presented.
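The bias/MSE/relative-efficiency comparison can be illustrated by a small Monte Carlo sketch in Python. The shrinkage weight `k`, prior guess `a0`, and sample sizes below are illustrative choices, not the paper's actual settings, and a plain convex shrinkage toward the prior stands in for the preliminary-test rule:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, lam, n, reps = 2.0, 1.0, 20, 2000
a0, k = 2.0, 0.5          # prior guess and shrinkage weight (illustrative)

mle, shrink = [], []
for _ in range(reps):
    u = rng.uniform(size=n)
    # inverse-CDF sample from GE(a, lam): F(x) = (1 - exp(-lam*x))**a
    x = -np.log(1.0 - u ** (1.0 / a_true)) / lam
    t = -np.log(1.0 - np.exp(-lam * x))
    a_hat = n / t.sum()                            # MLE of the shape when lam is known
    mle.append(a_hat)
    shrink.append(k * a_hat + (1.0 - k) * a0)      # shrink toward the prior guess a0

mse_mle = np.mean((np.array(mle) - a_true) ** 2)
mse_shr = np.mean((np.array(shrink) - a_true) ** 2)
rel_eff = mse_mle / mse_shr                        # > 1 favors the shrinkage estimator
print(mse_mle, mse_shr, rel_eff)
```

When the prior guess is close to the true shape, the simulated relative efficiency exceeds one, which is the kind of behavior the derived expressions quantify.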
Business organizations have faced many challenges in recent times, the most important of which is information technology, because it is widely spread and easy to use. Its use has led to an unprecedented increase in the amount of data that business organizations deal with. The amount of data available through the internet is a problem that many parties seek to find solutions for: why is it available there, in such huge amounts, at random? Many forecasts suggested that in 2017 the number of devices connected to the internet would be an estimated three times the population of the Earth, and in 2015 more than one and a half billion gigabytes of data were transferred every minute globally. Thus, so-called data mining emerged as a solution.
Abstract The wavelet shrinkage estimator is an attractive technique for estimating nonparametric regression functions, but it is very sensitive to correlation in the errors. In this research, a low-degree polynomial model was used to address the boundary problem in wavelet shrinkage, together with level-dependent threshold values for the case of correlated errors, which treat the coefficients at each level separately, unlike global threshold values that treat all levels simultaneously. The methods considered were VisuShrink, the False Discovery Rate method, Improvement Thresholding, and SureShrink, and the study was conducted on real monthly data representing rates of theft crimes.
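A minimal sketch of wavelet shrinkage with the VisuShrink universal threshold, using a one-level Haar transform in plain numpy for self-containment; the signal, noise level, and seed are invented for illustration, and the paper's level-dependent thresholds and polynomial boundary correction are not reproduced here:

```python
import numpy as np

def soft(w, t):
    """Soft-threshold: shrink coefficients toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def haar_shrink(y):
    """One-level Haar wavelet shrinkage with the VisuShrink universal threshold."""
    approx = (y[0::2] + y[1::2]) / np.sqrt(2)
    detail = (y[0::2] - y[1::2]) / np.sqrt(2)
    sigma = np.median(np.abs(detail)) / 0.6745       # robust noise-scale estimate
    t = sigma * np.sqrt(2 * np.log(len(y)))          # universal (VisuShrink) threshold
    detail = soft(detail, t)
    out = np.empty_like(y)                           # inverse Haar transform
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
n = 256
x = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * x)
noisy = signal + 0.3 * rng.standard_normal(n)
denoised = haar_shrink(noisy)
print(np.mean((noisy - signal) ** 2), np.mean((denoised - signal) ** 2))
```

Level-dependent thresholding, as used in the study, would apply a separate `t` per decomposition level instead of the single global threshold shown here.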
The aim of this study was to develop a sensor based on carbon paste electrodes (CPEs) modified with a molecularly imprinted polymer (MIP) for the determination of organophosphorus pesticides (OPPs). The modified electrode exhibited significantly increased sensitivity and selectivity toward OPPs. The MIP was prepared by a thermo-polymerization method using N,N-diethylaminoethyl methacrylate (NNDAA) as the functional monomer, N,N-1,4-phenylenediacrylamide (NNPDA) as the cross-linker, acetonitrile as the solvent, and the OPPs as the template molecules. Three OPPs (diazinon, quinalphos, and chlorpyrifos), widely used in the agriculture sector, were chosen as the templates and base analytes. The extraction efficiency of the imprinted polymers has been evaluated.
Secure storage of confidential medical information is critical for healthcare organizations seeking to protect patients' privacy and comply with regulatory requirements. This paper presents a new scheme for secure storage of medical data using Chaskey cryptography and blockchain technology. The system uses Chaskey encryption to ensure the integrity and confidentiality of medical data, and blockchain technology to provide a scalable and decentralized storage solution. It also uses Bflow segmentation and vertical segmentation technologies to enhance scalability and manage the stored data, and smart contracts to enforce access control policies and other security measures. The system is described in detail.
Forecasting is one of the important topics in time series analysis, and its importance in the economic field has grown as a means of supporting economic growth; accurate forecasting of time series is therefore one of the most important challenges in making the best decisions. The aim of the research is to employ hybrid models to predict daily crude oil prices. The hybrid model integrates a linear component, represented by Box-Jenkins models, with a non-linear component, represented by artificial intelligence methods: the artificial neural network (ANN) and the support vector regression (SVR) algorithm. It was shown that the proposed hybrid models improve the prediction of daily crude oil prices.
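The hybrid idea, a linear time-series stage plus a nonlinear model fitted to its residuals, can be sketched in plain numpy. Here an AR(1) least-squares fit stands in for the Box-Jenkins stage and a small RBF kernel ridge regression stands in for the ANN/SVR stage; the series and all settings are invented for illustration, not the paper's crude-oil data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
t = np.arange(n, dtype=float)
# synthetic "price-like" series: trend + cycle + noise (illustrative only)
y = 50 + 0.05 * t + 0.8 * np.sin(0.3 * t) + 0.1 * rng.standard_normal(n)

# --- linear stage: AR(1) fitted by least squares (stand-in for Box-Jenkins) ---
X = np.column_stack([np.ones(n - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
linear_fit = X @ coef
resid = y[1:] - linear_fit                 # what the linear model could not explain

# --- nonlinear stage: RBF kernel ridge on the residuals (stand-in for ANN/SVR) ---
z = y[:-1][:, None]
gamma, lam = 0.5, 1e-2
K = np.exp(-gamma * (z - z.T) ** 2)
alpha = np.linalg.solve(K + lam * np.eye(n - 1), resid)
hybrid_fit = linear_fit + K @ alpha        # hybrid = linear fit + nonlinear correction

print(np.mean(resid ** 2), np.mean((y[1:] - hybrid_fit) ** 2))
```

The second printed value (hybrid in-sample error) falls below the first (linear-only error), which is the mechanism by which such hybrids improve on a purely linear Box-Jenkins fit.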
In this work, an innovative photovoltaic/evaporative cooling (PV/EC) hybrid system was constructed and experimentally investigated. The PV/EC hybrid system has the advantage of producing electrical energy and cooling the PV panel while also providing cooled, humid air. Two cooling techniques were utilized: backside evaporative cooling (case #1) and combined backside evaporative cooling with a front-side water spray technique (case #2). The water spraying on the front side of the PV panel is intermittent, to minimize water and power consumption, and depends on the PV panel temperature. In addition, two pad thicknesses of 5 cm and 10 cm were investigated at three different water flow rates of 1, 2, and 3 lpm.
Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, SVM is widely used, selecting an optimal hyperplane that separates two classes. SVM has very good accuracy and is extremely robust compared with some other classification methods such as logistic regression, random forest, k-nearest neighbors, and the naïve model. However, working with large datasets can cause problems such as long training times and inefficient results. In this paper, the SVM has been modified by using a stochastic gradient descent process. The modified method, stochastic gradient descent SVM (SGD-SVM), was checked using two simulated datasets.