This research deals with a shrinkage method for principal components similar to the one used in multiple regression, the Least Absolute Shrinkage and Selection Operator (LASSO). The goal is to form uncorrelated linear combinations from only a subset of the explanatory variables, which may suffer from a multicollinearity problem, instead of taking all K of them. The shrinkage forces some coefficients to equal zero by imposing a restriction on them through a tuning parameter, t, which balances the amounts of bias and variance on the one hand and, on the other, keeps the percentage of variance explained by the components at an acceptable level. This is demonstrated by the MSE criterion in the regression case and by the percentage of explained variance in the principal component case.
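A minimal sketch of the idea, under stated assumptions: the soft-thresholding rule and the L1 budget below are one common way to realize LASSO-style shrinkage of principal-component loadings, not necessarily the paper's exact procedure; `t` plays the role of the tuning parameter, and the explained-variance check mirrors the criterion named in the abstract.

```python
# Illustrative sketch: soft-threshold PCA loadings under an L1 budget t,
# then report the fraction of variance the shrunken components explain.
import numpy as np

def shrink_loadings(X, t, n_components=2):
    """Soft-threshold PCA loadings so each component's L1 norm <= t."""
    Xc = X - X.mean(axis=0)                      # center the data
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]             # sort by decreasing variance
    eigvec = eigvec[:, order]

    shrunk = eigvec[:, :n_components].copy()
    for j in range(n_components):
        v = shrunk[:, j]
        # bisect on the threshold level so that ||w||_1 <= t
        lo, hi = 0.0, np.abs(v).max()
        for _ in range(50):
            lam = (lo + hi) / 2
            w = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
            lo, hi = (lam, hi) if np.abs(w).sum() > t else (lo, lam)
        w = np.sign(v) * np.maximum(np.abs(v) - hi, 0.0)
        shrunk[:, j] = w / (np.linalg.norm(w) or 1.0)  # renormalize

    scores = Xc @ shrunk                         # some loadings are exactly 0
    explained = scores.var(axis=0, ddof=1).sum() / np.trace(cov)
    return shrunk, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=200)  # induce multicollinearity
loadings, pev = shrink_loadings(X, t=1.2)
print(loadings.round(3), f"explained variance: {pev:.2%}")
```

Smaller values of t zero out more loadings (less variance, more sparsity), which is the bias-variance trade the tuning parameter controls.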
Portable devices such as smartphones, tablet PCs, and PDAs are a useful combination of hardware and software aimed at mobile workers. While they offer the ability to review documents, communicate via electronic mail, and manage appointments and meetings, they usually lack a variety of essential security features. To address the security concerns around sensitive data, many individuals and organizations, aware of the associated threats, mitigate them by improving user authentication, content encryption, protection from malware, firewalls, intrusion prevention, and so on. However, no standards have yet been developed to determine whether such mobile data management systems adequately provide the …
In this paper, an integrated quantum neural network (QNN), which is a class of feedforward neural networks (FFNNs), is built by merging quantum computing (QC) with an artificial neural network (ANN) classifier. It is used as a data classification technique, and here the iris flower data are used as the classification signals. For this purpose, independent component analysis (ICA) is used as a feature extraction technique after normalization of these signals. The architecture of QNNs has inherently built-in fuzzy hidden units, which develop quantized representations of the sample information provided by the training data set at various graded levels of certainty. Experimental results presented here show that …
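A hedged sketch of the classical part of this pipeline: normalize the iris data, extract features with ICA, and train a plain feedforward classifier. The MLP here is a conventional stand-in, since the quantized, multi-level hidden units that distinguish the QNN are not reproduced; the component count and network size are illustrative assumptions.

```python
# Normalize -> ICA feature extraction -> feedforward classifier on iris.
from sklearn.datasets import load_iris
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)                         # normalization
X = FastICA(n_components=3, random_state=0).fit_transform(X)  # ICA features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```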
Energy and memory limitations are considerable constraints on sensor nodes in wireless sensor networks (WSNs). The limited energy supplied to network nodes causes WSNs to face crucial functional limitations. Therefore, the problem of the limited energy resource on sensor nodes can only be addressed by using it efficiently. In this research work, an energy-balancing routing scheme for in-network data aggregation is presented, referred to as the Energy-aware and load-Balancing Routing scheme for Data Aggregation (EBR-DA). EBR-DA aims to provide energy-efficient multi-hop routing to the destination on the basis of the quality of the links between the source and the destination. …
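An illustrative sketch of the general idea, not EBR-DA itself: choose a multi-hop route by a cost that penalizes both poor link quality and low residual node energy. The cost formula, topology, and energy values are assumptions made for illustration.

```python
# Route selection over a tiny hypothetical WSN topology.
import networkx as nx

G = nx.Graph()
# edges: (u, v, link_quality in (0, 1]) -- hypothetical links
links = [("src", "a", 0.9), ("a", "sink", 0.8),
         ("src", "b", 0.95), ("b", "sink", 0.4)]
energy = {"src": 1.0, "a": 0.3, "b": 0.9, "sink": 1.0}  # residual energy

for u, v, q in links:
    # lower link quality and lower endpoint energy => higher cost
    cost = (1.0 / q) + (1.0 / min(energy[u], energy[v]))
    G.add_edge(u, v, weight=cost)

path = nx.shortest_path(G, "src", "sink", weight="weight")
print(path)  # avoids the nearly depleted node "a" despite its better links
```

The point of the example is the balancing behavior: the route through "b" wins even though one of its links is weaker, because relaying through the nearly depleted node "a" would drain it further.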
As a result of the pandemic crisis and the shift to digitization, cyber-attacks are at an all-time high in the modern day despite considerable technological advancement. The use of wireless sensor networks (WSNs) is an indicator of technical advancement in most industries. For the safe transfer of data, security objectives such as confidentiality, integrity, and availability must be maintained. The security features of a WSN are split into the node level and the network level. For the node level, a proactive strategy using deep learning/machine learning techniques is suggested. The primary benefit of this proactive approach is that it foresees a cyber-attack before it is launched, allowing for damage mitigation. A cryptography algorithm is put …
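A hedged sketch of the node-level proactive idea only: train a classifier on past node telemetry so an attack can be flagged before it fully unfolds. The features, labels, and synthetic data are assumptions for illustration, not the paper's model.

```python
# Proactive node-level detection: learn attack precursors from telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# hypothetical per-node features: packet rate, retransmissions, RSSI variance
X = rng.normal(size=(n, 3))
# label: a precursor pattern (high rate + high retransmissions) => attack soon
y = ((X[:, 0] + X[:, 1]) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```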
It is quite noticeable that the initialization of architectural parameters has a great impact on the whole learning process, so knowing the mathematical properties of a dataset helps give a neural network architecture better expressivity and capacity. In this paper, five random samples of the Volve field dataset were taken. A training set was then specified, and the persistent homology of the dataset was calculated to show the impact of data complexity on the selection of the multilayer perceptron regressor (MLPR) architecture. The proposed method provides a well-rounded strategy to compute data complexity: it is a compound algorithm composed of the t-SNE method, the alpha-complex algorithm, and a persistence barcode …
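A minimal sketch of the complexity pipeline named in the abstract: embed the data with t-SNE, build an alpha complex, and read off the persistence barcode. Synthetic points stand in for the Volve field samples, and the crude "count of long-lived loops" score at the end is an assumption for illustration.

```python
# t-SNE embedding -> alpha complex -> persistence barcode.
import numpy as np
from sklearn.manifold import TSNE
import gudhi  # pip install gudhi

rng = np.random.default_rng(0)
data = rng.normal(size=(120, 8))                 # stand-in for Volve features

# 1) low-dimensional embedding
emb = TSNE(n_components=2, random_state=0, perplexity=30).fit_transform(data)

# 2) alpha complex on the embedded points
simplex_tree = gudhi.AlphaComplex(points=emb).create_simplex_tree()

# 3) persistence barcode: list of (dimension, (birth, death)) pairs
barcode = simplex_tree.persistence()
# e.g. count long-lived 1-dimensional features as a crude complexity score
long_loops = [b for d, (b, e) in barcode if d == 1 and e - b > 1.0]
print(f"persistent 1-D features: {len(long_loops)}")
```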
... Show MoreThis study produces an image of theoretical and experimental case of high loading stumbling condition for hip prosthesis. Model had been studied namely Charnley. This model was modeled with finite element method by using ANSYS software, the effect of changing the design parameters (head diameter, neck length, neck ratio, stem length) on Charnley design, for stumbling case as impact load where the load reach to (8.7* body weight) for impact duration of 0.005sec.An experimental rig had been constructed to test the hip model, this rig consist of a wood box with a smooth sliding shaft where a load of 1 pound is dropped from three heights.
The strain produced by this impact is measured by using rosette strain gauge connected to Wheatstone
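A hedged sketch of reducing rosette strain-gauge readings, as used on the experimental rig, to principal strains, assuming a standard 0/45/90-degree rectangular rosette; the sample readings are hypothetical, not the study's measurements.

```python
# Principal strains from a rectangular (45-degree) strain rosette.
import math

def principal_strains(e_a, e_b, e_c):
    """Principal strains from gauge readings at 0, 45 and 90 degrees."""
    center = (e_a + e_c) / 2.0
    radius = math.sqrt((e_a - e_b) ** 2 + (e_b - e_c) ** 2) / math.sqrt(2.0)
    return center + radius, center - radius

# example micro-strain readings from the three grids (hypothetical)
e1, e2 = principal_strains(420e-6, 150e-6, -80e-6)
print(f"principal strains: {e1:.2e}, {e2:.2e}")
```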
In this paper a new method is proposed to perform N-Radon orthogonal frequency division multiplexing (OFDM), which is equivalent in spectral efficiency to 4-quadrature amplitude modulation (QAM), 16-QAM, 64-QAM, 256-QAM, etc. This non-conventional method is proposed in order to reduce the constellation energy and increase spectral efficiency. The proposed method gives a significant improvement in bit error rate performance, and keeps the bandwidth efficiency and spectrum shape as good as those of conventional fast Fourier transform (FFT) based OFDM. The new structure was tested and compared with conventional OFDM for additive white Gaussian noise, flat, and multi-path selective fading channels. Simulation tests were generated for different channels …
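A minimal sketch of the conventional FFT-based OFDM baseline the paper compares against; the N-Radon mapping itself is not reproduced here. The subcarrier count, SNR, and 4-QAM mapping are illustrative assumptions.

```python
# Baseline OFDM over AWGN: 4-QAM -> IFFT -> noise -> FFT -> decisions -> BER.
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_sym = 64, 200                            # subcarriers, OFDM symbols
bits = rng.integers(0, 2, size=(n_sym, n_sub, 2))

# Gray-mapped 4-QAM with unit average energy
qam = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

tx = np.fft.ifft(qam, axis=1) * np.sqrt(n_sub)    # unitary IFFT scaling
snr_db = 10
noise_var = 10 ** (-snr_db / 10)
rx = tx + np.sqrt(noise_var / 2) * (rng.normal(size=tx.shape)
                                    + 1j * rng.normal(size=tx.shape))

eq = np.fft.fft(rx, axis=1) / np.sqrt(n_sub)      # back to subcarrier domain
bits_hat = np.stack([eq.real > 0, eq.imag > 0], axis=-1).astype(int)
print(f"BER at {snr_db} dB: {np.mean(bits_hat != bits):.4f}")
```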
Secure data communication across networks is always threatened by intrusion and abuse. A network intrusion detection system (IDS) is a valuable tool for the in-depth defense of computer networks. Most research and applications in the field of intrusion detection systems have been built by analysing datasets containing various attack types with batch-learning classification. The present study introduces an intrusion detection system based on data stream classification. Several data stream algorithms were applied to the CICIDS2017 dataset, which contains several new types of attacks. The results were evaluated to choose the algorithm that best satisfies high accuracy and low computation time.
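A hedged sketch of data-stream classification in the prequential (test-then-train) style, using the `river` library's Hoeffding tree; the synthetic flow features and labels are a stand-in for CICIDS2017 records, and the abstract does not say which stream algorithm won.

```python
# Prequential evaluation: predict each record first, then learn from it.
import random
from river import metrics, tree

model = tree.HoeffdingTreeClassifier()
acc = metrics.Accuracy()
rng = random.Random(0)

for _ in range(5000):
    # hypothetical flow features; the label correlates with the first two
    x = {"duration": rng.random(), "pkt_rate": rng.random(),
         "flag_ratio": rng.random()}
    y = int(x["duration"] + x["pkt_rate"] > 1.0)   # synthetic attack label
    acc.update(y, model.predict_one(x) or 0)       # test first ...
    model.learn_one(x, y)                          # ... then train

print(f"prequential accuracy: {acc.get():.3f}")
```

Unlike batch learning, the model never revisits past records, which is what keeps memory and computation time low on an unbounded stream.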
This paper is concerned with estimating the reliability (R) of a K-component parallel system in the stress-strength model, with non-identical components subjected to a common stress, when the stress and strength follow the generalized exponential distribution (GED) with unknown shape parameter α and known scale parameter θ (θ = 1) taken to be common. Different shrinkage estimation methods are considered for estimating R, depending on the maximum likelihood estimator and prior estimates, and are compared by simulation using the mean squared error (MSE) criterion. The study showed that shrinkage estimation using a shrinkage weight function was the best.
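A hedged sketch of the simulation idea for a single strength/stress pair: under GED with scale θ = 1, strength X ~ GE(α) and stress Y ~ GE(β) give R = P(Y < X) = α/(α + β). The sketch compares the MLE of R with a simple linear shrinkage toward a prior guess; the fixed weight w and prior values are assumptions, simpler than the paper's shrinkage weight function.

```python
# Monte Carlo MSE comparison: MLE of R vs. a linear shrinkage estimator.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 2.0, 1.0                 # true shapes; true R = 2/3
alpha0, beta0 = 1.5, 1.5               # hypothetical prior guesses
w, n, reps = 0.5, 20, 5000
R_true = alpha / (alpha + beta)

def ge_sample(shape, size):
    # inverse-CDF sampling: F(x) = (1 - exp(-x))**shape with theta = 1
    u = rng.random(size)
    return -np.log(1.0 - u ** (1.0 / shape))

def mle_shape(x):
    # MLE of the GE(alpha, 1) shape: alpha_hat = -n / sum(log(1 - e^-x))
    return -len(x) / np.log(1.0 - np.exp(-x)).sum()

err_mle, err_shr = [], []
for _ in range(reps):
    a_hat, b_hat = mle_shape(ge_sample(alpha, n)), mle_shape(ge_sample(beta, n))
    a_shr = w * a_hat + (1 - w) * alpha0          # shrinkage estimates
    b_shr = w * b_hat + (1 - w) * beta0
    err_mle.append((a_hat / (a_hat + b_hat) - R_true) ** 2)
    err_shr.append((a_shr / (a_shr + b_shr) - R_true) ** 2)

print(f"MSE  MLE: {np.mean(err_mle):.5f}   shrinkage: {np.mean(err_shr):.5f}")
```

When the prior guesses are near the truth, the shrinkage estimator's MSE beats the MLE at small n, which is the kind of comparison the abstract's simulation study makes.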
A stochastic process {X_k, k = 1, 2, ...} is a doubly geometric stochastic process if there exist a ratio a > 0 and a positive function h(k) > 0 such that {a^(k-1) h(k) X_k, k = 1, 2, ...} is a renewal process; it is a generalization of the geometric stochastic process. This process is stochastically monotone and can be used to model a point process with multiple trends. In this paper, we use nonparametric methods to investigate statistical inference for doubly geometric stochastic processes. A graphical technique for determining whether a process is in agreement with a doubly geometric stochastic process is proposed. Further, we can estimate the parameters a, b, μ, and σ² of the doubly geometric stochastic process by using the least squares estimate for X_k …
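A hedged sketch of the least-squares idea: assuming h(k) = k^b (a form consistent with the parameter b above) and writing Z_k = a^(k-1) k^b X_k for the underlying renewal terms, taking logs gives the linear model ln X_k = μ - (k-1) ln a - b ln k + error, so a, b, μ, and σ² drop out of an ordinary least-squares fit. The simulated data and the lognormal choice for Z_k are assumptions for illustration.

```python
# Recover (a, b, mu, sigma^2) of a doubly geometric process by OLS on logs.
import numpy as np

rng = np.random.default_rng(0)
n, a_true, b_true, mu, sigma = 300, 1.02, 0.3, 1.0, 0.2
k = np.arange(1, n + 1)

# simulate: X_k = Z_k / (a^(k-1) k^b) with ln Z_k ~ N(mu, sigma^2)
lnZ = rng.normal(mu, sigma, size=n)
lnX = lnZ - (k - 1) * np.log(a_true) - b_true * np.log(k)

# least squares on [1, -(k-1), -ln k] recovers (mu, ln a, b)
A = np.column_stack([np.ones(n), -(k - 1), -np.log(k)])
coef, *_ = np.linalg.lstsq(A, lnX, rcond=None)
mu_hat, ln_a_hat, b_hat = coef
resid = lnX - A @ coef
sigma2_hat = resid.var(ddof=3)                    # 3 fitted parameters

print(f"a: {np.exp(ln_a_hat):.3f}  b: {b_hat:.3f}  "
      f"mu: {mu_hat:.3f}  sigma^2: {sigma2_hat:.4f}")
```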