The goal of this research is to introduce the concepts of Large-small submodules and Large-hollow modules and to study some of their properties. A proper submodule N of an R-module M is said to be a Large-small submodule if, whenever N + K = M for a submodule K of M, then K is an essential submodule of M (K ≤e M). An R-module M is called a Large-hollow module if every proper submodule of M is a Large-small submodule of M.
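The two definitions in the abstract above can be stated compactly in LaTeX (a sketch using standard module-theoretic notation; the essentiality condition is spelled out as usually defined):

```latex
% Large-small submodule: a proper submodule N of an R-module M such that
% every complement summand in a sum reaching M must be essential.
Let $M$ be an $R$-module. A proper submodule $N \leq M$ is a
\emph{Large-small} submodule of $M$ if for every submodule $K \leq M$,
\[
  N + K = M \;\implies\; K \leq_e M,
\]
where $K \leq_e M$ (\emph{$K$ is essential in $M$}) means
$K \cap L \neq 0$ for every nonzero submodule $L \leq M$.

% Large-hollow module: every proper submodule satisfies the above.
$M$ is a \emph{Large-hollow} module if every proper submodule of $M$
is a Large-small submodule of $M$.
```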
In today's world, the science of bioinformatics is developing rapidly, especially with regard to the analysis and study of biological networks. Scientists have used various nature-inspired algorithms to find protein complexes in protein-protein interaction (PPI) networks. These networks help scientists infer the molecular function of unknown proteins and show how cells operate in an organised way. It is very common in PPI networks for a protein to participate in multiple functions and belong to many complexes; as a result, complexes may overlap in the PPI network. However, developing an efficient and reliable method to address the problem of detecting overlapping protein complexes remains a challenge, since it is considered a complex and hard…
Integrating Renewable Energy (RE) into Distribution Power Networks (DPNs) is a choice for efficient and sustainable electricity. Controlling the power factor of these sources is one of the techniques employed to manage the power losses of the grid. Capacitor banks have been employed for several decades to control reactive (phantom) power, improving voltage and reducing power losses. The voltage sags and the significant power losses in the Iraqi DPN make it a suitable case study for demonstrating the efficiency enhancement achieved by adjusting the RE power factor. Therefore, this paper studies a part of the Iraqi network in a windy and sunny region, the Badra-Zurbatya 11 kV feeder in the Wasit governorate. A substation of hybrid RE sources is connected to this…
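The capacitor-bank compensation mentioned in the abstract can be illustrated with the standard power-factor-correction formula Q = P(tan φ1 − tan φ2). A minimal sketch follows; the load of 2 MW at 0.80 lagging corrected to 0.95 is an illustrative assumption, not a figure from the Badra-Zurbatya feeder study:

```python
import math

def capacitor_kvar(p_kw, pf_initial, pf_target):
    """Reactive compensation needed: Q = P * (tan(phi1) - tan(phi2)),
    where phi = arccos(power factor)."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# Illustrative load: 2 MW at 0.80 lagging, corrected to 0.95
q = capacitor_kvar(2000.0, 0.80, 0.95)
print(f"required compensation: {q:.0f} kVAR")
```

Raising the power factor this way reduces the current drawn for the same real power, which is what cuts the feeder's I²R losses and voltage sag.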
In this paper, the problem of developing turbulent flow in a rectangular duct is investigated by obtaining numerical results for the velocity profiles in the duct using a large eddy simulation (LES) model in two dimensions with different Reynolds numbers, filter equations, and mesh sizes. Reynolds numbers range from 11,000 to 110,000 for velocities of 1 m/s to 50 m/s, with (56×56), (76×76), and (96×96) mesh sizes and different filter equations. The numerical results of the LES model are compared with the k-ε model and the analytic velocity distribution, and validated against experimental data from other researchers. The LES model shows good agreement with the experimental data for high Reynolds numbers with the first, second…
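The Reynolds-number range quoted above follows from Re = V·D_h/ν with the hydraulic diameter D_h = 4A/P of the rectangular cross-section. A small sketch of that calculation; the duct dimensions and air viscosity below are illustrative assumptions, not values from the paper:

```python
def hydraulic_diameter(width_m, height_m):
    """D_h = 4 * area / wetted perimeter for a rectangular cross-section."""
    area = width_m * height_m
    perimeter = 2.0 * (width_m + height_m)
    return 4.0 * area / perimeter

def reynolds_number(velocity_ms, d_h_m, nu_m2s=1.5e-5):
    """Re = V * D_h / nu; kinematic viscosity of air ~1.5e-5 m^2/s assumed."""
    return velocity_ms * d_h_m / nu_m2s

d_h = hydraulic_diameter(0.15, 0.15)     # assumed 0.15 m square duct
print(reynolds_number(1.0, d_h))         # lower end of the studied range
print(reynolds_number(50.0, d_h))        # upper end of the studied range
```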
Cointegration is one of the important concepts in applied macroeconomics. The idea is due to Granger (1981), and it was explained in detail by Engle and Granger in Econometrica (1987). The introduction of cointegration analysis into econometrics in the mid-eighties of the last century is one of the most important developments in empirical modelling methodology, and its advantage is computational simplicity: its use only requires familiarity with ordinary least squares.
Cointegration describes equilibrium relations among time series in the long run, even if all of the sequences contain t…
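The "only ordinary least squares" point can be illustrated with the first step of the Engle-Granger procedure on synthetic data. This is a sketch under stated assumptions: the two series below are constructed to share a common random-walk trend, and the residual stationarity test (e.g. an ADF test on the residuals) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
trend = np.cumsum(rng.normal(size=n))             # shared stochastic trend
x = trend + rng.normal(scale=0.5, size=n)
y = 2.0 * trend + rng.normal(scale=0.5, size=n)   # cointegrated with x

# Step 1 of Engle-Granger: OLS regression of y on x (with intercept)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

print("estimated slope:", beta[1])   # close to the true value 2
print("residual std:", residuals.std())
```

If the residuals are stationary (step 2, tested with an ADF-type test), the series are cointegrated and beta estimates the long-run equilibrium relation.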
A three-stage learning algorithm for a deep multilayer perceptron (DMLP) with effective weight initialisation based on a sparse auto-encoder is proposed in this paper, which aims to overcome the difficulties of training deep neural networks with limited training data in a high-dimensional feature space. At the first stage, unsupervised learning with a sparse auto-encoder is used to obtain the initial weights of the feature-extraction layers of the DMLP. At the second stage, error back-propagation is used to train the DMLP while fixing the weights obtained at the first stage for its feature-extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Network structures and…
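Stage one of the scheme can be sketched with a deliberately simplified (linear, single-layer) sparse auto-encoder whose encoder weights would then initialise the DMLP's first feature-extraction layer. The sizes, learning rate, and L1 sparsity penalty are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))     # limited data in a high-dim feature space
n_hidden, lam, lr = 8, 1e-3, 0.5

W_enc = 0.1 * rng.normal(size=(20, n_hidden))
W_dec = 0.1 * rng.normal(size=(n_hidden, 20))

losses = []
for _ in range(300):
    H = X @ W_enc                  # hidden activations (linear for brevity)
    X_hat = H @ W_dec              # reconstruction of the input
    E = X_hat - X
    losses.append(np.mean(E ** 2) + lam * np.mean(np.abs(H)))
    dX_hat = 2.0 * E / E.size                       # reconstruction grad
    dW_dec = H.T @ dX_hat
    dH = dX_hat @ W_dec.T + lam * np.sign(H) / H.size   # + sparsity grad
    dW_enc = X.T @ dH
    W_enc -= lr * dW_enc
    W_dec -= lr * dW_dec

print(f"pretraining loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
# Stage two would fix W_enc as the first-layer weights and train the rest of
# the DMLP by back-propagation; stage three fine-tunes all weights together.
```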
Support vector machine (SVM) is a popular supervised learning algorithm based on margin maximisation. It has a high training cost and does not scale well to large numbers of data points. We propose a multiresolution algorithm, MRH-SVM, that trains an SVM on a hierarchical data aggregation structure, which also serves as a common data input to other learning algorithms. The proposed algorithm learns SVM models using high-level data aggregates and only visits data aggregates at more detailed levels where support vectors reside. In addition to performance improvements, the algorithm has advantages such as the ability to handle data streams and datasets with imbalanced classes. Experimental results show significant performance improvements in compar…
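The kind of aggregation structure MRH-SVM trains on can be sketched at a single resolution level: points are binned into grid cells, and each cell is summarised by a centroid and a count. This is an illustrative sketch only, not the paper's structure; the 2-D grid and cell size are assumptions, and the paper's hierarchy would refine such cells level by level where support vectors are found:

```python
import numpy as np

def aggregate(points, cell_size):
    """Bin points into grid cells; return {cell_key: (centroid, count)}."""
    cells = {}
    for p in points:
        key = tuple(np.floor(p / cell_size).astype(int))
        cells.setdefault(key, []).append(p)
    return {k: (np.mean(v, axis=0), len(v)) for k, v in cells.items()}

rng = np.random.default_rng(2)
points = rng.normal(size=(1000, 2))
summary = aggregate(points, cell_size=0.5)
print(len(summary), "cells summarize", len(points), "points")
```

Training first on the far smaller set of centroids, and drilling down only near the decision boundary, is what reduces the cost relative to training on every point.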
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods. The proposed method's best accuracies are 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it…
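The moment-conversion step can be illustrated with a least-squares projection of a signal onto an orthogonal polynomial basis. The paper's exact polynomial family and moment order are not specified here, so Legendre polynomials and order 12 are illustrative assumptions, and the synthetic sinusoid stands in for an EEG segment:

```python
import numpy as np
from numpy.polynomial import legendre

def op_moments(signal, order):
    """Least-squares Legendre coefficients of the signal on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, signal.size)
    return legendre.legfit(x, signal, deg=order)

def reconstruct(moments, n_samples):
    x = np.linspace(-1.0, 1.0, n_samples)
    return legendre.legval(x, moments)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)
eeg_like = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size)

m = op_moments(eeg_like, order=12)       # 13 moments replace 256 samples
approx = reconstruct(m, eeg_like.size)
print("moments:", m.size,
      "rmse:", np.sqrt(np.mean((approx - eeg_like) ** 2)))
```

A sparse filter would then keep only the most informative of these moments before they are passed to the SVM.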