Big data analysis has important applications in many areas such as sensor networks and connected healthcare. The high volume and velocity of big data pose many challenges for data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such as decision trees and nearest neighbor search. The proposed method can handle streaming data efficiently and, for entropy discretization, provides the optimal split value.
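The abstract does not describe the discretization procedure itself; as a point of reference, a minimal brute-force sketch of entropy-based split selection (the criterion the multi-resolution summarization structure is meant to accelerate) might look like the Python below. All names are illustrative, not the authors' implementation.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a class-label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_entropy_split(values, labels):
    """Return the split value that minimizes the weighted class entropy.

    Candidate splits are midpoints between consecutive distinct sorted values;
    a summarization structure would replace this exhaustive scan.
    """
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    best_split, best_score = None, np.inf
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue  # no boundary between identical values
        split = (values[i] + values[i - 1]) / 2
        left, right = labels[:i], labels[i:]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if score < best_score:
            best_split, best_score = split, score
    return best_split, best_score
```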
Petrophysical properties, including volume of shale, porosity, and water saturation, are significant parameters for petroleum companies in evaluating reservoirs and determining hydrocarbon zones. They can be obtained through conventional petrophysical calculations from well log data such as gamma ray, sonic, neutron, density, and deep resistivity. The well logging operations for the targeted limestone Mishrif reservoirs in the Ns-X Well, Nasiriya Oilfield, south of Iraq, could not be completed due to problems related to the well condition. The gamma ray log was the only log recorded through the cased borehole. Therefore, evaluating the reservoirs and estimating the perforation zones had not been performed and the drilled well was
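As context for a gamma-ray-only evaluation, the linear gamma-ray index is a widely used first estimate of shale volume. The sketch below assumes this simple linear relation; the paper may apply a different correction (for example Larionov), which is not stated in the visible text.

```python
def gamma_ray_index(gr, gr_min, gr_max):
    """Linear gamma-ray index, often taken as a first estimate of shale volume.

    gr     : gamma ray reading at the depth of interest (API units)
    gr_min : clean (shale-free) baseline read from the log
    gr_max : shale baseline read from the log
    """
    return (gr - gr_min) / (gr_max - gr_min)

# Hypothetical example: a reading of 75 API between baselines of 20 and 120 API
vsh_linear = gamma_ray_index(75.0, 20.0, 120.0)  # ~0.55
```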
In this study, a traumatic spinal cord injury (TSCI) classification system is proposed using a convolutional neural network (CNN) technique with automatically learned features from electromyography (EMG) signals for a non-human primate (NHP) model. A comparison between the proposed classification system and a classical classification method (k-nearest neighbors, kNN) is also presented. Developing such an NHP model with a suitable assessment tool (i.e., classifier) is a crucial step in detecting the effect of TSCI using EMG, which is expected to be essential in the evaluation of the efficacy of new TSCI treatments. Intramuscular EMG data were collected from an agonist/antagonist tail muscle pair for the pre- and post-spinal cord lesi
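The abstract does not specify the network architecture; a minimal 1D-CNN sketch for two-channel EMG windows, with purely assumed dimensions (2 channels for the agonist/antagonist pair, 500-sample windows, 2 classes), is shown below in PyTorch. The kNN baseline mentioned in the abstract could be run on the same flattened windows with scikit-learn.

```python
import torch
import torch.nn as nn

# Assumed dimensions, not taken from the paper.
N_CHANNELS, WINDOW, N_CLASSES = 2, 500, 2

cnn = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * ((WINDOW // 4) // 4), N_CLASSES),  # 32 channels x pooled length
)

x = torch.randn(8, N_CHANNELS, WINDOW)  # a batch of 8 EMG windows
logits = cnn(x)                         # shape: (8, N_CLASSES)
```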
In the present study, an attempt has been made to study the change in the water quality of the river in terms of turbidity during the lockdown associated with COVID-19. Iraq announced its longest-ever lockdown on 25 March 2020 due to the COVID-19 pandemic.
In the absence of ground observations, remote sensing data were adopted, especially during this period. The change in the spectral reflectance of water in the visible region for part of the river was analyzed using Landsat 8 OLI multispectral remote sensing data over the Tigris River, Salah al-Din province (Bayji, near the refinery), Iraq. It was found that the green and red bands are the most sensitive and can be used to estimate turbidity. Furthermore, the temporal variation in turbidity was a
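The abstract does not state how band reflectance was related to turbidity; assuming some reference turbidity values are available for calibration, a simple least-squares sketch on the red and green bands might look like the following (all variable names are hypothetical, not the study's model).

```python
import numpy as np

def fit_turbidity_model(red, green, turbidity):
    """Fit a simple linear model turbidity ~ a*red + b*green + c by least squares.

    red, green : surface reflectance of the Landsat 8 OLI red and green bands
    turbidity  : matching reference turbidity values used for calibration
    """
    A = np.column_stack([red, green, np.ones_like(red)])
    coeffs, *_ = np.linalg.lstsq(A, turbidity, rcond=None)
    return coeffs

def predict_turbidity(coeffs, red, green):
    a, b, c = coeffs
    return a * red + b * green + c
```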
Dust storms take place in barren and dry regions all over the world. They may be caused by intense ground winds that lift dust and sand from loose, arid land surfaces and carry them up into the air. These phenomena can have harmful effects on health, climate, infrastructure, and transportation. GIS and remote sensing have played a key role in dust detection studies. This study was conducted in Iraq with the objective of validating dust detection. These techniques were used to derive dust indices, namely the Normalized Difference Dust Index (NDDI) and the Middle East Dust Index (MEDI), based on MODIS images, together with in-situ observations based on hourly wi
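The exact band combinations used are not given in the visible text; one common formulation of NDDI, based on MODIS band 7 (2.13 µm) and band 3 (0.469 µm) reflectances, is sketched below. MEDI, which relies on additional bands, is not reproduced here.

```python
import numpy as np

def nddi(swir_2130nm, blue_469nm):
    """Normalized Difference Dust Index (one common formulation, assumed here).

    NDDI = (rho_2.13um - rho_0.469um) / (rho_2.13um + rho_0.469um),
    using MODIS band 7 (2.13 um) and band 3 (0.469 um) reflectances.
    Higher values typically indicate dust-laden pixels.
    """
    swir = np.asarray(swir_2130nm, dtype=float)
    blue = np.asarray(blue_469nm, dtype=float)
    return (swir - blue) / (swir + blue)
```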
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage preprocesses the data and the second stage extracts features based on the Discrete Wavelet Transform (DWT). The third stage performs classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved a high classification accuracy of 99.1% on the MADBase database and 99.9% on the MNIST database.
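As an illustration of the feature-extraction stage, a 2-level 2D DWT on a 28x28 digit image (the MNIST/MADBase size) keeps a 7x7 approximation sub-band as a compact feature vector. The sketch below uses PyWavelets with assumed parameter choices (Haar wavelet, 2 levels), which are not stated in the abstract.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(image, wavelet="haar", level=2):
    """Flatten the coarsest approximation sub-band of a 2-D DWT as features.

    The multi-level decomposition keeps the low-frequency content of the digit
    image while shrinking the dimensionality seen by the classifier.
    """
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    approx = coeffs[0]  # approximation coefficients at the coarsest level
    return approx.ravel()

# Example on a dummy 28x28 digit image
digit = np.random.rand(28, 28)
features = dwt_features(digit)  # 7x7 = 49 values with 'haar' at level 2
```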
Deep learning convolutional neural networks have been widely used to recognize or classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before the training process when developing a classification model. However, not every model can produce good classification accuracy, as there are many types of voice or speech. Classification of Arabic alphabet pronunciation is one such type of voice, and accurate pronunciation is required when learning to read the Qur'an. Thus, processing the pronunciations and training on the processed data requires a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to
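The padding scheme is not detailed in the visible text; a common choice, assumed here, is zero-padding (or trimming) each recording to a fixed number of samples before feeding it to the CNN, since the network expects inputs of equal size while pronunciations vary in duration.

```python
import numpy as np

def pad_or_trim(signal, target_len):
    """Zero-pad (or trim) a 1-D audio signal to a fixed length."""
    signal = np.asarray(signal, dtype=float)
    if len(signal) >= target_len:
        return signal[:target_len]          # trim long recordings
    return np.pad(signal, (0, target_len - len(signal)))  # pad short ones

# Example: force every recording to 16000 samples (1 s at an assumed 16 kHz rate)
clip = np.random.randn(12000)
fixed = pad_or_trim(clip, 16000)  # shape (16000,)
```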
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods. The proposed method's best accuracy is 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it
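A minimal sketch of the final classification stage, assuming an RBF-kernel SVM together with the 80/20 split and 5-fold cross-validation described in the abstract, could look like this in scikit-learn (the OP moment extraction and sparse filtering are not reproduced; the data here are placeholders).

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

# Placeholder feature matrix: rows = EEG segments, columns = reduced moments.
X = np.random.randn(200, 30)
y = np.random.randint(0, 2, size=200)  # two classes

# 80% training / 20% testing, as described in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = SVC(kernel="rbf")
cv_scores = cross_val_score(clf, X_train, y_train, cv=5)  # 5-fold cross-validation
clf.fit(X_train, y_train)
test_accuracy = clf.score(X_test, y_test)
```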
Nonlinear diffraction patterns can be obtained by focusing a laser beam through a thin slice of a material. Here, we investigated experimentally the formation of far-field nonlinear diffraction patterns of a cw laser beam at 532 nm passing through a quartz cuvette containing multi-wall carbon nanotubes (MWCNTs) suspended in acetone and in DI water at concentrations of 0.030 wt.%, 0.045 wt.%, 0.060 wt.%, and 0.075 wt.%. Our results show that increasing the concentration of both types of suspensions (MWCNTs in acetone and MWCNTs in DI water) led to an increase in the number of pattern rings, which indicates an increase in their nonlinear refractive indices. Moreover, the MWCNTs in DI water suspension at a concentration of 0.075 wt.% was more effic
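The link between ring count and nonlinear refraction is usually expressed through the standard spatial self-phase-modulation relation; the expression below is that standard relation, assumed here as background and not necessarily the exact form used in the paper.

```latex
% Standard spatial self-phase-modulation relation (assumed, not quoted from the paper):
% the number of far-field rings N is set by the peak nonlinear phase shift.
\Delta\phi_{\max} = \frac{2\pi}{\lambda}\, n_2\, I\, L_{\mathrm{eff}},
\qquad
N \approx \frac{\Delta\phi_{\max}}{2\pi}
```

Here λ is the wavelength (532 nm in this experiment), n2 the nonlinear refractive index, I the on-axis intensity, and L_eff the effective sample length, so at fixed intensity a larger ring count implies a larger n2, which is the reasoning behind the concentration trend reported above.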
The research discusses the need to find innovative structures and methodologies for developing Human Capital (HC) in Iraqi universities. One of the most important of these structures is Communities of Practice (CoPs), which contribute to developing HC through learning, teaching, and training by speeding the conversion of knowledge and creativity into practice. This research used a comparative approach, employing the methodology of Data Envelopment Analysis (DEA) with Excel 2010 Solver, as field evidence of the role of CoPs in developing HC. In light of the given information, the researcher relied on archived preliminary data about 23 colleges at Mosul University as a deliberate sample for t
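The abstract does not state which DEA model was solved; the input-oriented CCR ratio model, which can be linearized and handled by Excel Solver, has the following general form (given here as an assumption, not quoted from the paper).

```latex
% Input-oriented CCR model (a common DEA formulation; the paper's exact model is not stated).
\max_{u,v}\; \theta_0 = \frac{\sum_{r} u_r\, y_{r0}}{\sum_{i} v_i\, x_{i0}}
\quad \text{s.t.} \quad
\frac{\sum_{r} u_r\, y_{rj}}{\sum_{i} v_i\, x_{ij}} \le 1 \;\; \forall j,
\qquad u_r,\, v_i \ge 0
```

Here x_ij and y_rj denote the inputs and outputs of college j, college 0 is the unit under evaluation, and θ0 = 1 marks an efficient unit.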
This research aims to study dimension-reduction methods that overcome the curse of dimensionality, which arises when traditional methods fail to provide good estimates of the parameters, so the problem must be dealt with directly. Two approaches were used to handle high-dimensional data: the first is the non-classical Sliced Inverse Regression (SIR) method together with the proposed Weighted Standard SIR (WSIR) method, and the second is Principal Component Analysis (PCA), the general method used for dimension reduction. Both SIR and PCA are based on linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
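For reference, the classical SIR algorithm (standardize the predictors, slice the response, average within slices, then eigendecompose the between-slice covariance) is sketched below; the proposed WSIR weighting is not reproduced here.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_components=2):
    """Classical Sliced Inverse Regression (SIR) outline.

    Returns estimated effective dimension-reduction directions in the
    original predictor space.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)

    # Whiten the predictors: Z = (X - mu) @ cov^{-1/2}
    evals, evecs = np.linalg.eigh(cov)
    evals = np.clip(evals, 1e-12, None)  # guard against near-singular covariance
    cov_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Z = (X - mu) @ cov_inv_sqrt

    # Slice the response and collect slice means of the whitened predictors
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Leading eigenvectors of M, mapped back to the original scale
    _, vecs = np.linalg.eigh(M)
    return cov_inv_sqrt @ vecs[:, -n_components:][:, ::-1]
```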