Multi-Resolution Hierarchical Structure for Efficient Data Aggregation and Mining of Big Data

Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common input for multiple mining and learning algorithms. Data mining algorithms are modified to accept the aggregated data as input. Hierarchical data aggregation serves as a paradigm under which novel …
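The abstract describes the structure precisely enough to sketch: a hierarchy of resolutions, each holding per-bin sufficient statistics, built once and updated incrementally. The Python sketch below illustrates that idea only; the class name, bin layout, and choice of statistics (count, sum, sum of squares) are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a multi-resolution
# aggregation hierarchy over a 1-D feature. Each level stores per-bin
# sufficient statistics (count, sum, sum of squares); coarser levels
# trade accuracy for speed, and updates are incremental.
class MultiResolutionAggregate:
    def __init__(self, lo, hi, finest_bins=1024, levels=6):
        self.lo, self.hi = lo, hi
        # level 0 is finest; each coarser level halves the bin count
        self.levels = [
            [[0, 0.0, 0.0] for _ in range(finest_bins >> k)]  # [n, sum, sumsq]
            for k in range(levels)
        ]

    def insert(self, x):
        """Incrementally add one instance at every resolution."""
        frac = (x - self.lo) / (self.hi - self.lo)
        for bins in self.levels:
            i = min(int(frac * len(bins)), len(bins) - 1)
            stat = bins[i]
            stat[0] += 1
            stat[1] += x
            stat[2] += x * x

    def mean_per_bin(self, level):
        """Aggregate view at the chosen resolution (coarser = cheaper)."""
        return [s[1] / s[0] if s[0] else None for s in self.levels[level]]

agg = MultiResolutionAggregate(0.0, 100.0)
for x in (3.2, 3.9, 57.0, 58.1, 96.5):
    agg.insert(x)
print(agg.mean_per_bin(level=5))  # coarsest summary: 32 bins
```

A mining algorithm that accepts weighted points could then consume one level's bins instead of the raw instances, which is the trade-off the abstract describes.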

Publication Date: Dec 25, 2018
Journal Name: Journal of Engineering Science and Technology
Rietveld Texture Refinement Analysis of Linde Type A Zeolite from X-Ray Diffraction Data
Publication Date: Dec 30, 2012
Journal Name: Iraqi Journal of Chemical and Petroleum Engineering
Estimation of the Rock Mechanical Properties Using Conventional Log Data in North Rumaila Field

Hydrocarbon production can change dynamic reservoir properties. The mechanical stability of a formation under different drilling or production conditions is therefore a very important issue, and the basic mechanical properties of the formation must be determined.
There is considerable evidence, gathered from laboratory measurements in rock mechanics, of a good correlation between intrinsic rock strength and the dynamic elastic constants determined from sonic-velocity and density measurements.
The values of the mechanical properties determined from log data, such as the dynamic elastic constants derived from measuring the elastic wave velocities in the material, should be more a…
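For illustration, the dynamic elastic constants alluded to above follow from textbook rock-mechanics relations between the compressional and shear velocities and the bulk density. The sketch below assumes SI units and does not reproduce the paper's log-specific correlations for North Rumaila.

```python
# Sketch of the standard dynamic elastic constants computed from sonic
# velocities and bulk density (SI units assumed: m/s and kg/m^3). The
# paper's own log-derived correlations are not reproduced here.
def dynamic_elastic_constants(vp, vs, rho):
    """vp, vs: compressional/shear velocities (m/s); rho: density (kg/m^3)."""
    g = rho * vs**2                                   # shear modulus (Pa)
    nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))  # Poisson's ratio
    e = 2 * g * (1 + nu)                              # Young's modulus (Pa)
    k = rho * (vp**2 - 4 * vs**2 / 3)                 # bulk modulus (Pa)
    return {"shear": g, "poisson": nu, "young": e, "bulk": k}

# Example: velocities and density typical of a carbonate interval
print(dynamic_elastic_constants(vp=4500.0, vs=2500.0, rho=2650.0))
```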

Publication Date: Dec 1, 2024
Journal Name: Journal of Economics and Administrative Sciences
Nadaraya-Watson Estimation of a Circular Regression Model on Peak Systolic Blood Pressure Data

Purpose: The research aims to estimate models representing phenomena that follow the logic of circular (angular) data, accounting for the 24-hour periodicity of the measurements. Theoretical framework: The regression model is developed to account for the periodic nature of the circular scale, considering periodicity in the dependent variable y, the explanatory variables x, or both. Design/methodology/approach: Two estimation methods were applied: a parametric model, represented by the Simple Circular Regression (SCR) model, and a nonparametric model, represented by the Nadaraya-Watson Circular Regression (NW) model. The analysis used real data from 50 patients at Al-Kindi Teaching Hospital in Baghdad. Findings: The Mean Circular Error …
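A minimal sketch of a Nadaraya-Watson estimator with a von Mises kernel over a circular predictor may clarify the nonparametric method named above. The 24-hour-to-angle mapping, the kernel concentration kappa, and the synthetic blood-pressure values are assumptions, not the paper's model or data.

```python
# Minimal sketch of a Nadaraya-Watson estimator with a von Mises kernel
# for a circular explanatory variable (measurement time mapped to an
# angle over 24 hours). kappa plays the role of the bandwidth; the
# paper's exact model and hospital data are not reproduced here.
import numpy as np

def nw_circular(theta_grid, theta_obs, y_obs, kappa=4.0):
    """Estimate E[y | theta] on theta_grid; angles in radians."""
    # von Mises kernel weights: exp(kappa * cos(angular difference))
    w = np.exp(kappa * np.cos(theta_grid[:, None] - theta_obs[None, :]))
    return (w @ y_obs) / w.sum(axis=1)

rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, size=50)                       # measurement times
theta = 2 * np.pi * hours / 24                            # 24-h clock -> angle
y = 120 + 10 * np.sin(theta) + rng.normal(0, 3, size=50)  # synthetic SBP
grid = np.linspace(0, 2 * np.pi, 9)
print(np.round(nw_circular(grid, theta, y), 1))
```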

Publication Date: Dec 1, 2021
Journal Name: Baghdad Science Journal
Using Hierarchical Cluster Analysis and Fuzzy Cluster Analysis Methods for the Classification of Some Hospitals in Basra

In general, the importance of cluster analysis is that it evaluates elements by gathering homogeneous data; the main objective of the analysis is to sort elements into divisions so that each group is homogeneous, depending on many variables. This method of analysis is used to reduce data, generate and test hypotheses, and predict and fit models. The research aims to evaluate fuzzy cluster analysis, a special case of cluster analysis, and to compare the two methods: classical and fuzzy cluster analysis. The research covers government and private hospitals; the sample comprised 288 patients treated in 10 hospitals. …
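To make the contrast concrete, the sketch below runs hard (hierarchical) clustering via SciPy next to a minimal hand-written fuzzy c-means loop, in which every observation receives a membership degree per cluster rather than a single label. The synthetic data and parameter choices are assumptions, as the hospital data are not available.

```python
# Sketch contrasting the two methods in the abstract: hard hierarchical
# clustering (SciPy) vs. a minimal fuzzy c-means loop.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(5, 1, (20, 3))])

# Hierarchical (Ward) clustering: each observation gets exactly one label.
hard_labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Fuzzy c-means: memberships u sum to 1 across clusters per row."""
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))           # closer center -> higher degree
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

memberships, centers = fuzzy_cmeans(X)
print(hard_labels[:5], np.round(memberships[:5], 2))
```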

Publication Date: Oct 1, 2010
Journal Name: 2010 IEEE Symposium on Industrial Electronics and Applications (ISIEA)
Distributed t-way test suite data generation using exhaustive search method with map and reduce framework

Publication Date: Apr 14, 2023
Journal Name: Journal of Big Data
A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have little or inadequate data for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework is fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data generates a better DL model, and performance is also application-dependent. This issue is the main barrier for …
Publication Date: Apr 25, 2018
Journal Name: Ibn Al-Haitham Journal for Pure and Applied Sciences
Using Approximate Non-Bayesian Computation with Fuzzy Data to Estimate Inverse Weibull Parameters and the Reliability Function

In real situations, all observations and measurements are not exact numbers but are more or less non-exact, also called fuzzy. In this paper, we use approximate non-Bayesian computational methods to estimate inverse Weibull parameters and the reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and Expectation-Maximization techniques. In addition, the estimates of the parameters and reliability function are compared numerically through a Monte-Carlo simulation study …
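As an illustration of the Newton-Raphson step for crisp (non-fuzzy) data: if X follows an inverse Weibull distribution, then 1/X is Weibull with the same shape, so the classic profile-likelihood equation for the Weibull shape applies. The sketch below is a stand-in under that assumption; it does not reproduce the paper's fuzzy-data likelihood or its EM variant.

```python
# Newton-Raphson MLE sketch for the inverse Weibull (crisp data only),
# via the transformation 1/X ~ Weibull. Not the paper's fuzzy-data method.
import numpy as np

def inverse_weibull_mle(x, k=1.0, tol=1e-10, iters=100):
    ly = np.log(1.0 / np.asarray(x))             # log of the Weibull sample 1/X
    for _ in range(iters):
        w = np.exp(k * ly)                       # (1/x)^k
        a, b, c = w.sum(), (w * ly).sum(), (w * ly * ly).sum()
        g = 1.0 / k + ly.mean() - b / a          # profile score in the shape
        gp = -1.0 / k**2 - (c * a - b * b) / a**2
        step = g / gp
        k -= step                                # Newton-Raphson update
        if abs(step) < tol:
            break
    theta = (len(ly) / np.exp(k * ly).sum()) ** (1.0 / k)  # inverse Weibull scale
    return k, theta

rng = np.random.default_rng(2)
x = 1.0 / (0.5 * rng.weibull(2.0, size=2000))    # inverse Weibull: shape 2, scale 2
shape, scale = inverse_weibull_mle(x)
rel = lambda t: 1.0 - np.exp(-((scale / t) ** shape))  # reliability R(t)
print(round(shape, 2), round(scale, 2), round(float(rel(1.0)), 3))
```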

Publication Date: Jan 7, 2022
Journal Name: International Journal of Early Childhood Special Education
Hierarchical learning and its effect on learning some basic skills in fencing for third-stage students

M. H. Hamzah, A. F. Abbas, International Journal of Early Childhood Special Education, 2022.

Publication Date: Jan 1, 2017
Journal Name: Iraqi Journal of Science
Strong Triple Data Encryption Standard Algorithm using Nth Degree Truncated Polynomial Ring Unit

Cryptography is the process of transforming messages to prevent unauthorized access to data. One of the main problems, and an important part of cryptography with secret-key algorithms, is the key: for a higher level of secure communication, the key plays an important role. To increase the level of security in any communication, both parties must have a copy of the secret key, which, unfortunately, is not easy to achieve. The Triple Data Encryption Standard algorithm is weak due to its weak key generation, so the key must be reconfigured to make the algorithm more secure, effective, and strong. The encryption key enhances the security of the Triple Data Encryption Standard algorithm. This paper proposes a combination of two efficient encryption algorithms to …
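As a point of reference for the Triple DES half of such a combination, the sketch below uses pycryptodome (an assumed library choice) to generate a parity-adjusted, non-degenerate 24-byte key and encrypt in CBC mode. The paper's NTRU-based key handling is not reproduced; the random key merely stands in for one delivered by the public-key step.

```python
# Minimal 3DES sketch with pycryptodome (assumed library; not the
# paper's implementation). The random key stands in for a key that
# would be delivered via the NTRU public-key step.
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

# Reject degenerate keys (where 3DES collapses to single DES) and fix parity.
while True:
    try:
        key = DES3.adjust_key_parity(get_random_bytes(24))
        break
    except ValueError:
        continue

plaintext = b"secret message"
cipher = DES3.new(key, DES3.MODE_CBC)
ciphertext = cipher.encrypt(pad(plaintext, DES3.block_size))

# Decrypt with the same key and IV.
decipher = DES3.new(key, DES3.MODE_CBC, iv=cipher.iv)
assert unpad(decipher.decrypt(ciphertext), DES3.block_size) == plaintext
```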

Publication Date: May 15, 2017
Journal Name: Journal of Theoretical and Applied Information Technology
Anomaly detection in text data represented as a graph using the DBSCAN algorithm

Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all of the data into a graph concept frame (CFG). As is well known, the DBSCAN method groups data points of the same kind into clusters and treats points outside the clusters as noise or anomalies: the DBSCAN algorithm can detect abnormal points that lie beyond a certain set threshold (extreme values). However, not all anomalies are cases that are abnormal, unusual, or far from a specific group; there is also a type of data that does not recur but is considered abnormal with respect to the known group. The analysis showed that DBSCAN using the …
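A rough sketch of the general pipeline the abstract describes: represent the texts as a weighted graph of pairwise distances and let DBSCAN flag points outside any dense region (label -1) as anomalies. The TF-IDF/cosine graph construction here is an assumption; the paper's CFG construction is not reproduced.

```python
# Sketch of the general idea: texts -> pairwise-distance graph -> DBSCAN,
# with noise points (-1) reported as anomalies. Not the paper's CFG method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.cluster import DBSCAN

texts = [
    "network traffic looks normal today",
    "network traffic is normal and stable",
    "normal network traffic observed again",
    "completely unrelated gardening recipe",   # expected anomaly
]
dist = cosine_distances(TfidfVectorizer().fit_transform(texts))

# metric="precomputed" lets DBSCAN consume the graph's edge weights directly;
# points labeled -1 fall in no dense region and are flagged as anomalies.
labels = DBSCAN(eps=0.7, min_samples=2, metric="precomputed").fit_predict(dist)
print(labels)  # e.g. [0 0 0 -1]
```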
