Anomaly detection remains a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all the data into a concept frame graph (CFG) representation. As is well known, DBSCAN groups data points of the same kind into clusters, while points that fall outside the behavior of any cluster are treated as noise or anomalies. In this way, the standard DBSCAN algorithm can detect abnormal points that lie beyond a given distance threshold (extreme values). However, not all anomalies are of this kind, i.e., points that are unusual or far from a specific group; there is also a type of data point that does not recur, yet is considered abnormal with respect to the known groups. The analysis showed that the improved DBSCAN algorithm can detect this type of anomaly as well. Thus, our approach is effective in finding abnormalities.
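As a point of reference, the baseline behavior that the proposal builds on can be sketched with standard DBSCAN, which labels points outside every cluster as noise. This is a minimal scikit-learn illustration, not the CFG-enhanced variant described above; the data and the eps/min_samples settings are purely illustrative.

```python
# Minimal baseline: plain DBSCAN flags points that belong to no cluster (label -1)
# as noise/anomalies. This is only the starting point that the CFG-enhanced method
# extends; the data and parameters below are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
dense = rng.normal(loc=0.0, scale=0.3, size=(200, 2))     # one dense cluster
outliers = rng.uniform(low=-3.0, high=3.0, size=(10, 2))  # sparse, far-away points
X = np.vstack([dense, outliers])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
anomalies = X[labels == -1]                               # noise points = candidate anomalies
print(f"{len(anomalies)} points flagged as noise/anomalies")
```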
The aim of the present study was to develop a gel formulation of microsponges of the poorly soluble drug meloxicam (MLX) in order to enhance the release and dissolution of MLX, which are the main limitations for its preparation in topical forms. Skin delivery is also an alternative route of administration for MLX that can minimize gastrointestinal (GI) side effects and improve patient compliance. The microsponges of MLX were prepared by the quasi-emulsion solvent diffusion method. The effects of drug:polymer ratio, stirring time, and Eudragit polymer type on the physical characteristics of the microsponges were investigated, and the microsponges were characterized for production yield, loading efficiency, particle size, surface morphology, and in vitro drug release. The selec…
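For reference, the production yield and loading efficiency mentioned above are conventionally computed as follows; these are the standard definitions used in the microsponge literature, not formulas stated in the abstract itself.

```latex
\text{Production yield (\%)} =
  \frac{\text{practical mass of microsponges recovered}}
       {\text{theoretical mass of drug + polymer}} \times 100,
\qquad
\text{Loading efficiency (\%)} =
  \frac{\text{actual drug content in microsponges}}
       {\text{theoretical drug content}} \times 100
```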
The need to create an optimal water quality management process has motivated researchers to pursue prediction modeling development. One of the widely used forecasting models is the seasonal autoregressive integrated moving average (SARIMA) model. In the present study, a SARIMA model was developed in R software to fit a time series of monthly fluoride content collected from six stations on the Tigris River for the period from 2004 to 2014. The adequate SARIMA model, with the lowest Akaike information criterion (AIC) and mean squared error (MSE), was found to be SARIMA (2,0,0)(0,1,1). The model parameters were identified and diagnosed to derive the forecasting equations at each selected location. The correlation coefficien…
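A minimal sketch of the selected model structure is shown below. The study itself was carried out in R, so this Python/statsmodels version is only an illustration; it assumes a seasonal period of 12 for the monthly series and uses synthetic data in place of the Tigris fluoride record.

```python
# Illustrative fit of a SARIMA(2,0,0)(0,1,1)[12] structure to a monthly series.
# Synthetic data stands in for the fluoride record; the seasonal period 12 is assumed.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
dates = pd.date_range("2004-01-01", "2014-12-01", freq="MS")
y = pd.Series(0.3 + 0.05 * np.sin(2 * np.pi * dates.month / 12)
              + rng.normal(0, 0.02, len(dates)), index=dates)

model = SARIMAX(y, order=(2, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.aic)                 # AIC, the selection criterion used in the study
print(fit.forecast(steps=12))  # 12-month-ahead forecast
```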
The issue of liquidity, profitability, employment of funds, and capital adequacy is one of the most important issues that has gained high consideration from authors and researchers in their attempts to find out the real relationship among these factors and how a balance can be achieved, which is the main goal of every bank that accepts deposits.
To bring the study variables together, the research formulated the problem of the study, which concerns the bank's ability to enlarge its profits without dissipating its liquidity, since such dissipation would reflect negatively on the bank's reputation as well as its customers' trust. For all these reasons, the researcher has proposed a set of aims, the most important of which is the estimation of the bank's profitability; liquid…
This study discusses a biased estimator of the negative binomial regression model known as the Liu estimator. This estimator is used to reduce variance and to overcome the problem of multicollinearity between explanatory variables. Other estimators were also considered, such as the ridge regression and maximum likelihood estimators. This research aims at theoretical comparisons between the new estimator (the Liu estimator) and the estimators…
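For context (the abstract does not state the formula), the Liu estimator for a generalized linear model such as negative binomial regression is commonly written as a shrinkage of the maximum likelihood estimator:

```latex
\hat{\beta}_d = \left(X^{\top}\hat{W}X + I_p\right)^{-1}
               \left(X^{\top}\hat{W}X + d\,I_p\right)\hat{\beta}_{\mathrm{ML}},
\qquad 0 < d < 1,
```

where \hat{W} is the estimated weight matrix from the iteratively reweighted least-squares fit and d is the Liu biasing parameter; d = 1 recovers the maximum likelihood estimator, while smaller values trade bias for reduced variance under multicollinearity.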
A resume is the first impression between you and a potential employer. Therefore, the importance of a resume can never be underestimated. Selecting the right candidates for a job within a company can be a daunting task for recruiters when they have to review hundreds of resumes. To reduce time and effort, we can use NLTK and Natural Language Processing (NLP) techniques to extract essential data from a resume. NLTK is a free, open-source, community-driven project and the leading platform for building Python programs to work with human language data. To select the best resume according to the company's requirements, an algorithm such as k-nearest neighbors (KNN) is used. To be selected from hundreds of resumes, your resume must be one of the best. Theref…
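A hedged sketch of the kind of pipeline described above: NLTK handles tokenization and stop-word removal, TF-IDF turns the cleaned text into features, and a KNN classifier matches resumes to job categories. The resume texts, category labels, and the one-nearest-neighbor setting are illustrative assumptions, not details taken from the study.

```python
# Illustrative resume-screening pipeline: NLTK preprocessing + TF-IDF + KNN.
# The resumes, labels, and n_neighbors value below are placeholder assumptions.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

nltk.download("punkt", quiet=True)      # tokenizer models
nltk.download("punkt_tab", quiet=True)  # needed by newer NLTK versions
nltk.download("stopwords", quiet=True)

def preprocess(text: str) -> str:
    """Lower-case, tokenize, and drop stop words / non-alphabetic tokens."""
    stops = set(stopwords.words("english"))
    tokens = [t.lower() for t in word_tokenize(text)
              if t.isalpha() and t.lower() not in stops]
    return " ".join(tokens)

resumes = ["Python developer with machine learning and NLP experience",
           "Accountant skilled in auditing, budgeting and financial reporting"]
labels = ["data_science", "finance"]          # hypothetical job categories

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(preprocess(r) for r in resumes)
knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)

query = vectorizer.transform([preprocess("NLP engineer experienced in Python and text mining")])
print(knn.predict(query))                     # best-matching category for the new resume
```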
In this study, the land surface of the Razzaza Lake area was studied over 25 years using remote sensing methods. Images from the Landsat 5 (TM) and Landsat 8 (OLI) satellites were used to study and determine the components of the land cover. The study covered the years 1995-2021 at five-year intervals; since this region is uninhabited, change in the land cover is slow. The land cover was divided into three main classes and seven subclasses and classified using the maximum likelihood classifier, with the help of training sets collected to represent the classes that made up the land cover. The changes detected in the land cover were studied by taking 1995 as the reference year. It was found that there was a significant reduction in the water mass…
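A sketch of per-pixel maximum likelihood classification of the kind used for these land-cover maps is given below, assuming each class is modeled as a multivariate Gaussian fitted to its training pixels. The band values, class names, and training samples are synthetic stand-ins, not the study's actual Landsat data or classes.

```python
# Illustrative maximum likelihood land-cover classification: one Gaussian per class,
# every pixel assigned to the class with the highest likelihood. All data synthetic.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
n_bands = 6  # e.g. the reflective Landsat bands

# Hypothetical training pixels (rows = pixels, columns = spectral bands) per class.
training = {
    "water":      rng.normal(0.05, 0.01, (50, n_bands)),
    "bare_soil":  rng.normal(0.30, 0.05, (50, n_bands)),
    "vegetation": rng.normal(0.20, 0.03, (50, n_bands)),
}

# Fit one Gaussian (mean vector + covariance matrix) per land-cover class.
models = {name: multivariate_normal(mean=s.mean(axis=0), cov=np.cov(s, rowvar=False))
          for name, s in training.items()}

pixels = rng.normal(0.2, 0.1, (1000, n_bands))    # stand-in for image pixels
log_likes = np.column_stack([m.logpdf(pixels) for m in models.values()])
labels = np.array(list(models))[np.argmax(log_likes, axis=1)]
print(dict(zip(*np.unique(labels, return_counts=True))))   # pixel count per class
```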