Audio classification is the process of classifying audio into different types according to content. It arises in a wide variety of real-world problems; in each application the target subjects can be viewed as a specific type of audio, so the audio types vary and every type has to be treated carefully according to its significant properties. Feature extraction is an important process for audio classification. This work introduces several sets of features according to audio type; two types of audio (datasets) were studied. Two different feature sets are proposed: (i) a first-order gradient feature vector, and (ii) a local roughness feature vector. The experiments showed that the results are competitive with those obtained from other popular methods in this field, such as Zero Crossing Rate (ZCR), Amplitude Descriptor (AD), Short Time Energy (STE), and Volume (Vo). The test results indicated that the attained average classification accuracy improved up to 94.9232% for the training set and 95.8666% for the testing set. The classification performance of these two extracted feature sets is studied individually, and then they are used together as one feature set. Their overall performance is investigated; the test results showed that the proposed methods give high classification rates for audio.
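The abstract does not give formulas for its features, but the baseline features it compares against (ZCR, STE) have standard definitions, sketched below; the `first_order_gradient` function is a hypothetical reading of the paper's "first-order gradient feature vector" (mean absolute sample-to-sample difference), not the authors' confirmed definition.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent-sample sign changes in the frame (standard ZCR)."""
    signs = np.sign(np.asarray(frame, dtype=float))
    signs[signs == 0] = 1  # treat exact zeros as positive to avoid spurious crossings
    return float(np.mean(signs[:-1] != signs[1:]))

def short_time_energy(frame):
    """Mean squared amplitude of the frame (standard STE)."""
    return float(np.mean(np.asarray(frame, dtype=float) ** 2))

def first_order_gradient(frame):
    """Hypothetical first-order gradient feature: mean absolute
    difference between consecutive samples."""
    return float(np.mean(np.abs(np.diff(np.asarray(frame, dtype=float)))))
```

A fully alternating frame such as `[1, -1, 1, -1]` gives the maximum ZCR of 1.0, which matches the intuition that ZCR separates noisy/unvoiced audio from smooth/voiced audio.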
Maintaining and breeding fish in a pond is a crucial task for a large fish breeder. The main issues for fish breeders are pond management, such as producing food for the fish and maintaining pond water quality. Dynamic and technological systems for breeders have been invented and have become important for aquaponic breeders to obtain maximum profit while maintaining fish. This research presents a developed prototype of a dynamic fish feeder based on fish existence. The dynamic fish feeder is programmed to feed where sensors detect the fish's existence. A microcontroller board NodeMCU ESP8266 is programmed for the developed h
Recommender systems are tools for making sense of the huge amount of data available in the internet world. Collaborative filtering (CF) is one of the knowledge-discovery methods used most successfully in recommendation systems. Memory-based collaborative filtering emphasizes using facts about present users to predict new items for the target user. Similarity measures are the core operations in collaborative filtering, and prediction accuracy depends mostly on the similarity calculations. In this study, a combination of weighted parameters and traditional similarity measures is used to calculate relationships among users over the MovieLens data set rating matrix. The advantages and disadvantages of each measure are spotted. From the study, a n
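The core similarity step of memory-based CF can be sketched as follows. The cosine measure over co-rated items is standard; the `significance_weighted_similarity` function is only one plausible reading of the "weighted parameters" the abstract mentions (a common overlap-based shrinkage), not the authors' confirmed scheme.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity over items co-rated by both users
    (a rating of 0 is treated as 'unrated')."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    mask = (u > 0) & (v > 0)          # items both users actually rated
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def significance_weighted_similarity(u, v, gamma=50):
    """Illustrative weighting: shrink the similarity when the two users
    share few co-rated items (hypothetical; one common form of
    significance weighting in memory-based CF)."""
    overlap = int(np.sum((np.asarray(u) > 0) & (np.asarray(v) > 0)))
    return min(overlap, gamma) / gamma * cosine_similarity(u, v)
```

With only 3 co-rated items out of a possible 50, two perfectly agreeing users still get a heavily shrunk score, which is the point of such weighting: small overlaps are weak evidence of real similarity.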
The rapid development of telemedicine services and the requirements for exchanging medical information between physicians, consultants, and health institutions have made the protection of patients' information an important priority for any future e-health system. The protection of medical information, including the cover (i.e., the medical image), has a specificity that differs slightly from the requirements for protecting other information. It is necessary to preserve the cover carefully because of its importance on the receiving side, as medical staff use this information to provide a diagnosis and save a patient's life. If the cover is tampered with, the goal of telemedicine fails. Therefore, this work provides an in
Image recognition is one of the most important applications of information processing. In this paper, a comparison between 3-level techniques for image recognition is carried out using the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT): stationary-stationary-stationary (sss), stationary-stationary-wavelet (ssw), stationary-wavelet-stationary (sws), stationary-wavelet-wavelet (sww), wavelet-stationary-stationary (wss), wavelet-stationary-wavelet (wsw), wavelet-wavelet-stationary (wws), and wavelet-wavelet-wavelet (www). A comparison between these techniques has been implemented according to the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), compression ratio (CR), and the coding noise e(n) of each third
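Two of the evaluation metrics named above, RMSE and PSNR, have standard definitions that can be sketched directly (assuming 8-bit images, so a peak value of 255; the abstract does not state the bit depth):

```python
import numpy as np

def rmse(original, reconstructed):
    """Root mean square error between two equally sized images/signals."""
    diff = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite when the images are identical."""
    e = rmse(original, reconstructed)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```

Higher PSNR (lower RMSE) means the 3-level decomposition reconstructed the image more faithfully, which is how the paper ranks the sss…www variants.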
The objective of this research is to analyze the content of the science textbook at the elementary level according to the dimensions of sustainable development for the academic year (2015-2016). To achieve this goal, a list of the dimensions of sustainable development to be included in primary-school science textbooks was built after reviewing the relevant literature, research, and studies. A list of the three dimensions of sustainable development (social, economic, and environmental) was reached; in its initial form it consisted of (63) sub-issues divided across the three dimensions. The list was offered to a group of arbitrators and specialists in curricula and teaching methods, and thus the list consiste
The study aims to analyze the content of computer textbooks for the preparatory stage according to logical thinking. The researcher followed the descriptive analytical research approach (content analysis) and adopted the explicit idea during the analysis process. A content-analysis tool designed around the mental processes employed during logical thinking was used to obtain the study results. The findings revealed that logical-thinking skills formed (52%) of the fourth preparatory textbook and (47%) of the fifth preparatory textbook.
The current study aims to identify the needs in the stories of the Brothers Grimm. The research sample consisted of (3) stories, namely: 1- The Thorn Rose (Sleeping Beauty), 2- Snow White, and 3- Little Red Riding Hood. The number of pages analyzed reached (15.5). To achieve the research objectives, Murray's classification of needs was adopted, which contains (36) basic needs further divided into (129) sub-needs. The idea was adopted as the unit of analysis and repetition as the unit of enumeration. Reliability was extracted in two ways: 1- agreement between the researcher and himself over time, where the agreement coefficient reached 97%; the second was agreement between the researcher and tw
In this research, fuzzy nonparametric methods based on some smoothing techniques were applied to real data on the Iraqi stock market, specifically data about the Baghdad Company for Soft Drinks for the year 2016 (the period 1/1/2016-31/12/2016). A sample of (148) observations was obtained in order to construct a model of the relationship between the stock prices (low, high, modal) and the traded value. By comparing the results of the goodness-of-fit (G.O.F.) criterion for three techniques, we note that the lowest value of this criterion was for the K-Nearest Neighbor method with the Gaussian function.
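The winning technique, kNN smoothing with a Gaussian kernel, can be sketched as a Nadaraya-Watson-style estimator restricted to the k nearest training points. This is a generic sketch of that smoother class, not the paper's exact fuzzy formulation; the bandwidth and k values are illustrative assumptions.

```python
import numpy as np

def knn_gaussian_smoother(x_train, y_train, x0, k=5, bandwidth=1.0):
    """Predict y at x0 as the Gaussian-kernel-weighted average of the
    k nearest training observations (a plain kNN smoother sketch)."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    d = np.abs(x_train - x0)              # distances to the query point
    idx = np.argsort(d)[:k]               # indices of the k nearest neighbours
    w = np.exp(-0.5 * (d[idx] / bandwidth) ** 2)  # Gaussian kernel weights
    return float(np.sum(w * y_train[idx]) / np.sum(w))
```

Because the weights decay smoothly with distance, nearby observations dominate the prediction, which is why kernel choice (here, Gaussian) directly affects the goodness-of-fit criterion the paper compares.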
Aspect categorisation and its utmost importance in the field of Aspect-based Sentiment Analysis (ABSA) have encouraged researchers to improve topic-model performance for modelling aspects into categories. In general, the majority of current methods implement parametric models that require a pre-determined number of topics beforehand. However, this is not efficiently undertaken with unannotated text data, which lack any class label. Therefore, the current work presents a novel non-parametric model that draws the number of topics from the semantic association present between opinion targets (i.e., aspects) and their respective expressed sentiments. The model incorporates Semantic Association Rules (SAR) into the Hierarchical Dirichlet Proce