Big data analysis has important applications in many areas such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such as decision trees and nearest neighbor search. The proposed method can handle streaming data efficiently and, for entropy discretization, provides the optimal split value.
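To illustrate the entropy discretization step in isolation, the following is a minimal sketch of standard entropy-based binary splitting over raw values. It does not reproduce the multi-resolution summarization structure or the streaming variant described above; the function and variable names are illustrative only.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def best_entropy_split(values, labels):
    """Scan candidate cut points and return the split value that minimizes
    the weighted class entropy of the two resulting partitions."""
    order = np.argsort(values)
    values = np.asarray(values)[order]
    labels = np.asarray(labels)[order]
    best_split, best_score = None, np.inf
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue  # candidate cuts lie between distinct consecutive values
        split = (values[i] + values[i - 1]) / 2.0
        left, right = labels[:i], labels[i:]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if score < best_score:
            best_split, best_score = split, score
    return best_split, best_score
```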
This article presents the results of an experimental investigation of using carbon fiber–reinforced polymer sheets to enhance the behavior of reinforced concrete deep beams with large web openings in shear spans. A set of 18 specimens was fabricated and tested up to failure to evaluate the structural performance in terms of cracking, deformation, and load-carrying capacity. All tested specimens were 1500 mm long, 500 mm deep, and 150 mm wide. The parameters studied were the opening size, the opening location, and the strengthening factor. Two deep beams were implemented as control specimens without openings and without strengthening. Eight deep beams were fabricated with openings but without strengthening, while
Deep learning algorithms have recently achieved a lot of success, especially in the field of computer vision. This research aims to describe the classification method applied to a dataset of multiple types of images (Synthetic Aperture Radar (SAR) images and non-SAR images). In this classification, transfer learning was used, followed by fine-tuning. In addition, architectures pre-trained on the well-known ImageNet database were used. The VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input dataset consists of five classes, including the SAR image class (houses) and the non-SAR image classes (Cats, Dogs, Horses, and Humans). The Conv
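A minimal sketch of the described transfer-learning setup, assuming Keras/TensorFlow, a 224x224 RGB input, and an illustrative dense classifier head; the layer sizes and training hyperparameters are assumptions, not the paper's exact configuration.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# VGG16 pre-trained on ImageNet, used as a frozen feature extractor
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base for transfer learning

# new classifier head trained on the extracted features
# (5 classes assumed: SAR houses, cats, dogs, horses, humans)
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```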
The aggregation capacity of human red blood cells lies between that of the non-aggregated erythrocyte and remarkably full sedimentation. As the ability to aggregate is attributed to many factors, such as the availability of macromolecules and plasma lipids, the role of the plasma lipid profile on RBC aggregation and sedimentation changes in normal subjects and diabetic patients is studied. Serum lipid profile measurements (total cholesterol, triglycerides, HDL, LDL, VLDL) in normal and diabetic subjects were also made. The principle of measurement involves detecting the laser light transmitted through a suspension of 10% diluted red blood cells in plasma. In all diabetics, rouleaux formation and the sedimentation rate are enhanced.
Two unsupervised classifiers for optimum multilevel thresholding are presented: fast Otsu and k-means. These nonparametric methods provide an efficient procedure to separate the regions (classes) by selecting optimum levels, either on the gray levels of the image histogram (the Otsu classifier) or on the gray levels of the image intensities (the k-means classifier), which represent the threshold values of the classes. To compare the experimental results of these classifiers, the computation time and the number of iterations needed for the k-means classifier to converge to the optimum class centers are recorded. The variation in the recorded computation time for the k-means classifier is discussed.
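A minimal sketch of the k-means variant operating directly on gray levels, using NumPy only; the evenly spaced initialization, convergence tolerance, and midpoint thresholding convention are assumptions rather than the exact procedure evaluated in the paper.

```python
import numpy as np

def kmeans_thresholds(image, k=3, max_iter=100, tol=1e-3):
    """Cluster pixel intensities with 1-D k-means; midpoints between the
    sorted class centers serve as the multilevel thresholds."""
    x = image.astype(float).ravel()
    centers = np.linspace(x.min(), x.max(), k)  # evenly spaced initial centers
    for _ in range(max_iter):
        # assign each intensity to its nearest class center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([
            x[labels == j].mean() if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.max(np.abs(new_centers - centers)) < tol:
            break
        centers = new_centers
    centers = np.sort(new_centers)
    return (centers[:-1] + centers[1:]) / 2.0  # thresholds between class centers
```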
When soft tissue planning is important, Magnetic Resonance Imaging (MRI) is usually the medical imaging technique of choice. In this work, we present a method for automated diagnosis based on the classification of magnetic resonance images. The presented technique has two main stages: feature extraction and classification. We obtained the features corresponding to the MRI images by implementing the Discrete Wavelet Transformation (DWT), forward and inverse, and textural properties, such as rotation-invariant texture features based on Gabor filtering, and evaluate the meaning of every
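A simplified sketch of the two-part feature extraction stage, assuming PyWavelets and scikit-image on a 2-D grayscale slice; the wavelet family, decomposition level, Gabor frequency, and the statistics kept per sub-band are assumptions, and the rotation-invariant pooling described above is not reproduced.

```python
import numpy as np
import pywt
from skimage.filters import gabor

def extract_features(image):
    """Concatenate wavelet sub-band statistics with Gabor filter responses."""
    features = []
    # 2-level 2-D discrete wavelet transform (Haar wavelet assumed here)
    coeffs = pywt.wavedec2(image, wavelet="haar", level=2)
    for detail in coeffs[1:]:            # (LH, HL, HH) detail sub-bands per level
        for band in detail:
            features += [band.mean(), band.std()]
    # Gabor responses at a few orientations (frequency 0.2 assumed)
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(image, frequency=0.2, theta=theta)
        features += [real.mean(), real.var()]
    return np.array(features)
```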
Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research in the last five years based on the dataset, year, algorithms, and the accuracy th
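For readers unfamiliar with the three phases, a generic sketch of such a pipeline with scikit-learn follows; TF-IDF features and a Naive Bayes classifier are illustrative choices, not those of any particular surveyed study, and Arabic-specific preprocessing such as normalization and stemming is omitted here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# train_documents / train_labels are assumed to be lists of preprocessed
# Arabic texts and their category names (hypothetical variable names).
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),   # feature extraction: TF-IDF term weights
    ("clf", MultinomialNB()),       # classification stage
])
# pipeline.fit(train_documents, train_labels)
# predictions = pipeline.predict(test_documents)
```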