Arabic text categorization for pattern recognition is challenging. We propose, for the first time, a novel holistic clustering-based method for classifying Arabic writers. The categorization is accomplished stage-wise. First, the document images are segmented into lines, words, and characters. Second, structural and statistical features are extracted from the segmented portions. Third, the F-measure is used to evaluate the performance of the extracted features and their combinations across different linkage methods, for each distance measure and for different numbers of groups. Finally, experiments are conducted on the standard KHATT dataset of Arabic handwritten text, comprising varying samples from 1000 writers. The results in the generation step are obtained from multiple runs of the individual clustering methods for each distance measure. The best results are achieved when intensity, line slopes and their
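The clustering-and-evaluation stage described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the 2-D feature vectors are hypothetical stand-ins for the real structural/statistical features, and only a few linkage methods and distance measures are swept. Each resulting partition is scored with a pairwise F-measure.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def pairwise_f_measure(labels_true, labels_pred):
    """Pairwise F-measure: treat every pair of samples as a binary
    decision (same cluster vs. not) and compute precision/recall."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        if same_pred and same_true:
            tp += 1
        elif same_pred and not same_true:
            fp += 1
        elif same_true and not same_pred:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

rng = np.random.default_rng(0)
# Toy stand-in for per-writer feature vectors: two "writers", 2-D features.
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
y = np.array([0] * 10 + [1] * 10)

# Sweep linkage methods and distance measures, as in the evaluation stage.
for method in ("single", "complete", "average"):
    for metric in ("euclidean", "cityblock"):
        Z = linkage(pdist(X, metric=metric), method=method)
        pred = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 groups
        print(method, metric, round(pairwise_f_measure(y, pred), 3))
```

On well-separated toy data every combination scores near 1.0; on real handwriting features the sweep is what surfaces the best-performing combination.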
In this research, a study is introduced on the effect of several environmental factors on the performance of an already constructed quality inspection system, which was designed using a transfer learning approach based on convolutional neural networks. The system comprises two sets of layers: a set of layers transferred from a pretrained model (DenseNet121) and a custom set of classification layers. It was designed to discriminate between damaged and undamaged helical gears according to the configuration of the gear, regardless of its dimensions, and the model showed good performance discriminating between the two products under ideal conditions with high-resolution images.
Thus, this study aimed to test the system's performance at poor s
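One common way to probe such a system under non-ideal capture conditions is to synthesize them. The sketch below is a hypothetical illustration, not part of the study: it degrades a float image with block-average downsampling, dimmed illumination, and additive sensor noise using NumPy.

```python
import numpy as np

def degrade(image, scale=4, noise_sigma=0.1, brightness=0.6, seed=0):
    """Simulate poor capture conditions on a float image in [0, 1]:
    block-average downsampling (low resolution), reduced illumination,
    and additive Gaussian sensor noise."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Low resolution: average over scale x scale blocks, then repeat back up.
    low = image[: h - h % scale, : w - w % scale]
    low = low.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    low = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)
    # Dim lighting plus Gaussian noise, clipped to the valid range.
    noisy = brightness * low + rng.normal(0.0, noise_sigma, low.shape)
    return np.clip(noisy, 0.0, 1.0)

img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # synthetic test pattern
poor = degrade(img)
print(poor.shape, round(float(poor.mean()), 3))
```

Running a trained classifier on such degraded copies, at several severity levels, gives a robustness curve rather than a single ideal-conditions accuracy.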
Motives: Baghdad is the capital city and an important political, administrative, social, cultural and economic centre of Iraq. Baghdad's growth and development have been significantly influenced by efforts to accommodate the various needs of its steadily growing population. Uncontrolled population and urban growth have exerted negative effects along numerous dimensions, including environmental sustainability, because urban expansion has occurred in green spaces within the city and the surrounding areas. Aim: The aim of this study was to examine the planning solutions in Baghdad's green areas in the past and at present, and to identify the key changes in the city's green areas, including changes in the ratio of green urban spaces to the tota
The data preprocessing step is an important step in web usage mining because of the nature of log data, which are heterogeneous, unstructured, and noisy. Given the scalability and efficiency requirements of pattern-discovery algorithms, a preprocessing step must be applied. In this study, the sequential methodologies utilized in the preprocessing of web server log data are comprehensively evaluated and meticulously examined, with an emphasis on sub-phases such as session identification, user identification, and data cleansing.
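A minimal sketch of the user- and session-identification sub-phases, assuming toy log records already cleansed, the common (IP, user-agent) heuristic for user identification, and a 30-minute inactivity timeout for session splitting. The record format and timeout are illustrative assumptions, not from the study.

```python
from datetime import datetime, timedelta

# Toy log records after data cleansing: (ip, user_agent, timestamp, url).
LOG = [
    ("10.0.0.1", "Firefox", "2024-01-01 10:00:00", "/home"),
    ("10.0.0.1", "Firefox", "2024-01-01 10:05:00", "/products"),
    ("10.0.0.2", "Chrome",  "2024-01-01 10:06:00", "/home"),
    ("10.0.0.1", "Firefox", "2024-01-01 11:30:00", "/home"),  # new session
]

def sessionize(records, timeout=timedelta(minutes=30)):
    """User identification by (ip, user_agent); session identification
    by splitting a user's requests when the gap exceeds `timeout`."""
    sessions = {}
    last_seen = {}
    for ip, agent, ts, url in sorted(records, key=lambda r: r[2]):
        user = (ip, agent)
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        if user not in last_seen or t - last_seen[user] > timeout:
            sessions.setdefault(user, []).append([])  # start a new session
        sessions[user][-1].append(url)
        last_seen[user] = t
    return sessions

result = sessionize(LOG)
print(result[("10.0.0.1", "Firefox")])  # → [['/home', '/products'], ['/home']]
```

Real log preprocessing additionally filters robot traffic, static resources, and failed requests before this stage.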
Background: Endodontically treated teeth have low resistance to fracture under occlusal forces. The strengthening effect of bonded esthetic onlay restorations on weakened teeth has been reported. This study aimed to assess the fracture resistance of endodontically treated premolars restored with composite, with and without cuspal coverage, using direct and indirect techniques. The indirect technique was performed with a CAD/CAM system (computer-aided design/computer-aided manufacturing) and by laboratory processing. Materials and methods: Forty extracted human maxillary premolars of approximately comparable sizes were divided into four groups. Group (A): ten endodontically treated teeth directly filled with Filtek Z250xt without cuspal coverage. Group
Objectives: To identify the impact of the brain consensus model on the acquisition of Arabic grammar concepts among fourth-grade students. Methodology: The experimental method was used, and a partial-control experimental design was adopted. There were 30 female students in the experimental group and 30 female students in the control group. The two researchers statistically equated the students of the two groups on several variables and used appropriate statistical means to analyse the results, including the two-independent-samples test, chi-square (χ²), and the Cronbach's alpha equation. Results: The experimental group outperformed the control group. The results showed a statistically significant difference at the significance level (0.05) for
Image classification is the process of finding common features in images from various classes and using them to categorize and label the images. The main obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new “hybrid learning” approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class
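The hybrid-learning idea (a frozen deep feature extractor feeding a classical classifier) can be illustrated without heavy dependencies. In the sketch below every component is a stand-in assumption: a fixed random projection plays the role of the VGG-16 backbone, the toy image classes are synthetic, and a small k-NN replaces the study's classifiers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in feature extractor: a fixed random projection + ReLU, playing
# the role of a frozen pretrained backbone (VGG-16 in the study).
W = rng.normal(size=(64, 16))
def extract_features(images):
    return np.maximum(images.reshape(len(images), -1) @ W, 0.0)

# Two toy "classes" of 8x8 images: bright-left vs. bright-right.
def make_class(bright_left, n=20):
    imgs = rng.normal(0.2, 0.05, (n, 8, 8))
    if bright_left:
        imgs[:, :, :4] += 0.8
    else:
        imgs[:, :, 4:] += 0.8
    return imgs

X = np.vstack([make_class(True), make_class(False)])
y = np.array([0] * 20 + [1] * 20)
feats = extract_features(X)  # deep features, extracted once

def knn_predict(train_f, train_y, test_f, k=3):
    """Minimal k-NN classifier trained on the extracted features."""
    d = ((test_f[:, None, :] - train_f[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (train_y[nearest].mean(axis=1) > 0.5).astype(int)

# Hold-out split: train on the first 15 of each class, test on the rest.
train, test = np.r_[0:15, 20:35], np.r_[15:20, 35:40]
pred = knn_predict(feats[train], y[train], feats[test])
acc = (pred == y[test]).mean()
print("accuracy:", acc)
```

The design point is the split of labor: the (frozen) extractor is run once per image, and only the cheap downstream classifier is trained, which is what makes the hybrid attractive when labeled data are scarce.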
Big data analysis has important applications in many areas, such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization of the data for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such a
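Entropy discretization itself can be sketched compactly. The code below is an illustrative single-cut version and does not model the study's multi-resolution summarization structure: it picks the cut point on a numeric attribute that minimizes the class-weighted entropy of the two resulting bins.

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a class-label sequence."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_split(values, labels):
    """Entropy discretization: choose the cut point on a numeric
    attribute that minimizes the weighted class entropy of the two bins."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float("inf"), None)
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no cut between equal values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        w = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        if w < best[0]:
            best = (w, cut)
    return best[1]

# Toy attribute: small values are class 0, large values are class 1.
vals = [1.0, 1.2, 1.1, 4.0, 4.2, 4.1]
labs = [0, 0, 0, 1, 1, 1]
print(best_split(vals, labs))  # cut falls between 1.2 and 4.0
```

Applied recursively (with a stopping rule such as MDL), this single-cut step yields the multi-interval discretizations that Bayes classifiers consume.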