In this research, a group of gray texture images from the Brodatz database was studied by building a feature database for the images using the gray level co-occurrence matrix (GLCM), with a pixel distance of one unit and four angles (0, 45, 90, and 135 degrees). The k-means classifier was used to group the images into two to eight classes for each angle used in the co-occurrence matrix. The distribution of the images over the classes was compared for every pair of methods (projection of one class onto another); the distribution of images was uneven, with one class being dominant. The classification results were studied for all cases using the confusion matrix between every two cases (two different angles with the same number of classes), and the agreement percentage between the classification results of the various methods was calculated.
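A minimal sketch of the kind of GLCM feature extraction and k-means clustering described above, assuming scikit-image and scikit-learn; the feature set and settings shown are illustrative rather than the paper's exact configuration.

```python
# Sketch: GLCM texture features at one angle (distance = 1) followed by k-means.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(image, angle):
    """Contrast, correlation, energy and homogeneity for one angle (distance = 1)."""
    glcm = graycomatrix(image, distances=[1], angles=[angle],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "correlation", "energy", "homogeneity")]

def cluster_textures(images, angle, n_clusters):
    """Cluster a list of 8-bit gray images into n_clusters classes."""
    features = np.array([glcm_features(img, angle) for img in images])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

# Example: labels at 0 and 45 degrees, four classes (scikit-image takes angles in radians).
# labels_0  = cluster_textures(brodatz_images, 0.0, 4)
# labels_45 = cluster_textures(brodatz_images, np.pi / 4, 4)
```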
There is evidence that channel estimation in communication systems plays a crucial role in recovering the transmitted data. In recent years, there has been increasing interest in solving the problems of channel estimation and equalization, especially when the channel impulse response follows a fast time-varying Rician fading distribution, meaning that it changes rapidly. Optimal channel estimation and equalization are therefore needed to recover the transmitted data. This paper compares the epsilon normalized least mean square (ε-NLMS) and recursive least squares (RLS) algorithms by evaluating their ability to track multiple fast time-varying Rician fading channels with different values of Doppler
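For reference, a minimal NumPy sketch of the ε-NLMS update used in this kind of comparison; the filter length, step size mu and regularizer eps below are illustrative values, not the paper's settings.

```python
# Sketch: ε-NLMS adaptive filter tracking a channel from input x and received signal d.
import numpy as np

def epsilon_nlms(x, d, num_taps=8, mu=0.5, eps=1e-3):
    """Return the estimated tap weights and the a-priori error at each step."""
    w = np.zeros(num_taps, dtype=complex)          # channel estimate
    errors = np.zeros(len(x), dtype=complex)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]                # most recent input vector
        e = d[n] - np.vdot(w, u)                   # a-priori error, w^H u
        w = w + (mu / (eps + np.vdot(u, u).real)) * np.conj(e) * u  # normalized update
        errors[n] = e
    return w, errors
```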
Pan-sharpening (image fusion) is the procedure of merging suitable information from two or more images into a single image. Image fusion techniques allow different information sources to be combined to improve image quality and increase its utility for a particular application. In this research, six pan-sharpening methods have been implemented between panchromatic and multispectral images: Ehlers, color normalized, Gram-Schmidt, local mean and variance matching, Daubechies of rank two, and Symlets of rank four wavelet transforms. Two images captured by two different sensors, Landsat-8 and WorldView-2, have been adopted to achieve the fusion purpose. Different fidelity metrics like MS
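A minimal sketch of a wavelet-substitution pan-sharpening step of the kind listed above (Daubechies-2 / Symlet-4 via PyWavelets); registration and resampling of the multispectral band to the panchromatic grid are assumed to have been done beforehand.

```python
# Sketch: replace the detail sub-bands of each multispectral band with those of the PAN image.
import numpy as np
import pywt

def wavelet_pansharpen(ms_band, pan, wavelet="db2"):
    """Fuse one upsampled multispectral band with the panchromatic image."""
    ms_approx, _ms_details = pywt.dwt2(ms_band.astype(float), wavelet)
    _pan_approx, pan_details = pywt.dwt2(pan.astype(float), wavelet)
    # Keep the MS approximation (spectral content), inject PAN details (spatial content).
    return pywt.idwt2((ms_approx, pan_details), wavelet)

# fused = np.dstack([wavelet_pansharpen(ms[..., b], pan, "sym4")
#                    for b in range(ms.shape[-1])])
```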
Big data analysis has important applications in many areas such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization of the data for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such a
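A minimal sketch of a single entropy-minimizing cut point, the core operation in entropy discretization; it works directly on raw values and does not use the multi-resolution summarization structure developed in the paper.

```python
# Sketch: choose the boundary that minimizes the weighted class entropy of a binary split.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut_point(values, labels):
    """Entropy-minimizing threshold for one numeric attribute."""
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    best, best_score = None, np.inf
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue
        left, right = labels[:i], labels[i:]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if score < best_score:
            best, best_score = (values[i - 1] + values[i]) / 2, score
    return best

# A Bayes classifier can then be trained on the discretized attributes, e.g.
# sklearn.naive_bayes.CategoricalNB().fit(X_discrete, y).
```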
The deep learning approach has recently achieved a lot of success, especially in the field of computer vision. This research aims to describe a classification method applied to a dataset of multiple types of images (Synthetic Aperture Radar (SAR) images and non-SAR images). For this classification, transfer learning was used, followed by fine-tuning. Pre-trained architectures trained on the well-known ImageNet database were used. The VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input data mainly consist of a dataset with five classes, including the SAR image class (houses) and the non-SAR image classes (Cats, Dogs, Horses, and Humans). The Conv
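A minimal sketch of VGG16 feature extraction with a new classifier head, assuming TensorFlow/Keras; the dense-layer sizes and training settings are illustrative, not the paper's configuration.

```python
# Sketch: frozen VGG16 base (ImageNet weights) with a new 5-class classifier on top.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # use VGG16 purely as a feature extractor

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),  # houses, cats, dogs, horses, humans
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```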
Accurate segmentation of multiple objects using geometric deformable models is sometimes not achieved, for reasons related to a number of parameters. In this research, we study the effect of changing the parameter values on the behavior of the geometric deformable model, determine their most efficient values, and identify the relations that link these parameters to each other, relying on different case studies that include multiple objects differing in spacing, color, and illumination. Good segmentation results were obtained for specific ranges of parameter values, so the success of geometric deformable models is limited to certain ranges of these parameters.
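A minimal sketch of a geometric deformable model using scikit-image's morphological geodesic active contour, exposing the kinds of parameters studied here (smoothing, edge threshold, balloon force); the values shown are illustrative only.

```python
# Sketch: evolve a level set over an edge-stopping map of a gray image.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def segment_objects(gray_image, iterations=200, smoothing=2, threshold=0.3, balloon=-1):
    """Segment objects by shrinking an initial level set covering the whole image."""
    gimage = inverse_gaussian_gradient(gray_image, alpha=100.0, sigma=3.0)
    init = np.ones(gray_image.shape, dtype=np.int8)   # start from the whole domain
    init[:2, :] = 0
    init[-2:, :] = 0
    init[:, :2] = 0
    init[:, -2:] = 0
    return morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init,
        smoothing=smoothing, threshold=threshold, balloon=balloon)
```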
Nanoparticles of Pb1-xCdxS with composition 0 ≤ x ≤ 1 were prepared from the reaction of aqueous solutions of cadmium acetate, lead acetate, thiourea, and NaOH by chemical co-precipitation. The prepared samples were characterized by UV-Vis spectroscopy (in the range 300-1100 nm) to study the optical properties, and by AFM and SEM to examine the surface morphology (average roughness and shape) and the particle size. The XRD technique was used to determine the purity of the phase and the crystalline structure. The average crystallite size of the nanoparticles was found to be 20.7, 15.48, 11.9, 11.8, and 13.65 nm for PbS, Pb0.75Cd0.25S,
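The abstract does not state how the crystallite sizes were obtained; a common estimate from XRD peak broadening is the Scherrer equation D = Kλ/(β cos θ). The sketch below is illustrative only, assuming Cu Kα radiation and a shape factor K = 0.9.

```python
# Sketch: Scherrer estimate of crystallite size from one diffraction peak.
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size in nm from peak position 2-theta and FWHM beta (degrees)."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * wavelength_nm / (beta * np.cos(theta))

# Example: a peak at 2-theta = 30 deg with FWHM 0.4 deg gives roughly 21 nm.
# print(scherrer_size(30.0, 0.4))
```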
This work concerns the thermal and sound insulation as well as the mechanical properties of a polymer matrix composite reinforced with glass fibers. These fibers can be hazardous during handling; for example, glass fibers might cause damage to the eyes, lungs, and even the skin. For this reason, the present work investigates the behavior of a polymer composite reinforced with natural (plant) fibers as a replacement for glass fibers. Unsaturated polyester resin was used as the matrix material, reinforced with two types of fibers, one artificial (glass fibers) and the other natural (jute, palm frond, and reed fibers), by the hand lay-up technique. All fibers were untreated with any chemical solvent. The percentage of mi
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This condition places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide data with unknown labels into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones, and it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic
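A minimal NumPy sketch of the fuzzy c-means update (fuzzifier m = 2); the cluster count and stopping tolerance are illustrative, not the study's settings.

```python
# Sketch: alternate between fuzzy membership and center updates until convergence.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Return (centers, membership matrix U of shape [n_samples, c])."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random fuzzy memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        new_U = inv / inv.sum(axis=1, keepdims=True)     # standard FCM membership update
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U

# centers, U = fuzzy_c_means(features, c=3)
# hard_labels = U.argmax(axis=1)
```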
The economy is exceptionally reliant on agricultural productivity. Therefore, in the domain of agriculture, plant infection detection is a vital task because it offers a promising advance toward the development of agricultural production. In this work, a framework for potato disease classification based on a feed-forward neural network is proposed. The objective of this work is to present a system that can detect and classify four kinds of potato tuber diseases based on their images: black dot, common scab, potato virus Y, and early blight. The presented PDCNN framework comprises three levels: the first level is pre-processing, which is based on the K-means clustering algorithm to detect the infected area in the potato image. The s
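A minimal sketch of the kind of K-means pre-processing step described here: cluster the pixel colors of a potato image and keep the darkest cluster as a rough infected-area mask. The cluster count and darkest-cluster rule are assumptions, not the paper's exact procedure.

```python
# Sketch: K-means on pixel colors, then pick the cluster with the lowest mean intensity.
import numpy as np
from sklearn.cluster import KMeans

def infected_area_mask(rgb_image, n_clusters=3):
    """Return a boolean mask for the darkest color cluster of the image."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    brightness = [pixels[labels == k].mean() for k in range(n_clusters)]
    darkest = int(np.argmin(brightness))
    return (labels == darkest).reshape(rgb_image.shape[:2])
```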