LK Abood, RA Ali, M Maliki, International Journal of Science and Research, 2015 - Cited by 2
A study is made of the size and dynamic activity of sunspots using an automatic-detection Matlab code, ''mySS.m'', written for this purpose, which mainly produces a good estimate of the sunspot diameter (in km). The theory of sunspot size is described with equations from which the growth and decay phases and the area of a sunspot can be calculated. Two types of images, namely H-alpha images and HMI magnetograms, have been used. The results are divided into four main parts. The first part is automatic detection of sunspot size by the Matlab program. The second part is numerical calculation of sunspot growth and decay phases. The third part is the calculation of sunspot area. The final part explains the sunspot activity.
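The paper's Matlab code ''mySS.m'' is not reproduced in this excerpt. As an illustration only, the core idea — threshold a grayscale solar image, label the dark connected regions, and convert each region's equivalent-circle diameter from pixels to kilometres — can be sketched in Python; the threshold value and the km-per-pixel scale here are hypothetical placeholders, not the paper's calibration:

```python
import numpy as np
from scipy import ndimage

def sunspot_diameters_km(image, threshold, km_per_pixel):
    """Estimate sunspot diameters from a grayscale solar image.

    Pixels darker than `threshold` are treated as sunspot candidates;
    connected regions are labeled, and each region's equivalent-circle
    diameter is converted from pixels to km.
    """
    mask = image < threshold                   # dark pixels = candidate sunspots
    labels, n = ndimage.label(mask)            # connected-component labeling
    diameters = []
    for region in range(1, n + 1):
        area_px = np.sum(labels == region)     # region area in pixels
        d_px = 2.0 * np.sqrt(area_px / np.pi)  # equivalent-circle diameter
        diameters.append(d_px * km_per_pixel)
    return diameters

# Toy example: a synthetic 10x10 bright image with one dark 2x2 spot
img = np.ones((10, 10))
img[4:6, 4:6] = 0.0
print(sunspot_diameters_km(img, threshold=0.5, km_per_pixel=1000.0))
```

For a 2x2 spot the equivalent-circle diameter is 2·√(4/π) ≈ 2.26 px, so the returned value is about 2257 km at 1000 km/pixel.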
Abstract: Word sense disambiguation (WSD) is a significant field in computational linguistics, as it is indispensable for many language-understanding applications. Automatic processing of documents is made difficult by the fact that many of the terms they contain are ambiguous. WSD systems try to resolve these ambiguities and find the correct meaning. Genetic algorithms can be applied to this problem, since they have been used effectively for many optimization problems. In this paper, a genetic algorithm is proposed to solve the word sense disambiguation problem by automatically selecting the intended meaning of a word in context without any additional resources. The proposed algorithm is evaluated on a collection …
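The abstract does not specify the genome encoding or fitness function; a minimal sketch of the general approach — encode one sense index per ambiguous word, and let a genetic algorithm maximize a Lesk-style gloss-overlap fitness — might look as follows. The two-word sense inventory, the overlap fitness, and all GA parameters here are illustrative assumptions, not the paper's design:

```python
import random

# Toy sense inventory: each ambiguous word maps to candidate sense glosses.
# Fitness rewards assignments whose glosses share words (a Lesk-style
# overlap), the kind of objective a GA can optimize without extra resources.
SENSES = {
    "bank":  [{"money", "deposit", "account"}, {"river", "slope", "water"}],
    "check": [{"money", "payment", "account"}, {"verify", "inspect", "test"}],
}
WORDS = list(SENSES)

def fitness(assign):
    """Sum of pairwise gloss overlaps for a sense assignment."""
    score = 0
    for i in range(len(WORDS)):
        for j in range(i + 1, len(WORDS)):
            g1 = SENSES[WORDS[i]][assign[i]]
            g2 = SENSES[WORDS[j]][assign[j]]
            score += len(g1 & g2)
    return score

def ga_disambiguate(pop_size=20, generations=30, mut_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(SENSES[w])) for w in WORDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(WORDS))  # one-point crossover
            child = a[:cut] + b[cut:]
            for k in range(len(child)):         # per-gene mutation
                if rng.random() < mut_rate:
                    child[k] = rng.randrange(len(SENSES[WORDS[k]]))
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return {w: s for w, s in zip(WORDS, best)}

print(ga_disambiguate())
```

On this toy inventory only the "financial" sense pair overlaps, so the GA converges to sense index 0 for both words.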
A global pandemic has emerged as a result of the widespread coronavirus disease (COVID-19). Deep learning (DL) techniques are used to diagnose COVID-19 from chest X-ray images. Owing to the scarcity of available X-ray images, however, the performance of DL for COVID-19 detection lags, remains underdeveloped, and suffers from overfitting. Overfitting happens when a network fits a function of extremely high variance in order to represent the training data perfectly. Medical imaging, moreover, lacks large labeled datasets, because annotating medical images is expensive and time-consuming for experts. As COVID-19 is an infectious disease, these datasets are scarce, and it is difficult to obtain large datasets …
Realizing and understanding semantic segmentation is a demanding task, not only in computer vision but also in earth-science research. Semantic segmentation decomposes compound architectures into single elements: the most common objects in a civil outdoor or indoor scene must be classified and then enriched with semantic information about each object. It is a method for labeling and clustering point clouds automatically. Classifying three-dimensional natural scenes requires a point-cloud dataset as the input data representation, and working with 3D data raises many challenges, such as the small number, resolution, and accuracy of three-dimensional datasets. Deep learning is now the po…
Fluoroscopic imaging is a field of medical imaging in which correct diagnosis depends on image quality. The main difficulty is de-noising: how to keep the balance between degrading the noisy image, on one side, and preserving edges and fine details, on the other, especially when fluoroscopic images are corrupted by high-density black-and-white (salt-and-pepper) noise. Previous filters could usually handle low to medium densities of this noise, at the expense of edge and fine-detail preservation, and fail at the high noise densities that corrupt the images. Therefore, this paper proposes a new Multi-Line algorithm that deals with images highly corrupted by dense black-and-white noise. The experiments achieved …
Medical ultrasound (US) has many features that make it widely used around the world: safety, availability, and low cost. Despite these features, however, ultrasound suffers from two problems: speckle noise and artifacts. In this paper, a new method is proposed to improve US images by removing speckle noise and reducing artifacts so as to enhance image contrast. The proposed method involves algorithms for image pre-processing and segmentation. A median filter is used to smooth the image in the pre-processing stage; to obtain the best results, the median filter is applied with different kernel sizes. The better output of the median filter is then fed into a Gaussian filter, which then …
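The two-stage pipeline described above — median-filter with several kernel sizes, keep the best output, then pass it through a Gaussian filter — can be sketched as follows. This is a simplified illustration using scipy: PSNR against a clean reference is used here as a stand-in selection criterion (a real pipeline has no clean reference and would need a no-reference quality measure; the paper's actual selection rule is not specified in this excerpt):

```python
import numpy as np
from scipy import ndimage

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB (higher = closer to the reference)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def despeckle(noisy, reference, kernels=(3, 5, 7), sigma=1.0):
    """Median-filter with several kernel sizes, keep the best candidate
    (by PSNR against `reference`), then smooth it with a Gaussian filter."""
    candidates = [ndimage.median_filter(noisy, size=k) for k in kernels]
    best = max(candidates, key=lambda c: psnr(reference, c))
    return ndimage.gaussian_filter(best, sigma=sigma)

# Toy demo: multiplicative (speckle-like) noise on a flat grayscale image
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy = clean * (1 + 0.3 * rng.standard_normal(clean.shape))
out = despeckle(noisy, clean)
print(psnr(clean, noisy), "->", psnr(clean, out))  # denoised PSNR is higher
```

On this synthetic example the median stage removes most of the speckle outliers and the Gaussian stage smooths the remainder, so the output PSNR is well above that of the noisy input.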
Digital images are open to many manipulations, given the dropping cost of compact cameras and mobile phones and the availability of powerful image-editing tools. Image credibility has therefore become doubtful, particularly where photos carry weight, for instance in news reports and insurance claims in a criminal court. Image-forensics methods therefore assess the integrity of an image by applying various highly technical methods established in the literature. The present work deals with copy-move forgery images from the Media Integration and Communication Center Forgery (MICC-F2000) dataset, detecting and revealing the areas of the image that have been tampered with; the image is sectioned into non-overlapping blocks using Simple …
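The block-based idea behind copy-move detection can be sketched as follows. This is a deliberately naive illustration, not the paper's method: it tiles the image into non-overlapping blocks and reports exact duplicates via hashing, whereas practical detectors compare robust block features (e.g. DCT coefficients) so that matches survive compression and post-processing:

```python
import numpy as np

def copy_move_blocks(image, block=8):
    """Naive copy-move detection: split the image into non-overlapping
    `block` x `block` tiles, key each tile by its raw pixel bytes, and
    report pairs of distinct positions whose tiles are identical."""
    h, w = image.shape
    seen = {}
    matches = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = image[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))  # duplicated region pair
            else:
                seen[key] = (y, x)
    return matches

# Toy forgery: copy one 8x8 block of a random image onto another location
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
img[16:24, 16:24] = img[0:8, 0:8]                    # the "copy-move"
print(copy_move_blocks(img))
```

On this toy image the only duplicated tile pair is the copied block, so the function returns the single match between positions (0, 0) and (16, 16).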
Feature extraction provides a quick process for extracting objects from remote-sensing data (images), saving the urban planner or GIS user from digitizing hundreds of times by hand. In the present work, manual, rule-based, and classification methods have been applied, using an object-based approach to classify the imagery. From the results, we find that the suitability of each method for extraction depends on the properties of the object; for example, the manual method is convenient for objects that are clear and have sufficient area. The choice of scale and merge level also has a significant effect on the classification process and on the accuracy of object extraction. The results further show that the rule-based method is the more suitable method for extracting …