Digital images are vulnerable to many manipulations, owing to robust image-editing tools and the falling cost of compact cameras and mobile phones. The credibility of images has therefore become doubtful, particularly where photographs carry weight, for instance in news reports and insurance claims before a criminal court. Image forensic methods therefore assess the integrity of an image by applying the highly technical methods established in the literature. The present work deals with copy-move forged images from the Media Integration and Communication Center Forgery (MICC-F2000) dataset, detecting and revealing the tampered areas of an image. The image is first partitioned into non-overlapping blocks using the Simple Linear Iterative Clustering (SLIC) method. Then, the Scale-Invariant Feature Transform (SIFT) descriptor is applied to the greyscale version of the processed image to give distinctive keypoints, which are classified by a K-Nearest Neighbour (KNN) classifier to detect and localise the forged portion of the tampered image. The forgery detection results gave a performance of about 98%, which reflects the ability of the KNN classifier, combined with the SIFT descriptor, to detect forged portions even when the forged area is rotated, scaled, or both.
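The matching stage described above can be sketched as a 2-nearest-neighbour ratio test over SIFT descriptors: a keypoint whose nearest neighbour is much closer than its second nearest, yet lies far away in the image, is likely part of a cloned region. This is a minimal illustration, not the paper's implementation; the parameter names `ratio` and `min_dist` are our assumptions.

```python
import numpy as np

def knn_copy_move_matches(descriptors, keypoints, ratio=0.6, min_dist=10.0):
    """Flag keypoint pairs whose descriptors are near-duplicates (2-NN ratio test).

    descriptors: (n, d) array of SIFT descriptors.
    keypoints:   (n, 2) array of keypoint coordinates.
    A small nearest/second-nearest distance ratio suggests a cloned region;
    a spatial separation check avoids matching a point to its own neighbourhood.
    """
    matches = []
    n = len(descriptors)
    for i in range(n):
        # Euclidean distances from descriptor i to all other descriptors
        d = np.linalg.norm(descriptors - descriptors[i], axis=1)
        d[i] = np.inf                      # ignore the self-match
        order = np.argsort(d)
        nn1, nn2 = order[0], order[1]
        if d[nn1] < ratio * d[nn2]:
            # require spatial separation between the matched keypoints
            if np.linalg.norm(keypoints[i] - keypoints[nn1]) > min_dist:
                matches.append((i, int(nn1)))
    return matches
```

In practice the descriptors would come from a SIFT extractor run on the greyscale image; here any (n, d) descriptor array works.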
Feature extraction provides a quick process for extracting objects from remote-sensing data (images), saving urban planners and GIS users the time of digitizing hundreds of features by hand. In the present work, manual, rule-based, and classification methods have been applied, using an object-based approach to classify the imagery. From the results we found that each method suits extraction depending on the properties of the object; for example, the manual method is convenient for objects that are clear and of sufficient area. The choice of scale and merge level also has a significant effect on the classification process and the accuracy of object extraction. The results further show that the rule-based method is the more suitable method for extracting
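A rule-based step of the kind mentioned above can be sketched as simple attribute thresholds applied to segmented image objects. This is an illustrative toy only; the attribute names (`area`, `ndvi`) and threshold values are our assumptions, not values from the study.

```python
def rule_based_extract(objects, min_area=50.0, ndvi_max=0.2):
    """Toy rule-based extraction over segmented image objects.

    Each object carries attributes produced by segmentation; a rule keeps
    objects that look like built structures: large enough in area and not
    vegetated (low NDVI). All names and thresholds are illustrative.
    """
    return [o for o in objects if o["area"] >= min_area and o["ndvi"] <= ndvi_max]
```

In an object-based workflow the input objects would come from a segmentation at the chosen scale and merge level, which is why those parameters affect the final accuracy.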
Background: Few studies have been done on the role of histopathology in the medico-legal diagnosis of electrocution, even abroad.
Aim of the study: To determine the main histopathological features in cases of electrocution, especially at the entry site of the electrical current, which help in the diagnosis of such cases.
Methods: A full medico-legal autopsy was performed on 64 cadavers of persons who died as a result of electrocution, chosen randomly out of a total of 144 cases of electrocution during the year 2005 at the Medico-Legal Institute of Baghdad. This included histopathological examination, by the ordinary method, of different specimens from those cadavers at the histopathology department of the mentioned institute.
Brain tissue segmentation is usually concerned with the delineation of three types of brain matter: Grey Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). Because most brain structures are anatomically defined by the boundaries of these tissue classes, accurate segmentation of brain tissue into one of these categories is an important step in quantitative morphological study of the brain. Abnormal regions, such as tumours, also need to be delineated. The extra-cortical voxels in MR brain images are often removed in order to facilitate accurate analysis of cortical structures. Brain extraction is necessary to avoid misclassifying surrounding tissues, skull, and scalp as WM, GM, or tumour when implementing s
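The tissue-classification idea above can be illustrated with a minimal 1-D k-means over brain-extracted voxel intensities. This is our sketch, not the paper's method: it assumes a T1-weighted contrast, so that after relabelling by mean intensity, label 0 corresponds roughly to CSF (darkest), 1 to GM and 2 to WM (brightest).

```python
import numpy as np

def kmeans_tissue_labels(intensities, k=3, iters=25):
    """Naive 1-D k-means clustering of skull-stripped voxel intensities.

    Returns (labels, centers) with classes relabelled so that class means
    are in increasing order (0 = darkest ... k-1 = brightest).
    """
    x = np.asarray(intensities, dtype=float)
    # deterministic, spread-out initialisation via percentiles
    centers = np.percentile(x, np.linspace(25, 75, k))
    for _ in range(iters):
        # assign each voxel to its nearest cluster centre
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    order = np.argsort(centers)            # relabel classes by mean intensity
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centers[order]
```

Real pipelines use more robust models (e.g. Gaussian mixtures with spatial priors), but the example shows why brain extraction matters: bright skull or scalp voxels left in the volume would be absorbed into the WM cluster.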
Data compression is a very important process for reducing the size of large data to be stored or transmitted, and parametric curves such as the Bezier curve are a suitable way to represent the gradual change and variability of such data. The Ridgelet transform solves problems found in the wavelet transform and can compress an image well, but when it is used together with a Bezier curve, the quality of the compressed image becomes very good. In this paper, a new compression method is proposed using Bezier curves with the Ridgelet transform on RGB images. The results showed that the proposed method gives good performance in both subjective and objective experiments. When the PSNR values were equal to (34.2365, 33.4323 and 33.0987), they were increased in the propos
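Two of the ingredients above are easy to show concretely: evaluating a Bezier curve (de Casteljau's algorithm) and the PSNR measure used for the objective evaluation. A hedged sketch; it is not the paper's compression pipeline, only the two standard building blocks it relies on.

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] (de Casteljau).

    Repeated linear interpolation between successive control points
    collapses them to a single point on the curve.
    """
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, the objective quality measure quoted above."""
    err = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(err ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the reconstructed image is closer to the original, which is how values such as 34.24 dB in the abstract are compared across methods.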
Digital forensics is the part of forensic science that covers crime related to computers and other digital devices. Academic studies have been interested in digital forensics for a while; researchers aim to establish a discipline based on scientific structures and to define a model that reflects their observations. This paper suggests a model to improve the whole investigation process, obtain accurate and complete evidence, and secure the digital evidence with cryptographic algorithms, so that reliable evidence can be presented in a court of law. The paper also presents the main and basic concepts of the frameworks and models used in digital forensics investigation.
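The evidence-securing step can be illustrated with a standard integrity check: hash the acquired evidence at collection time and re-verify the digest before presentation. SHA-256 is our illustrative choice here; the abstract speaks only of cryptographic algorithms in general.

```python
import hashlib

def evidence_digest(data: bytes) -> str:
    """Compute a SHA-256 digest of acquired evidence at collection time."""
    return hashlib.sha256(data).hexdigest()

def verify_evidence(data: bytes, recorded_digest: str) -> bool:
    """Re-hash the evidence and compare against the recorded digest.

    Any modification of the evidence, however small, changes the digest,
    so a mismatch indicates tampering or corruption since acquisition.
    """
    return hashlib.sha256(data).hexdigest() == recorded_digest
```

In a full chain-of-custody model the recorded digest would itself be signed or timestamped, so that the record cannot be replaced along with the evidence.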
Legislation alone is often not enough to protect information, and regulatory strategies are likewise insufficient. Technical means, whatever their effectiveness, also cannot by themselves prevent the risks threatening information. Protection is therefore a complex structure consisting of law, regulatory strategy, and technology. The increasing use of and reliance on computer information systems has highlighted the need for good information-system management. Legislative control can have a positive effect on such systems by providing deterrence and increasing public awareness of the problem.
Consequently, legislative means must be sought at the time in which the fight is most effective against this kind o
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflectors. Applying normal move-out to flatten the primaries is the way to eliminate multiples after transforming the data to the frequency-wavenumber (f-k) domain. The flattened primaries align with the zero axis of the f-k domain, while other reflection types (multiples and random noise) are distributed elsewhere. A dip filter applied to pass the aligned data and reject the rest separates primaries from multiples once the data are transformed back from the f-k domain to the time-distance domain. For that, a suggested name for this technique is normal move-out frequency-wavenumber domain
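The f-k separation step above can be sketched with a 2-D FFT: after NMO correction, flat events concentrate near zero wavenumber, so a filter that passes only a narrow band around k = 0 rejects dipping energy. This is a toy illustration under that assumption; `keep_band` is an illustrative parameter, and real dip filters taper the mask to avoid ringing.

```python
import numpy as np

def fk_dip_filter(section, keep_band=0.1):
    """Toy f-k dip filter for an NMO-corrected time-distance section.

    section: 2-D array (time samples x traces).
    Keeps only spectral energy with normalised |wavenumber| <= keep_band,
    i.e. near-flat events (primaries), and rejects dipping events.
    """
    spec = np.fft.fft2(section)               # time-distance -> f-k domain
    k = np.fft.fftfreq(section.shape[1])      # normalised wavenumbers per trace axis
    mask = np.abs(k) <= keep_band             # pass band around k = 0
    spec[:, ~mask] = 0.0                      # reject dipping energy
    return np.fft.ifft2(spec).real            # back to time-distance domain
```

A flat (NMO-corrected) event passes through essentially unchanged, while an event oscillating across traces, which maps to high wavenumber, is removed.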