Digital image manipulation has become increasingly prevalent due to the widespread availability of sophisticated image editing tools. In copy-move forgery, a portion of an image is copied and pasted into another area within the same image. The proposed methodology begins with extracting features from the image using the Local Binary Pattern (LBP) algorithm. Two main statistical functions, Standard Deviation (STD) and Angular Second Moment (ASM), are computed for each LBP feature, capturing additional statistical information about the local textures. Next, a multi-level LBP feature selection is applied to select the most relevant features. This process involves performing the LBP computation at multiple scales or levels, capturing textures at different resolutions. By considering features from multiple levels, the detection algorithm can better capture both the global and local characteristics of the manipulated regions, enhancing the accuracy of forgery detection. To achieve a high accuracy rate, this paper presents a variety of scenarios based on a machine-learning approach. In copy-move detection, artifacts and their properties are used as image features, and a Support Vector Machine (SVM) determines whether an image has been tampered with. The dataset is used to train and test each classifier; the goal is to learn the discriminative patterns that reveal instances of copy-move forgery. The Media Integration and Communication Center forgery dataset (MICC-F2000) was utilized in this paper. Experimental evaluations demonstrate the effectiveness of the proposed methodology in detecting copy-move forgery, and the implementation phases of the proposed work produced encouraging outcomes. In the best-implemented scenario, involving multiple trials, the detection stage achieved a copy-move detection accuracy of 97.8%.
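Below is a minimal sketch of the kind of pipeline this abstract describes, assuming scikit-image and scikit-learn are available; the block size, LBP radii, histogram binning, and SVM kernel are illustrative assumptions rather than the authors' exact settings.

```python
# Sketch: multi-level LBP features with per-block STD and ASM, fed to an SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_stat_features(gray, radii=(1, 2, 3), block=32):
    """Multi-level LBP with per-block Standard Deviation and Angular Second Moment."""
    feats = []
    for r in radii:                                    # multiple LBP scales/levels
        n_points = 8 * r
        lbp = local_binary_pattern(gray, n_points, r, method="uniform")
        for y in range(0, gray.shape[0] - block + 1, block):
            for x in range(0, gray.shape[1] - block + 1, block):
                patch = lbp[y:y + block, x:x + block]
                hist, _ = np.histogram(patch, bins=n_points + 2,
                                       range=(0, n_points + 2), density=True)
                std = patch.std()                      # Standard Deviation (STD)
                asm = np.sum(hist ** 2)                # Angular Second Moment (ASM)
                feats.extend([std, asm])
    return np.asarray(feats)

# X: stacked feature vectors from authentic and forged images, y: 0/1 labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```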
Background: The skull offers high resistance to adverse environmental conditions over time, resulting in greater stability of its dimorphic features compared with other skeletal elements. Sex determination of human skeletal remains is considered an initial step in identification. The present study was undertaken to evaluate the validity of 3D reconstructed computed tomographic images in sex differentiation using craniometric measurements at various parts of the skull. Materials and Method: 3D reconstructed computed tomographic scans of 100 Iraqi subjects (50 males and 50 females), aged 20-70 years, were analyzed. Craniometric linear measurements were located and marked on both sides of the 3D skull images.
Aggregate production planning (APP) is one of the most significant and complicated problems in production planning. It aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to make decisions concerning hiring, firing, overtime, subcontracting, and inventory levels. In this paper, we present a simulated annealing (SA) approach for a multi-objective linear programming model to solve the APP problem. SA is considered a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and req
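As a rough illustration of how simulated annealing can be applied to an APP-style cost model, the following sketch minimizes a single aggregated cost; the demand figures, cost coefficients, neighbourhood move, and cooling schedule are invented for illustration and do not reproduce the paper's multi-objective model or data.

```python
# Sketch: simulated annealing over a per-period production plan.
import math
import random

demand = [900, 1100, 1000, 1200]          # hypothetical units per period
c_prod, c_hold, c_work = 20.0, 2.0, 5.0   # illustrative unit costs

def total_cost(plan):
    cost, inventory = 0.0, 0.0
    for t, produced in enumerate(plan):
        inventory = max(0.0, inventory + produced - demand[t])
        cost += c_prod * produced + c_hold * inventory + c_work * produced
    return cost

def simulated_annealing(iters=5000, t0=1000.0, alpha=0.995):
    current = [float(d) for d in demand]              # start by chasing demand
    best, best_cost = current[:], total_cost(current)
    temp, cur_cost = t0, best_cost
    for _ in range(iters):
        neighbour = current[:]
        i = random.randrange(len(neighbour))
        neighbour[i] = max(0.0, neighbour[i] + random.uniform(-50, 50))
        new_cost = total_cost(neighbour)
        # accept improvements always, worse moves with Boltzmann probability
        if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / temp):
            current, cur_cost = neighbour, new_cost
            if new_cost < best_cost:
                best, best_cost = neighbour[:], new_cost
        temp *= alpha                                 # geometric cooling schedule
    return best, best_cost

# plan, cost = simulated_annealing()
```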
The present research is descriptive and analytical by nature; it practically presents a method of implementing the standard pattern in an unconventional way using the bias-cut line. The study aims to investigate the variables of the bias-cut and their suitability for fitting large-sized Iraqi women, and to explore the artistic and innovative features of the bias-cut. Therefore, one needs to understand the rules and basics of clothing and the nature of the body to reach the maximum degree of control. Consequently, the study answers the following questions: What is the effectiveness of tailoring on the bias-cut in fitting a standard template for large-sized Iraqi women? Is it possible to obtain from the offered possibil
Among the metaheuristic algorithms, population-based algorithms are explorative search algorithms that are superior to local search algorithms in terms of exploring the search space to find globally optimal solutions. However, the primary downside of such algorithms is their low exploitative capability, which prevents the expansion of the search-space neighborhood toward more optimal solutions. The firefly algorithm (FA) is a population-based algorithm that has been widely used in clustering problems. However, FA is prone to premature convergence when no neighborhood search strategies are employed to improve the quality of clustering solutions in the neighborhood region and to explore the global regions of the search space. On the
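The following sketch shows the basic firefly update rule applied to clustering, where each firefly encodes a full set of centroids and brightness is the inverse of the clustering SSE; the parameter values (beta0, gamma, alpha, population size) are illustrative assumptions, and no neighborhood-search enhancement is included.

```python
# Sketch: firefly algorithm searching for cluster centroids.
import numpy as np

def sse(centroids, data):
    # sum of squared distances from each point to its nearest centroid
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

def firefly_clustering(data, k=3, n_fireflies=20, iters=100,
                       beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    pop = rng.uniform(data.min(0), data.max(0), (n_fireflies, k, dim))
    fitness = np.array([sse(c, data) for c in pop])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fitness[j] < fitness[i]:             # firefly j is brighter
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # distance-decayed attractiveness
                    pop[i] += beta * (pop[j] - pop[i]) \
                              + alpha * rng.normal(size=(k, dim))
                    fitness[i] = sse(pop[i], data)
    return pop[np.argmin(fitness)]

# centroids = firefly_clustering(np.random.rand(200, 2), k=3)
```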
In this paper, we investigate and characterize the effects of multi-channel and rendezvous protocols on the connectivity of dynamic spectrum access networks using percolation theory. In particular, we focus on the scenario where the secondary nodes have plenty of vacant channels to choose from, a phenomenon which we define as channel abundance. To cope with the existence of multiple channels, we use two types of rendezvous protocols: naive ones, which do not guarantee a common channel, and advanced ones, which do. We show that, with more channel abundance, even with the use of either type of rendezvous protocol, it becomes difficult for two nodes to agree on a common channel, thereby potentially remaining invisible to each other. We model this in
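A small simulation makes the channel-abundance effect concrete: under a naive rendezvous rule in which each node independently picks a few of the vacant channels, the chance of sharing a common channel shrinks as the number of channels grows. The parameters below are illustrative and not taken from the paper.

```python
# Sketch: estimating the rendezvous probability under a naive protocol.
import random

def rendezvous_probability(num_channels, picks_per_node, trials=100_000):
    hits = 0
    for _ in range(trials):
        a = set(random.sample(range(num_channels), picks_per_node))
        b = set(random.sample(range(num_channels), picks_per_node))
        hits += bool(a & b)          # the nodes "see" each other only if the picks overlap
    return hits / trials

# More vacant channels -> lower chance of a common channel -> fewer usable links,
# which is why connectivity (percolation) degrades with channel abundance.
for c in (5, 20, 80):
    print(c, rendezvous_probability(c, picks_per_node=2))
```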
In this research, a multi-objective stochastic aggregate production planning model is built for General Al-Mansour Company data, with stochastic demand under changing market conditions and an uncertain environment, with the aim of drawing up robust production plans. The analysis derives insights into management issues such as regular and overtime labour costs, the costs of maintaining inventories, and good policy choices under medium and optimistic scenarios. The stochastic model adopts two objective functions, a total cost function (the core) and an income function, and the random model is given priority and compared with fixed forms of the objective function. The results showed that the two-phase model wit
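For intuition, the sketch below sets up only the deterministic, single-scenario core of such a model as a small linear programme (regular time, overtime, and inventory balance) using scipy; the data are invented, and the stochastic two-phase structure and the income objective are not reproduced.

```python
# Sketch: deterministic APP core as a linear programme.
import numpy as np
from scipy.optimize import linprog

demand = [800, 1000, 900]                   # hypothetical demand per period
T = len(demand)
# costs per unit: regular production, overtime production, inventory holding
c = [10.0] * T + [15.0] * T + [2.0] * T

# Variables per period t: R_t, O_t, I_t with balance I_{t-1} + R_t + O_t - I_t = demand_t
A_eq = np.zeros((T, 3 * T))
for t in range(T):
    A_eq[t, t] = 1.0                # R_t
    A_eq[t, T + t] = 1.0            # O_t
    A_eq[t, 2 * T + t] = -1.0       # -I_t
    if t > 0:
        A_eq[t, 2 * T + t - 1] = 1.0   # +I_{t-1}
b_eq = demand

bounds = [(0, 1200)] * T + [(0, 300)] * T + [(0, None)] * T   # capacity limits
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x.round(1), res.fun)
```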
Although the authentication of text document images is difficult, due to their binary nature and the clear separation between background and foreground, it is in increasing demand for many applications. Most previous research in this field depends on inserting a watermark into the document; the drawback of these techniques lies in the fact that changing pixel values in a binary document can introduce irregularities that are very visually noticeable. In this paper, a new method is proposed for object-based text document authentication, in which I propose a different approach: a text document is signed by shifting individual words slightly left or right from their original positions to make the center of gravity for each line fall in with the m
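A toy illustration of the word-shift idea follows, assuming word bounding boxes have already been extracted from a text line; the coordinates, the parity-based encoding rule, and the shift step are hypothetical simplifications of the described scheme.

```python
# Sketch: nudging one word so the line's centre of gravity encodes a signature bit.
def line_centre_of_gravity(word_boxes):
    """word_boxes: list of (x_left, x_right) horizontal spans of each word."""
    total = sum(r - l for l, r in word_boxes)
    return sum((l + r) / 2 * (r - l) for l, r in word_boxes) / total

def embed_bit(word_boxes, bit, step=1):
    """Shift the middle word until the rounded centre of gravity has parity `bit`."""
    boxes = [list(b) for b in word_boxes]
    mid = len(boxes) // 2
    while int(round(line_centre_of_gravity(boxes))) % 2 != bit:
        delta = step if bit else -step       # move right for 1, left for 0
        boxes[mid][0] += delta
        boxes[mid][1] += delta
    return [tuple(b) for b in boxes]

# boxes = [(10, 60), (75, 130), (150, 190)]
# signed_boxes = embed_bit(boxes, bit=1)
```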
This article presents a comprehensive study of edge detection methods and algorithms in digital images, edge detection being a basic process in the field of image processing and analysis. The purpose of edge detection techniques is to discover the borders that separate distinct areas of an image, which contributes to a better understanding of the image content and to extracting structural information. The article starts by clarifying the idea of an edge and its importance in image analysis, and then studies the most prominent edge detection methods used in this field (e.g., the Sobel, Prewitt, and Canny filters), besides other schemes based on detecting abrupt changes in light intensity and color gradation. The research also discuss
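The short sketch below applies the three named filters, assuming OpenCV and scikit-image are installed; "image.png" is a placeholder path.

```python
# Sketch: Sobel, Prewitt, and Canny edge maps for one grayscale image.
import cv2
import numpy as np
from skimage.filters import prewitt

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
assert gray is not None, "replace image.png with a real image path"

# Sobel: first-derivative kernels in x and y, combined into a gradient magnitude.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
sobel_mag = np.hypot(gx, gy)

# Prewitt: similar derivative kernels with uniform (unweighted) smoothing.
prewitt_mag = prewitt(gray)

# Canny: Gaussian smoothing, gradient, non-maximum suppression, hysteresis thresholding.
canny_edges = cv2.Canny(gray, threshold1=100, threshold2=200)
```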