Document source identification in printer forensics involves determining the origin of a printed document based on characteristics such as the printer model, serial number, defects, or unique printing artifacts. This process is crucial in forensic investigations, particularly in cases involving counterfeit documents or unauthorized printing. However, consistent pattern identification across various printer types remains challenging, especially when efforts are made to alter printer-generated artifacts. Machine learning models are often used in these tasks, but selecting discriminative features while minimizing noise is essential. Traditional KNN classifiers require a careful selection of distance metrics to capture relevant printing characteristics effectively. This study proposes leveraging quantum-inspired computing to improve KNN classifiers for printer source identification, offering better accuracy even with noisy or variable printing conditions. The proposed approach uses the Gray Level Co-occurrence Matrix (GLCM) for feature extraction, which is resilient to changes in rotation and scale, making it well-suited for texture analysis. Experimental results show that the quantum-inspired KNN classifier captures subtle printing artifacts, leading to improved classification accuracy despite noise and variability.
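As a rough illustration of the classical baseline this abstract builds on, the sketch below extracts GLCM texture statistics with scikit-image and feeds them to an ordinary KNN classifier. The distances, angles, neighbour count, and variable names are assumptions for illustration only; the quantum-inspired distance described in the abstract is not reproduced here.

```python
# Illustrative baseline: GLCM texture features + standard KNN (not the quantum-inspired variant).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(gray_patch, levels=256):
    """Extract rotation-averaged GLCM statistics from an 8-bit grayscale patch."""
    glcm = graycomatrix(gray_patch,
                        distances=[1, 2],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Average each property over angles so the descriptor is less rotation-sensitive.
    return np.hstack([graycoprops(glcm, p).mean(axis=1) for p in props])

# Hypothetical usage: `patches` are grayscale scans of printed characters,
# `printer_ids` are the known source printers.
# X = np.vstack([glcm_features(p) for p in patches])
# knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X, printer_ids)
# predicted = knn.predict(X_new)
```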
In recent years, improving the performance of Spatial Data Infrastructures for governments and companies has gained considerable attention. Different categories of geospatial data, such as digital maps, coordinates, web maps, and aerial and satellite images, are required to realize the geospatial data components of Spatial Data Infrastructures. In general, two distinct types of geospatial data sources exist on the Internet: formal and informal data sources. Despite the growth of informal geospatial data sources, effective integration between different free sources has not yet been achieved, and taking on this integration task can be considered the main contribution of this research. This article addresses the research question of how the …
This paper focuses on the optimization of drilling parameters using the Taguchi method to obtain minimum surface roughness. Nine drilling experiments were performed on Al 5050 alloy using high-speed steel twist drills. Three drilling parameters (feed rate, cutting speed, and cutting tool) were used as control factors, and an L9 (3³) orthogonal array was specified for the experimental trials. The signal-to-noise (S/N) ratio and analysis of variance (ANOVA) were used to determine the optimum control factor levels that minimize surface roughness. The results were analysed with the statistical software package MINITAB-17. After the experimental trials, the tool diameter was found to be the most important factor …
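For reference, the "smaller-the-better" S/N ratio used in this kind of Taguchi analysis is S/N = -10·log10(mean(y²)). The sketch below shows that calculation on placeholder roughness readings, not the paper's measured data.

```python
# "Smaller-the-better" signal-to-noise ratio for surface roughness minimization.
import numpy as np

def sn_smaller_is_better(y):
    """S/N = -10 * log10(mean(y^2)); a larger S/N means lower, more consistent roughness."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical replicate roughness readings (in micrometres) for one L9 trial:
print(sn_smaller_is_better([1.42, 1.38, 1.47]))
```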
This paper presents the design of a longitudinal controller for an autonomous unmanned aerial vehicle (UAV). It proposes a dual-loop (inner-outer loop) control scheme based on an intelligent algorithm. The inner feedback-loop controller is a Linear Quadratic Regulator (LQR) that provides robust (adaptive) stability, while the outer-loop controller is based on a Fuzzy-PID (Proportional, Integral, and Derivative) algorithm that provides reference-signal tracking. The proposed dual controller regulates the aircraft's position (altitude) and velocity (airspeed). An Adaptive Unscented Kalman Filter (AUKF) is employed to track the reference signal and attenuate Gaussian noise. The mathematical model of the aircraft …
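A minimal sketch of the inner-loop LQR design step is given below, assuming a toy two-state longitudinal model and placeholder weighting matrices; the Fuzzy-PID outer loop and the AUKF are not shown.

```python
# Inner-loop LQR gain from the continuous-time algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time Riccati equation and return K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy two-state longitudinal model (assumed for illustration only):
A = np.array([[0.0, 1.0], [-0.5, -0.8]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
# State-feedback control law: u = -K @ (x - x_ref)
```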
Face recognition is required in various applications, and major progress has been made in this area. Many face recognition algorithms have been proposed thus far; however, achieving high recognition accuracy with low execution time remains a challenge. In this work, a new face recognition scheme is presented that uses hybrid orthogonal polynomials to extract features. An embedded image-kernel technique is used to reduce the complexity of feature extraction, and a support vector machine is then adopted to classify these features. Moreover, a fast overlapping block-processing algorithm is used for feature extraction to reduce the computation time. Extensive evaluation of the proposed method was carried out on two different face image …
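The pipeline shape described above (orthogonal-polynomial block features followed by an SVM) might look roughly like the sketch below, with ordinary Chebyshev polynomials standing in for the paper's hybrid orthogonal polynomials and an assumed 8×8 block size.

```python
# Loose sketch of the pipeline shape only: polynomial block features, then an SVM.
import numpy as np
from numpy.polynomial import chebyshev
from sklearn.svm import SVC

def poly_moments(block, order=4):
    """Project a square image block onto a 2-D Chebyshev polynomial basis."""
    n = block.shape[0]
    x = np.linspace(-1, 1, n)
    # Rows of V are the first `order` Chebyshev polynomials sampled on the block grid.
    V = np.vstack([chebyshev.Chebyshev.basis(k)(x) for k in range(order)])
    return (V @ block @ V.T).ravel()          # order*order moment features

def image_features(img, block=8):
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    feats = [poly_moments(img[i:i + block, j:j + block])
             for i in range(0, h, block) for j in range(0, w, block)]
    return np.hstack(feats)

# Hypothetical usage with grayscale face images `faces` and identity labels `ids`:
# X = np.vstack([image_features(f) for f in faces])
# clf = SVC(kernel="rbf").fit(X, ids)
```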
Steganography is the art of preventing the detection of hidden messages. Many steganographic algorithms have been proposed, and a major portion of them targets image steganography because images have a high level of redundancy. This paper proposes an image steganography technique that uses a dynamic threshold derived from discrete cosine transform coefficients. After the green and blue channels of the cover image are divided into 1×3-pixel blocks, the method checks whether any bits of a green-channel block are less than or equal to the threshold; if so, secret bits are stored in the corresponding blue-channel block. To increase security, not all bits in the chosen block are used to store the secret bits. First, storage begins in the center …
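Since the abstract is truncated, the following is only a loose sketch of the general shape of such a scheme (a DCT-derived threshold, the green channel as a selector, secret bits placed in the blue channel). The threshold rule and bit placement here are assumptions, not the authors' exact method.

```python
# Rough illustrative sketch; the exact selection and placement rules are assumed.
import numpy as np
from scipy.fftpack import dct

def embed(cover_rgb, secret_bits):
    img = cover_rgb.copy()
    green, blue = img[..., 1].astype(float), img[..., 2]
    # Assumed dynamic threshold: mean absolute 2-D DCT coefficient of the green channel.
    threshold = np.mean(np.abs(dct(dct(green, axis=0, norm="ortho"), axis=1, norm="ortho")))
    bits = iter(secret_bits)
    for r in range(img.shape[0]):
        for c in range(0, img.shape[1] - 2, 3):        # 1x3-pixel blocks
            if green[r, c:c + 3].min() <= threshold:   # green block selects the embedding site
                try:
                    bit = next(bits)
                except StopIteration:
                    return img                         # all secret bits embedded
                # Store one bit in the LSB of the centre pixel of the blue block (assumed rule).
                blue[r, c + 1] = (blue[r, c + 1] & 0xFE) | bit
    return img
```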
As a result of the entry of multinational companies into Iraq to invest in joint projects, through agreements and contracts for important and strategic projects, Iraq can obtain the necessary funds and the varied expertise that characterize the foreign partners and that Iraq currently needs. However, the accounting treatments stipulated in the unified accounting system are not applied, there is no local accounting basis, and the participation contracts do not specify the accounting methods for these projects, even though such bases are what would enable auditors in the public sector to rely on them. The research paper therefore deals with studying an …