This paper applies principal component analysis (PCA), a dimensionality-reduction method based on linear combinations, to digital image processing and analysis. PCA is a statistical technique that shrinks a multivariate data set of inter-correlated variables into a data set of uncorrelated linear combinations, while ensuring the least possible loss of useful information. The method was applied to a group of satellite images of an area in the province of Basra covering the confluence of the Tigris and Euphrates rivers at the Shatt al-Arab. When selecting the best imaging band, taken as the one with the highest eigenvalue, the fourth image band proved best under the PCA method. The application of PCA, which depends on the eigenvalues, gave high and accurate results in determining the best image among the six image bands. The fourth band, which has the highest eigenvalue, was found to be the best, indicating that it contains the most important independent properties in the images, which are used for analyses such as isolating water areas, agricultural areas, and soil types.
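The band-ranking idea can be sketched as follows. This is an illustrative example on synthetic data, not the authors' pipeline: six random "bands" are generated, the inter-band covariance matrix is eigendecomposed, and the band loading most heavily on the first principal component is reported. Band 4 is given the largest variance here only to mirror the study's outcome.

```python
import numpy as np

# Synthetic 6-band "image"; band 4 (index 3) is given the largest
# variance purely for illustration.
rng = np.random.default_rng(0)
h, w, n_bands = 64, 64, 6
bands = rng.normal(size=(h, w, n_bands))
bands[..., 3] *= 5.0

X = bands.reshape(-1, n_bands)          # pixels x bands
X = X - X.mean(axis=0)                  # center each band
cov = np.cov(X, rowvar=False)           # 6 x 6 inter-band covariance
eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # re-sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The first principal component captures the most variance; the band
# loading most heavily on it is taken as the most informative.
best_band = int(np.argmax(np.abs(eigvecs[:, 0]))) + 1
print("eigenvalues:", np.round(eigvals, 2))
print("most informative band:", best_band)
```

On this toy data the largest eigenvalue dominates and the loading picks out band 4, mirroring the paper's finding that the highest-eigenvalue band carries the most independent information.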
Video steganography has become a popular option for protecting secret data from hacking attempts and common attacks on the internet. However, when whole video frames are used to embed secret data, visual distortion may result. This work attempts to hide a sensitive secret image inside the moving objects of a video by separating each object from the background of its frame, then selecting and arranging the objects by size for embedding. The XOR technique with reversed bits is applied between the secret-image bits and the detected moving-object bits during embedding. The proposed method provides greater security and imperceptibility because only the moving objects are used for embedding, making the hidden image difficult to notice.
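One plausible form of the XOR-with-reversed-bits step can be sketched as below. This is a hedged illustration, not the paper's exact scheme: a secret byte is bit-reversed, XOR-ed against a key bit drawn from each cover pixel, and written into the LSBs of eight moving-object pixels; extraction inverts the process.

```python
import numpy as np

def reverse_bits(b: int) -> int:
    """Reverse the bit order of one byte, e.g. 0b00000001 -> 0b10000000."""
    return int(f"{b:08b}"[::-1], 2)

def embed_byte(obj_pixels: np.ndarray, secret: int) -> np.ndarray:
    """Embed one secret byte into the LSBs of 8 moving-object pixels,
    XOR-ing the reversed secret bits with each pixel's second-lowest bit."""
    out = obj_pixels.copy()
    rev = reverse_bits(secret)
    for i in range(8):
        s_bit = (rev >> (7 - i)) & 1
        key = (out[i] >> 1) & 1             # key bit from the cover itself
        out[i] = (out[i] & 0xFE) | (s_bit ^ key)
    return out

def extract_byte(stego_pixels: np.ndarray) -> int:
    """Recover the secret byte by repeating the XOR and un-reversing."""
    rev = 0
    for i in range(8):
        key = (stego_pixels[i] >> 1) & 1
        rev = (rev << 1) | (int(stego_pixels[i] & 1) ^ int(key))
    return reverse_bits(rev)
```

Because only the LSB of each object pixel changes, the visual impact stays confined to the moving object, which is the source of the method's imperceptibility claim.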
In this paper, we investigate and characterize the effects of multi-channel operation and rendezvous protocols on the connectivity of dynamic spectrum access networks using percolation theory. In particular, we focus on the scenario where the secondary nodes have plenty of vacant channels to choose from, a phenomenon we define as channel abundance. To cope with the existence of multiple channels, we use two types of rendezvous protocols: naive ones, which do not guarantee a common channel, and advanced ones, which do. We show that, with greater channel abundance, even with either type of rendezvous protocol, it becomes difficult for two nodes to agree on a common channel, so they potentially remain invisible to each other. We model this using percolation theory.
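The channel-abundance effect under a naive protocol can be illustrated with a toy simulation (this is not the paper's percolation model): if each node independently picks one of N vacant channels uniformly at random, the chance of meeting on a common channel is 1/N, which shrinks as N grows.

```python
import random

def rendezvous_prob(n_channels: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the chance two nodes pick the same channel
    under a naive (uniform, uncoordinated) rendezvous protocol."""
    hits = sum(random.randrange(n_channels) == random.randrange(n_channels)
               for _ in range(trials))
    return hits / trials

random.seed(42)
print(rendezvous_prob(2))   # close to 1/2
print(rendezvous_prob(20))  # close to 1/20
```

The estimate falls roughly as 1/N, capturing why abundant channels make two nodes likelier to stay mutually invisible without an advanced rendezvous guarantee.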
Nowadays, people's expression on the internet is no longer limited to text; with the rise of short video in particular, large amounts of modal data such as text, pictures, audio, and video have emerged. Compared to single-modality data, multi-modal data always contains far more information, and mining it can help computers better understand human emotional characteristics. However, because multi-modal data shows pronounced dynamic time-series features, the fusion process must resolve the dynamic correlations within a single modality and between different modalities in the same application scene. To solve this problem, this paper proposes a feature extraction framework.
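For orientation, the simplest fusion baseline that such a framework improves upon can be sketched as weighted concatenation of per-modality feature vectors. This is a hypothetical minimal example, not the paper's framework, and the modality names and weights are assumptions.

```python
import numpy as np

def fuse(features: dict, weights: dict) -> np.ndarray:
    """Naive late fusion: scale each modality's feature vector and
    concatenate in a fixed (sorted-by-name) order."""
    return np.concatenate([weights[m] * features[m] for m in sorted(features)])

feats = {"text": np.array([1.0, 1.0]),
         "audio": np.array([2.0, 2.0])}
w = {"text": 0.5, "audio": 1.0}
fused = fuse(feats, w)
```

Such static concatenation ignores exactly the intra- and inter-modality dynamic correlations the paper targets, which is why a dedicated feature extraction framework is needed.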
In this paper, we studied the scheduling of jobs on a single machine. Each of n jobs is to be processed without interruption and becomes available for processing at time zero. The objective is to find a processing order of the jobs that minimizes the sum of maximum earliness and maximum tardiness. Because this problem minimizes both earliness and tardiness values, the model corresponds to a just-in-time production system. Our lower bound depends on decomposing the problem into two subproblems. We present a novel heuristic approach to find a near-optimal solution for the problem; it depends on finding efficient solutions for two problems, the first of which minimizes total completion time.
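The objective can be made concrete on a hypothetical 3-job instance (the processing times and due dates below are assumptions, not the paper's data): evaluate Emax + Tmax for a sequence, and brute-force the optimal order for comparison.

```python
import itertools

def emax_plus_tmax(seq, p, d):
    """Sum of maximum earliness and maximum tardiness for a job sequence,
    given processing times p and due dates d."""
    t, emax, tmax = 0, 0, 0
    for j in seq:
        t += p[j]                   # completion time C_j
        emax = max(emax, d[j] - t)  # earliness E_j = max(d_j - C_j, 0)
        tmax = max(tmax, t - d[j])  # tardiness T_j = max(C_j - d_j, 0)
    return emax + tmax

p, d = [2, 3, 1], [3, 6, 2]  # processing times and due dates
best = min(itertools.permutations(range(3)),
           key=lambda s: emax_plus_tmax(s, p, d))
```

Brute force is only feasible for tiny n; the value of a heuristic such as the paper's lies in approaching this optimum without enumerating all n! sequences.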
In this research, a multi-objective stochastic aggregate production planning model was built for Al-Mansour General Company, using data with stochastic demand under changing market conditions and an uncertain environment, with the aim of drawing up robust production plans. The analysis derives insights on management issues: regular and overtime labour costs, inventory holding costs, and good policy choice under medium and optimistic scenarios. The stochastic model adopted two objective functions, a total (core) cost function and an income function, and was prioritized and compared with deterministic models having a single objective function; the results supported the two-phase stochastic model.
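The cost side of such a model can be sketched by evaluating a fixed production plan against demand scenarios. All numbers and cost rates below are hypothetical, not company data: the sketch prices production, inventory holding, and shortages, then takes the probability-weighted expectation.

```python
def expected_cost(plan, scenarios, c_prod=5.0, c_hold=1.0, c_short=4.0):
    """Expected total cost of an aggregate production plan over
    (probability, demand-per-period) scenarios. Shortages are lost
    sales (no backlogging) in this simplified sketch."""
    total = 0.0
    for prob, demand in scenarios:
        inv, cost = 0.0, c_prod * sum(plan)
        for prod, dem in zip(plan, demand):
            inv += prod - dem
            if inv >= 0:
                cost += c_hold * inv        # carry inventory forward
            else:
                cost += c_short * (-inv)    # penalize the shortfall
                inv = 0.0                   # lost sales, no backlog
        total += prob * cost
    return total

plan = [100, 120]
scenarios = [(0.5, [90, 110]), (0.5, [110, 130])]
```

A full model would optimize the plan (and a second, income objective) over such scenarios rather than merely evaluating one candidate.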
A fast-moving infrared excess source (G2), which is widely interpreted as a core-less gas and dust cloud, approaches Sagittarius A* (Sgr A*) on a presumably elliptical orbit. VLT…
Document source identification in printer forensics involves determining the origin of a printed document from characteristics such as the printer model, serial number, defects, or unique printing artifacts. This process is crucial in forensic investigations, particularly in cases involving counterfeit documents or unauthorized printing. However, consistent pattern identification across various printer types remains challenging, especially when efforts are made to alter printer-generated artifacts. Machine learning models are often used in these tasks, but selecting discriminative features while minimizing noise is essential. Traditional KNN classifiers require careful selection of distance metrics to capture relevant printing artifacts.
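Why metric choice matters for KNN can be shown with a minimal classifier where the distance function is a swappable parameter. The feature vectors below are synthetic stand-ins, not real printer-artifact features.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, metric="euclidean"):
    """Classify x by majority vote among its k nearest training points,
    under a selectable distance metric."""
    if metric == "euclidean":
        d = np.linalg.norm(X_train - x, axis=1)
    elif metric == "manhattan":
        d = np.abs(X_train - x).sum(axis=1)
    else:
        raise ValueError(f"unknown metric: {metric}")
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

Swapping `metric` can reorder the neighbor set and flip the vote, which is why printer-forensics pipelines must match the metric to the structure of their artifact features.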