Watermarking can be defined as the process of embedding special, intentional, and reversible information in important secure files to protect the ownership or content of a cover file, here based on the proposed singular value decomposition (SVD) watermark. The proposed digital watermarking method has a very large domain for constructing the final number, which protects the watermark from collisions. The cover file is the important image that needs to be protected. The hidden watermark is a unique number extracted from the cover file through a series of related, successive operations, starting by dividing the original image into four parts of unequal size. Each of the four parts is treated as a separate matrix, SVD is applied to it, and the diagonal (singular-value) matrix is selected to compute its norm. The four norms are then processed to produce one unique number used as the watermark. In future work, this number can be extended by exploiting features other than the SVD process, so as to construct two watermark numbers, each with its own methodology, and thereby avoid certain challenges and changes in the transformation process.
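The pipeline above can be sketched in a few lines. One useful identity makes the sketch short: the Frobenius norm of a matrix equals the norm of its SVD singular-value (diagonal) matrix, i.e. ||A||_F = ||Σ||_F, so the norms of the four blocks can be computed directly without an explicit SVD call. The split point and the fusion rule (a scaled sum rounded to an integer) are illustrative assumptions, not the paper's exact formula:

```python
import math

def block_norms(image, split_row, split_col):
    """Split a 2-D image (list of lists) into four unequal parts and
    return the Frobenius norm of each part.  Since ||A||_F equals the
    norm of the SVD diagonal matrix, the explicit SVD can be skipped."""
    parts = [
        [row[:split_col] for row in image[:split_row]],
        [row[split_col:] for row in image[:split_row]],
        [row[:split_col] for row in image[split_row:]],
        [row[split_col:] for row in image[split_row:]],
    ]
    return [math.sqrt(sum(v * v for r in p for v in r)) for p in parts]

def watermark_number(norms, scale=1000):
    """Fuse the four norms into one integer watermark.
    The fusion rule (scaled sum) is an assumed placeholder."""
    return int(sum(norms) * scale)
```

For a 4x4 image split at row 1 and column 2, the four block norms collapse into a single integer that serves as the watermark.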
Chaotic systems have been proven useful and effective for cryptography. In this work, a new Feistel cipher based on chaotic systems and the Feistel network structure is proposed, with a dynamic secret key size that depends on the message size. In classical Feistel-based ciphers, of which the Data Encryption Standard (DES) is the common example, the confusion and diffusion processes use permutation choice boxes and substitution choice boxes that are generated once and are therefore static, whereas the suggested system derives them dynamically from chaotic maps, and is called
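The idea of driving a Feistel network with a chaotic map can be sketched minimally as follows. The logistic map, its parameter r = 3.99, the eight rounds, and the byte-level round function are illustrative assumptions, not the paper's actual design; the sketch only shows how chaos-derived round keys plug into the invertible Feistel structure:

```python
def logistic_keystream(x0, n, r=3.99):
    """Derive n round keys from the logistic map x <- r*x*(1-x).
    Map parameter and key derivation are illustrative choices."""
    keys, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        keys.append(int(x * 256) & 0xFF)
    return keys

def feistel_encrypt(left, right, keys):
    """One byte per half; F is a simple keyed mix (an assumption)."""
    for k in keys:
        left, right = right, left ^ ((right + k) & 0xFF)
    return left, right

def feistel_decrypt(left, right, keys):
    """Feistel structure guarantees inversion with reversed keys."""
    for k in reversed(keys):
        left, right = right ^ ((left + k) & 0xFF), left
    return left, right
```

Because the Feistel structure is invertible regardless of the round function, decryption simply replays the chaos-derived keys in reverse order.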
The study seeks to determine the levels of the credit structure (the independent variable) based on its components (loans, credit distribution, and other facilities) in order to obtain eight patterns of bank credit structure, for the purpose of assessing the relationship between changes in the level of each pattern (increase or decrease) and their reflection in maximizing the value of the bank (the adopted dependent variable, measured with the approximate equation of simple Tobin's Q). The aim is to determine the pattern that achieves the highest bank value, so it can be exploited in management, planning, and control by identifying the strengths and weaknesses of the historical distribution of facilities. The sample of the
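The abstract refers to an approximate equation of simple Tobin's Q. A widely used approximation (due to Chung and Pruitt) divides the market value of equity plus the book value of debt by the book value of total assets; whether the study uses exactly this form is an assumption, so the sketch below is illustrative only:

```python
def simple_tobins_q(market_value_equity, book_value_debt, total_assets):
    """Approximate (simple) Tobin's Q: (MVE + debt) / total assets.
    This is the common Chung-Pruitt approximation; the study's exact
    formulation may differ."""
    return (market_value_equity + book_value_debt) / total_assets
```

A value above 1 suggests the market values the bank above the book value of its assets, which is how the measure captures "maximizing the value of the bank."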
Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields related to text mining and natural language processing, especially with the large daily increase in the volume of textual data produced. Traditional approaches that calculate the degree of similarity between two texts from the words they share do not perform well on short texts, because two similar texts may be written in different terms through the use of synonyms. As a result, short texts should be compared semantically. This paper presents a semantic similarity measurement method between texts that combines knowledge-based and corpus-based semantic information to build a semantic network that repre
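The combination of corpus-based and knowledge-based signals can be sketched as a weighted mix of a bag-of-words cosine score and a synonym-aware overlap score. The tiny synonym table (standing in for a knowledge source such as WordNet) and the mixing weight alpha are hypothetical, not the paper's method:

```python
import math
from collections import Counter

# Hypothetical synonym table standing in for a knowledge source.
SYNONYMS = {"car": {"automobile"}, "automobile": {"car"}}

def cosine(a, b):
    """Corpus-style cosine similarity over bag-of-words vectors."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def knowledge_overlap(a, b):
    """Fraction of words in a matching b directly or via a synonym."""
    hits = sum(1 for t in a if t in b or SYNONYMS.get(t, set()) & set(b))
    return hits / len(a) if a else 0.0

def semantic_similarity(a, b, alpha=0.5):
    """Weighted combination of the two signals (alpha is assumed)."""
    return alpha * cosine(a, b) + (1 - alpha) * knowledge_overlap(a, b)
```

For "the car is fast" versus "the automobile is fast", pure word overlap misses the synonym pair, while the knowledge-based component recovers it, which is exactly the weakness of word-overlap methods that the abstract describes.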
Automatic recognition of individuals is very important in the modern era, and biometric techniques have emerged as an answer to the problem of automatic individual recognition. This paper presents a pupil detection technique that combines simple morphological operations with the Hough Transform (HT). The circular region of the eye and pupil is segmented by the morphological filter together with the Hough Transform, and the local iris area is converted into a rectangular block for the purpose of calculating inconsistencies in the image. The method is implemented and tested on the Chinese Academy of Sciences (CASIA V4) iris image database of 249 persons and the IIT Delhi (IITD) iris
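The circle-finding step of the Hough Transform can be sketched as brute-force voting over candidate circles: every edge pixel votes for each candidate (centre, radius) it lies on, and the candidate with the most votes wins. The synthetic edge points, search grid, and tolerance below are assumptions for illustration; a real system would run this after morphological filtering of the eye image:

```python
import math

def hough_circle(edge_points, centers, radii, tol=0.75):
    """Score every candidate (cx, cy, r): an edge point votes when its
    distance to the centre is within tol of r; return the best circle."""
    best, best_votes = None, -1
    for cx, cy in centers:
        for r in radii:
            votes = sum(
                1 for x, y in edge_points
                if abs(math.hypot(x - cx, y - cy) - r) < tol
            )
            if votes > best_votes:
                best, best_votes = (cx, cy, r), votes
    return best

# Synthetic "pupil" edge: a rasterised circle of radius 5 about (10, 10).
points = {
    (round(10 + 5 * math.cos(t / 10)), round(10 + 5 * math.sin(t / 10)))
    for t in range(63)
}
centers = [(x, y) for x in range(8, 13) for y in range(8, 13)]
found = hough_circle(points, centers, range(3, 8))
```

The accumulator-based formulation used in practice is equivalent but faster; the brute-force version above keeps the voting idea visible.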
In regression testing, test case prioritization (TCP) is a technique for ordering the available test cases. TCP techniques can improve fault detection performance, which is measured by the average percentage of fault detection (APFD). History-based TCP is one family of TCP techniques that considers past execution data to prioritize test cases. Assigning equal priority to several test cases is a common problem for most TCP techniques; however, this problem has not been explored in history-based TCP techniques. To resolve such ties in regression testing, most researchers resort to random ordering of the test cases. This study aims to investigate equal priority in history-based TCP techniques. The first objective is to implement
Rhetoric is concerned with the expressive emphasis of things: the science of the statement treats graphic images and rhetorical imagery; the science of embellishment deals with the study of verbal and semantic improvements; and the science of meanings covers everything related to compositions and styles. This vocabulary has entered the field of art and its branches to a large extent, especially the art of design, because it plays a major role in carrying many functions with deep meaning, given its comprehensiveness over a multiplicity of real meanings characterized by suggestiveness and significance that refer to the recipient. The discovery of the intern
Integrated project delivery (IPD) collaboratively applies the skills and knowledge of all participants to optimize project results, increase owner value, decrease waste, and maximize efficiency during the design, fabrication, and construction processes. This study aims to determine the IPD criteria that positively impact value engineering. To do this, the study considered 9 main criteria, classified according to the PMP classification that covers all project phases, and 183 sub-criteria obtained from the theoretical study and expert interviews (fieldwork). The SPSS (V26) program was used to rank the main criteria and sub-criteria from top to bottom according to their values of the Relative Importance In
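The ranking measure the abstract refers to (the Relative Importance Index, RII) is conventionally computed as RII = ΣW / (A × N), where W are the Likert-scale scores given by respondents, A is the highest weight on the scale, and N is the number of respondents. A minimal sketch, assuming a 5-point scale (the study's scale may differ):

```python
def relative_importance_index(ratings, highest_weight=5):
    """Relative Importance Index for one criterion.

    ratings        -- Likert-scale scores given by the respondents
    highest_weight -- top of the scale (5 is an assumption here)
    RII = sum(W) / (A * N), bounded in (0, 1]."""
    return sum(ratings) / (highest_weight * len(ratings))
```

Criteria are then ranked by sorting their RII values in descending order, which is the "top to bottom" ordering the abstract describes.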
Regarding computer system security, intrusion detection systems are fundamental components for discriminating attacks at an early stage. They monitor and analyze network traffic, looking for abnormal behaviors or attack signatures to detect intrusions early. However, many challenges arise in developing a flexible and efficient network intrusion detection system (NIDS) that achieves a high detection rate on unforeseen attacks. In this paper, a deep neural network (DNN) approach is proposed for an anomaly-detection NIDS. Dropout is the regularization technique used with the DNN model to reduce overfitting. The experimental results were obtained on the NSL-KDD dataset. A SoftMax output layer was used with a cross-entropy loss funct
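The two training components named above, dropout and a softmax output with cross-entropy loss, can be sketched in isolation. This is not the paper's architecture, just the standard definitions: inverted dropout zeroes activations with probability `rate` and rescales survivors so the expected activation is unchanged, and softmax plus cross-entropy turns logits into a classification loss:

```python
import math
import random

def dropout(vector, rate, rng):
    """Inverted dropout: zero each activation with probability `rate`
    and scale survivors by 1/(1-rate) to preserve the expectation."""
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in vector]

def softmax(logits):
    """Numerically stable softmax (subtract the max before exp)."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, true_class):
    """Cross-entropy loss for one example with a hard label."""
    return -math.log(probs[true_class])
```

At inference time dropout is disabled; the rescaling during training is what makes that switch statistically consistent.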
Recommender systems are tools for making sense of the huge amount of data available in the internet world. Collaborative filtering (CF) is one of the knowledge discovery methods used successfully in recommendation systems. Memory-based collaborative filtering emphasizes using facts about present users to predict new items for the target user. Similarity measures are the core operations in collaborative filtering, and prediction accuracy depends mostly on the similarity calculations. In this study, a combination of weighted parameters and traditional similarity measures is used to calculate the relationships among users over the MovieLens dataset rating matrix. The advantages and disadvantages of each measure are spotted. From the study, a n
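A memory-based user-user similarity of the kind described can be sketched over MovieLens-style ratings stored as dicts mapping item id to rating. The weighted variant below uses significance weighting, damping similarities backed by few co-rated items; the threshold of 5 is an illustrative assumption, not the study's parameter:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def weighted_sim(u, v, threshold=5):
    """Significance weighting: damp similarities that rest on few
    co-rated items (threshold is an illustrative assumption)."""
    common = len(set(u) & set(v))
    return cosine_sim(u, v) * min(common, threshold) / threshold
```

Two users with identical ratings on only two shared items get a perfect cosine score but a much lower weighted score, which is the kind of trade-off between measures the study examines.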