Symmetric cryptography forms the backbone of secure data communication and storage by relying on the strength and randomness of cryptographic keys. Strong, random keys increase complexity, enhance the overall robustness of cryptographic systems, and resist various attacks. The present work proposes a hybrid model based on the Latin square matrix (LSM) and subtractive random number generator (SRNG) algorithms for producing random keys. The hybrid model enhances the security of the cipher key against different attacks and increases the degree of diffusion. Different key lengths can also be generated by the algorithm without compromising security. The model comprises two phases. The first phase generates a seed value by producing a random set of n key numbers via Donald E. Knuth's SRNG algorithm (the subtractive method). The second phase uses the output key (seed value) from the previous phase as input to the Latin square matrix (LSM) to formulate a new random key. To increase the complexity of the generated key, it is XORed with another new random key of the same length that satisfies Shannon's principles of confusion and diffusion. Four test keys for each of the 128-, 192-, 256-, 512-, and 1024-bit lengths are used to evaluate the strength of the proposed model. The experimental results and security analyses revealed that all test keys passed the National Institute of Standards and Technology (NIST) statistical tests and had high entropy values exceeding 0.98. The key length of the proposed model for n bits is 25*n, which is large enough to withstand brute-force attacks. Moreover, the generated keys are very sensitive to the initial values, which increases the complexity against different attacks.
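The abstract describes the two phases only at a high level; the sketch below is a minimal Python illustration of the same flow (subtractive stream, seed-dependent Latin square, then an XOR with a second random key). The table-seeding constant, lag handling, square order, and byte mapping are assumptions made for illustration, not the paper's actual parameters.

```python
import secrets

def knuth_subtractive(seed, count, m=2**32):
    """Knuth-style subtractive generator (illustrative parameters):
    x[n] = (x[n-55] - x[n-24]) mod m over a 55-element lag table."""
    table = [(seed * (i + 1) * 2654435761) % m for i in range(55)]  # hypothetical seeding
    out = []
    for n in range(count):
        val = (table[n % 55] - table[(n + 31) % 55]) % m  # (n-24) mod 55 == (n+31) mod 55
        table[n % 55] = val
        out.append(val)
    return out

def latin_square(order, stream):
    """Canonical cyclic Latin square with its rows shuffled by the
    subtractive stream; permuting rows preserves the Latin property."""
    rows = sorted(range(order), key=lambda i: (stream[i], i))  # seed-dependent row order
    return [[(r + j) % order for j in range(order)] for r in rows]

def generate_key(seed, bits=128, order=16):
    stream = knuth_subtractive(seed, order)      # phase 1: seed material
    square = latin_square(order, stream)         # phase 2: LSM-derived key material
    flat = bytes(v % 256 for row in square for v in row)
    raw = int.from_bytes(flat, "big") % (1 << bits)
    mask = secrets.randbits(bits)                # second random key of the same length
    return raw ^ mask                            # XOR step for confusion/diffusion

key = generate_key(seed=123456789, bits=128)
print(f"{key:032x}")
```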
A substantial portion of today's multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge in meeting users' information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. However, accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to solving the TC problem with FS, such as optimization techniques.
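The review surveys FS for TC rather than providing code; as a concrete point of reference, the following is a minimal scikit-learn sketch of the standard filter-style pipeline (TF-IDF features, a chi-square filter that drops non-informative terms, then a classifier). The toy corpus and the choice of k are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; any labelled text collection would do.
docs = [
    "the rocket reached low earth orbit",
    "the probe landed on the lunar surface",
    "the engine torque and tyre grip improved",
    "the sedan fuel economy and brakes were tested",
]
labels = [0, 0, 1, 1]  # 0 = space, 1 = autos

pipeline = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    SelectKBest(chi2, k=5),   # keep the 5 most class-informative terms
    MultinomialNB(),
)
pipeline.fit(docs, labels)
print(pipeline.predict(["the orbit of the new probe"]))
```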
Crime is an unlawful activity of any kind and is punished by law. Crime affects a society's quality of life and economic development. With the large rise in crime globally, there is a need to analyze crime data to bring down the crime rate. Such analysis encourages the police and the public to take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid in crime pattern analysis and thus support the Boston police department's crime prevention efforts. The geographical location factor has been adopted in our model because it is an influential factor in several situations, whether travelling to a specific area or living in it.
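The abstract names the geographical location factor but not a concrete model; below is a minimal, hypothetical sketch of how latitude/longitude and time-of-day fields from an incident log could feed a classifier. The column names and values are made up and do not come from the Boston dataset.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical incident records; real crime logs carry similar fields.
df = pd.DataFrame({
    "lat":     [42.36, 42.35, 42.33, 42.37, 42.34, 42.36, 42.32, 42.38],
    "long":    [-71.06, -71.07, -71.10, -71.05, -71.09, -71.08, -71.11, -71.04],
    "hour":    [23, 14, 2, 18, 3, 12, 1, 20],
    "offense": ["theft", "assault", "theft", "theft",
                "assault", "theft", "assault", "theft"],
})

X, y = df[["lat", "long", "hour"]], df["offense"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(model.score(X_test, y_test))
```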
Biometrics represent the most practical method for swiftly and reliably verifying and identifying individuals based on their unique biological traits. This study addresses the increasing demand for dependable biometric identification systems by introducing an efficient approach to automatically recognize ear patterns using Convolutional Neural Networks (CNNs). Despite the widespread adoption of facial recognition technologies, the distinct features and consistency inherent in ear patterns provide a compelling alternative for biometric applications. Employing CNNs in our research automates the identification process, enhancing accuracy and adaptability across various ear shapes and orientations. The ear, being visible and easily captured in …
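The abstract does not specify the network, so the snippet below is only a small illustrative CNN in PyTorch for ear images; the 64x64 grayscale input, the layer sizes, and the number of subjects are assumptions rather than the architecture evaluated in the study.

```python
import torch
import torch.nn as nn

class EarCNN(nn.Module):
    """Small illustrative CNN; layer sizes and the 64x64 grayscale input
    are assumptions, not the architecture from the paper."""
    def __init__(self, num_subjects=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_subjects)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a dummy batch of 64x64 ear images.
model = EarCNN(num_subjects=10)
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 10])
```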
Face recognition is required in various applications, and major progress has been witnessed in this area. Many face recognition algorithms have been proposed thus far; however, achieving high recognition accuracy and low execution time remains a challenge. In this work, a new scheme for face recognition is presented using hybrid orthogonal polynomials to extract features. The embedded image kernel technique is used to decrease the complexity of feature extraction, then a support vector machine is adopted to classify these features. Moreover, a fast overlapping-block processing algorithm for feature extraction is used to reduce the computation time. Extensive evaluation of the proposed method was carried out on two different face image …
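As a rough illustration of the overlapping-block idea, the sketch below slides a window with 50% overlap, reduces each block to a single statistic, and trains an SVM on the resulting vectors. For brevity, a plain block mean stands in for the paper's hybrid orthogonal-polynomial moments and embedded image kernel, so this is not the proposed feature extractor.

```python
import numpy as np
from sklearn.svm import SVC

def overlapping_block_features(img, block=8, step=4):
    """Slide an 8x8 window with 50% overlap and keep the mean of each block.
    A simplified stand-in for the paper's orthogonal-polynomial features."""
    h, w = img.shape
    feats = [img[i:i + block, j:j + block].mean()
             for i in range(0, h - block + 1, step)
             for j in range(0, w - block + 1, step)]
    return np.array(feats)

# Toy data: two "subjects" represented by random 32x32 face crops.
rng = np.random.default_rng(0)
X = np.array([overlapping_block_features(rng.random((32, 32)) + label)
              for label in (0, 1) for _ in range(10)])
y = np.repeat([0, 1], 10)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```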
This paper focuses on the optimization of drilling parameters using the Taguchi method to obtain the minimum surface roughness. Nine drilling experiments were performed on Al 5050 alloy using high-speed steel twist drills. Three drilling parameters (feed rate, cutting speed, and cutting tool) were used as control factors, and an L9 (3³) orthogonal array was specified for the experimental trials. The signal-to-noise (S/N) ratio and analysis of variance (ANOVA) were used to determine the optimum control factors that minimize the surface roughness. The results were analyzed with the aid of the statistical software package MINITAB-17. After the experimental trials, the tool diameter was found to be the most important factor …
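For reference, the smaller-the-better signal-to-noise ratio used in this kind of Taguchi analysis is S/N = -10·log10(mean(y²)); the snippet below computes it for one trial. The roughness readings are hypothetical, not values from the paper.

```python
import numpy as np

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(y^2)), used when lower roughness is better."""
    y = np.asarray(values, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra readings (um) from three repeats of one L9 trial.
print(round(sn_smaller_is_better([1.82, 1.75, 1.90]), 2))
```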