This study aims to enhance the RC5 algorithm to improve encryption and decryption speed on devices with limited power and memory resources. Such resource-constrained applications, ranging from wearables and smart cards to microscopic sensors, frequently operate in settings where traditional cryptographic techniques are impracticable because of their high computational overhead and memory requirements. The Enhanced RC5 (ERC5) algorithm integrates the PKCS#7 padding method to adapt effectively to varying data sizes. ERC5 was developed and tested on various file types and sizes, with encryption execution time, power consumption, and throughput as evaluation metrics. Empirical investigation reveals significant improvements in encryption speed, ranging from 50.90% to 64.18% for audio files and from 46.97% to 56.84% for image files, depending on file size; a substantial improvement of 59.90% is observed for data files of 1,500,000 KB. Partitioning larger files notably reduces encryption time, while smaller files gain only marginally from partitioning; certain file types benefit from both strategies. Across these metrics, ERC5 consistently outperforms the original RC5, exhibiting reduced power consumption and higher throughput and highlighting its multifaceted benefits in resource-constrained environments. Future research could explore ERC5 optimizations for different computing environments, its integration into real-time encryption scenarios, and its impact on other cryptographic operations and security protocols.
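For reference, the PKCS#7 padding scheme that ERC5 adopts to handle arbitrary data sizes can be sketched in a few lines. This is a generic illustration of the padding rule itself, not the authors' ERC5 implementation; the 8-byte block size is an assumption matching RC5's common 64-bit block.

```python
def pkcs7_pad(data: bytes, block_size: int = 8) -> bytes:
    # Pad so the length is a multiple of block_size; every pad byte
    # holds the pad length, so unpadding is unambiguous.
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len] * pad_len)

def pkcs7_unpad(padded: bytes) -> bytes:
    # The last byte tells us how many padding bytes to strip.
    pad_len = padded[-1]
    return padded[:-pad_len]
```

Note that an input whose length is already a block multiple still gains a full block of padding, which is what lets the receiver unpad deterministically.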
In low-latitude areas (latitude angle below 10°), the solar radiation entering the solar still increases as the cover slope approaches the latitude angle. However, the amount of condensed water that falls back toward the solar-still basin also increases in this case. Consequently, the solar-still yield is significantly decreased, and the accuracy of the prediction method is affected. This reduction in yield and prediction accuracy is inversely proportional to the time that the condensed water stays on the inner side of the condensing cover without collection, because more drops fall back into the basin of the solar still. Different numbers of scraper motions per hour (NSM), that is
In the present study, bis Schiff bases [I, II] were synthesized by the reaction of one mole of terephthalaldehyde with two moles of 2-amino-5-mercapto-1,3,4-thiadiazole or 4-aminobenzenethiol in absolute ethanol. Compounds [I, II] were then reacted with Na2CO3 in distilled H2O, and chloroacetic acid was added to yield compounds [III, IV]. O-chitosan derivatives [V, VI] were synthesized by the reaction of chitosan with compounds [III, IV] in acidic aqueous media according to the Fischer procedure. The grafted O-chitosan [V, VI] was blended with the synthetic polymer polyvinyl alcohol (PVA) to produce polymers [VII, VIII]; these polymers were then blended with nano gold or silver by u
An accurate assessment of pipe condition is required for effective management of trunk sewers. In this paper a semi-Markov model was developed and tested on the sewer dataset from the Zublin trunk sewer in Baghdad, Iraq, in order to evaluate the future performance of the sewer. To develop the model, the cumulative waiting-time distribution of the sewers in each condition was derived directly from the sewer condition-class and age data. Results showed that the semi-Markov model was inconsistent with the data according to the χ² test, and that the prediction error is due to a lack of data on sewer waiting times at each condition state, which can be addressed by using successive conditi
A substantial concern in exchanging confidential messages over the internet is transmitting information safely. For example, consumers and producers of digital products are keen to know that those products are genuine and can be distinguished from worthless ones. The science of data hiding can be defined as the technique of embedding data in image, audio, or video files in a way that meets the safety requirements. Steganography is a branch of data-concealment science that aims to reach a desired security level in the interchange of private commercial and military data. This research offers a novel steganography technique based on hiding data inside the clusters that result from fuzzy clustering. T
Graphite Coated Electrodes (GCE) based on molecularly imprinted polymers were fabricated for the selective potentiometric determination of risperidone (Ris). The molecularly imprinted (MIP) and non-imprinted (NIP) polymers were synthesized by bulk polymerization using Ris as a template, acrylic acid (AA) and acrylamide (AAm) as monomers, ethylene glycol dimethacrylate (EGDMA) as a cross-linker, and benzoyl peroxide (BPO) as an initiator. The imprinted and non-imprinted membranes were prepared using dioctyl phthalate (DOP) and dibutyl phthalate (DBP) as plasticizers in a PVC matrix, and the membranes were coated onto graphite electrodes. The MIP electrodes using
In this research, fuzzy nonparametric methods based on smoothing techniques were applied to real data from the Iraqi stock market, specifically data on the Baghdad Company for Soft Drinks for the year 2016 (1/1/2016–31/12/2016). A sample of 148 observations was obtained in order to construct a model of the relationship between the stock prices (low, high, modal) and the traded value. Comparing the goodness-of-fit (G.O.F.) criterion across the three techniques, the lowest value of this criterion was obtained for the K-nearest neighbour method with a Gaussian kernel function.
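To make the winning technique concrete, a K-nearest-neighbour smoother with a Gaussian kernel can be sketched as below. This is an illustrative, non-fuzzy simplification under assumed conventions (bandwidth set to the distance of the k-th neighbour); it is not the authors' exact fuzzy nonparametric formulation.

```python
import numpy as np

def knn_gaussian_smooth(x, y, x0, k=5):
    # Estimate y at x0 as a Gaussian-weighted average over the
    # k nearest neighbours of x0 in the sample x.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]             # indices of the k nearest points
    h = d[idx].max() or 1.0             # bandwidth: distance to k-th neighbour
    w = np.exp(-0.5 * (d[idx] / h) ** 2)
    return float(np.sum(w * y[idx]) / np.sum(w))
```

Closer neighbours receive larger Gaussian weights, so the estimate tracks local structure while still averaging out noise.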
Image compression is a serious issue in computer storage and transmission; it makes efficient use of the redundancy embedded within an image itself and may additionally exploit the limitations of human vision or perception to discard imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the second incorporates the near-lossless com
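The model-plus-residual idea behind predictive (polynomial) coding can be illustrated with the simplest possible predictor, the left neighbour of each pixel. This is a hedged, generic sketch of the concept only; the paper's actual model is a multiresolution polynomial predictor, not this one-tap version.

```python
import numpy as np

def predict_residual(img):
    # Model: each pixel is predicted by its left neighbour
    # (first column is predicted as 0). The residual is what
    # remains after subtracting the model, and is typically
    # small and highly compressible on smooth images.
    img = np.asarray(img, dtype=np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]
    return img - pred

def reconstruct(residual):
    # Inverting the left-neighbour predictor is a running
    # sum along each row.
    return np.cumsum(residual, axis=1)
```

The round trip is exact, which is why lossless and near-lossless schemes can be built by entropy-coding (or coarsely quantizing) only the residual.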
The direct electron-transfer behavior of hemoglobin immobilized onto a screen-printed carbon electrode (SPCE) modified with silver nanoparticles (AgNPs) and chitosan (CS) was studied in this work. Cyclic voltammetry and spectrophotometry were used to characterize the hemoglobin (Hb) bioconjugation with AgNPs and CS. The modified electrode showed quasi-reversible redox peaks with a formal potential of -0.245 V versus Ag/AgCl in 0.1 M phosphate buffer solution (PBS), pH 7, at a scan rate of 0.1 V s-1. The charge-transfer coefficient (α) was 0.48 and the apparent electron-transfer rate constant (Ks) was 0.47 s-1. The electrode was used as a hydrogen peroxide biosensor with a linear response over 3 to 240 µM and a detection li
Most medical datasets suffer from missing data, due to the expense of some tests or to human error while recording them. This issue affects the performance of machine-learning models because the values of some features will be missing. Therefore, specific methods are needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, support vector machine (SVM), K-nearest neighbour (KNN), and Naïve B
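For context, the simplest imputation baseline against which metaheuristic imputers such as ISSA are usually compared is column-mean imputation. The sketch below is illustrative only and is not the salp-swarm procedure itself; the function name is invented for the example.

```python
import numpy as np

def mean_impute(X):
    # Replace each NaN with the mean of the observed (non-NaN)
    # values in its column.
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X
```

A metaheuristic imputer instead searches for fill-in values that optimize a downstream objective (here, classifier performance), which is why ISSA can outperform a fixed statistic like the mean.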