The Internet provides vital communication between millions of individuals and is increasingly used as a commerce tool; security is therefore essential for protecting communications and vital information. Cryptographic algorithms are central to this field, and brute-force attacks are the major threat against the Data Encryption Standard, which motivates an improved structure for the algorithm. This paper proposes a new, improved Data Encryption Standard structure intended to make the cipher more secure and resistant to attack. The improvement builds on standard DES with a new two-key generation scheme: the key-generation system produces two keys, one simple and the other encrypted with an improved Caesar algorithm. The encryption algorithm uses the simple key 1 in the first eight rounds and the encrypted key 2 from round 9 to round 16. The results show that the improved structure increases DES encryption security, performance, and key-search complexity compared with standard DES, so that differential cryptanalysis cannot be performed on the ciphertext.
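The two-key idea described in this abstract can be sketched as follows. The position-dependent byte shift in `caesar_shift` is a hypothetical stand-in for the paper's "improved Caesar" transformation, and the sample key is a classic DES test vector, not a value from this work:

```python
def caesar_shift(key: bytes, shift: int = 3) -> bytes:
    """Hypothetical 'improved Caesar': position-dependent shift modulo 256."""
    return bytes((b + shift + i) % 256 for i, b in enumerate(key))

def round_key(round_no: int, key1: bytes, key2: bytes) -> bytes:
    """Rounds 1-8 use the simple key 1; rounds 9-16 use the encrypted key 2."""
    return key1 if round_no <= 8 else key2

key1 = bytes.fromhex("133457799BBCDFF1")  # a classic 64-bit DES test key
key2 = caesar_shift(key1)                 # key 2: Caesar-encrypted variant of key 1
```

A real implementation would feed `round_key(r, key1, key2)` into the standard DES key schedule for round `r`; the sketch only shows the key-selection logic.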
In this study, SnO2 nanoparticles were prepared from low-cost tin chloride (SnCl2·2H2O) and ethanol, with the addition of ammonia solution, by the sol-gel method, one of the lowest-cost and simplest techniques. The SnO2 nanoparticles were dried in a drying oven at 70 °C for 7 hours and then annealed in an oven at 200 °C for 24 hours. The structural, material, morphological, and optical properties of the synthesized SnO2 nanoparticles were studied using X-ray diffraction. The Scherrer expression was used to compute the nanoparticle sizes from the X-ray diffraction data, and the results needed closer scrutiny. The micro-strain indicates the broadening of the diffraction peaks for nano
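The Scherrer estimate mentioned above, D = Kλ / (β cos θ), can be computed directly from a peak's position and width. The numbers below (Cu Kα radiation, a 0.5° FWHM, and the SnO2 (110) reflection near 2θ ≈ 26.6°) are illustrative assumptions, not values reported in this study:

```python
import math

def scherrer_size(wavelength_nm: float, fwhm_deg: float,
                  two_theta_deg: float, k: float = 0.9) -> float:
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians; theta is half of 2-theta.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative: Cu K-alpha (0.15406 nm), 0.5 deg FWHM, SnO2 (110) near 26.6 deg
size_nm = scherrer_size(0.15406, 0.5, 26.6)
```

Note the limitation the abstract hints at: Scherrer broadening ignores micro-strain, which also widens peaks, so Williamson-Hall-style analysis is the usual follow-up when strain matters.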
The research aims to measure the efficiency of health-service quality in the province of Karbala using Data Envelopment Analysis (DEA) models for 2006. According to these models, the efficiency score ranges between zero and one. We estimate scale efficiency for the two orientation directions, input-oriented and output-oriented.
The results showed that, according to the input-oriented measure, the average scale efficiency in the province of Karbala is (0.975), while the output-oriented index averages (0.946).
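For context, scale efficiency in DEA is conventionally the ratio of the constant-returns (CRS) technical-efficiency score to the variable-returns (VRS) score; a minimal sketch of that ratio, with illustrative scores rather than figures from this study:

```python
def scale_efficiency(te_crs: float, te_vrs: float) -> float:
    """Scale efficiency = CRS technical efficiency / VRS technical efficiency.

    Both inputs are DEA scores in (0, 1], with TE_CRS <= TE_VRS by construction.
    """
    if not (0.0 < te_crs <= te_vrs <= 1.0):
        raise ValueError("expected 0 < TE_CRS <= TE_VRS <= 1")
    return te_crs / te_vrs

# Illustrative: a unit scoring 0.90 under CRS and 0.95 under VRS
se = scale_efficiency(0.90, 0.95)
```

A scale efficiency of 1 means the unit operates at its most productive scale size; the CRS and VRS scores themselves come from solving the DEA linear programs, which the sketch does not reproduce.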
Photonic crystal fiber interferometers (PCFIs) are widely used for sensing applications. This work presents a solid-core PCF-based Mach-Zehnder modal interferometer for sensing refractive index. The general sensor structure was formed by splicing a short length of PCF on both sides to conventional single-mode fiber (SMF-28). To apply modal-interferometer theory, a collapsing technique based on fusion splicing was used to excite higher-order modes (LP01 and LP11). A highly sensitive optical spectrum analyzer (OSA) was used to monitor and record the transmitted wavelength. This work studied a Mach-Zehnder interferometer refractive-index sensor based on a splice-point-tapered SMF-PCF-SMF structure. The relation between refractive-index sensitivity and tape
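For background, the two-mode (LP01/LP11) interference underlying this sensor follows the standard two-beam expression; the symbols below are the usual textbook ones, not values measured in this work:

```latex
% Transmitted intensity of a two-mode modal interferometer
I(\lambda) = I_{01} + I_{11}
  + 2\sqrt{I_{01} I_{11}}\,
    \cos\!\left(\frac{2\pi\,\Delta n_{\mathrm{eff}}\,L}{\lambda}\right),
\qquad
\lambda_m = \frac{\Delta n_{\mathrm{eff}}\,L}{m}
```

where Δn_eff is the effective-index difference between the LP01 and LP11 modes, L is the PCF length, and λ_m are the interference-dip wavelengths (integer m). An external refractive-index change perturbs Δn_eff and shifts the dips, which is the wavelength shift the OSA records.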
Three-dimensional (3D) reconstruction from images is a highly useful method of object regeneration in a photo-realistic way that can be used in many fields. In industry, it can be used to visualize cracks within alloys or walls. In medicine, it has been used as a 3D scanner to reconstruct human organs, such as the internal nose for plastic surgery or the ear canal for fabricating a hearing-aid device, among others. These applications need high-accuracy detail and measurement, which is the main issue to be taken into consideration; cost, movability, and ease of use should also be considered. This work has presented an approach for design and construc
In this paper, a compact genetic algorithm (CGA) is enhanced by integrating its selection strategy with a steepest-descent algorithm (SDA) as a local search method, giving I-CGA-SDA. This system is an attempt to avoid the large CPU time and computational complexity of the standard genetic algorithm. Here, the CGA dramatically reduces the number of bits required to store the population and converges faster. Consequently, the integrated system is used to optimize the maximum-likelihood function lnL(φ1, θ1) of the mixed model. Simulation results based on the MSE were compared with those obtained from the SDA and showed that the hybrid genetic algorithm (HGA) and I-CGA-SDA can give a good estimator of (φ1, θ1) for the ARMA(1,1) model. Anot
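A minimal steepest-descent local search of the kind hybridized here can be sketched as a generic gradient-descent loop. This is not the paper's exact SDA, and the quadratic test function standing in for -lnL is illustrative only:

```python
def steepest_descent(grad, x0, lr=0.1, iters=200):
    """Generic steepest-descent loop: x <- x - lr * grad(x)."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Illustrative stand-in objective f(x, y) = x^2 + y^2, gradient [2x, 2y]
best = steepest_descent(lambda p: [2 * p[0], 2 * p[1]], [1.0, -1.0])
```

In the hybrid scheme, a step like this refines the candidates that the CGA's selection produces, combining the CGA's cheap global search with fast local convergence.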
Finding a path solution in a dynamic environment is a challenge for robotics researchers; moreover, it is the main issue for autonomous robots and manipulators. A collision-free path for a robot in an environment with moving obstacles, such as objects, humans, animals, or other robots, is a real problem that needs to be solved. In addition, local minima and sharp edges are the most common problems in all path-planning algorithms. The main objective of this work is to overcome these problems by demonstrating robot path planning and obstacle avoidance using the D star (D*) algorithm based on Particle Swarm Optimization (PSO)
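The PSO component can be illustrated with a minimal particle-swarm loop minimizing a generic cost function (in path planning, the cost would score candidate waypoints). The inertia and cognitive/social coefficients below are common textbook defaults, not values from this work:

```python
import random

def pso_minimize(cost, dim=2, particles=20, iters=100, bounds=(-10.0, 10.0)):
    """Minimal global-best PSO: returns (best position, best cost)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [cost(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best so far
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In a D*-plus-PSO scheme, a loop like this would tune the waypoints or cost weights that D* then replans over as obstacles move; the sketch shows only the optimizer.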
The current research aims to identify the exploratory and confirmatory factor structure of the test-wiseness scale in a sample of Hama University students, using the descriptive method. The sample consists of (472) male and female students from the faculties of the University of Hama. Abu Hashem's 50-item test-wiseness scale (2008) has been used. The validity and reliability of the scale items have been verified, and six items have been deleted accordingly. The results of the first-order exploratory factor analysis have shown five acceptable factors: exam preparation, test time management, question-paper handling, answer-sheet handling, and revision. Moreover,
The aim of the research is to use the Data Envelopment Analysis (DEA) technique to evaluate the performance efficiency of the eight branches of the General Tax Authority located in Baghdad, namely Karrada, Karkh parties, Karkh Center, Dora, Bayaa, Kadhimiya, New Baghdad, and Rusafa. The inputs are the numbers of non-accountable taxpayers across the categories of professions and commercial business, deduction, transfer of property ownership, real estate, and tenders. The outputs are determined according to a checklist containing nine dimensions for assessing the efficiency with which the investigated branches invest their available resources T
Quantitative real-time polymerase chain reaction (RT-qPCR) has become a valuable molecular technique in biomedical research. The selection of suitable endogenous reference genes is necessary for normalization of target-gene expression in RT-qPCR experiments. The aim of this study was to determine the suitability of each of 18S rRNA and ACTB as internal-control genes for normalization of RT-qPCR data in some human cell lines transfected with small interfering RNA (siRNA). Four cancer cell lines (MCF-7, T47D, MDA-MB-231, and HeLa), along with HEK293 representing an embryonic cell line, were depleted of E2F6 using an E2F6-specific siRNA and compared with negative-control cells transfected with an siRNA not specific for any gene. Us
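Normalization against a reference gene such as 18S rRNA or ACTB is commonly done with the Livak 2^(-ΔΔCt) method; a minimal sketch with made-up Ct values (not data from this study):

```python
def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Livak 2^-(ddCt): normalize target Ct to the reference gene, then to control."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Made-up Ct values: target knocked down by siRNA vs. negative control
fc = fold_change(26.0, 15.0, 24.0, 15.0)  # ddCt = 2, i.e. 4-fold knockdown
```

The method assumes the reference gene's expression is unaffected by the treatment, which is exactly the suitability question the study asks of 18S rRNA and ACTB under siRNA transfection.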
Research on the automated extraction of essential data from electrocardiography (ECG) recordings has long been a significant topic. The main focus of digital processing is to measure the fiducial points that determine the beginning and end of the P, QRS, and T waves based on their waveform properties. The unavoidable noise during ECG data collection and inherent physiological differences among individuals make it challenging to identify these reference points accurately, resulting in suboptimal performance. This is done through several primary stages that rely on preliminary processing of the ECG electrical signal through a set of steps (preparing raw data and converting them into files tha
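Fiducial-point extraction of this kind usually begins with R-peak detection, from which the P, QRS, and T boundaries are searched locally. A minimal threshold-plus-refractory sketch follows; it is a simplification, not the paper's pipeline, and the threshold fraction and refractory period are illustrative defaults:

```python
def detect_r_peaks(signal, fs, threshold_frac=0.6, refractory_s=0.25):
    """Return sample indices of R-peak candidates.

    A local maximum qualifies if it exceeds threshold_frac of the global
    maximum and lies at least refractory_s seconds after the previous peak.
    """
    thr = threshold_frac * max(signal)
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(signal) - 1):
        if (signal[i] > thr and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks
```

Real detectors (e.g. Pan-Tompkins-style pipelines) add band-pass filtering, differentiation, and adaptive thresholds before a step like this, precisely to cope with the noise and inter-subject variability the abstract mentions.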