Neural cryptography addresses the problem of "key exchange" between two neural networks through the concept of mutual learning. The two networks exchange their outputs (in bits), and the key shared between the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
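Mutual-learning key exchange of this kind is typically realized with tree parity machines (TPMs). A minimal sketch of the synchronization loop, assuming small illustrative parameters (K hidden units, N inputs each, weight bound L) and a Hebbian update rule; none of these values is specified by the abstract above:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 4, 3  # hidden units, inputs per unit, weight bound (illustrative)

def tpm_output(w, x):
    """Tree parity machine: sign of each hidden unit; product is the output bit."""
    h = np.sign(np.sum(w * x, axis=1))
    h[h == 0] = -1                      # break ties toward -1
    return h, int(np.prod(h))

def hebbian_update(w, x, h, tau):
    """Update only the hidden units that agree with the network output."""
    for k in range(K):
        if h[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

# each party starts from independent random weights
wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))

steps = 0
while not np.array_equal(wA, wB) and steps < 100_000:
    x = rng.choice([-1, 1], (K, N))     # shared public input
    hA, tA = tpm_output(wA, x)
    hB, tB = tpm_output(wB, x)
    if tA == tB:                        # outputs agree: both parties update
        hebbian_update(wA, x, hA, tA)
        hebbian_update(wB, x, hB, tB)
    steps += 1

key = wA.flatten()  # synchronized weights serve as the shared key material
```

Once the weight matrices are identical, either party can derive the same key bits from them without the weights ever crossing the channel.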
In light of the pandemic that has swept the world, the use of e-learning in educational institutions has become an urgent necessity for continued communication of knowledge to students. Educational institutions can benefit from the free tools that Google provides, among them Google Classroom, which is characterized by its ease of use. However, the efficiency of using Google Classroom is affected by several variables that previous studies have not clearly examined, so this study aimed to investigate the use of Google Classroom as an e-learning management system and the factors affecting the performance of students and lecturers. The data for this study were collected from 219 faculty members and students at the College of Administra
In the present study, the effect of new cross-section fin geometries on overall thermal/fluid performance was investigated. The cross-sections comprised the original base geometries (triangular, square, circular, and elliptical pin fins) with extra exterior fins added along the sides of the original fins. The extra fins were rectangular (2 mm height, 4 mm width) and triangular (2 mm base, 4 mm height). The entropy generation minimization (EGM) method allows the combined effect of thermal resistance and pressure drop to be assessed through their simultaneous interaction with the heat sink. A general dimensionless expression for the entropy generation rate is obtained by con
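The abstract's dimensionless expression is truncated; for orientation, EGM analyses of pin-fin heat sinks commonly start from a total entropy generation rate that combines the heat-transfer and fluid-friction contributions (the symbols below are assumed for illustration, not taken from this paper):

```latex
\dot{S}_{\mathrm{gen}}
  \;=\; \underbrace{\frac{Q^{2}\, R_{hs}}{T_{a}^{2}}}_{\text{heat transfer}}
  \;+\; \underbrace{\frac{F_{D}\, U_{app}}{T_{a}}}_{\text{fluid friction}}
```

where $Q$ is the heat load, $R_{hs}$ the heat-sink thermal resistance, $T_a$ the ambient temperature, $F_D$ the total drag force, and $U_{app}$ the approach velocity. Minimizing this sum trades thermal resistance against pressure drop, which is the interaction the study describes.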
ST Alawi, NA Mustafa, Al-Mustansiriyah Journal of Science, 2013
In this paper, the reliability and maintenance scheduling of some medical devices were estimated from a single variable, the time variable (failure times), on the assumption that the failure times of all devices follow the same distribution (the Weibull distribution).
The distribution parameters for each device were estimated by the OLS method.
The main objective of this research is to determine the optimal time for preventive maintenance of medical devices. Two methods were adopted to estimate this optimal time. The first depends on the maintenance schedule, relying on information about the cost of maintenance and the cost of stopping work and acc
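OLS estimation of Weibull parameters is usually performed on the linearized CDF (median-rank regression). A sketch under that assumption; the paper's exact plotting positions and device data are not given here, so synthetic data with known parameters is used instead:

```python
import numpy as np

def weibull_ols(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) by OLS on the
    linearized CDF:  ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    # Bernard's median-rank approximation for the plotting positions
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(x, y, 1)   # slope = beta
    eta = np.exp(-intercept / beta)         # intercept = -beta*ln(eta)
    return beta, eta

# quick check on synthetic failure times with known parameters
rng = np.random.default_rng(1)
sample = rng.weibull(2.0, 500) * 100.0      # true beta = 2, eta = 100
beta_hat, eta_hat = weibull_ols(sample)
```

The recovered shape parameter then feeds directly into the reliability function R(t) = exp(-(t/eta)^beta) used to schedule preventive maintenance.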
The main objective of this work is to propose a new routing protocol for wireless sensor networks employed to serve IoT systems. The routing protocol has to adapt to different requirements in order to enhance the performance of IoT applications. Link quality, node depth, and energy are used as metrics for making routing decisions. Comparison with other protocols is essential to show the improvements achieved by this work, so protocols designed to serve the same purpose, such as AODV, REL, and LABILE, were chosen for comparison with the proposed protocol. To make the evaluation more integrative and holistic, some important features, such as actuation and mobility, were added and tested. These features are greatly required by some IoT applications and im
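The abstract names link quality, node depth, and energy as routing metrics but does not give the rule for combining them; a common approach is a normalized weighted cost function. A hypothetical sketch under that assumption; the weights, normalization, and field names below are illustrative, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    lqi: float      # link quality, normalized to [0, 1]
    depth: int      # hops to the sink
    energy: float   # residual energy, normalized to [0, 1]

# illustrative weights; the paper's actual weighting is not specified here
W_LQ, W_DEPTH, W_ENERGY = 0.4, 0.3, 0.3

def route_cost(nb: Neighbor, max_depth: int) -> float:
    """Lower cost = better next hop: good link, shallow depth, high energy."""
    return (W_LQ * (1.0 - nb.lqi)
            + W_DEPTH * (nb.depth / max_depth)
            + W_ENERGY * (1.0 - nb.energy))

def best_next_hop(neighbors, max_depth):
    return min(neighbors, key=lambda nb: route_cost(nb, max_depth))

neighbors = [
    Neighbor(1, lqi=0.9, depth=2, energy=0.8),
    Neighbor(2, lqi=0.6, depth=1, energy=0.9),
    Neighbor(3, lqi=0.9, depth=3, energy=0.3),
]
chosen = best_next_hop(neighbors, max_depth=10)
```

Here node 1 wins despite not being the shallowest, because its strong link and healthy energy reserve outweigh the extra hop, which is the kind of trade-off a multi-metric protocol makes.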
In this paper, three techniques for image compression are implemented: three-dimensional (3-D) two-level discrete wavelet transform (DWT), 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform. Daubechies and Haar filters are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured at each level. To obtain good compression, image data properties were measured, such as image entropy (He), percent r
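The image entropy (He) and compression ratio (CR) mentioned above have standard definitions; a small sketch of both, with synthetic images standing in for the paper's test data, which is not shown here:

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy He = -sum(p * log2 p) over the 8-bit histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins (0*log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR = original size / compressed size."""
    return original_bytes / compressed_bytes

flat = np.full((64, 64), 128, dtype=np.uint8)            # uniform: He = 0
rng = np.random.default_rng(2)
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # He near 8 bits
```

Low-entropy images leave more redundancy for the transform to exploit, which is why He is a useful predictor of the achievable CR.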
Gypsum plaster is an important building material, owing to the availability of its raw materials. In this research, the effect of various additives on the properties of plaster was studied: polyvinyl acetate, furfural, and fumed silica at different rates of addition, and two types of fibers, carbon fiber and polypropylene fiber, at different volumetric rates. After analysis of the results, it was found that adding furfural to plaster at 2.5% is the optimum ratio, as it improved the flexural strength by 3.18%.
When polyvinyl acetate was used, it was found that an additive ratio of 2% is the optimum addition to the plaster, because it improved the value of the flexural stre
With the spread of global markets for modern technical education and the diversity of programs meeting the requirements of the local and global market for information and communication technology, universities have begun to race among themselves to earn their academic reputation. In addition, they want to enhance their technological development by developing IMT systems with integrated technology, offering security, the fastest response, rapid provision of the required service, and reliable information, linked to the network and using social-networking programs over wireless networks, which in turn drive the emerging economies of technical education. All of these facilities opened the way to expand the number of students and s
Simulation experiments are a means of problem-solving in many fields: the process of designing a model of a real system in order to follow it and identify its behavior through models and formulas written in a repetitive software style with a number of iterations. The aim of this study is to build a model that deals with behavior suffering from heteroskedasticity by studying the APGARCH and NAGARCH models using Gaussian and non-Gaussian distributions for different sample sizes (500, 1000, 1500, 2000) through the stages of time-series analysis (identification, estimation, diagnostic checking, and prediction). The data were generated using the estimates of the parameters resulting f
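The data-generation step for a model of this family can be sketched with the standard APGARCH(1,1) recursion under Gaussian innovations; the parameter values below are illustrative assumptions, not the study's estimates:

```python
import numpy as np

def simulate_apgarch(n, omega=0.05, alpha=0.1, gamma=0.3, beta=0.85,
                     delta=1.5, seed=0):
    """Simulate an APGARCH(1,1) series with Gaussian innovations:
       sigma_t^delta = omega + alpha*(|e_{t-1}| - gamma*e_{t-1})^delta
                       + beta*sigma_{t-1}^delta,   e_t = sigma_t * z_t
    The asymmetry parameter gamma (|gamma| < 1) makes negative shocks
    raise volatility more than positive ones."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    sig_d = np.empty(n)                 # sigma_t^delta
    eps = np.empty(n)
    sig_d[0] = omega / (1.0 - beta)     # rough unconditional initialization
    eps[0] = sig_d[0] ** (1.0 / delta) * z[0]
    for t in range(1, n):
        sig_d[t] = (omega
                    + alpha * (abs(eps[t-1]) - gamma * eps[t-1]) ** delta
                    + beta * sig_d[t-1])
        eps[t] = sig_d[t] ** (1.0 / delta) * z[t]
    return eps

series = simulate_apgarch(2000)  # one of the study's sample sizes
```

Swapping `rng.standard_normal` for a heavier-tailed generator (e.g. a Student-t draw) gives the non-Gaussian variant of the experiment.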