With the continuous downscaling of semiconductor processes, the growing power density and thermal issues in multicore processors are becoming increasingly challenging, so reliable dynamic thermal management (DTM) is required to prevent severe degradation of system performance. The accuracy of the thermal profile delivered to the DTM manager plays a critical role in the efficiency and reliability of DTM; different sources of noise and variation in deep submicron (DSM) technologies severely affect the thermal data and can lead to significant degradation of DTM performance. In this article, we propose a novel fault-tolerance scheme that exploits approximate computing to mitigate the DSM effects on DTM efficiency. Approximate computing in hardware design can yield significant gains in energy efficiency, area, and performance. To exploit this opportunity, design abstractions are needed that can systematically incorporate approximation into hardware design, which is the main contribution of our work. Our proposed scheme achieves 11.20% lower power consumption, a 6.59% smaller area, and a 12% reduction in the number of wires, while increasing DTM efficiency by 5.24%.
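One common form of approximation that fits this setting is precision scaling of the on-chip temperature readings before they reach the DTM manager: fewer bits per sample mean fewer wires and less switching power, at the cost of occasional changes in throttling decisions. The sketch below only illustrates that general idea under assumed sensor ranges, bit widths, and thresholds; it is not the scheme proposed in the article.

```python
# Illustration only: truncating low-order bits of thermal sensor readings
# (precision scaling) and checking how often a threshold-based DTM decision
# would change. Sensor range, bit widths, and threshold are assumptions.

def quantize(temp_c, full_bits=10, kept_bits=7, t_min=0.0, t_max=127.0):
    """Encode a temperature with `full_bits`, drop the low-order bits,
    and decode back to degrees Celsius."""
    span = t_max - t_min
    code = round((temp_c - t_min) / span * (2**full_bits - 1))
    code &= ~((1 << (full_bits - kept_bits)) - 1)   # zero the dropped LSBs
    return t_min + code / (2**full_bits - 1) * span

def dtm_throttle(temp_c, threshold=85.0):
    """Toy DTM policy: throttle the core when the reading exceeds a threshold."""
    return temp_c > threshold

readings = [72.4, 84.9, 85.3, 90.1, 60.0]
for t in readings:
    exact = dtm_throttle(t)
    approx = dtm_throttle(quantize(t))
    print(f"{t:6.1f} C  exact={exact}  approximate={approx}")
```

Readings far from the threshold give the same decision with either precision; the trade-off only shows up for readings near the threshold, which is where a fault-tolerance scheme would have to bound the error.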
Some biological aspects of the zebra mussel, Dreissena polymorpha, were studied at the Al-Musayab thermal power plant, sixty km southwest of Baghdad. Data were collected during the period from November 2002 to October 2003, except for the month of April. The population consisted of five age groups, O, I, II, III, and IV, which have 0, 1, 2, 3, and 4 annuli, respectively. The study also proved the validity of annuli readings for age and growth determination. The average annual growth rates for age groups O, I, II, III, and IV were 5.7, 5.5, 5.4, 5.2, and 5.4, respectively. The average calculated length for laboratory-reared mussels was 2.5 mm, compared to 5.4 mm in the natural environment. Correlation coefficients were very high between age an
The expanding use of multi-processor supercomputers has made a significant impact on the speed and size of many problems. The adoption of the standard Message Passing Interface (MPI) protocol has enabled programmers to write portable and efficient code across a wide variety of parallel architectures. Sorting is one of the most common operations performed by a computer. Because sorted data are easier to manipulate than randomly ordered data, many algorithms require sorted data. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. In this paper, sequential sorting algorithms, the parallel implementation of man
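As a hedged illustration of the kind of MPI-based parallel sort this abstract refers to, the sketch below scatters data across processes, sorts each partition locally, and merges the gathered results on the root process. The mpi4py bindings are an assumption (the paper may well use C), and this is a minimal sort-then-merge outline, not the algorithms evaluated in the paper.

```python
# Minimal MPI sort sketch (mpi4py assumed): scatter, local sort, gather, merge.
# Run with e.g.:  mpiexec -n 4 python mpi_sort.py
import heapq
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    data = [random.randint(0, 1000) for _ in range(10_000)]
    chunks = [data[i::size] for i in range(size)]   # split into `size` pieces
else:
    chunks = None

local = comm.scatter(chunks, root=0)   # each rank receives one chunk
local.sort()                           # local sequential sort

sorted_chunks = comm.gather(local, root=0)
if rank == 0:
    result = list(heapq.merge(*sorted_chunks))  # k-way merge of sorted chunks
    assert result == sorted(data)
```

The communication pattern (scatter, then gather) is exactly the data-routing aspect the abstract points to as the reason sorting matters for parallel computing.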
Calculation of the power density of nuclear fusion reactions plays an important role in the construction of any power plant. It is clear that the power released by a fusion reaction depends strongly on the fusion cross section and the fusion reactivity. Our calculation concentrates on the most useful and well-known fuel (deuterium-tritium), since it represents the principal fuel in any large-scale system such as the so-called tokamak.
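For reference, the standard textbook expression for the volumetric fusion power density of a D-T plasma (a general relation, not a result taken from this paper) combines the fuel ion densities, the reactivity, and the energy released per reaction:

```latex
P_{\mathrm{fus}} \;=\; n_D\, n_T\, \langle \sigma v \rangle\, E_{DT},
\qquad E_{DT} \approx 17.6\ \mathrm{MeV},
```

where n_D and n_T are the deuterium and tritium ion densities and ⟨σv⟩ is the Maxwellian-averaged reactivity, which itself depends strongly on the ion temperature through the fusion cross section.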
Seventy-five isolates of Saccharomyces cerevisiae were identified; they were isolated from different local sources, including decayed fruits and vegetables, vinegar, fermented pasta, baker's yeast, and an alcohol factory. Identification of the isolates was carried out by cultural, microscopical, and biochemical tests. Ethanol sensitivity testing showed that the minimal inhibitory concentration for isolate Sy18 was 16% and the lethal concentration was 17%. Isolate Sy18 was the most efficient ethanol producer, yielding 9.36% (v/w). The ideal conditions for producing ethanol from date syrup by this yeast isolate were evaluated over various temperatures, pH values, Brix levels, incubation periods, and levels of (NH4)2HPO4. Maximum ethanol produced was 10
In this study, multi-objective optimization of an aluminum oxide nanofluid in a water/ethylene glycol (40:60) mixture is carried out. In order to reduce the viscosity and increase the thermal conductivity of the nanofluid, the NSGA-II algorithm is used to vary the temperature and volume fraction of the nanoparticles. Neural network modeling of experimental data is used to obtain viscosity and thermal conductivity as functions of temperature and nanoparticle volume fraction. To evaluate the optimization objective functions, the neural network model is coupled to the NSGA-II algorithm, and at every assessment of the fitness function the neural network model is called. Finally, the Pareto front and the corresponding optimum points are provided and
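The coupling described above, a trained surrogate called inside every fitness evaluation of NSGA-II, can be sketched as follows. The pymoo library, the variable bounds, and the toy surrogate are all assumptions made for illustration; they stand in for the paper's trained neural network and its actual implementation.

```python
# Sketch of NSGA-II driven by a surrogate model (pymoo assumed; bounds and
# the toy surrogate below are placeholders for the trained neural network).
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

def toy_surrogate(temp_c, vol_frac):
    """Placeholder for the neural network: returns (viscosity, conductivity)."""
    mu = 1.0 + 40.0 * vol_frac - 0.01 * temp_c    # made-up trend
    k = 0.35 + 2.0 * vol_frac + 0.002 * temp_c    # made-up trend
    return mu, k

class NanofluidProblem(ElementwiseProblem):
    def __init__(self):
        # decision variables: temperature [C] and nanoparticle volume fraction
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([20.0, 0.0]),
                         xu=np.array([60.0, 0.05]))

    def _evaluate(self, x, out, *args, **kwargs):
        mu, k = toy_surrogate(x[0], x[1])   # surrogate called per evaluation
        out["F"] = [mu, -k]                 # minimize viscosity, maximize k

res = minimize(NanofluidProblem(), NSGA2(pop_size=50), ("n_gen", 100),
               seed=1, verbose=False)
print(res.F[:5])   # a slice of the Pareto front (viscosity, -conductivity)
```

Maximizing thermal conductivity is handled by negating it, since NSGA-II in this formulation minimizes all objectives; the resulting res.F holds the non-dominated trade-off points.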
Lead-acid batteries have been used increasingly in recent years in solar power systems, especially in homes and small businesses, due to their low cost and the advanced development of their manufacturing. However, these batteries have low voltages and low capacities; to increase voltage and capacity, they need to be connected in series and parallel. Whether they are connected in series or in parallel, their voltages and capacities must be equal, otherwise the quality of service will be degraded. These voltage differences are inherent to their manufacturing, but the resulting unbalanced voltages can be controlled. Using a switched capacitor is one of the many methods that have been used for balancing voltages, but their respons
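The basic switched-capacitor balancing principle is that a flying capacitor is alternately connected across the higher-voltage and lower-voltage cells, shuttling charge until the voltages converge. The toy simulation below illustrates that convergence; all component values are assumptions chosen only to make the behaviour visible, not parameters from the paper.

```python
# Toy simulation of switched-capacitor balancing between two cells.
# Cell capacitances, capacitor size, and number of cycles are assumed values.

C_CELL = 100.0   # effective cell capacitance [F] (toy stand-in for a battery)
C_SW = 0.5       # flying capacitor [F]
STEPS = 2000     # number of switching half-cycles

v_a, v_b = 12.6, 11.9   # initial (unbalanced) cell voltages [V]
v_cap = v_b             # flying capacitor starts at the lower cell voltage

for step in range(STEPS):
    # alternately connect the flying capacitor across cell A and cell B;
    # each connection equalizes the capacitor with that cell (charge sharing)
    if step % 2 == 0:
        v_new = (C_CELL * v_a + C_SW * v_cap) / (C_CELL + C_SW)
        v_a = v_cap = v_new
    else:
        v_new = (C_CELL * v_b + C_SW * v_cap) / (C_CELL + C_SW)
        v_b = v_cap = v_new

print(f"final voltages: A = {v_a:.3f} V, B = {v_b:.3f} V")
```

The voltage gap shrinks a little on every switching cycle, which is also why the response time of this method depends on the capacitor size and switching frequency.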
In this study, the response and behavior of machine foundations resting on dry and saturated sand were investigated experimentally. In order to investigate the response of soil and footing to steady-state dynamic loading, a physical model was manufactured to simulate a steady-state harmonic load at different operating frequencies. A total of 84 physical models were tested. The footing parameters are related to the size of the rectangular footing and the depth of embedment. Two sizes of rectangular steel model footing were tested at the surface and at 50 mm depth below the model surface. Meanwhile, the investigated soil conditions include dry and saturated sand at two relative densities, 30% and 80%. The response of the footing was ela
Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing. This is due to the special capabilities of KPs in feature extraction and classification processes. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients at large polynomial sizes, particularly when the KP parameter (p) deviates away from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation in order to compute the coefficients of KPs at high orders. In particular, this paper discusses the development of a new algorithm and presents a new mathematical model for computing the
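For context, the classical three-term recurrence in the order n (the textbook relation whose numerical behaviour the paper aims to improve, not the new relation proposed there) can be implemented directly; the sketch below evaluates the unnormalized Krawtchouk polynomial K_n(x; p, N).

```python
# Classical three-term recurrence for Krawtchouk polynomials K_n(x; p, N):
#   p(N-n) K_{n+1}(x) = [p(N-n) + n(1-p) - x] K_n(x) - n(1-p) K_{n-1}(x)
# with K_0(x) = 1 and K_1(x) = 1 - x/(pN). For large N and p far from 0.5
# this direct form accumulates the numerical errors the paper addresses.

def krawtchouk(n, x, p, N):
    """Evaluate the unnormalized Krawtchouk polynomial K_n(x; p, N)."""
    if n == 0:
        return 1.0
    k_prev, k_curr = 1.0, 1.0 - x / (p * N)
    for m in range(1, n):
        k_next = ((p * (N - m) + m * (1 - p) - x) * k_curr
                  - m * (1 - p) * k_prev) / (p * (N - m))
        k_prev, k_curr = k_curr, k_next
    return k_curr

# sanity check against the closed form K_1(x; p, N) = 1 - x/(pN)
assert abs(krawtchouk(1, 3.0, 0.3, 8) - (1 - 3.0 / (0.3 * 8))) < 1e-12
```

Because each step divides by p(N - n), small p combined with high orders amplifies rounding error, which is precisely the regime the abstract highlights.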