Researchers ‘overclocking’ world’s fastest supercomputers to process big data faster
2015-03-05 12:37:41 / CONFERENCES

Researchers at Queen’s University Belfast, the University of Manchester, and the STFC Daresbury Laboratory are developing new software that will let supercomputers process big data faster while minimizing the accompanying increase in power consumption.

To do that, computer scientists in the Scalable, Energy-Efficient, Resilient and Transparent Software Adaptation (SERT) project are using “approximate computing” (also known as “significance-based computing”), a form of “overclocking” that trades some reliability for reduced energy consumption.

The idea is to operate hardware at near-threshold voltage (NTV), only slightly above the transistor threshold voltage, deliberately allowing components to run in an unreliable state. The software and its parallelism are then expected to cope with the resulting timing errors, for example by running additional iterations until a computation converges.
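
Iterative numerical methods illustrate why this can work: an occasional corrupted update does not break the algorithm, it only delays convergence. The sketch below is illustrative only and is not SERT project code; the function name, fault model, and error rate are assumptions. It shows a Jacobi solver for A x = b that still reaches the answer when some updates are randomly perturbed, at the cost of extra iterations.

    # Illustrative sketch only (not SERT project code): a Jacobi iterative
    # solver for A x = b in which a small fraction of updates is randomly
    # corrupted, standing in for the timing errors expected from
    # near-threshold-voltage hardware. The solver tolerates these faults
    # simply by taking more iterations to converge.
    import random
    import numpy as np

    def jacobi_with_faults(A, b, error_rate=0.02, tol=1e-8, max_iters=10000):
        D = np.diag(A)                 # diagonal of A
        R = A - np.diagflat(D)         # off-diagonal part of A
        x = np.zeros_like(b, dtype=float)
        for it in range(1, max_iters + 1):
            x_new = (b - R @ x) / D    # standard Jacobi update
            if random.random() < error_rate:
                # Simulated timing error: perturb one component of the update.
                i = random.randrange(len(x_new))
                x_new[i] += random.uniform(-0.1, 0.1)
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new, it       # converged despite the injected faults
            x = x_new
        return x, max_iters

    # Diagonally dominant system, so Jacobi converges; faults only add iterations.
    A = np.array([[10.0, 1.0, 2.0],
                  [1.0, 12.0, 3.0],
                  [2.0, 3.0, 15.0]])
    b = np.array([13.0, 16.0, 20.0])
    x, iters = jacobi_with_faults(A, b)
    print(x, "reached in", iters, "iterations")

Running the sketch with the fault injection disabled and enabled shows the same solution in both cases, with the faulty run typically needing a handful of extra iterations, which is the trade-off the project aims to exploit.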
