Can Computing Clean Up Its Act?

Longtime Slashdot reader SpzToid shares a report from The Economist: “The first thing you notice is how quiet it is,” says Kimmo Koski, head of the Finnish IT Center for Science. Dr Koski is describing LUMI – Finnish for ‘snow’ – Europe’s most powerful supercomputer, which sits 250 km south of the Arctic Circle in the Finnish city of Kajaani. LUMI, which opened last year, is used for everything from climate modeling to the search for new drugs. It has tens of thousands of individual processors and can perform up to 429 quadrillion calculations per second, making it the third most powerful supercomputer in the world. Powered by hydroelectricity, and with its waste heat used to warm homes in Kajaani, it even boasts negative carbon dioxide emissions. LUMI offers a glimpse into the future of high-performance computing (HPC), both on dedicated supercomputers and in the cloud infrastructure that powers much of the internet. Over the past decade, demand for HPC has boomed, driven by technologies such as machine learning, genome sequencing, and simulations of everything from stock markets and nuclear weapons to the weather. Demand will likely keep growing, as such applications will happily consume as much computing power as you can throw at them. Over the same period, the computing power needed to train a cutting-edge AI model has doubled roughly every five months. All of this has environmental consequences.
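To put that doubling rate in perspective, here is a minimal back-of-the-envelope sketch in Python (the simple exponential model and the function name are illustrative assumptions, not anything from the article): a five-month doubling time compounds to roughly a 4,000-fold increase in compute over five years, and nearly 17 million-fold over a decade.

```python
# Back-of-the-envelope sketch: how a five-month doubling time compounds.
# Assumes simple exponential growth; this is an illustration, not a model
# taken from The Economist's report.

DOUBLING_MONTHS = 5  # doubling period cited in the article

def growth_factor(months: float) -> float:
    """Multiplicative growth in required compute after `months` months."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 5, 10):
    print(f"After {years:>2} year(s): ~{growth_factor(12 * years):,.0f}x the compute")
# After  1 year(s): ~5x the compute
# After  5 year(s): ~4,096x the compute
# After 10 year(s): ~16,777,216x the compute
```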

HPC – and computing in general – has become a major consumer of electricity. The International Energy Agency estimates that data centers account for between 1.5% and 2% of global electricity consumption, roughly as much as the entire British economy uses. That share is expected to grow to 4% by 2030. With an eye on government pledges to reduce greenhouse-gas emissions, the computing industry is trying to find ways to do more with less and to increase the efficiency of its products. The work is happening at three levels: the individual microchips; the computers built from those chips; and the data centers that house the computers. […] The standard measure of a data center’s efficiency is power usage effectiveness (PUE), the ratio of the facility’s total energy consumption to how much of that energy is used to do useful work. According to the Uptime Institute, an IT consultancy, a typical data center has a PUE of 1.58. That means about two-thirds of its electricity goes to running its computers, while one-third goes to running the facility itself, most of it consumed by the cooling systems. Smart design can reduce that number considerably.
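The arithmetic behind those percentages is simple enough to sketch. The snippet below (the function names are mine, purely for illustration) encodes the PUE definition as described and shows why a PUE of 1.58 implies roughly a two-thirds/one-third split:

```python
# Minimal sketch of the PUE arithmetic described above.
# The function names are illustrative, not from any standard library.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_equipment_kwh

def it_share(pue_value: float) -> float:
    """Fraction of incoming electricity that actually reaches the computers."""
    return 1.0 / pue_value

typical = 1.58  # Uptime Institute's figure for a typical data center
print(f"PUE {typical}: {it_share(typical):.0%} to computing, "
      f"{1 - it_share(typical):.0%} to cooling and other overhead")
# PUE 1.58: 63% to computing, 37% to cooling and other overhead
```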

Most existing data centers rely on air cooling. Liquid cooling offers better heat transfer, at the cost of extra engineering effort, and a few startups even offer to submerge circuit boards entirely in specially designed liquid baths. Thanks in part to its use of liquid cooling, the Frontier supercomputer boasts a PUE of 1.03. One reason LUMI was built near the Arctic Circle was to take advantage of the cool sub-Arctic air: a neighboring data center in the same facility uses this free cooling to achieve a PUE of just 1.02, meaning 98% of the electricity that comes in is converted into useful math. Even the best commercial data centers do not approach such numbers; Google, for example, reports a fleet-wide average PUE of 1.1. The latest figures from the Uptime Institute, published in June, show that after several years of steady improvement, global data-center efficiency has stagnated since 2018. The report notes that the US, UK and European Union, among others, are considering new rules that “could make data centers more efficient.” Germany has proposed an Energy Efficiency Act that would impose a maximum permitted PUE of 1.5 by 2027 and 1.3 by 2030.
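Applying the same arithmetic to the figures quoted above makes the gap between the best facilities and a typical one concrete. The dictionaries below are an illustrative structure built from the numbers in this article, not official data or any regulatory API:

```python
# Compare the quoted PUE figures and check them against Germany's proposed
# caps. Values come from the article; the data structures are illustrative.

PROPOSED_GERMAN_CAPS = {2027: 1.5, 2030: 1.3}  # lower PUE is better

facilities = {
    "Frontier": 1.03,
    "LUMI's neighbor": 1.02,
    "Google (fleet average)": 1.10,
    "Typical data center": 1.58,
}

for name, p in facilities.items():
    print(f"{name}: PUE {p:.2f} -> {1 / p:.0%} of power does useful work; "
          f"meets proposed 2030 cap: {p <= PROPOSED_GERMAN_CAPS[2030]}")
```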
