Climate models can run for months on supercomputers – but my new algorithm can make them ten times faster

Climate models are some of the most complex pieces of software ever written, capable of simulating many different parts of the Earth system, such as the atmosphere and the ocean. Many have been developed by hundreds of scientists over the years and are constantly being added to and refined. They can run to more than a million lines of computer code – thousands of printed pages.

Not surprisingly, these models are expensive. The simulations take time, often several months, and the supercomputers on which the models are run consume a lot of energy. But a new algorithm I’ve developed promises to make many of these climate model simulations ten times faster, and could ultimately be an important tool in the fight against climate change.

One reason climate modeling takes so long is that some of the processes being simulated are inherently slow. The ocean is a good example. It takes a few thousand years for water to circulate from the surface to the deep ocean and back (by contrast, the atmosphere has a “mixing time” of weeks).

When the first climate models were developed in the 1970s, scientists realized that this was going to be a problem. To use a model to simulate climate change, it must be started from conditions representative of the period before industry began releasing greenhouse gases into the atmosphere.

To produce such a stable equilibrium, scientists "spin up" their model by letting it run until it stops changing (the system is so complex that, as in real life, there will always be some small fluctuations).

An initial condition with minimal "drift" is necessary to accurately simulate the effects of human-caused factors on the climate. But thanks to the ocean and other sluggish components, this spin-up can take several months even on large supercomputers. No wonder climate scientists have called this obstacle one of the "grand challenges" of their field.

Throwing more computers at the problem won't help

You might ask, “why not use an even bigger machine?” Unfortunately, it wouldn’t help. Supercomputers are simply thousands of individual computer chips, each with dozens of processing units (CPUs or “cores”) connected to each other via a high-speed network.

One of the machines I use has over 300,000 cores and can do nearly 20 quadrillion arithmetic operations per second. (Obviously it’s shared by hundreds of users and any single simulation will only use a small fraction of the machine.)

A climate model takes advantage of this by subdividing the planet's surface into smaller regions – subdomains – and performing the calculations for each region simultaneously on a different CPU. In principle, the more subdomains you have, the less time it takes to do the calculations.

That is true up to a point. The problem is that the various subdomains need to "know" what is happening in the adjacent ones, which requires the transmission of information between chips. That is much slower than the speed at which modern chips can perform arithmetic, a situation computer scientists describe as being "bandwidth limited". (Anyone who's tried to stream video over a slow internet connection will know what that means.) So throwing more computing power at the problem has diminishing returns. Ocean models in particular suffer from such poor "scaling".
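
As a rough illustration – this is my own toy sketch in Python, not code from any real climate model – here is a one-dimensional "ocean" split into subdomains that must swap boundary values with their neighbors at every step. On a real supercomputer, those copies become messages over the network:

```python
# Toy sketch only: a 1-D diffusion problem split into subdomains, the way a
# climate model splits the globe. All names and numbers here are illustrative.
import numpy as np

nsub, npts = 4, 50                                # number of subdomains, points in each
dt, dx = 0.1, 1.0                                 # time step and grid spacing
subs = [np.zeros(npts + 2) for _ in range(nsub)]  # +2 "ghost" cells on each subdomain
subs[0][npts // 2] = 1.0                          # an initial blob to diffuse

def exchange_ghost_cells(subs):
    """Copy edge values between neighbors; on a real machine this is network traffic."""
    for i in range(len(subs) - 1):
        subs[i][-1] = subs[i + 1][1]              # my right ghost <- neighbor's first real cell
        subs[i + 1][0] = subs[i][-2]              # neighbor's left ghost <- my last real cell

for step in range(100):
    exchange_ghost_cells(subs)                    # communication: slow, limits scaling
    for u in subs:                                # computation: fast, and done in parallel
        u[1:-1] += dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
```

The arithmetic inside each subdomain can run at full speed on its own CPU; it is the exchange step that every subdomain must wait for, which is why adding ever more subdomains eventually stops helping.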

Ten times faster

This is where the new computer algorithm I developed and published in Science Advances comes in. It promises to dramatically shorten the spin-up of the ocean and other components of Earth system models. In tests on typical climate models, the algorithm was on average about ten times faster than current approaches, reducing the time from several months to a week.

The time and energy that climate scientists could save is valuable in itself. But being able to spin models up quickly also means scientists can calibrate them against what we know actually happened in the real world, improving their accuracy, or better quantify the uncertainty in their climate predictions. Spin-ups take so much time that neither is currently possible.

The new algorithm will also allow us to run simulations in more spatial detail. Currently, ocean models usually tell us nothing about features less than 1º wide in longitude and latitude (about 110km at the equator). But many critical phenomena in the ocean occur on much smaller scales – a few kilometers or less – and higher spatial resolution will certainly lead to more accurate predictions of things such as sea level rise, storm surges and hurricane intensity.

How it works

Like a lot of "new" research, it is based on an old idea – in this case, one that goes back hundreds of years to the Swiss mathematician Leonhard Euler. Called "sequence acceleration", it amounts to using information from the past to extrapolate to a "better" future.

Among other applications, it is widely used by chemists and materials scientists to calculate the structure of atoms and molecules, a problem that happens to take up more than half of the world’s supercomputing resources.

Sequence acceleration is useful when a problem is iterative in nature, which is exactly what climate model spin-up is: you feed the output from the model back into it as input. Rinse and repeat until the output equals the input and you have found your equilibrium solution.
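
In code, the basic spin-up loop looks something like this minimal sketch (Python, with a made-up model that simply relaxes slowly toward an equilibrium – a stand-in for a real ocean model, not one):

```python
# Minimal sketch of the spin-up loop: iterate the model until output ~ input.
import numpy as np

def model_step(x):
    """Toy stand-in for one model cycle: relaxes the state toward an equilibrium."""
    equilibrium = np.array([1.0, 2.0, 3.0])
    return x + 0.01 * (equilibrium - x)        # slow adjustment, like the deep ocean

x = np.zeros(3)                                # an arbitrary starting state
for iteration in range(100_000):
    x_new = model_step(x)                      # feed the output back in as the input...
    if np.linalg.norm(x_new - x) < 1e-10:      # ...until output equals input: equilibrium
        break
    x = x_new
print(iteration, x)                            # takes thousands of iterations to settle
```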

In the 1960s, the Harvard mathematician D.G. Anderson came up with a clever way to combine multiple previous outputs into a single input so that you arrive at the final solution with far fewer repetitions of the procedure – about ten times fewer, I found, when I applied his scheme to the spin-up problem.
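
The details are in the paper, but here is a simplified sketch of Anderson's idea – my own illustration, applied to the same toy model as above, with an arbitrary choice of how many past iterations to remember:

```python
# Simplified sketch of Anderson acceleration applied to the toy model above.
import numpy as np

def model_step(x):                              # same toy relaxation map as before
    return x + 0.01 * (np.array([1.0, 2.0, 3.0]) - x)

def anderson_spinup(step, x0, m=5, tol=1e-10, max_iter=1000):
    x = x0.copy()
    f = step(x) - x                             # residual: how far output is from input
    xs, fs = [x.copy()], [f.copy()]
    for k in range(max_iter):
        if np.linalg.norm(f) < tol:             # output equals input: equilibrium found
            break
        if len(xs) > 1:
            # Differences of recent iterates and residuals: the method's "memory"
            dX = np.column_stack([xs[i + 1] - xs[i] for i in range(len(xs) - 1)])
            dF = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma       # combine past outputs into one better input
        else:
            x = x + f                           # ordinary fixed-point step on the first pass
        f = step(x) - x
        xs.append(x.copy()); fs.append(f.copy())
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]   # keep only the most recent history
    return x, k

x_eq, iters = anderson_spinup(model_step, np.zeros(3))
print(iters, x_eq)                              # reaches equilibrium in a handful of steps
```

For this simple toy problem the accelerated loop lands on the equilibrium almost immediately; for a real Earth system model the gain is more modest, but still roughly the factor of ten reported above.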

Developing a new algorithm is one thing; getting other people to use it is often the bigger challenge. It is therefore promising that the UK Met Office and other climate modeling centers are trying it out.

The next major report from the Intergovernmental Panel on Climate Change (IPCC) is due in 2029. That seems a long way off, but given the time it takes to develop models and run simulations, preparations are already underway. Coordinated by an international collaboration called the Coupled Model Intercomparison Project, these simulations will form the basis of the report. It's exciting to think that my algorithm and software could contribute.


This article from The Conversation is republished under a Creative Commons license. Read the original article.

Samar Khatiwala is funded by UKRI.
