Given the enormous problem-solving potential of artificial intelligence (AI), it isn't far-fetched to think that AI could also help us tackle the climate crisis. However, when we consider the energy requirements of AI models, it becomes clear that the technology is as much a part of the climate problem as it is a solution.
AI's emissions come from its supporting infrastructure, such as building and running the data centers that handle the vast amounts of information required to power these systems.
But different technological approaches to how we build AI systems could help reduce their carbon footprint. Two technologies in particular hold promise for doing this: spiking neural networks and lifelong learning.
The life cycle of an AI system can be divided into two phases: training and inference. During training, a relevant dataset is used to build and refine the system. During inference, the trained system generates predictions on previously unseen data.
For example, training an AI to be used in self-driving cars would require a dataset of many different driving situations and decisions made by human drivers.
After the training phase, the AI system can predict effective maneuvers for a self-driving car.
Artificial neural networks (ANNs) are a fundamental technology used in most current AI systems.
An ANN contains many adjustable values, called parameters, that are tuned during the training phase of the AI system. These parameters can number more than 100 billion.
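As a simplified illustration of how parameters add up, the sketch below counts them for a tiny fully connected network; the layer sizes are arbitrary values chosen only for this example.

```python
# Simplified sketch: counting the parameters of a small fully connected ANN.
# The layer sizes here are arbitrary, chosen only for illustration.

layer_sizes = [784, 512, 256, 10]  # input, two hidden layers, output

total_params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out   # one weight per connection between layers
    biases = n_out           # one bias per neuron in the next layer
    total_params += weights + biases

print(total_params)  # 535,818 parameters for this tiny network
```

Large language models apply the same arithmetic at a vastly bigger scale, reaching the hundreds of billions of parameters mentioned above.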
Although a large number of parameters improves the capabilities of ANNs, it also makes training and inference resource-intensive. To put things into perspective, training GPT-3 (the predecessor of the AI system behind the current ChatGPT) generated 502 metric tons of carbon dioxide, which is equivalent to driving 112 petrol-powered cars for a year.
In addition, running GPT-3 for inference emits an estimated 8.4 tonnes of CO₂ annually. Since the AI boom began in the early 2010s, the energy requirements of AI systems known as large language models (LLMs) – the type of technology behind ChatGPT – have increased by a factor of 300,000.
As AI models become more ubiquitous and complex, this trend will continue, and AI could become a significant contributor to global CO₂ emissions. In fact, current figures may understate AI's actual carbon footprint, because there are no standard, accurate techniques for measuring AI-related emissions.
Spiking neural networks
The two technologies mentioned earlier, spiking neural networks (SNNs) and lifelong learning (L2), have the potential to reduce AI's ever-increasing carbon footprint, with SNNs offering an energy-efficient alternative to ANNs.
ANNs work by processing data and learning patterns from it, which enables them to make predictions. Their calculations are carried out on decimal (floating-point) numbers, and multiplying such numbers together with high precision is demanding for a computer. It is because of this decimal arithmetic that ANNs require a lot of computing power, memory and time.
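As a simplified illustration of why this is costly, the sketch below shows the floating-point multiply-and-add operations at the heart of a single ANN layer; the layer sizes are arbitrary values chosen only for this example.

```python
import numpy as np

# Simplified sketch of one ANN layer: every input is multiplied by a
# floating-point weight and the results are summed, for every neuron,
# on every single input - there is no "inactive" state to skip.

rng = np.random.default_rng(0)
inputs = rng.standard_normal(784)          # e.g. a flattened image
weights = rng.standard_normal((512, 784))  # 512 neurons, 784 inputs each
biases = rng.standard_normal(512)

# 512 x 784 = 401,408 floating-point multiplications for this one layer,
# repeated for every layer and for every piece of data processed.
activations = np.maximum(0.0, weights @ inputs + biases)  # ReLU
print(activations.shape)  # (512,)
```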
All of this arithmetic means that ANNs become more energy-intensive as the networks grow larger and more complex.
Both ANNs and SNNs are inspired by the brain, which contains billions of neurons (nerve cells) connected to each other by synapses.
Like the brain, ANNs and SNNs also have components that researchers call neurons, although these are artificial, not biological ones. The main difference between the two types of neural networks is the way individual neurons transmit information to each other.
Neurons in the human brain communicate with each other by transmitting intermittent electrical signals called spikes. The spikes themselves do not contain information; instead, the information lies in their timing. This binary, all-or-none characteristic of spikes (usually represented as 0 or 1) means that neurons are active only when they are spiking and inactive otherwise.
This is one of the reasons for the brain's energy-efficient processing.
Just as Morse code uses specific sequences of dots and dashes to convey messages, SNNs use patterns and timings of spikes to process and transmit information. So, while the artificial neurons in ANNs are always active, those in SNNs consume energy only when a spike occurs.
Otherwise, their energy requirements are close to zero. SNNs can be up to 280 times more energy-efficient than ANNs.
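The sketch below illustrates this event-driven idea with a leaky integrate-and-fire neuron, a simple model of a spiking neuron often used in SNNs; the constants are arbitrary values chosen for illustration, not those of any particular system.

```python
# Simplified sketch of a leaky integrate-and-fire (LIF) neuron, a unit
# commonly used in SNNs. All constants are illustrative assumptions.

def run_lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return the binary output spike train for a binary input train."""
    membrane = 0.0
    output = []
    for spike in input_spikes:
        membrane *= leak              # potential decays over time
        if spike:                     # energy is spent only on a spike:
            membrane += weight        # a 1 adds charge, a 0 does nothing
        if membrane >= threshold:     # all-or-none output spike
            output.append(1)
            membrane = 0.0            # reset after firing
        else:
            output.append(0)
    return output

# Information is carried by the timing of spikes, not their magnitude.
print(run_lif([1, 0, 1, 1, 0, 0, 1, 1]))  # -> [0, 0, 1, 0, 0, 0, 1, 0]
```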
My colleagues and I are developing learning algorithms for SNNs that could bring them closer to the energy efficiency exhibited by the brain. These lower computing requirements also mean that SNNs may be able to make decisions more quickly.
These properties make SNNs useful for a wide range of applications, including space exploration, defense and self-driving cars, because of the limited energy sources available in these settings.
Lifelong learning
Lifelong learning (L2), another strategy we are working on, aims to reduce the overall energy requirements of ANNs over their lifetime.
When ANNs are trained sequentially – learning new problems one after another – they forget their previous knowledge as they take on new tasks. As a result, they must be retrained from scratch whenever their operating environment changes, which increases AI-related emissions.
L2 is a collection of algorithms that enable AI models to be trained sequentially on multiple tasks with little or no forgetting. L2 enables models to learn throughout their lifetime by building on their existing knowledge, without having to retrain them from scratch.
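The sketch below illustrates one common L2 technique, experience replay, in which a small buffer of past examples is rehearsed while learning a new task. It is a simplified illustration of the general idea, not the specific algorithms we are developing; the training step, tasks and buffer size are stand-ins.

```python
import random

# Simplified sketch of experience replay, one common lifelong-learning (L2)
# strategy. "train_step", the tasks and the buffer size are illustrative
# assumptions, not a specific published algorithm.

def train_step(model, example):
    # Placeholder for one gradient update; here we just record what
    # the model has seen, to keep the sketch self-contained.
    model["seen"].append(example)

def train_sequentially(model, tasks, buffer_size=100):
    replay_buffer = []
    for task in tasks:
        for example in task:
            train_step(model, example)
            # Mix in a stored past example, so old knowledge keeps
            # being rehearsed instead of forgotten.
            if replay_buffer:
                train_step(model, random.choice(replay_buffer))
            # Keep a small reservoir of past data for future rehearsal.
            if len(replay_buffer) < buffer_size:
                replay_buffer.append(example)

model = {"seen": []}
task_a = [("image", "car") for _ in range(50)]
task_b = [("image", "pedestrian") for _ in range(50)]
train_sequentially(model, [task_a, task_b])
print(len(model["seen"]))  # 199 updates: new examples plus rehearsed old ones
```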
The field of AI is advancing rapidly, and other developments that could alleviate its energy demands are emerging. One example is building smaller AI models that match the predictive capabilities of a larger model.
Advances in quantum computing – a different approach to building computers that takes advantage of phenomena from the world of quantum physics – could enable faster training and inference using ANNs and SNNs. The greater computing capabilities offered by quantum computing could allow us to find energy-efficient solutions for AI on a much larger scale.
The challenge of climate change demands that we try to find solutions for rapidly advancing areas like AI before their carbon footprint becomes too large.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Shirin Dora does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.