Technology and the range of computer systems around us are changing rapidly. Exciting advances are being made in artificial intelligence, in the tiny interconnected devices we call the “Internet of Things”, and in wireless connectivity.
Unfortunately, these improvements bring potential dangers as well as benefits. To achieve a safe future, we need to anticipate what might happen in computing and address it early. So what do experts think will happen, and what can we do to prevent major problems?
To answer that question, our research team from universities in Lancaster and Manchester turned to the science of looking into the future, known as “forecasting”. No one can predict the future, but we can construct forecasts: descriptions of what is likely to happen based on current trends.
Indeed, long-term forecasts of technology trends can be remarkably accurate. And a great way to get forecasts is to combine the ideas of many different experts to see where they agree.
We consulted 12 expert futurists for a new research paper. These are people whose roles involve long-term forecasting of the effects of changes in computer technology by the year 2040.
Using a technique known as a Delphi study, we combined the futurists’ forecasts into a set of risks, along with their recommendations for addressing those risks.
Software concerns
Experts expected rapid advances in artificial intelligence (AI) and connected systems, leading to a much more computer-driven world than today. Surprisingly, however, they expected little impact from two much more hyped innovations: blockchain, a way to record information that makes it difficult or impossible to manipulate the records, which they suggested is largely irrelevant to today’s problems; and quantum computing, which is still at an early stage and may have little impact over the next 15 years.
The futurists drew attention to three major risks associated with developments in computer software, as follows.
An AI competition that leads to trouble
Our experts suggested that many countries see AI as an area where they want to gain a competitive technological advantage, and that this will encourage software developers to take risks in how they use AI. This, combined with AI’s complexity and its potential to exceed human capabilities, could lead to disasters.
For example, imagine that shortcuts in testing lead to an error in the control systems of cars built after 2025 – one that goes unnoticed amid all the complex AI programming. It could even be linked to a specific date, causing large numbers of cars to start behaving erratically at the same time, killing many people around the world.
Generative AI
Generative AI could make the truth unattainable. For years, photos and videos have been very hard to fake, so we have come to expect them to be genuine. Generative AI has already changed that situation significantly. We expect its ability to produce convincing fake media to improve, making it extremely difficult to tell whether an image or video is real.
Suppose someone impersonating a trusted person – a respected leader, or a celebrity – uses social media to post genuine content, but occasionally slips in a convincing fake. For those who follow them, there is no way to tell the difference – it will be impossible to know the truth.
Invisible cyber attacks
Finally, there is an unexpected consequence of the sheer complexity of the systems that will be built – networks of systems owned by different organizations, all depending on one another. It will become difficult, if not impossible, to get to the bottom of what is causing things to go wrong.
Imagine a cybercriminal hacking an app used to control devices such as ovens or refrigerators, causing all the devices to switch on at the same time. This creates a spike in electricity demand on the grid, causing major power outages.
It will be challenging for the power company’s experts to even identify which devices caused the spike, let alone see that they are all controlled by the same app. Cyber sabotage will be invisible, and indistinguishable from normal problems.
Software jujitsu
The point of such forecasts is not to sound the alarm, but to allow us to begin addressing the problems. Perhaps the simplest suggestion made by the experts is a kind of software jujitsu: using software to defend and protect itself. We can make programs perform their own safety checks by creating additional code that validates the programs’ output – effectively, code that checks itself.
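As a minimal sketch of what such self-checking code might look like, consider a hypothetical control routine whose output is vetted by a separate validator before it is acted on. The function names and the speed-limit rule below are illustrative assumptions for this example, not details from the research.

```python
# Sketch of "software jujitsu": a program whose output is validated by
# separate checking code before it is acted on. The names and the example
# rule (a speed-command limit) are illustrative assumptions only.

def plan_speed(sensor_reading_kmh: float) -> float:
    """Hypothetical AI-derived control decision (stand-in for a complex model)."""
    return sensor_reading_kmh * 1.1  # pretend the model proposes a new speed


def validate_speed(command_kmh: float) -> bool:
    """Independent safety check: accept only commands inside a sane envelope."""
    return 0.0 <= command_kmh <= 130.0


def safe_plan_speed(sensor_reading_kmh: float) -> float:
    """Run the program, then let the checking code veto unsafe output."""
    command = plan_speed(sensor_reading_kmh)
    if not validate_speed(command):
        raise ValueError(f"Unsafe speed command rejected: {command:.1f} km/h")
    return command


if __name__ == "__main__":
    print(safe_plan_speed(100.0))  # accepted: prints 110.0
    # safe_plan_speed(500.0) would raise, because the validator vetoes it
```

The key design point is that the validator is deliberately simple and separate from the program it checks, so it can catch unsafe output even when the main logic is too complex to inspect directly.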
Similarly, we can insist that the methods already used to ensure the safe operation of software continue to be applied to new technologies, and that the novelty of these systems is not used as an excuse to ignore good safety practice.
Strategic solutions
But the experts agreed that technical answers alone are not enough. Instead, solutions will be found in the interactions between people and technology.
We need to develop the skills to deal with these human-technological problems, through new forms of education that cross disciplines. And governments must establish safety principles for their own AI provision and legislate for AI safety across the sector, encouraging responsible development and deployment methods.
These forecasts give us a range of tools with which to tackle the problems that may lie ahead. Let us take up those tools, so we can realize the exciting promise of our technological future.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
This research was funded by the UK’s North West Partnership for Security and Trust, which is funded through GCHQ. The funding arrangements required that this article be reviewed to ensure that it did not breach the UK Official Secrets Act and that it did not disclose sensitive, classified or personal information.