AI may be to blame for our failure to make contact with alien civilizations

sdecoret/Shutterstock

Artificial intelligence (AI) has advanced at an astonishing pace in the last few years. Some scientists are now looking towards the development of artificial superintelligence (ASI), a form of AI that would not only surpass human intelligence but would not be bound by the learning speeds of humans.

But what if this milestone isn't just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?

This idea is at the heart of a research paper I recently published in Acta Astronautica. Could AI be the universe’s “great filter”—a threshold so difficult to cross that it prevents most life from evolving into spacefaring civilizations?

This is a concept that could explain why the search for extraterrestrial intelligence (Seti) has yet to find the signatures of advanced technical civilizations elsewhere in the galaxy.

The great filter hypothesis is ultimately a proposed solution to the Fermi Paradox. This asks why, in a universe vast and ancient enough to host billions of potentially habitable planets, we haven't detected any signs of alien civilizations. The hypothesis suggests there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into spacefaring entities.

I believe the emergence of ASI could be such a filter. AI's rapid advancement, potentially leading to ASI, may intersect with a critical phase in a civilization's development: the transition from a single-planet species to a multiplanetary one.

This is where many civilizations could falter, with AI making far more rapid progress than our ability either to control it or to sustainably explore and populate our Solar System.

The challenge with AI, and specifically ASI, lies in its autonomous, self-amplifying and self-improving nature. It has the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI.

There is the possibility of something going badly wrong, causing the downfall of both biological and AI civilizations before they ever get the chance to become multiplanetary. For example, if nations increasingly rely on and cede power to competing autonomous AI systems, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

In this scenario, I estimate the typical longevity of a technological civilization might be less than 100 years. That's roughly the time between being able to receive and broadcast signals between the stars (1960) and the estimated emergence of ASI on Earth (2040). This is alarmingly short when set against the cosmic timescale of billions of years.

Image of the star cluster NGC 6440.

There are a mind-boggling number of planets out there. NASA/James Webb telescope

Plugging this estimate into optimistic versions of the Drake equation – which attempts to estimate the number of active, communicating extraterrestrial civilizations in the Milky Way – suggests that, at any given time, there are only a handful of intelligent civilizations out there. Moreover, like us, their relatively modest technological activities could make them challenging to detect.
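To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The Drake equation multiplies seven factors, N = R* × fp × ne × fl × fi × fc × L. The parameter values below are illustrative, commonly cited optimistic assumptions rather than figures taken from the paper; only L, the longevity of a communicating civilization, follows the roughly 80-year window described above.

```python
# A minimal sketch of the Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# All parameter values are illustrative optimistic assumptions, not the
# paper's own figures, except L, which uses the ~80-year window above.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable, communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=3.0,     # star formation rate in the Milky Way (stars/year), optimistic
    f_p=1.0,        # fraction of stars with planets (near 1, optimistic)
    n_e=0.2,        # habitable planets per star with planets (assumption)
    f_l=1.0,        # fraction of habitable planets where life arises (optimistic)
    f_i=1.0,        # fraction of those developing intelligence (optimistic)
    f_c=0.2,        # fraction that become communicative (assumption)
    L=2040 - 1960,  # ~80 years: window between interstellar signalling (1960)
                    # and the estimated advent of ASI (2040)
)
print(f"Estimated communicating civilizations: N ~ {N:.0f}")
```

Even with these generous inputs, the estimate comes out at roughly ten civilizations across the entire galaxy at any one time, consistent with the "handful" described above; shrinking L shrinks N in direct proportion.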

Wake-up call

This research is not just a cautionary tale of potential doom. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems.

This is not just about preventing the malevolent use of AI on Earth; it is also about ensuring the evolution of AI aligns with the long-term survival of our species. It suggests we need to put more resources into becoming a multiplanetary society as soon as possible – a goal that has lain dormant since the heady days of the Apollo project, but has lately been reignited by advances made by private companies.

As historian Yuval Noah Harari has noted, nothing in history has prepared us for the impact that unconscious, super-intelligent entities will have on our planet. Recently, the decision-making implications of autonomous AI have led to calls from prominent leaders in the field for a moratorium on AI development, until a form of responsible control and regulation can be introduced.

But even if every country agreed to abide by strict rules and regulation, rogue organizations will be difficult to rein in.

The integration of autonomous AI into military defense systems has to be an area of particular concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because they can carry out useful tasks much more rapidly and effectively without human intervention. Governments are therefore reluctant to regulate in this area given the strategic advantages AI offers, as was recently and devastatingly demonstrated in Gaza.

This means we are already edging dangerously close to a precipice where autonomous weapons operate outside ethical boundaries and sidestep international law. In such a world, surrendering power to AI systems in order to gain a tactical advantage could inadvertently set off a chain of rapidly escalating, highly destructive events. In the blink of an eye, the collective intelligence of our planet could be wiped out.

Humanity is at a critical point in its technological trajectory. Our actions now could determine whether we become a viable interstellar civilization, or whether we succumb to the challenges of our own creation.

Using Seti as a lens through which we can examine our future development adds a new dimension to the discussion of the future of AI. It is up to all of us to make sure that when we reach the stars, we do so not as a cautionary tale for other civilizations, but as a sign of hope – a species that has learned to thrive alongside AI.

This article from The Conversation is republished under a Creative Commons license. Read the original article.


Michael Garrett does not work for, consult with, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
