Technology companies are trying to build artificial general intelligence. But who decides when AGI will be achieved?

There is a race underway to build artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans, or at least can do many things as well as people can.

Achieving such a concept – commonly known as AGI – is the driving mission of ChatGPT maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.

It is also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with “long-term planning” skills could pose an existential threat to humanity.

But what exactly is AGI, and how will we know when it has been achieved? Once on the fringes of computer science, it is now a buzzword that is constantly being redefined by those trying to achieve it.

What is AGI?

Not to be confused with the similar-sounding generative AI – which describes the AI systems behind cutting-edge tools that “generate” new documents, images and sounds – artificial general intelligence is a more nebulous idea.

It’s not a technical term but “a serious, albeit ill-defined concept,” said Geoffrey Hinton, a pioneering AI scientist known as the “Godfather of AI.”

“I don’t think there is agreement on what the term means,” Hinton said by email this week. “I use it to mean an AI that is at least as good as a human in almost all of the cognitive things that humans do.”

Hinton prefers another term — superintelligence — “for AGIs that are better than humans.”

A small group of early proponents of the term AGI wanted to illustrate how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched out into subfields that led to specialized and commercially viable versions of the technology—from facial recognition to popular voice assistants like Siri and Alexa.

Mainstream AI research has “turned away from the original vision of artificial intelligence, which was quite ambitious at the beginning,” said Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.

Putting the ‘G’ in AGI was a signal to those who “still want to do the big thing. We don’t want to build tools. We want to build a thinking machine,” Wang said.

Are we at AGI yet?

Without a clear definition, it’s hard to know when — or if — a company or group of researchers will have achieved artificial general intelligence.

“Twenty years ago, I think people would have happily agreed that systems with the capabilities of GPT-4 or (Google) Gemini had achieved general intelligence comparable to that of humans,” Hinton said. “Being able to answer any question in a more or less sensible way would have passed the test. But now that AI can do that, people are trying to change the test.”

Improvements in “autoregressive” AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have made for impressive chatbots, but they are still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform as well as humans at a wide variety of tasks, including reasoning, planning and the ability to learn from experience.
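
The mechanics described above – repeatedly predicting the most plausible next word – can be sketched in a few lines of code. The tiny probability table and greedy word picker below are invented purely for illustration; real chatbots learn probabilities over enormous vocabularies from vast training datasets rather than using a hand-written table.

```python
# A toy sketch of autoregressive generation: extend a prompt one
# most-probable word at a time. The probability table is a made-up
# stand-in for what real systems learn from massive training data.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.7, "quietly": 0.3},
}

def generate(prompt: str, max_words: int = 3) -> str:
    """Greedily append the most probable next word, one step at a time."""
    words = prompt.split()
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:
            break  # no known continuation for the last word
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```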

Some researchers want to reach a consensus on how to measure it. It’s one of the topics at next month’s AGI workshop in Vienna, Austria — the first at a major AI research conference.

“This needs real effort and public attention so that we can collectively agree on some kind of AGI classification,” said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to break it down into levels the same way carmakers try to benchmark the path between cruise control and fully self-driving vehicles.

Others plan to figure it out on their own. San Francisco company OpenAI has given its non-profit board of directors – whose members include a former US Treasury secretary – the responsibility of deciding when its AI systems reach the point where they “outperform humans at most economically valuable work.”

“The board decides when we’ve achieved AGI,” says OpenAI’s own explanation of its governance structure. Such an achievement would cut off the company’s biggest partner, Microsoft, from the rights to commercialize the system, since the terms of their agreement “only apply to pre-AGI technology.”

Is AGI dangerous?

Hinton made global headlines last year when he resigned from Google and warned of the existential dangers of AI. A new Science study published Thursday may bolster those concerns.

Lead author Michael Cohen, a University of California, Berkeley researcher, studies the “expected behavior of generally intelligent artificial agents,” particularly those competent enough to “present a real threat to us by out-planning us.”

Cohen made clear in an interview Thursday that such long-term planning AI agents do not yet exist. But they “have the potential” to become more advanced as tech companies try to combine today’s chatbot technology with more deliberate planning skills using a technique called reinforcement learning.

“If an advanced AI system is given the objective of maximizing its reward and, at some point, reward is withheld from it, the AI system is strongly incentivized to take humans out of the loop, if the opportunity arises,” according to the paper, whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI advisor Gillian Hadfield.

“I hope we’ve made the case that people in government (need to) start thinking seriously about exactly what regulations we need to address this problem,” Cohen said. So far, “governments only know what these companies decide to tell them.”

Too legit to quit AGI?

With so much money riding on the promise of AI advances, it’s no surprise that AGI is also becoming a corporate buzzword that sometimes attracts a quasi-religious fervor.

It has divided parts of the tech world between those who argue the technology should be developed slowly and carefully and others – including venture capitalists and rapper MC Hammer – who have declared themselves part of the “accelerationist” camp.

London startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI did the same in 2015 with a promise focused on safety.

But now it may seem that everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently spotted hanging out at a California venue called the AGI House. And less than three years after changing its name from Facebook to focus on virtual worlds, Meta Platforms revealed in January that AGI was also at the top of its agenda.

Meta CEO Mark Zuckerberg said his company’s long-term goal is to build “full general intelligence,” which would require advances in reasoning, planning, coding and other cognitive abilities. While researchers at Zuckerberg’s company have long focused on those topics, his attention marked a change in tone.

At Amazon, one sign of the new messaging came when the chief scientist of the Alexa voice assistant changed job titles to become chief scientist for AGI.

While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions could help recruit AI talent who have a choice about where they want to work.

When deciding between an “old-school AI institute” and one that “has the goal of building AGI” and enough resources to do so, many would choose the latter, said You, the University of Illinois researcher.
