The year we played with artificial intelligence — and we weren’t sure what to make of it

Artificial intelligence went mainstream in 2023 – it was a long time coming, yet the technology still has a long way to go to match science fiction fantasies of human-like machines.

ChatGPT ushered in a year of fascination with AI-generated writing. The chatbot gave the world a glimpse of recent advances in computer science, even if not everyone fully understood how it works or what to do with it.

“I would say this is a watershed moment,” said pioneering AI scientist Fei-Fei Li. “I hope 2023 will be remembered for the profound changes in the technology as well as the public awakening. It also shows how messy this technology is.”

It was a year for people to figure out “what this is, how to use it, what the impact is – the good, the bad and the ugly,” she said.

PANIC OVER IT

The first AI panic of 2023 arrived shortly after New Year’s when classrooms reopened and schools from Seattle to Paris began blocking ChatGPT. Teenagers were already asking the chatbot — released in late 2022 — to compose essays and answer take-home tests.

The large language models behind technology such as ChatGPT work by repeatedly guessing the next word in a sentence, having “learned” the patterns of vast quantities of human-written works. They often get facts wrong. But the output sounded so natural that it sparked curiosity about the next advances in AI and its potential for trickery and deception.
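That next-word guessing can be illustrated at toy scale. The sketch below is a simple bigram counter over a made-up ten-word corpus — an illustration of the core idea only, not the neural network ChatGPT actually uses:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "many human-written works" the
# article mentions; real models train on vastly more text.
corpus = "the cat sat on the mat and the cat ran".split()

# "Learn" the patterns: count which words follow each word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def guess_next(word):
    """Return the word most frequently observed after `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # "cat" follows "the" more often than "mat"
```

Repeating `guess_next` on its own output generates a whole passage — which is also why such systems can produce fluent text that is factually wrong: they model word patterns, not truth.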

Concern grew as this new cohort of generative AI tools – spitting out not just words but novel images, music and synthetic voices – threatened the livelihood of anyone who writes, draws, strums or codes for a living. It fueled strikes by Hollywood writers and actors and legal challenges from visual artists and best-selling authors.

Some of the most prominent scientists in the AI field warned that the technology’s unchecked progress could produce machines smarter than humans, possibly threatening our existence, while other scientists dismissed those worries as overblown or pointed to more immediate risks.

By spring, AI-generated fakes — some more convincing than others — had made their way into the U.S. election campaign, where one falsely portrayed Donald Trump embracing the nation’s former infectious disease expert. The technology also made it increasingly difficult to distinguish real from fabricated war footage in Ukraine and Gaza.

By the end of the year, the AI crises had shifted to ChatGPT’s own maker, San Francisco startup OpenAI, nearly destroyed by corporate turmoil over its charismatic CEO, and to a government meeting room in Belgium, where weary political leaders from across the European Union emerged after days of tough talks with an agreement on the world’s first major AI legal safeguards.

The new AI law won’t take effect until 2025, and other legislative bodies – including the United States Congress – are still far from enacting their own.

TOO MUCH HYPE?

There is no doubt that the commercial AI products unveiled in 2023 incorporate technological achievements that were not possible in earlier stages of AI research, which dates back to the mid-20th century.

But the latest generative AI trend is at peak hype, according to market research firm Gartner, which has tracked what it calls the “hype cycle” of emerging technologies since the 1990s. Picture a wooden roller coaster ticking up to its highest hill, about to plunge into what Gartner describes as a “trough of disillusionment” before leveling off into reality.

“Generative AI is right at the peak of inflated expectations,” said Gartner analyst Dave Micko. “Vendors and producers of generative AI are making huge claims about their capabilities and their ability to deliver those capabilities.”

Google drew criticism this month for editing a video demonstration of its most capable AI model, called Gemini, in a way that made it appear more impressive — and human-like.

Micko said leading AI developers are pushing particular ways of applying the latest technology, most of which align with their existing product lines – be they search engines or workplace productivity software. That doesn’t mean the world will use it that way.

“As much as Google and Microsoft and Amazon and Apple would love us to adopt the way they think about their technology and deliver that technology, I think adoption really comes from the bottom up,” he said.

IS IT DIFFERENT THIS TIME?

It’s easy to forget that this isn’t the first wave of AI commercialization. Computer vision techniques developed by Li and other scientists have helped sort through a huge database of photographs to recognize individual objects and faces and guide self-driving cars. Advances in speech recognition have made voice assistants like Siri and Alexa a part of many people’s lives.

“When we launched Siri in 2011, it was at the time the fastest-growing consumer app and the only major mainstream application of AI that people had ever experienced,” said Tom Gruber, co-founder of Siri Inc., which Apple bought and turned into an iPhone feature.

But Gruber believes what’s happening now is the “biggest wave ever” in AI, unleashing new possibilities as well as dangers.

“We’re amazed that we could discover this amazing ability with language by accident, by training a machine to play solitaire on the entire internet,” Gruber said. “It’s a wonderful thing.”

The dangers could come quickly in 2024, as major national elections in the United States, India and elsewhere could be flooded with AI-generated deepfakes.

In the longer term, rapidly improving AI capabilities in language, visual perception and step-by-step planning could supercharge the vision of a digital assistant — but only if granted access to “the inner loop of our digital life stream,” Gruber said.

“They can manage your attention as in, ‘You should watch this video. You should read this book. You should respond to this person’s communications,’” Gruber said. “That’s what a real executive assistant does. And we could have that, but at great risk to personal information and privacy.”
