Artificial intelligence (AI) went mainstream in 2023. It’s been a long time coming, yet the technology still has a long way to go to match science fiction’s fantasy of human-like machines.
ChatGPT was the catalyst for a year of AI fanfare. The chatbot gave the world a glimpse of recent advances in computer science, even if not everyone fully understands how it works or what to do with it.
“I would say this is a watershed moment,” said pioneering AI scientist Fei-Fei Li.
“2023, in history, hopefully will be remembered for the profound changes in technology as well as the public awakening. It also shows how messy this technology is”.
It was a year for people to figure out “what this is, how to use it, what the impact is – everything good, bad and ugly,” she said.
Panic over AI
The first AI panic of 2023 arrived shortly after New Year’s when classrooms reopened and schools from Seattle to Paris began blocking ChatGPT.
Teenagers were already asking the chatbot – released in late 2022 – to compose essays and answer take-home tests.
The large language models behind technology such as ChatGPT work by repeatedly guessing the next word in a sentence after “learning” the patterns in a huge trove of human-written works.
They often get facts wrong. But the output sounded so natural that it sparked curiosity about the next advances in AI and its potential for trickery and deception.
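That next-word-guessing idea can be sketched with a toy example – a simple bigram counter, nowhere near the scale or sophistication of the real models, with an invented mini-corpus purely for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration (not how ChatGPT is actually built): "learn" which word
# tends to follow which from a tiny corpus, then repeatedly guess the most
# likely next word.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count word-to-next-word transitions.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, steps=4):
    """Greedily extend `start` by picking the most frequent next word."""
    words = [start]
    for _ in range(steps):
        candidates = following[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the"))  # → "the cat sat on the"
```

Real systems predict over tens of thousands of tokens using neural networks trained on vast text collections, and sample probabilistically rather than always taking the single most likely word – which is part of why their output can sound fluent while still getting facts wrong.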
Concern grew as this new crop of generative AI tools – churning out not just words but novel images, music, and synthetic voices – threatened the livelihoods of anyone who writes, draws, doodles or codes for a living.
It prompted strikes by Hollywood writers and actors and legal challenges by visual artists and best-selling authors.
Some of the most prominent scientists in the AI field warned that the technology’s unchecked progress was on a path to outsmart humans and possibly threaten their existence, while other scientists claimed such worries were overblown or drew attention to more immediate risks.
By spring, AI-generated deepfakes—some more convincing than others—had jumped into the US election campaign, where one falsely portrayed Donald Trump embracing the nation’s former infectious disease expert.
The technology made it more difficult to distinguish between real and fabricated war footage in Ukraine and Gaza.
By the end of the year, the AI crises had moved to ChatGPT’s own maker – the San Francisco startup OpenAI, nearly destroyed by corporate turmoil over its charismatic CEO – and to a government meeting room in Belgium, where exhausted political leaders from across the European Union emerged after days of intense talks with a deal for the world’s first major AI legal safeguards.
The new EU AI Act will take a few years to take full effect, and other legislative bodies – including the United States Congress – are far from enacting their own.
Too much hype?
There is no doubt that the commercial AI products unveiled in 2023 incorporate technological achievements that were not possible in earlier stages of AI research, which dates back to the mid-20th century.
But the latest generative AI trend is at peak hype, according to market research firm Gartner, which has tracked the so-called “hype cycle” of emerging technologies since the 1990s. Picture a wooden roller coaster ticking up to its highest hill, about to careen down into what Gartner describes as a “trough of disillusionment” before snapping back to reality.
“Generative AI is right at the peak of inflated expectations,” said Gartner analyst Dave Micko. “Vendors and producers of generative AI are making huge claims about their capabilities, their ability to deliver those capabilities”.
Google drew criticism this month for editing a video demonstration of its most capable AI model, called Gemini, in a way that made it appear more impressive — and human-like.
Micko said that leading AI developers are pushing certain ways of applying the latest technology, most of which align with their existing product lines – be it search engines or workplace productivity software. That doesn’t mean the world will use it that way.
“As much as Google and Microsoft and Amazon and Apple would love us to adopt the way they think about their technology and deliver that technology, I think adoption really comes from the bottom up,” he said.
Is it different this time?
It’s easy to forget that this isn’t the first wave of AI commercialization. Computer vision techniques developed by Li and other scientists helped sort through huge databases of photographs to recognize individual objects and faces and helped guide self-driving cars. Advances in speech recognition made voice assistants like Siri and Alexa a fixture in many people’s lives.
“When we launched Siri in 2011, it was at the time the fastest growing consumer app and the only major mainstream application of AI that people had ever experienced,” said Tom Gruber, co-founder of Siri Inc., which was bought by Apple and became an iPhone feature.
But Gruber believes what’s happening now is the “biggest wave ever” in AI, unleashing new possibilities as well as dangers.
“We are stunned that we could stumble upon this amazing ability with language by accident, by training a machine to play solitaire on the entire Internet,” Gruber said. “It’s kind of amazing”.
The dangers could come quickly in 2024, as major national elections in the United States, India, and elsewhere could be flooded with AI-generated deepfakes.
In the long term, rapidly improving AI capabilities in language, visual perception and step-by-step planning could supercharge the vision of a digital assistant – but only if it is given access to “the inner loop of our digital life stream,” Gruber said.
“They can manage your attention as in, ‘you should watch this video. You should read this book. You should respond to this person’s communication,'” Gruber said.
“That’s what a real executive assistant does. And we could have that, but at great risk to personal information and privacy”.