Ask Google if there were cats on the moon, and it used to spit out a ranked list of websites so you could find the answer for yourself.
Now it comes up with an immediate answer generated by artificial intelligence – which may or may not be correct.
“Yes, astronauts have met, played with, and cared for cats on the moon,” the newly relaunched Google search engine said in response to a question from an Associated Press reporter.
It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”
None of this is true. Similar errors – some amusingly false, some harmful – have been shared widely on social media since Google this month released AI Overviews, a makeover of its search page that frequently puts AI-generated summaries at the top of search results.
The new feature has alarmed experts who warn it could perpetuate bias and misinformation and put people seeking help in an emergency at risk.
When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it confidently responded with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”
Mitchell said the summary backed up the claim by citing a chapter of an academic book written by historians. But the chapter didn’t make the false claim – it merely referred to the false theory.
“Google’s AI system is not smart enough to figure out that this citation doesn’t actually support the claim,” Mitchell said in an email to the AP. “Given how unreliable it is, I think this AI Overview feature is very irresponsible and should be taken offline.”
Google said in a statement Friday that it is taking “swift action” to fix errors – such as the falsehood about Obama – that violate its content policies, and is using them to “develop broader improvements” that are already rolling out. But in most cases, Google contends the system is working the way it should, thanks to extensive testing before its public release.
“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”
Errors made by AI language models are hard to reproduce, in part because the models are inherently random. They work by predicting which words would best answer a given question, based on the data they were trained on. They are prone to making things up – a widely studied problem known as hallucination.
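As a minimal sketch of that randomness – using invented word probabilities and assuming nothing about Google’s actual system – the toy Python snippet below samples a “next word” the way language models do, which is why the same query can come out differently on each run:

```python
import random

# Invented probabilities a model might assign to the next word after
# a prompt like "Astronauts on the moon ..." (purely hypothetical).
next_word_probs = {
    "walked": 0.45,
    "collected": 0.30,
    "planted": 0.15,
    "played": 0.10,  # unlikely words can still be drawn by chance
}

def sample_next_word(probs):
    """Pick one word at random, weighted by the model's probabilities."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Running the same "query" twice can yield different words, which is
# one reason an error seen once may be hard to reproduce later.
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))
```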
The AP tested Google’s AI feature with several questions and shared some of its answers with subject matter experts. When asked what to do about a snake bite, Google gave an answer that was “remarkably thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, who is president of the American Society of Ichthyologists and Herpetologists.
But when people go to Google with an emergency question, the chance that the tech company’s answer includes a hard-to-notice error is a problem.
“The more stressed or rushed or in a hurry you are, the more likely you are to take the first answer that comes out,” said Emily M. Bender, professor of linguistics and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-threatening situations.”
That’s not Bender’s only concern, and she has been warning Google about these problems for several years. When Google researchers in 2021 published a paper called “Rethinking search” that proposed using AI language models as “domain experts” that could answer questions authoritatively – much as they do now – Bender and her colleague Chirag Shah responded with a paper laying out why that was a bad idea.
They warned that such AI systems could perpetuate the racism and sexism found in the vast reams of written data they were trained on.
“The problem with that kind of misinformation is that we’re swimming in it,” Bender said. “And so people are likely to get their biases confirmed. And it’s harder to spot misinformation when it’s confirming your biases.”
Another concern was that ceding information retrieval to chatbots degrades the serendipity of the human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.
Those forums and other websites count on Google sending people to them, but Google’s new AI Overviews threaten to disrupt the flow of money-making internet traffic.
Google’s competitors have been closely watching the reaction. The search giant has been under pressure for more than a year to ship more AI features as it competes with ChatGPT maker OpenAI and upstarts such as Perplexity AI, which is trying to take on Google with its own AI question-and-answer app.
“It seems like this was rushed out,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There’s just a lot of unforced errors in the quality.”
—————-
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.