Google tweaks AI-generated search summaries after outlandish answers go viral

Google said on Friday that it has made “more than a dozen technical improvements” to its artificial intelligence systems after its retooled search engine was found to be spewing erroneous information.

The tech company rolled out an overhaul of its search engine in mid-May that frequently places AI-generated summaries at the top of search results. Soon after, social media users began sharing screenshots of its most outlandish answers.

Google has largely defended its AI Overviews feature, saying it is generally accurate and was tested extensively beforehand. But Liz Reid, the head of Google’s search business, acknowledged in a blog post on Friday that “some odd, inaccurate or unhelpful AI Overviews certainly did show up.”

While many of the examples were silly, others were dangerous or harmful falsehoods. Adding to the furor, some people also made faked screenshots purporting to show even more ridiculous answers that Google never generated. Some of those fakes were also widely shared on social media.

The Associated Press asked Google last week which wild mushrooms to eat, and it responded with a lengthy AI-generated summary that was mostly technically correct, but “a lot of information is missing that could have the potential to be sickening or even fatal,” said Mary Catherine Aime, a professor of mycology and botany at Purdue University who reviewed Google’s response to the AP’s query.

For example, the information about mushrooms called puffballs was “more or less OK,” she said, but Google’s overview emphasized looking for those with solid white flesh, a trait that many potentially deadly puffball mimics also have.

In another widely shared example, an AI researcher asked Google how many Muslims have been president of the United States, and it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

Google made an immediate fix last week to prevent a repeat of the Obama answer because it violated the company’s content policies.

In other cases, Reid said Friday, the company has sought to make broader improvements, such as better detection of “nonsensical queries,” like “How many rocks should I eat?”, that should not be answered with an AI summary.

The AI systems have also been updated to limit the use of user-generated content, such as social media posts on Reddit, that could offer misleading advice. In one widely shared example, Google’s AI overview last week drew on a satirical Reddit comment suggesting glue as a way to get cheese to stick to pizza.

Reid said the company has also added more “triggering restrictions” to improve the quality of answers to certain queries, such as those about health.

But it is not clear how that works or under what circumstances. On Friday, the AP again asked Google which wild mushrooms to eat. AI-generated answers are inherently random, and the newer response was different but still “problematic,” said Aime, the Purdue mushroom expert, who is also president of the Mycological Society of America.

For example, “Chanterelles don’t really look like shells or flowers,” she said.

Google’s summaries are designed to get people authoritative answers to the information they’re looking for as quickly as possible, without having to click through a ranked list of website links.

But some AI experts have long warned Google against ceding its search results to AI-generated answers that could perpetuate bias and misinformation and endanger people seeking help in an emergency. AI systems known as large language models work by predicting which words would best answer the question asked of them, based on the data they were trained on. They are prone to making things up, a widely studied problem known as hallucination.
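
In rough terms (a hand-built toy for illustration only, not anything resembling Google’s actual systems; every phrase and probability below is invented), that prediction step boils down to sampling the statistically likeliest next word, with no built-in notion of truth:

```python
import random

# A toy next-word predictor. The context, candidate words and probabilities
# are invented for this example; a real large language model learns billions
# of such weights from training data. The model scores words purely by
# statistical likelihood, not by whether the result is true.
NEXT_WORD = {
    "to make cheese stick to pizza, add": {"glue": 0.5, "more": 0.3, "fresh": 0.2},
}

def complete(prompt: str) -> str:
    """Extend the prompt with one word sampled from the toy distribution."""
    choices = NEXT_WORD[prompt.lower()]
    word = random.choices(list(choices), weights=list(choices.values()))[0]
    return f"{prompt} {word}"

# If misleading text (say, a satirical Reddit post) dominated the training
# data, "glue" becomes the likeliest continuation, and it is delivered just
# as confidently as a correct answer would be.
print(complete("To make cheese stick to pizza, add"))
```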

In her blog post on Friday, Reid argued that Google’s AI overviews generally don’t “hallucinate” or make things up in the ways that other large language model-based products might, because they are more closely integrated with Google’s traditional search engine and show only what is backed up by top web results.

“When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available,” she wrote.
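
The design Reid describes, in which generated answers are constrained to what top-ranked web results actually say, matches the pattern commonly called retrieval-augmented generation. The sketch below is a minimal, hypothetical rendering of that idea; the `search_index` and `summarize` functions and their hard-coded results are assumptions made for illustration, not Google’s implementation:

```python
from dataclasses import dataclass

@dataclass
class WebResult:
    url: str
    snippet: str

def search_index(query: str) -> list[WebResult]:
    # Stub standing in for a real search backend; the results are
    # hard-coded here purely so the example runs.
    return [
        WebResult("https://example.com/a", "Edible puffballs have solid white flesh."),
        WebResult("https://example.com/b", "Several deadly species mimic young puffballs."),
    ]

def summarize(query: str, results: list[WebResult]) -> str:
    # Compose the answer only from retrieved snippets, never from model
    # memory alone. Grounding limits invention, but it cannot fix a wrong
    # or satirical source: the summary repeats whatever the top results say.
    evidence = " ".join(r.snippet for r in results)
    sources = ", ".join(r.url for r in results)
    return f"{evidence} (sources: {sources})"

query = "which wild mushrooms are safe to eat"
print(summarize(query, search_index(query)))
```

In this pattern the summary can only be as good as the retrieved pages, which is exactly the failure mode the glue-on-pizza example exposed.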

But that kind of information retrieval is supposed to be Google’s core business, said computer scientist Chirag Shah, a professor at the University of Washington who has cautioned against the push toward turning search over to AI language models. Even if Google’s AI feature is not “technically making things up,” it is still surfacing false information, whether AI-generated or human-made, and incorporating it into its summaries.

“If anything, this is worse because for decades people have trusted at least one thing from Google: its search,” Shah said.
