Google apologized on Friday for its flawed rollout of a new artificial intelligence image generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range didn’t make sense.
The partial explanation for why its images put people of color in historical settings where they wouldn’t normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to an outcry on social media from some users who claimed the tool had an anti-white bias in the way it generated a series of racially diverse images in response to written prompts.
“It’s clear that this feature missed the mark,” said a Friday blog post from Prabhakar Raghavan, a senior vice president who runs Google’s search engine and other businesses. “Some of the images generated are inaccurate or even offensive. We appreciate user feedback and are sorry that the feature did not work well.”
Raghavan did not cite specific examples, but among those that drew attention on social media this week were images depicting a Black woman as a US founding father and depicting Black and Asian people as Nazi-era German soldiers. The Associated Press was unable to independently verify what prompts were used to generate those images.
Google added the new image generation feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built on top of an earlier Google research experiment called Imagen 2.
Google has known for some time that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or the spread of misinformation “and raise many concerns about social and cultural exclusion and bias.” Those concerns factored into Google’s decision not to release a “public demonstration” of Imagen or its underlying code, the researchers said at the time.
Since then, pressure to publicly release generative AI products has grown because of a competitive race among tech companies trying to capitalize on interest in the emerging technology sparked by the debut of OpenAI’s chatbot ChatGPT.
The problems with Gemini are not the first to plague an image generator in recent times. Microsoft had to adjust its own Designer tool several weeks ago after some people used it to create deepfake pornographic images of Taylor Swift and other celebrities. Studies have also shown that AI image generators can amplify racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.
“When we built this feature into Gemini, we tuned it to make sure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology – like creating violent or sexually explicit images, or depictions of real people,” Raghavan said on Friday. “And because our users come from all over the world, we want it to work well for everyone.”
He said many people might want “a range of people” when asking for a picture of football players or someone walking a dog. But users looking for someone of a particular race or ethnicity, or in a particular cultural context, should “get an answer that accurately reflects what you ask for.”
While Gemini overcompensated in response to some prompts, he said, in others it was “more cautious than we intended and refused to answer certain prompts entirely – wrongly interpreting some very anodyne prompts as sensitive.”
He did not explain which prompts those were, but Gemini regularly rejects requests for certain topics such as protest movements, according to tests of the tool conducted by the AP on Friday, in which it declined to generate images about the Arab Spring, the George Floyd protests or Tiananmen Square. In one case, the chatbot said it did not want to contribute to the spread of misinformation or “trivialization of sensitive topics.”
Much of this week’s furor over Gemini’s output came from X, formerly known as Twitter, and was amplified by the social media platform’s owner Elon Musk, who criticized Google for what he described as its “racist, anti-civilizational programming.” Musk, who has his own AI startup, has frequently criticized rival AI developers as well as Hollywood for alleged liberal bias.
Raghavan said Google will do “extensive testing” before enabling the chatbot to show images of people again.
University of Washington researcher Sourojit Ghosh, who has studied bias in AI image generators, said Friday that he was disappointed that Raghavan’s message ended with a disclaimer that the Google executive “can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results.”
For a company that has perfected search algorithms and has “one of the biggest troves of data in the world, generating accurate results or inoffensive results should be a fairly low bar we can hold them accountable to,” Ghosh said.