Gemini’s flawed AI racial images seen as a warning of tech titans’ power

Google’s Gemini AI gaffe in creating images on command highlighted the challenge of eliminating cultural bias from such tech tools (PAU BARRENA)

For people at the trend-setting tech festival here, the scandal that erupted after Google’s Gemini chatbot churned out images of Black and Asian Nazi soldiers was seen as a warning about the power that artificial intelligence can give technology titans.

Google CEO Sundar Pichai last month criticized his company’s Gemini AI app for “absolutely unacceptable” errors, after gaffes such as the images of ethnically diverse Nazi troops forced it to temporarily stop users from creating pictures of people.

Social media users mocked and criticized Google for the historically inaccurate images, such as those showing a Black female US senator from the 1800s — when the first such senator was not elected until 1992.

“We definitely messed up the image generation,” Google co-founder Sergey Brin said at a recent AI “hackathon,” adding that the company should have tested Gemini more thoroughly.

Those interviewed at the South by Southwest arts and technology festival in Austin said the Gemini stumble underscores the outsized power a handful of companies have over the artificial intelligence platforms poised to change the way people live and work.

“Basically, it was too ‘woke,’” said Joshua Weaver, a lawyer and tech entrepreneur, meaning that Google had gone overboard in its effort to project inclusion and diversity.

Google quickly corrected its errors, but the underlying problem remains, said Charlie Burgoyne, chief executive of the Valkyrie applied science lab in Texas.

Google’s fix for Gemini, he said, was like putting a Band-Aid on a bullet wound.

While Google has long had the luxury of time to refine its products, it is now scrambling in an AI race with Microsoft, OpenAI, Anthropic and others, Weaver noted, adding, “They are moving faster than they know how to move.”

Mistakes made in an effort at cultural sensitivity are flashpoints, especially given the political divisions that exist in the United States, a situation exacerbated by Elon Musk’s X platform, the former Twitter.

“People on Twitter are very happy to celebrate anything shameful that happens in technology,” Weaver said, adding that reaction to the Nazi gaffe was “overblown”.

However, the mishap did call into question how much control those using AI tools have over information, he said.

In the next decade, the amount of information — or misinformation — created by AI could outstrip that generated by humans, meaning those who control AI safeguards will have huge influence on the world, Weaver said.

– Lean in, Lean out –

Karen Palmer, an award-winning mixed reality creator with Interactive Films Ltd., said she could envision a future where a person gets into a robo-taxi and, “if the AI scans you and thinks there are any outstanding violations against you … you will be taken into the local police station,” not your intended destination.

AI is trained on mountains of data and can be put to work on a growing range of tasks, from generating an image or sound to determining who gets a loan or whether a medical scan finds cancer.

But that data comes from a world full of cultural bias, misinformation and social inequality – not to mention online content that can include casual conversations between friends or deliberately exaggerated and provocative posts – and AI models can absorb and replicate those flaws.

With Gemini, Google engineers tried to rebalance the algorithms to provide results that better reflect human diversity.

The attempt backfired.

“Determining where the bias is and how it counts can be difficult, nuanced and subtle,” said technology lawyer Alex Shahrestani, managing partner at Promise Legal, a law firm for technology companies.

Even well-intentioned engineers involved in AI training can’t help but bring their own life experiences and subconscious biases to the process, he and others believe.

Valkyrie’s Burgoyne also castigated big tech for keeping the inner workings of generative AI hidden in “black boxes,” so that users can’t detect any hidden bias.

“The capabilities of the outputs are much higher than our understanding of the methodology,” he said.

Experts and activists are calling for more diversity in teams creating AI and related tools, and more transparency about how they work — especially when algorithms rewrite user requests to “improve” results.

A challenge is how to appropriately capture the perspectives of the world’s many and diverse communities, said Jason Lewis of the Indigenous Futures Resource Center and related groups here.

At Indigenous AI, Lewis works with remote indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, something he does not always see in the “arrogance” of big tech leaders.

His own work, he told a group, stands in “such contrast to the rhetoric of Silicon Valley, where it’s top-down ‘Oh, we’re doing this because we’re benefiting all humanity’ bullshit, right?”

His audience laughed.

