Ethical AI Is Not to Blame for Google's Gemini Debacle

In this photo illustration, the Google Gemini logo appears in the background behind the silhouette of a person using a notebook. Credit: Rafael Henrique / SOPA Images / LightRocket via Getty Images

Earlier this month, Google released its long-awaited "Gemini" system, giving users access to its AI image-generation technology for the first time. Most early users agreed the system was impressive, creating detailed images for text prompts in seconds. But users soon found that it was difficult to get the system to generate images of white people, and viral tweets quickly circulated showing head-scratching examples, such as racially diverse Nazis.

Some have faulted Gemini for being "too woke," using it as the latest weapon in a growing culture war over the importance of recognizing the effects of historical discrimination. Many said it showed malaise within Google, and some held the field of "AI ethics" itself to blame.

The idea that ethical AI work is to blame is wrong. In fact, Gemini showed that Google was not correctly applying the lessons of AI ethics. Where AI ethics focuses on addressing foreseeable use cases, such as historical depictions, Gemini appears to have opted for a "one size fits all" approach, resulting in an awkward mix of refreshingly diverse and cringeworthy outputs.

I should know. I have worked on ethics in AI within technology companies for over 10 years, making me one of the most senior experts in the world on the subject (it's a young field!). I also founded and co-led Google's "Ethical AI" team, before the company fired me and my co-lead after our report warned of exactly these kinds of issues with language generation. Many people criticized Google for that decision, seeing it as evidence of systemic discrimination and a preference for reckless speed over well-considered strategy in AI. I strongly agree.

The Gemini debacle once again laid bare Google's unclear strategy in areas that I am uniquely qualified to help with, and that I can now help the public understand more generally. This piece discusses some of the ways companies can do AI better next time, avoiding handing the far right more unhelpful ammunition in the culture wars, and ensuring that AI benefits as many people as possible in the future.

One of the critical pieces of ethical AI work is to foresee likely use, including malicious use and misuse. This means working through questions like: Once the model we're imagining building is deployed, how will people use it? And how can we design it to be as beneficial as possible in those contexts? This approach recognizes the central importance of "context of use" in building AI systems. This kind of foresight and contextual thinking, grounded in how society and technology interact, is harder for some people than for others; it is where people with expertise in human-computer interaction, social science, and cognitive science are especially well trained (speaking to the importance of interdisciplinarity in tech hiring). These roles are usually not given as much power and influence as engineering roles, and my guess is this was true for Gemini: those most skilled at articulating foreseeable uses were not given the power, which resulted in a system that could not handle multiple forms of appropriate use, such as depicting historically white groups.

Things go wrong when organizations treat every use case as a single use case, or don't model out use cases at all. Without an ethics-informed analysis of use cases in different contexts, AI systems may lack the "under the hood" models that help identify what the user is looking for (and whether the system should generate it). In Gemini's case, this might involve determining whether the user is seeking historical or diverse imagery, and whether their request is ambiguous or malicious. We recently saw this same failure to build robust models of foreseeable use lead to the proliferation of AI-generated Taylor Swift pornography.
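To make this concrete, here is a minimal sketch, in Python, of what such an "under the hood" model of foreseeable use might look like. Everything in it is hypothetical: the names classify_request and handle_image_request, the keyword rules, and the routing categories are my own illustration, not Gemini's actual internals. The point is simply that the system decides what kind of depiction is being requested before deciding whether to diversify the output, keep it historically faithful, refuse, or ask for clarification.

```python
from enum import Enum, auto

class RequestType(Enum):
    HISTORICAL = auto()  # e.g. "German soldiers in 1943"
    GENERIC = auto()     # e.g. "a doctor talking to a patient"
    MALICIOUS = auto()   # e.g. non-consensual imagery of a real person
    AMBIGUOUS = auto()   # not enough context to tell

def classify_request(prompt: str) -> RequestType:
    """Toy classifier. In a real system this would be a learned model
    trained on foreseeable-use examples, not keyword matching."""
    lowered = prompt.lower()
    if any(term in lowered for term in ("1943", "founding fathers", "medieval")):
        return RequestType.HISTORICAL
    if any(term in lowered for term in ("non-consensual", "humiliating")):
        return RequestType.MALICIOUS
    if any(term in lowered for term in ("a doctor", "a teacher", "a ceo")):
        return RequestType.GENERIC
    return RequestType.AMBIGUOUS

def handle_image_request(prompt: str) -> str:
    """Route the request according to its foreseeable context of use."""
    kind = classify_request(prompt)
    if kind is RequestType.MALICIOUS:
        return "REFUSE"
    if kind is RequestType.GENERIC:
        # Generic depictions are where demographic diversification helps.
        return f"GENERATE (diversified): {prompt}"
    if kind is RequestType.HISTORICAL:
        # Historical depictions should stay faithful to the period requested.
        return f"GENERATE (historically faithful): {prompt}"
    # Ambiguous requests can ask the user rather than guessing.
    return f"ASK FOR CLARIFICATION: {prompt}"

if __name__ == "__main__":
    print(handle_image_request("a doctor talking to a patient"))
    print(handle_image_request("German soldiers in 1943"))
```

The specifics would differ in any production system, but the shape of the logic is the point: appropriate behavior depends on a model of what the user is foreseeably trying to do.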

To help, I made the following chart years ago. The task is to fill out its cells; I have filled it out today with some Gemini-specific examples.

[Chart. Credit: Margaret Mitchell]

The green cells (the top row) are those where AI is most likely to be beneficial (not where AI will always be beneficial). The red cells (the middle row) are those where AI is most likely to be harmful (though an unexpected beneficial innovation may still occur there). The rest of the cells are more likely to have mixed results: some good, some bad.

The next steps involve working through likely errors in different contexts, paying particular attention to errors that fall disproportionately on subgroups subject to discrimination. It looks like the Gemini developers got this part mostly right. The team seems to have had the foresight to recognize the risk of over-representing white people in neutral or positive situations, which would reinforce a problematic white-dominant view of the world. And so a sub-module within Gemini was likely designed to show users darker skin tones.
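If that sub-module was applied as a blanket rewrite, with no model of the context of use, the failure mode is easy to see. The sketch below is speculative; the function naive_diversify is my own invention, not Gemini's code. It simply shows how unconditionally appending a sampled demographic descriptor to every prompt produces plausible results for generic requests and ahistorical ones for explicitly historical requests.

```python
import random

DESCRIPTORS = ["Black", "South Asian", "East Asian", "Indigenous", "white"]

def naive_diversify(prompt: str) -> str:
    """'One size fits all' rewrite: every prompt gets a randomly sampled
    demographic descriptor, with no check on its context of use."""
    return f"{prompt}, depicted as a {random.choice(DESCRIPTORS)} person"

# Reasonable for a generic request...
print(naive_diversify("a portrait of a software engineer"))
# ...but ahistorical when the request is explicitly historical.
print(naive_diversify("a portrait of a 1943 German soldier"))
```

Conditioning that rewrite on the kind of request, as in the earlier sketch, is the difference between diversifying appropriately and "one size fits all."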

This step may have been handled well for Gemini in part because of growing public awareness of bias in AI systems: pro-white bias was an easily foreseeable PR nightmare, echoing Google's infamous Gorilla Incident. But the approaches to handling "context of use" were not given the same attention. The net result was a system that "missed the mark" when it came to accommodating foreseeable, appropriate use cases.

The high-level point is that it is possible to build technology that benefits its users and minimizes harm to those most likely to be negatively affected. But the people who are good at this work have to be included in development and deployment decisions, and these people are often disempowered (or worse) in tech. It doesn't have to be this way: we can chart a different path for AI, one that empowers the right people on what they are most qualified to help with, and where diverse perspectives are sought out rather than shut down. Getting there requires some hard work and ruffled feathers. We'll know we're on the right track when we start seeing tech executives as diverse as the images Gemini generates.

Contact us at letters@time.com.
