President Joe Biden’s administration is pushing the tech industry and financial institutions to shut down a growing market for abusive sexual images made with artificial intelligence technology.
New generative AI tools make it easy to transform a person’s likeness into a sexually explicit AI deepfake and share those realistic images across chat rooms or social media. The victims – be they celebrities or children – can do little to stop it.
The White House is putting out a call Thursday seeking voluntary cooperation from companies in the absence of federal legislation. By committing to a series of specific measures, officials hope the private sector can curb the creation, dissemination and monetization of such non-consensual AI images, including explicit images of children.
“With generative AI emerging, everyone was speculating about where the first real harms would come from. And I think we have the answer,” said Arati Prabhakar, Biden’s chief science adviser and director of the White House Office of Science and Technology Policy.
She described to The Associated Press a “tremendous acceleration” of non-consensual images fueled by AI tools that primarily target women and girls in a way that can upend their lives.
“If you’re a teenage girl, if you’re a gay kid, these are problems that people are experiencing right now,” she said. “We’ve seen an acceleration due to very fast-moving generative AI. And the fastest thing that can happen is for companies to step up and take responsibility.”
A document shared with the AP ahead of its release Thursday calls for action not just from AI developers but from payment processors, financial institutions, cloud computing providers, search engines and the gatekeepers, namely Apple and Google, that control what makes it onto mobile app stores.
The private sector should step up to “disrupt the monetization” of image-based sexual abuse, particularly restricting payment access to sites that advertise explicit images of minors, the administration said.
Prabhakar said many payment platforms and financial institutions already say they will not support the types of businesses that promote offensive imagery.
“But sometimes it’s not implemented; sometimes they don’t have those terms of service,” she said. “And so that’s an example of something that could be done much more rigorously.”
Cloud service providers and mobile app stores could “restrict web services and mobile applications that are marketed to create or alter sexual images without the consent of individuals,” the document says.
And whether the images were generated by AI or are real nude photos posted online, survivors should be able to more easily get online platforms to remove them.
The most well-known victim of deepfake pornographic images is Taylor Swift, whose devoted fans fought back in January when abusive AI-generated images of the singer-songwriter began circulating on social media. Microsoft promised to strengthen its safeguards after some of the Swift images were traced to its AI visual design tool.
A growing number of schools in the US and elsewhere are also grappling with AI-generated deepfake nudes depicting their students. In some cases, fellow teenagers were found to be creating AI-manipulated images and sharing them with their classmates.
Last summer, the Biden administration brokered voluntary commitments from Amazon, Google, Meta, Microsoft and other major tech companies to put a range of safeguards on new AI systems before they are released publicly.
Biden then signed an ambitious executive order in October designed to direct how AI is developed so companies can make a profit without jeopardizing public safety. While focused on broader AI concerns, including national security, it touched on the emerging problem of AI-generated images of child abuse and finding better ways to detect it.
But Biden also said legislation would be needed to bolster the administration’s AI protections. A bipartisan group of US senators is pushing Congress to spend at least $32 billion over the next three years to develop artificial intelligence and fund measures to guide it safely, though it has largely put off calls for those safeguards to be enacted into law.
Encouraging companies to step up and make voluntary commitments doesn’t change the fundamental need for Congress to take action here, said Jennifer Klein, director of the White House Gender Policy Council.
Existing laws already prohibit the making and possession of sexual images of children, even if they are fake. Federal prosecutors brought charges earlier this month against a Wisconsin man who they said used a popular AI image generator, Stable Diffusion, to create thousands of realistic images of minors engaged in sexual conduct. The man’s attorney declined to comment after his arraignment hearing Wednesday.
But there is almost no oversight of the technology tools and services that make it possible to create such images. Some are fly-by-night commercial websites that reveal little information about who runs them or the technology they are based on.
The Stanford Internet Observatory said in December that thousands of suspected child sexual abuse images had been found in the massive AI database LAION, an online index of images and captions used to train leading AI image generators such as Stable Diffusion.
London-based Stability AI, which owns the latest versions of Stable Diffusion, said this week it “did not approve the release” of the earlier model allegedly used by the Wisconsin man. Such open source models, because their technical components are released publicly on the internet, are difficult to put back in the bottle.
Prabhakar said it’s not just open source AI technology that’s causing harm.
“It’s a broader problem,” she said. “Unfortunately, this is a category where a lot of people seem to be using image generators. And it’s a place where we’ve seen such an explosion. But I think it’s not neatly broken down into open source and proprietary systems.”
——
AP writer Josh Boak contributed to this report.