A Beverly Hills middle school is the latest to be rocked by a deepfake scandal

Security stands outside Beverly Vista Middle School in Beverly Hills (Jason Armond/Los Angeles Times)

The new face of bullying in schools is a real face. It's the body underneath that's fake.

Last week, officials and parents at Beverly Vista Middle School in Beverly Hills were shocked by reports of fake images circulating online that put the faces of real students on artificially generated nude bodies. According to the Beverly Hills Unified School District, the images were created and shared by other students at Beverly Vista, the district’s only school for sixth through eighth grades. There are about 750 students enrolled there, according to the latest count.

The district, which is investigating, has joined a growing number of educational institutions around the world dealing with fake pictures, videos and audio. In Westfield, N.J., Seattle, Winnipeg, Almendralejo, Spain, and Rio de Janeiro, people using “deepfake” technology have seamlessly grafted the faces of real female students onto artificially generated nude bodies. And in Texas, someone allegedly did the same to a female teacher, grafting her head onto a woman in a pornographic video.

Beverly Hills Unified officials said they were prepared to impose the most severe disciplinary actions allowed by state law. “Any student found creating, disseminating or possessing AI-generated images of this nature will be subject to disciplinary action, including, but not limited to, recommendation of expulsion,” they said in a statement sent to parents last week.

Deterrence may be the only tool at their disposal, however.


There is no shortage of apps available online that will “undress” someone in a photo, simulating what a person would look like if they had been nude when the shot was taken. The apps use AI-powered inpainting technology to remove the pixels representing clothing, replacing them with an image that approximates that person’s nude body, said Rijul Gupta, founder and chief executive of Deep Media in San Francisco.

Other tools let you “swap” a targeted person’s face onto another person’s nude body, said Gupta, whose company specializes in detecting AI-generated content.

Versions of these programs have been available for years, but the earlier ones were expensive, harder to use and less realistic. Today, AI tools can clone real-life images and churn out deepfakes in a matter of seconds, even on a smartphone.

“The ability to manipulate [images] has been democratized,” said Jason Crawforth, founder and chief executive of Swear, whose technology verifies video and audio recordings.

“You used to need 100 people to create something fake. Today you need one, and soon that person will be able to create 100” in the same amount of time, he said. “We have gone from the age of information to the age of disinformation.”

AI tools have “escaped from Pandora’s box,” said Seth Ruden of BioCatch, a company that specializes in fraud detection through behavioral biometrics. “We’re starting to see the scale of the potential damage that could be caused here.”


If kids can access these tools, “it’s not just a problem with deepfake images,” Ruden said. The potential risks extend to creating images of victims “doing something very illegitimate and using that as a way to extort money or blackmail them into taking a specific action,” he said.

Reflecting the wide availability of cheap and easy-to-use deepfake tools, the amount of nonconsensual deepfake porn has surged. According to Wired, an independent researcher’s study found that 113,000 deepfake porn videos were uploaded to the 35 most popular sites for such content in the first nine months of 2023. At that rate, the researcher found, more would be produced by the end of the year than in all previous years combined.

What can be done to protect against deepfake nudes?

Federal and state officials have taken several steps to combat the fraudulent use of AI. According to the Associated Press, six states have banned nonconsensual deepfake porn. In California and a handful of other states that do not have criminal laws specifically against deepfake pornography, victims of this abuse can sue for damages.

The tech industry is also trying to find ways to combat the malicious and fraudulent use of AI. Deep Media has joined some of the world’s leading AI and media companies in the Coalition for Content Provenance and Authenticity, which has developed standards for marking images and audio to identify when they have been digitally manipulated.

Swear is taking a different approach to the same problem, using blockchains to keep immutable records of files in their original state. Comparing the current version of a file with its record on the blockchain reveals whether, and exactly how, it has been changed, Crawforth said.
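The underlying check is simple to illustrate. The Python sketch below is a rough stand-in, not Swear’s actual system: it records a cryptographic fingerprint of a file and later compares the current file against that record. In a real deployment the record would live on an immutable blockchain rather than in an in-memory dictionary, and the function names here are hypothetical.

```python
# Illustrative only: compare a file against a fingerprint recorded earlier.
# A plain dict stands in for the immutable blockchain ledger described above.
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file's raw bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

ledger: dict[str, str] = {}

def register(path: str) -> None:
    """Record a file's fingerprint at capture time."""
    ledger[path] = fingerprint(path)

def is_unaltered(path: str) -> bool:
    """True only if the file still matches the fingerprint recorded at capture."""
    return ledger.get(path) == fingerprint(path)
```

Because changing even a single byte of a file produces a different fingerprint, any mismatch flags the current copy as altered, though a scheme like this only shows that a change occurred, not what was changed.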

Those standards could help identify and block deepfaked media files online. With the right combination of approaches, Gupta said, the vast majority of deepfakes could be filtered out of a school or company network.

One of the challenges, however, is that some AI companies have released open-source versions of their apps, allowing developers to create customized versions of AI generators. That’s how the AI undressing apps came into being, Gupta said. And those developers can ignore the standards developed by the industry, just as they can try to remove or circumvent the markers that would identify their content as artificially generated.

Meanwhile, security experts warn that the photos and videos people upload to social networks every day provide a rich source of material that can be exploited by bullies, scammers and other bad actors. And they don’t need much to create a persuasive fake, Crawforth said; he has seen a demonstration of Microsoft technology that can produce a convincing clone of someone’s voice from just three seconds of their audio online.


“There is no such thing as content that cannot be copied and manipulated,” he said.

Forgoing the digital sharing of photos and videos probably isn’t an option for many, if any, teenagers. So perhaps the best form of protection for those who want to document their lives online is a “poison pill” technology that changes the metadata of the files they upload to social media, hiding them from online searches for photographs or recordings.

“Poison pilling is a great idea. That’s something we’re also researching,” Gupta said. But to be effective, social media platforms, smartphone photo apps and other popular content-sharing tools would have to add the poison pills automatically, he said, because you can’t rely on people to do it consistently on their own.
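The specifics of the poison-pill techniques Gupta describes aren’t public here, but the basic step of rewriting a file’s metadata before it is shared can be sketched in a few lines of Python. The example below simply re-saves an image without its embedded EXIF data; the filenames are hypothetical, and as Gupta notes, a real version would need to be built into the platforms and apps themselves.

```python
# Illustrative only: re-save an image so the shared copy carries none of the
# original's embedded metadata (camera model, GPS coordinates, timestamps).
# This is a stand-in for the metadata-rewriting idea described above, not
# Gupta's actual method.
from PIL import Image  # pip install pillow

def scrub_metadata(src: str, dst: str) -> None:
    """Copy only the pixel data of an image, dropping EXIF and other metadata."""
    with Image.open(src) as im:
        clean = Image.new(im.mode, im.size)
        clean.putdata(list(im.getdata()))
        clean.save(dst)

# Hypothetical filenames, for illustration.
scrub_metadata("party_photo.jpg", "party_photo_clean.jpg")
```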


This story originally appeared in the Los Angeles Times.
