A deepfake of a principal’s voice is the latest case of AI being used to harm

The latest criminal case involving artificial intelligence emerged last week from a high school in Maryland, where police say a principal was framed as a racist through a fake recording of his voice.

The case is yet another reason why everyone – not just politicians and celebrities – should be concerned about this increasingly powerful deepfake technology, experts say.

“Everyone is vulnerable to attack, and anyone can carry out the attack,” said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and disinformation.

Here’s what you need to know about some of the latest uses of AI to cause harm:

AI HAS BECOME WIDELY ACCESSIBLE

Manipulating recorded sound and images is nothing new. But the ease with which anyone can alter information is a recent phenomenon. So is the ability to spread it quickly on social media.

The fake audio clip impersonating the principal is an example of a subset of artificial intelligence known as generative AI. It can create new hyper-realistic images, videos and audio clips. It has become cheaper and easier to use in recent years, lowering the barrier for anyone with an internet connection.

“Especially in the last year, anyone – and really anyone – can go to an online service,” said Farid, the Berkeley professor. “And for free or for a few bucks a month, they can upload 30 seconds of someone’s voice.”

Those seconds can come from a voicemail, a social media post or a surreptitious recording, Farid said. Machine learning algorithms capture the sound of a person’s voice, and cloned speech is then generated from words typed on a keyboard.

The technology will only get more powerful and easier to use, including for video manipulation, he said.

WHAT HAPPENED IN MARYLAND?

Authorities in Baltimore County said Dazhon Darien, the athletic director at Pikesville High, cloned Principal Eric Eiswert’s voice.

The fake recording contained racist and anti-Semitic comments, police said. The audio file arrived in an email in several teachers’ inboxes before it was spread on social media.

The recording came to light after Eiswert raised concerns about Darien’s work performance and alleged misuse of school funds, police said.

The fake audio forced Eiswert to go on leave while police guarded his home, authorities said. Angry phone calls flooded the school, and hate-filled messages piled up on social media.

Detectives asked outside experts to analyze the recording. One said there were “traces of AI-generated content and human editing behind the scenes,” court records said.

A second opinion from Farid, the Berkeley professor, was that the recording contained “multiple recordings spliced together,” according to the records.

Farid told the Associated Press that questions remain about how exactly that recording was created, and he did not confirm that it was entirely AI-generated.

But given the growing potential of AI, Farid said the Maryland case remains a “canary in the coal mine,” about the need to better regulate this technology.

WHY IS AI-GENERATED AUDIO SO CONCERNING?

Many instances of AI-generated disinformation have been audio.

That’s partly because the technology has improved so quickly. Human ears cannot always detect signs of manipulation, while inconsistencies in videos and images are easier to spot.

Some people have cloned the voices of supposedly kidnapped children over the phone to extract ransom money from parents, experts say. Others have impersonated a company’s chief executive in urgent need of funds.

During this year’s New Hampshire primary, AI-generated robocalls mimicked the voice of President Joe Biden and tried to discourage Democratic voters from voting. Experts warn of a rise in AI-generated disinformation targeting elections this year.

But disturbing trends go beyond sound, such as programs that create fake nude images of clothed people without their consent, including minors, experts warn. Singer Taylor Swift has recently been targeted.

WHAT CAN BE DONE?

Most providers of AI voice-generation technology say they prevent malicious use of their tools. But self-policing varies.

Some vendors require a type of voice signature, or require users to recite a specific set of sentences before a voice can be cloned.

Larger tech companies, such as Facebook parent Meta and ChatGPT maker OpenAI, only allow a small group of trusted users to try the technology because of the risks of abuse.

Farid said more must be done. For example, all companies should ask users to enter phone numbers and credit cards so that files can be traced back to those who misuse the technology.

Another idea is requiring recordings and images to carry a digital watermark.

“You change the sound in ways that are imperceptible to the human auditory system, but in a way that can be recognized by a piece of downstream software,” Farid said.
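The kind of watermark Farid describes can be illustrated with a deliberately simplified sketch – hiding bits in the least-significant bit of 16-bit audio samples, a change far below the threshold of hearing. This is not any production scheme (real watermarks must survive compression and re-recording), and the function names here are illustrative:

```python
import numpy as np

def embed_watermark(samples: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide watermark bits in the least-significant bit of 16-bit PCM
    samples -- an inaudible change that software can later read back."""
    marked = samples.copy()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked

def extract_watermark(samples: np.ndarray, n_bits: int) -> list[int]:
    """Recover the embedded bits by reading the least-significant bits."""
    return [int(samples[i] & 1) for i in range(n_bits)]
```

Each sample changes by at most 1 out of a 16-bit range of 65,536 values – imperceptible to a listener, but unambiguous to a detector that knows where to look.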

Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, said the most effective intervention is law enforcement action against the criminal use of AI. More consumer education is also needed.

Another focus should be encouraging responsible behavior among AI companies and social media platforms. But it’s not as simple as banning generative AI.

“Adding legal liability can be complicated because, in many cases, the technology may have positive or benign uses,” Givens said, citing book translation and reading programs.

Another challenge is reaching international agreement on ethics and guidelines, said Christian Mattmann, director of the Information Retrieval & Data Science group at the University of Southern California.

“People use AI differently depending on the country they’re in,” Mattmann said. “And it’s not just the governments, it’s the people. So culture is important.”

___

Associated Press reporters Ali Swenson and Matt O’Brien contributed to this article.
