How to spot AI-generated deepfake images

LONDON (AP) – AI fakery is quickly becoming one of the biggest problems we face online. The rise and misuse of generative artificial intelligence tools has led to a proliferation of deceptive images, videos and audio.

With AI deepfakes appearing almost daily, depicting everyone from Taylor Swift to Donald Trump to Katy Perry attending the Met Gala, it’s getting harder to tell what’s real from what’s not. Video and image generators such as DALL-E, Midjourney and OpenAI’s Sora make it easy for people without any technical skill to create deepfakes: just type in a request and the system spits one out.

These fake images may seem harmless. But they can be used to carry out scams and identity theft, or to spread propaganda and manipulate elections.

Here’s how to avoid being duped by deepfakes:

LOOK FOR THE TELLTALE SIGNS

In the early days of deepfakes, the technology was far from perfect and often left telltale signs of manipulation. Fact-checkers have debunked images with obvious errors, such as hands with six fingers or eyeglasses with differently shaped lenses.

But as AI has improved, spotting fakes has become much more difficult. Some widely shared advice, such as looking for unnatural blinking patterns among people in deepfake videos, no longer holds, said Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI.

Still, there are some things to look for, he said.

A lot of AI deepfake photos, especially of people, have an electronic look to them, an “aesthetic kind of smoothing effect” that makes the skin look “super glossy,” Ajder said.

However, he cautioned that creative prompts can sometimes eliminate this and many other signs of AI manipulation.

Check the consistency of shadows and lighting. The subject is often in clear focus and looks convincing, but features in the background may not be as realistic or polished.
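
For readers comfortable with a little code, the glossy-skin tip above can be made concrete. What follows is a minimal sketch in Python, not a real detector: it uses Pillow and NumPy to measure how much texture each small patch of an image has, and the file name suspect.jpg and the 10.0 threshold are invented placeholders. Heavily retouched but genuine photos can look just as smooth, so treat a high score as a prompt to look closer, not a verdict.

```python
# A toy texture check, not a deepfake detector: unusually low variance
# across large regions is one rough proxy for the "smoothing effect"
# described above. File name and threshold are placeholders.
import numpy as np
from PIL import Image

def local_variance_map(path: str, patch: int = 16) -> np.ndarray:
    """Return the grayscale variance of each patch x patch block."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    h, w = gray.shape
    h -= h % patch  # crop so the image divides evenly into blocks
    w -= w % patch
    blocks = gray[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.var(axis=(1, 3))  # variance of each block

variances = local_variance_map("suspect.jpg")   # hypothetical file
smooth_fraction = (variances < 10.0).mean()     # arbitrary threshold
print(f"{smooth_fraction:.0%} of patches are unusually smooth")
```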

LOOK AT THE FACES

One of the most popular deepfake methods is face swapping. Experts recommend looking closely at the edges of the face. Does the facial skin tone match the rest of the head or body? Are the edges of the face sharp or blurred?

If you suspect that a video of someone speaking has been doctored, look at their mouth. Do their lip movements match the sound perfectly?

Ajder recommends looking at the teeth. Are they clear, or are they blurry and somehow inconsistent with how they look in real life?

Cybersecurity company Norton says that the algorithms may not yet be sophisticated enough to generate individual teeth, so the lack of an outline for individual teeth could be a clue.
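
The same toy approach can illustrate Norton’s teeth tip. In this sketch the crop box marking the mouth is made up, since a real check would have to locate the mouth automatically, and a sparse set of edges is at best a weak hint rather than proof.

```python
# Toy illustration of the "individual teeth" tip: count distinct edges
# inside a mouth region. The crop box and threshold are invented.
import numpy as np
from PIL import Image, ImageFilter

mouth = (
    Image.open("suspect.jpg")            # hypothetical file
    .convert("L")
    .crop((300, 400, 460, 470))          # made-up mouth coordinates
)
edges = np.asarray(mouth.filter(ImageFilter.FIND_EDGES), dtype=np.float64)
edge_density = (edges > 40).mean()       # arbitrary threshold
print(f"Edge density in mouth region: {edge_density:.2f}")
# Few distinct outlines where teeth should be is one weak hint that
# the face was generated, not a verdict.
```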

THINK ABOUT THE BIG PICTURE

Sometimes context matters. Take a beat to consider whether what you’re seeing is plausible.

Journalism website Poynter suggests that if you see a public figure doing something that seems “exaggerated, unrealistic or not in character,” it may be a deepfake.

For example, would the Pope really be wearing a luxury puffer jacket, as shown in a famous fake photo? If he did, wouldn’t legitimate sources have published additional photos or videos?

At the Met Gala, over-the-top costumes are the whole point, which added to the confusion. But such major events are usually covered by officially accredited photographers, who produce plenty of photos that can help with verification. One clue that the Perry images were fake was the carpeting on the stairs, which some eagle-eyed social media users pointed out was from the 2018 event.

USING AI TO FIND THE FAKES

Another approach is to use AI to fight against AI.

OpenAI said Tuesday it is releasing a tool to detect content made with DALL-E 3, the latest version of its AI image generator. Microsoft has developed an authentication tool that can analyze photos or videos and give a confidence score on whether they have been manipulated. Chipmaker Intel’s FakeCatcher uses algorithms to analyze an image’s pixels to determine whether it is real or fake.
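
None of those companies publish their detectors as simple scripts, so the sketch below only illustrates the general idea of fighting AI with AI: a small neural network trained to output a confidence score that an image is machine-made. The tiny PyTorch model here is untrained and its score is meaningless; it shows the shape of the approach, not OpenAI’s, Microsoft’s or Intel’s actual methods.

```python
# Minimal sketch of the "AI to fight AI" idea: a binary classifier that
# outputs a confidence score for real vs. AI-generated images. Purely
# illustrative; production detectors are far larger and carefully trained.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),          # one logit: fake vs. real
        )

    def forward(self, x):
        return self.net(x)

model = TinyDetector()
image = torch.rand(1, 3, 224, 224)     # stand-in for a loaded photo
score = torch.sigmoid(model(image)).item()
print(f"Confidence the image is AI-generated: {score:.1%}")
# An untrained model gives a meaningless number; real tools report a
# calibrated confidence, which is why experts warn against treating
# any single score as proof.
```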

There are online tools that promise to sniff out fakes if you upload a file or paste a link to the suspicious content. But some of them, like the OpenAI tool and Microsoft’s authenticator, are available only to selected partners and not to the public. That’s partly because researchers don’t want to tip off bad actors and hand them a greater advantage in the deepfake arms race.

Open access to detection tools could also give people the impression that they are godlike technologies that can outsource our critical thinking, when instead we need to be aware of their limitations, Ajder said.

THE HURDLES TO FINDING FAKES

All that said, artificial intelligence is advancing at breakneck speed, and AI models are being trained on internet data to produce increasingly high-quality content with fewer flaws.

That means there is no guarantee that this advice will still be valid even a year from now.

Experts say that putting the burden on ordinary people to become digital Sherlocks can even be harmful, because it can give them a false sense of confidence as it becomes increasingly difficult, even for trained eyes, to spot deepfakes.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to supplement its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.
