The intention, it seems, was to end the speculation. Instead, the latest image released of the Princess of Wales only caused more of it.
As soon as the picture was released, people began to notice inconsistencies: a sleeve that seemed to have disappeared, blurring around the edges of clothes. Many suggested it had been edited – and UK and international picture agencies were concerned enough that they recalled the image, telling the world they could not be sure it was real.
The day after it was released, a new statement attributed to Kate appeared in a tweet. “Like many amateur photographers, I do occasionally experiment with editing,” it read. “I wanted to express my apologies for any confusion the family photograph we shared yesterday caused. I hope everyone celebrating had a very happy Mother’s Day. C.”
The post did not say how the image was actually altered – what changes were made, or what software was used to make them. That omission has prompted much speculation about artificial intelligence, though there is no indication of whether or not AI was used on the image.
But the suggestion that it was edited in the way of “many amateur photographers” may point to the fact that altered images are becoming more widespread – and more convincing. Misleading images have a long history, but they have never been as easy to create as they are now.
In fact, edited images are now so common that the people taking them may not even realize they are doing so. New phones and other cameras include technology that tries to improve pictures – but that can also change them in ways their owners may never notice.
Google’s new Pixel phones, for example, include a “Best Take” feature that is central to their marketing. It is an attempt to solve a problem that has plagued photography since people began using it for portraits: in any given series of photographs of a group of people, one of them is guaranteed to be blinking or looking away. Wouldn’t it be nice to stitch all the best bits together into one improved composite image?
That is what the Pixel does. People can take a burst of similar photos, and the phone will analyze them and pick out people’s faces. Those faces can then be swapped around: a blinking person’s face can be replaced with one from another picture in the burst, blended in seamlessly.
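Google has not published how Best Take works internally, but the general technique – detecting faces across aligned burst frames and compositing a preferred face into a base frame – can be sketched with open-source tools. A rough illustration using OpenCV, in which the file names are hypothetical and the frames are assumed to already be aligned:

```python
import cv2
import numpy as np

# Two frames from the same burst: one where a subject is blinking,
# one where their face looks better. (Hypothetical file names.)
base = cv2.imread("burst_frame_1.jpg")
donor = cv2.imread("burst_frame_2.jpg")

# Find faces in the donor frame using OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(donor, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Paste each detected face into the base frame. seamlessClone blends the
# edges of the patch so the swap leaves no visible seam.
for (x, y, w, h) in faces:
    patch = donor[y:y + h, x:x + w]
    mask = np.full(patch.shape, 255, dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    base = cv2.seamlessClone(patch, base, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("best_take_composite.jpg", base)
```

A production system would also score each candidate face – eyes open, subject smiling, looking at the camera – and choose the best one automatically; this sketch only shows the compositing step.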
Recently, too, users of newer Samsung phones noticed that their cameras seemed to be adding detail to pictures of the Moon. Users found that if they pointed their camera at a deliberately blurred picture of the Moon, the resulting photo contained detail that was not really there – something only uncovered after some investigation on Reddit.
Amid the ensuing controversy, Samsung admitted that its phones include a built-in “AI detail enhancement engine” that recognizes the Moon and adds detail that was not present when the image was taken. Samsung said the feature was built to “enhance image detail”, but some affected customers complained they were getting pictures of the Moon they had not actually taken.
It is also getting easier to change parts of a photo after it has been taken. Adobe has introduced a tool called “Generative Fill” in Photoshop: users can select part of a photo, tell the AI what they want changed, and it makes the change. An unflattering sweater can be swapped for a more attractive one in seconds, for example.
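Adobe has not detailed how Generative Fill works under the hood, but it belongs to a family of techniques known as generative inpainting, in which a diffusion model redraws a masked region of an image to match a text prompt. A minimal sketch using the open-source diffusers library – the model, file names and prompt here are illustrative assumptions, not Adobe’s actual system:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Generative inpainting: the model redraws only the white region of the
# mask, guided by the prompt, and leaves the rest of the photo untouched.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

photo = Image.open("portrait.jpg").convert("RGB")     # hypothetical input
mask = Image.open("sweater_mask.png").convert("RGB")  # white = area to replace

result = pipe(
    prompt="a neat navy-blue sweater",
    image=photo,
    mask_image=mask,
).images[0]
result.save("portrait_edited.jpg")
```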
Amid these controversies, there has been much conversation about what a picture really is. Photos may never have been a simple matter of light hitting a sensor, but they have become far more complicated in recent years. The era of “computational photography” means that devices process images in ways that may make them more attractive but less accurate; readily available editing tools mean precise changes to photos are no longer confined to the darkroom.
Much of the recent conversation about image manipulation has focused on generative artificial intelligence, which makes it easy to edit images or create them from scratch. But concern about fake images goes back much further: Photoshop, the software so popular its name became synonymous with misleading edits, was created in 1987, and the first faked photographs appeared almost as soon as modern photography was invented.
However, the rise of AI has led to new concerns about how fake images could undermine trust in pictures of any kind – and new work to try to prevent that. That has included a fresh focus on spotting and removing misleading images from social networks, for example.
The same tech companies building new tools to edit images are also trying to find ways to detect those edits. Adobe has a system called “Content Credentials” that lets users see whether and how an image has been edited; OpenAI, Google and others are exploring invisible watermarks for images, so that people can check where they came from.
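The watermarking schemes these companies are exploring – Google’s SynthID, for example – are designed to survive compression and editing, and their details are not public. The basic idea of hiding a machine-readable mark in pixel data, though, can be shown with a deliberately naive least-significant-bit sketch; this is a toy that any re-encode to JPEG would destroy, not any company’s actual method:

```python
from PIL import Image

def embed(path_in: str, path_out: str, message: str) -> None:
    """Hide a message in the lowest bit of each pixel's red channel.

    Flipping the least significant bit changes each pixel imperceptibly,
    but the bits can be read back out later.
    """
    img = Image.open(path_in).convert("RGB")
    bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]
    stamped = [
        ((r & 0xFE) | bits[i], g, b) if i < len(bits) else (r, g, b)
        for i, (r, g, b) in enumerate(img.getdata())
    ]
    out = Image.new("RGB", img.size)
    out.putdata(stamped)
    out.save(path_out, format="PNG")  # lossless; JPEG would destroy the bits

def extract(path: str, length: int) -> str:
    """Read `length` bytes of hidden message back from the red channel."""
    img = Image.open(path).convert("RGB")
    bits = [r & 1 for (r, g, b) in list(img.getdata())[: length * 8]]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits), 8)
    ).decode()

embed("photo.png", "stamped.png", "made-by-ai")
print(extract("stamped.png", len("made-by-ai")))  # -> "made-by-ai"
```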
Some useful information is already hidden inside picture files. Today’s cameras embed metadata in the files they create – the equipment used, the time the picture was taken, and so on – although it is easy to strip out.
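That embedded information is the file’s EXIF metadata, and both reading it and stripping it take only a few lines. A sketch using Python’s Pillow library, with a hypothetical file name:

```python
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")  # hypothetical file
exif = img.getexif()           # empty if the metadata was already stripped

# Print human-readable tag names: camera model, capture time, and so on.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# Removing the metadata is just as easy: copy the pixels into a fresh
# image and save that, leaving all of the original EXIF behind.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_no_exif.jpg")
```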
Traditional picture agencies have long had rules prohibiting misleading or edited pictures. But those rules require the agencies to exercise some discretion: adjusting the colors in an image, for example, is a central part of photographers’ work, and agencies often distribute pictures from other sources that they cannot necessarily verify – as happened with Kate’s picture.
The Associated Press, which was one of the first agencies to pull the image, says in its code of ethics for photojournalists that “AP pictures must always tell the truth. We do not alter or digitally manipulate the content of a photograph in any way.”
Those strong words are not quite as decisive as they sound. The AP allows “minor adjustments in Photoshop”, such as cropping an image or adjusting its colors. But the purpose of those adjustments, it says, is to “restore the authentic nature of the photograph”.
Likewise, the AP code allows images that are “source supplied and modified”. But it says “the caption must explain it clearly”, and it requires that the transmission of such images be approved by a “senior photo editor”.
The agency has similar rules for AI-generated images: they cannot be used to add or remove elements from a photo, and cannot be used at all if they are “suspected or proven to be false representations of reality”. There was no indication that Kate’s picture had anything to do with AI, and neither the AP nor the other picture agencies mentioned the technology in their statements – but, however it was edited, it arrived in a world more attuned than ever to the problem and risk of misleading images.
Much of the work on these standards has been done in the last year or so, since the release of ChatGPT brought new excitement about artificial intelligence. But misleading images prompted new standards, and new thinking about pictures, decades earlier – along with the same concern about how easy it is to trick people. It may be easier than ever to create false images; it may also be harder than ever to get away with using them.