Adobe’s popular photo-editing software has long been used to manipulate media, and as digital tools improve it’s becoming harder to tell what’s real from what’s Photoshopped.
The company is trying to do something to fix this problem, which its software didn’t originate but helped propagate for decades (such as with this popular faked image of a shark swimming on a flooded freeway). On Tuesday, Adobe announced the release of a “beta” version of an attribution tool for Photoshop, with the hope it will encourage people to trust that the images they see are truthful (or at least make more informed judgments).
The feature, which is optional and will be available initially only to select users, lets photo editors attach detailed information, known as metadata, to an image — information that, in essence, travels around with the image online. It will go far beyond the basic details that can currently be added to pictures and may include who created the image and where, a thumbnail of the original image, and data about how it has been altered — as well as whether AI tools were used to change the picture. This data will be secure, and it will be clear if it has been tampered with, Adobe (ADBE) said.
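Adobe hasn’t published the mechanism behind that tamper-evidence claim, but the general idea — a cryptographic signature over the metadata, so any later change is detectable — can be illustrated with a minimal sketch. This example uses an HMAC with a shared secret purely for illustration; the key, field names, and functions here are hypothetical, and a production system would use public-key signatures instead.

```python
# Sketch of tamper-evident image metadata (illustrative only, not
# Adobe's actual scheme). Uses HMAC-SHA256 with a demo shared secret.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical key for illustration

def sign_metadata(metadata: dict) -> dict:
    """Attach a signature so any later edit to the metadata is detectable."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": sig}

def verify_metadata(record: dict) -> bool:
    """Return True only if the metadata still matches its signature."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_metadata({"creator": "Jane Doe", "edits": ["crop", "AI fill"]})
print(verify_metadata(record))            # untampered record verifies
record["metadata"]["creator"] = "Eve"     # simulate tampering
print(verify_metadata(record))            # altered record fails verification
```

The point of the sketch is the failure mode: a viewer (or platform) re-running verification on an edited record gets a mismatch, which is exactly the “clear if it has been tampered with” property the article describes.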
“If you have something that you want people to believe is true, then this is a tool to help you get people to believe in it,” Dana Rao, Adobe’s general counsel, told CNN Business.
The release is part of the company’s Content Authenticity Initiative to fight against dis- and misinformation, which Adobe launched a year ago with Twitter and The New York Times. Initially, the content attribution tool will be for publishing still images to Behance, an Adobe-owned social network for sharing creative work. Over time, the company hopes this kind of authenticating information will be added to different types of content, and be shared widely on social media platforms and through media companies.
“The public is going to have to understand they should expect to see this metadata if they want to trust these things,” Rao said. “And if they don’t see metadata, they should be skeptical.”
Adobe’s effort will be limited by the fact that those editing images need to use it voluntarily. And the company won’t be the first to attempt to popularize a method for making media trustworthy. However, it may have a better chance at success than others due to its reach. Adobe claims that more than 90% of “creative professionals” use Photoshop, and 23 million people use Behance. The company also has experience in popularizing digital content standards such as the PDF.
Hany Farid, a UC Berkeley professor who specializes in digital forensics and is an unpaid adviser to Adobe for the authenticity initiative, thinks the company’s approach makes sense because it makes the content producer responsible for making the image trustworthy, rather than leaving it to each individual viewer to sort out.
Siwei Lyu, a professor at the University at Buffalo, SUNY, who also studies digital forensics, believes the tool will be more effective than the limited metadata that may be connected to images today, which he said is easy to manipulate.
“This should have happened a long time ago,” said Lyu, who was advised by Farid while completing his graduate studies at Dartmouth College.
Yet, as Farid pointed out, Adobe’s content-attribution tool can’t confirm the veracity of an image before it is edited in Photoshop. To address this problem, photo- and video-verification startup TruePic, which is part of the Content Authenticity Initiative and for which Farid is a paid adviser, recently announced a partnership with chip-maker Qualcomm to create a way to securely snap pictures via a smartphone’s built-in camera app.
“It’s taken us years to get to this problem and it will take us years to get out of it,” Farid said.