Meta Erroneously Tags Former White House Photographer’s Shot of 1984 NBA Finals as ‘Made With AI’
Frustration mounts as company’s new policy to increase AI ‘transparency’ targets images touched up with mainstream editing software.
Meta’s efforts to identify AI-generated images are backfiring as the technology platform wrongfully tagged a 40-year-old snapshot from a former White House photographer as “Made With AI.”
The picture, taken by celebrated photographer Pete Souza, shows the 1984 NBA Finals matchup between the Celtics and the Lakers.
In response to the tag, Mr. Souza added to the post’s caption: “I had this film processed a couple of days after the game. I’m not clear why Instagram is using the ‘made with AI’ on my post. There is no AI with my photos.”
He joins the hordes of photographers who have voiced similar frustrations since Meta implemented its policy to label posts generated with artificial intelligence across Facebook, Instagram, and Threads in a bid to increase “transparency.”
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Meta president of global affairs Nick Clegg wrote.
There appears to be disagreement over what constitutes AI manipulation, as photographers claim the tech conglomerate is targeting images that have merely been edited with standard digital photography tools.
Such is the complaint of Mr. Souza, who reckons that his image was flagged because he used Adobe’s new cropping tool, which requires users to “flatten” the image before saving it as a JPEG, he told TechCrunch.
Portrait photographer Sean Scheidt called out Instagram head Adam Mosseri in a series of posts on Threads, criticizing the platform’s inability “to distinguish” between using AI “to generate photo realistic images” and using generative AI “to aide in time saving or complex removals.”
He added that the “blanket” tag of “made with AI” will cause viewers to assume that the entire image was generated through AI, even though “there is a difference between” employing AI “as a tool” and using it to “generate entire images.”
Only upon clicking the seemingly definitive “made with AI” tag will the user see the more speculative disclaimer: “Generative AI may have been used to create or edit content in this post.”
Another photographer complained that Meta flagged his post after he used a tool in Adobe Photoshop, one of the most popular photo editing applications among photographers, to remove a trash bin from his photograph.
“I did not use generative AI, only Photoshop to clean up some spots. This ‘Made with AI’ was auto-labeled by Instagram when I posted it, I did not select this option,” photographer Peter Yan wrote in a post on Threads.
Photography news site PetaPixel decided to test out the extent of Meta’s detection software by using an AI tool, Generative Fill, to remove a small speck of dust from an otherwise unaltered photo. Upon uploading to Instagram, the image was immediately flagged with a “Made with AI” tag, the news site reported.
“These are early days for the spread of AI-generated content,” Mr. Clegg wrote in his post on Meta’s newspage. “What we’re setting out today are the steps we think are appropriate for content shared on our platforms right now. But we’ll continue to watch and learn, and we’ll keep our approach under review as we do.”
Meta already uses AI tools to help detect hate speech and other policy violations on its platforms. The company claims the technology has helped it reduce instances of hate speech on Facebook to a little less than 0.02 percent of posts.
Meanwhile, YouTube has taken a different approach to AI-generated content, relying on content creators themselves to disclose whether their videos are altered or synthetic.
The platform targets “realistic content” made with generative AI, which it describes as “content a viewer could easily mistake for a real person, place, scene, or event.” This includes deepfakes, videos that appear to be real but contain manipulated visuals or audio.
“We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance,” the YouTube team noted in a blog post.
Further, “inconsequential” video alterations, such as color adjustments, background blur, beauty filters, or other visual enhancements, will not need to be flagged.
While the video platform will rely on the honor system for now, YouTube plans to “work towards an updated privacy process” in which people can request “the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.”