AI is so ubiquitous ‘it’s more practical to fingerprint real media than fake media’


It’s no secret that AI-generated content is flooding our social media feeds in 2025. Today, Instagram’s top exec Adam Mosseri said he expects AI content to overtake non-AI imagery, and he addressed the significant implications of that transition for the platform’s creators and photographers.

Mosseri shared his thoughts in a long post about the broader trends he expects will shape Instagram in 2026, and he offered a candid assessment of how AI is reshaping the platform. “Everything that makes creators so important — the ability to be authentic, to connect, to have a voice that can’t be faked — is now suddenly accessible to anyone with the right tools,” he wrote. “Feeds are starting to fill up with synthetic everything.”

But Mosseri doesn’t seem too concerned about this shift. He acknowledged there is “a lot of weird AI content” on the platform, and said Instagram should rethink its approach to labeling such images by “fingerprinting real media, not just chasing fakes.”

From Mosseri (emphasis his):

Social media platforms are coming under increasing pressure to identify and label AI-generated content. No major platform does a good job of recognizing AI content today, and they will only get worse over time as AI gets better at simulating reality. Increasingly, I believe it may be more practical to fingerprint real media than fake media. Camera manufacturers can cryptographically sign images upon capture, creating a chain of custody.
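Mosseri doesn’t spell out how such a scheme would work, but the idea resembles existing provenance efforts like C2PA’s Content Credentials: the camera hashes the image at the moment of capture and signs that hash along with capture metadata, so anyone downstream can detect whether the pixels were touched afterward. Here is a minimal sketch of that flow; it uses an HMAC with a device key as a simplified stand-in for the asymmetric, hardware-backed signature a real camera would use, and every name in it is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical device secret. A real scheme would use a public/private key
# pair in the camera's secure hardware, so verifiers never hold the key.
DEVICE_KEY = b"example-device-key"

def sign_at_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Fingerprint the image and sign it together with capture metadata."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, **metadata}, sort_keys=True)
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify(image_bytes: bytes, record: dict) -> bool:
    """Re-derive the fingerprint and check the signed record."""
    expected = hmac.new(DEVICE_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # the signed record itself was tampered with
    claimed = json.loads(record["payload"])["sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()

photo = b"\xff\xd8...raw sensor bytes..."
record = sign_at_capture(photo, {"device": "ExampleCam", "ts": "2025-12-01"})
print(verify(photo, record))            # True: untouched capture
print(verify(photo + b"edit", record))  # False: pixels changed after capture
```

The verification step is what makes this “fingerprinting real media” rather than chasing fakes: anything without a valid chain of custody is simply unverified, whether it came from an AI model or an edit.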

On some level, it’s easy to understand why this seems like a more viable approach for Meta. As we previously reported, technologies intended to identify AI content, such as watermarks, have proven unreliable at best: they are easy to remove and even easier to ignore. Meta’s own labels are far from obvious, and the company, which has spent tens of billions of dollars on AI this year alone, admits it cannot reliably detect AI-generated or manipulated content on its platform.

That Mosseri readily concedes defeat on this issue, however, is telling. AI slop wins. And when it comes to helping Instagram’s 3 billion users understand what is real, he suggests that should be mostly someone else’s problem, not Meta’s: camera manufacturers (meaning both phone makers and traditional camera companies) should create some kind of secure system to “verify the authenticity of the capture.” Mosseri offered few details on how this would work, or how it could be implemented at the scale needed to make it feasible.

Mosseri also did not address the fact that this stance is likely to alienate many photographers and other Instagram creators who are already frustrated with the app. The exec often fields complaints from this group, who want to know why Instagram’s algorithm doesn’t always show their posts to their own followers.

But Mosseri suggests those complaints stem from an outdated vision of what Instagram is. The feed of “polished” square images, he said, “is dead.” Camera companies, in his estimation, “bet on the wrong aesthetic” by trying to “make everyone look like a professional photographer from the past.” Instead, he said, more “raw” and even “bad” images are how creators can prove they’re real and not AI. In a world where Instagram has more AI content than not, creators may need to prioritize images and videos that intentionally make them look worse.


