In 2023, a deepfake of the Pope in a white puffer jacket exploded online. It was surreal, funny and, for a moment, believable. But anomalies in the picture were soon uncovered: eyeglasses that faded into shadow and a strangely floating crucifix, clear signs of AI’s involvement.
A year later, a photo of the Princess of Wales triggered similar scrutiny. At first glance, it seemed ordinary. But digital detectives quickly flagged a mismatched zipper and unnatural hand placement, obvious indications of image manipulation.
The photo wasn’t AI-generated. It was just a poorly edited family snapshot. But that didn’t matter. Trust was broken. In both cases, the content didn’t need to be fake to cause damage. It only needed to raise doubt. This is the authenticity paradox.
In a world flooded with synthetic media, truth alone is no longer enough. When audiences can’t trust what they see, hear or read, the burden shifts from truth to trust.
A crisis of confidence
It’s no wonder that just 40 per cent of people globally say they trust the news, a figure that hasn’t budged in years. But the broader picture is even more disconcerting.
Nearly 60 per cent of the global population are concerned about their ability to distinguish real from fake news online. In the US, this figure climbs to 73 per cent. In the UK, 70 per cent are worried about what’s real and what’s fake on the internet overall.
This isn’t just a media problem. It’s a human one.
We rely on shared truths to make decisions, form beliefs and navigate the world. Without them, the firm ground beneath our feet shifts and everything grows unstable: our politics, public health, democracy, even social connections. Studies show that repeated exposure to misinformation leads to higher stress, decision fatigue and disengagement.
In fact, 90 per cent of people in the UK say they’re concerned about the spread of deepfakes. And for younger generations, many of whom now get their news from their social feeds, all this doubt arrives without context or guardrails.
When you don’t know what to believe anymore, where does that leave you? Adrift.
Some tune out altogether, not out of apathy but because they no longer know whom to trust. And that disconnection leads to a new kind of danger: when people stop participating, we can no longer collectively tackle shared problems.
It’s not just commentators sounding the alarm. Recently, more than 1,400 security experts told the World Economic Forum that disinformation and misinformation ranked as the most severe global risks over the next two years, above war, extreme weather and inflation.
Attribution, provenance and trust are the new battlegrounds.
From history to hearsay
The arrival of new technologies always reshapes the way we document the world. From the invention of the printing press in the 15th century to the rise of the internet in the 1990s, each shift has redefined how truth is recorded, shared and remembered.
This battle for truth isn’t new: the birth of journalism itself emerged from 18th-century struggles, when politicians tried to keep reporters away from debates. Even Robert Capa’s famous Spanish Civil War photograph, now suspected to be staged, reminds us that questions of authenticity have always shadowed journalism.
But what’s different now is scale and speed.
Herodotus, the great historian of ancient Greece, shaped collective memory, and what we now call history, simply by writing things down.
Today we see our history recorded frame by frame. And now those frames are up for debate.
Broadcasters and editorial teams are feeling the pressure. They’re urgently looking for solutions. They want tools they can use now. And the response can’t come from just one part of the industry. If we’re going to fix this, it has to start at the point of creation. That’s where trust begins.
From camera makers to broadcasters to platform providers, everyone has a role to play in restoring confidence in content. The systems we build today will determine the truths that we, as a society, believe tomorrow.
We are all responsible for the historical record.
A collaborative solution
No single brand or organisation can fix this issue alone. But the industry is moving, driven by growing executive awareness that trust is fundamental to organisational resilience. The BBC and others have been advancing this work by applying guidelines, standards and specifications from the Coalition for Content Provenance and Authenticity (C2PA), an initiative founded to establish transparency.
As Harry Keir Hughes, principal consultant at Infosys (a C2PA member), explains: “AI ethics is now a C-level concern, and trust is a key pillar of any responsible AI strategy, even more important than accuracy or security.”
What’s needed now is a shift away from retroactive correction and toward systems that embed transparency from the start.
The C2PA provides an open technical standard that works like a digital nutrition label for content, using tamper-evident records that show where media came from, when it was created and how it’s been modified. This coalition brings together founding partners like Microsoft, the New York Times and Adobe, and its membership has since expanded to include the BBC, OpenAI, Google, TikTok, Leica, Qualcomm, NHK and Publicis Groupe.
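To make the “nutrition label” idea concrete, here is a minimal sketch in Python of how hash-chaining makes a provenance record tamper-evident. Every field name below is invented for illustration; a real C2PA manifest is a standardised, cryptographically signed structure embedded in the media file itself, not plain JSON.

```python
# Illustrative sketch only, not the C2PA manifest format: each provenance
# record commits to the asset's current state and to the hash of the
# previous record, so any later rewrite of history breaks the chain.
import hashlib
import json
from datetime import datetime, timezone

def record_hash(record: dict) -> str:
    """Stable SHA-256 over a canonical JSON encoding of one record."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def append_action(chain: list, action: str, actor: str, asset_bytes: bytes) -> list:
    """Append one tamper-evident entry describing who did what, and when."""
    chain.append({
        "action": action,                    # e.g. "captured", "cropped"
        "actor": actor,                      # who or what performed it
        "when": datetime.now(timezone.utc).isoformat(),
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "prev_record_sha256": record_hash(chain[-1]) if chain else None,
    })
    return chain

def chain_is_intact(chain: list) -> bool:
    """Check that every record still points at the hash of its predecessor."""
    return all(
        chain[i]["prev_record_sha256"] == record_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

# A capture followed by an edit: both steps are recorded, in order.
history = append_action([], "captured", "camera-001", b"raw sensor frame")
history = append_action(history, "cropped", "photo-desk", b"cropped frame")
assert chain_is_intact(history)
```

Because each entry commits to its predecessor, quietly altering an earlier step invalidates everything after it, which is precisely the property an editorial chain of custody needs.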
Hughes adds: “C2PA will become more robust, reliable, universally used and accepted as businesses increase their efforts with the implementation of durable content credentials, government support, and agreements and collaborations among organisations.”
The future is already in motion.
Trust at the moment of capture
Sony is one of the world’s largest camera and sensor manufacturers, and our role is to build trust at the very moment content is captured.
Functionality based on C2PA standards is being built directly into Sony’s cameras and camcorders, enabling broadcasters and news agencies to embed their own certificates and assert the origin of content at the split second the light hits the sensor. This can be the starting point for a chain of authenticity that holds from creator to audience. Sony also sits on the C2PA steering committee, helping shape the standards that underpin this work. Additionally, Sony is a voting member of the International Press Telecommunications Council (IPTC), which develops the global metadata standards used across news and media.
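As a rough illustration of what asserting origin at capture involves, the sketch below signs a digest of the frame with a device-held private key, so anyone downstream can check the claim against the matching public key. It uses the open-source cryptography package and an Ed25519 key for brevity; this is a conceptual sketch under those assumptions, not Sony’s signing scheme or the C2PA certificate workflow.

```python
# Conceptual sketch, not Sony's implementation: the camera signs a digest of
# the frame with a device-held private key at the moment of capture; anyone
# with the matching public key can later verify the origin claim.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real device this key would live in secure hardware and be bound to a
# certificate issued to the broadcaster; here we simply generate one.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

frame = b"raw sensor frame bytes"
digest = hashlib.sha256(frame).digest()
signature = device_key.sign(digest)  # created the instant the frame exists

# Downstream verification passes only if frame and signature are untouched.
try:
    public_key.verify(signature, digest)
    print("origin verified")
except InvalidSignature:
    print("verification failed: the content or the claim was altered")
```

The chain of authenticity described above amounts to carrying this signature, and the provenance records it covers, alongside the media all the way from camera to audience.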
The technology is already being tested in practice. As Judy Parnall, head of standards and industry at BBC Research & Development, explained during early trials: “This is the first time there’s been a camcorder doing that, and it’s really exciting: the culmination of years of work with Sony to move towards content credentials.”
According to Hughes, “Provenance is superior because you have a timeline of changes made to a media file from creation to dissemination. If it’s used in disinformation campaigns, audiences can see who created it and how it changed. This increases accountability and acts as a deterrent.”
Sony has participated in the IBC 2025 Accelerator Programme as part of the Stamping Your Content (C2PA Provenance) project, alongside broadcasters like the BBC, Channel 4 and ITV. The initiative developed open-source tools to stamp C2PA metadata into media at the point of publishing.
This can help editorial teams verify the origin of a piece of content and make informed decisions on what to publish to their audience.
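Continuing the illustrative hash-chain sketch above (field names again invented, not the project’s actual tooling), a publish-time check might simply re-hash the asset the newsroom actually received and compare it against the final provenance record:

```python
# Illustrative publish-time gate: refuse to publish if the bytes in hand do
# not match the state recorded in the last provenance entry.
import hashlib

def clear_for_publication(chain: list, asset_bytes: bytes) -> bool:
    """True only when the received bytes match the final recorded state."""
    if not chain:
        return False  # no provenance at all: treat as unverified
    return hashlib.sha256(asset_bytes).hexdigest() == chain[-1]["asset_sha256"]

record = {"action": "cropped",
          "asset_sha256": hashlib.sha256(b"cropped frame").hexdigest()}
assert clear_for_publication([record], b"cropped frame")
assert not clear_for_publication([record], b"tampered frame")
```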
Image integrity
The Princess of Wales and the puffer-jacket Pope were just the beginning. They left viewers questioning what, if anything, was real. Trust is fragile. But it’s also repairable. Through transparency, technology and collaboration, the industry must restore that trust from the inside out, and at Sony we are taking a lead in doing so.
The future of media won’t rest on knowing what’s true, but on knowing what’s authentic and being able to prove it. This moment is not about solving everything at once. It’s about creating the conditions for trust to rebuild, one frame at a time. The choices we make today will shape the truths we believe tomorrow.
“Fifteen years ago, we didn’t know what the padlock symbol meant in our browser,” Parnall says. “Now it’s just part of life. Content credentials will become the same, one of those things where we just say, ‘Oh, that’s fine then. That’s verified.’”
The choice is binary: we either build systems that make truth verifiable, or we accept a world where nothing can be trusted. There is no middle ground.