
Power to the people

With synthetic content becoming increasingly indistinguishable from reality, the world is drowning in misinformation. BBC principal R&D engineer Judy Parnall tells Kevin Emmott how the broadcaster is tackling the challenge head-on by pioneering open standards for digital provenance

It’s not easy to tell what’s real and what’s not anymore. We’re bombarded with content, and we’ve all seen things that have made us wonder if it really happened or not.

But while we might accept that synthetic content is ubiquitous, the manipulation of images and videos by artificial intelligence poses huge risks to the credibility of news and information. Increasingly, it’s impossible to know what to trust, even if you’ve seen it with your own eyes.

So it’s reassuring to know that people are already addressing these risks. In this year’s IBC Accelerator projects, a collaboration between BBC R&D, Sony and 19 other partners sought to address some of the growing concerns about the impact of synthetic content on public trust. The project set out to test the output of a broader initiative, the Coalition for Content Provenance and Authenticity (C2PA), which develops open technical standards that allow publishers, creators and consumers to clearly establish the origin and edit history of digital content, and, in doing so, win back some of that trust.

Misinformation and misattribution

Judy Parnall is a principal engineer at BBC R&D and has spent over three decades at the cutting edge of broadcast technology.

Judy Parnall

“Prior to 2020, many organisations were putting out content attributed to the BBC that didn’t come from us,” she says. “There was a real problem of misinformation and misattribution of content, and in conjunction with The New York Times, Canada’s CBC/Radio-Canada, and Microsoft, we formed a research group called Project Origin to see if there was any way to positively assert digital provenance.

“We proved that there was, and Project Origin’s research led to a paper at IBC, where we discovered that Adobe was undertaking similar research with a couple of its own partners. Since then, we have joined forces to create one solution as an open project through the Linux Foundation, with an agreement that we wouldn’t charge for Intellectual Property Rights. That’s how we created the C2PA.”

Creating a standard

The goal of the C2PA is a simple one: to enable users to verify the authenticity of content so that they can make better-informed decisions about what they consume. While Parnall asserts that the BBC approached it with news authenticity as the primary driver, there are now thousands of members from a broad range of sectors, including finance, insurance and social media.

“The aim is for C2PA standards to be integrated into devices and platforms to allow users to track details of content creation and manipulation,” she says. “In every piece of content, whether it is a video, an image, or an audio clip, we include some information in the metadata. We describe it as the who and the how; it might be where or when it was made, or even how it was made and who by, but all this metadata and content comes together to create an asset with a digital signature, and it is registered in a manifest, which makes all this information available.

“By checking the content credentials, users can see whether a video was captured by a real camera, who published it, and if it has been manipulated.”
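The mechanism Parnall describes, a “who and how” claim bound to the asset by a digital signature and registered in a manifest, can be sketched in miniature. The sketch below is purely illustrative: real C2PA Content Credentials use X.509 certificate signatures and CBOR/JUMBF encoding rather than the JSON-plus-HMAC shortcut here, and all function and field names are invented.

```python
import hashlib
import hmac
import json

def create_manifest(asset_bytes, who, how, signing_key):
    # Hypothetical, simplified stand-in for a C2PA-style manifest.
    # The claim binds provenance metadata to a hash of the asset itself.
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "who": who,  # e.g. the publisher or capture device owner
        "how": how,  # e.g. "captured on camera" or "AI-edited"
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(asset_bytes, manifest, signing_key):
    # Recompute both the asset hash and the signature: any edit to the
    # asset, or any tampering with the claim, breaks verification.
    claim = manifest["claim"]
    if hashlib.sha256(asset_bytes).hexdigest() != claim["asset_sha256"]:
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

The point of the signature is exactly the property Parnall highlights: a consumer can check that the metadata genuinely belongs to this asset and this publisher, without anyone making an editorial judgment about the content itself.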

Crucially, assets can be tracked throughout the workflow: for instance, if a journalist sells their content to Getty, and it is edited and published by the BBC and then repurposed on YouTube by a third party, all that tracking information should be available. It is these breadcrumbs that enable consumers to decide how trustworthy a piece of content is, a distinction that Parnall says is an important one.
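The breadcrumb trail from journalist to Getty to the BBC to YouTube can be thought of as a hash-linked chain of edit records, where each step commits to everything before it. Again, this is a hypothetical sketch rather than the C2PA data model, which references ingredient assets through nested manifests; the names below are invented.

```python
import hashlib
import json

def append_step(chain, actor, action):
    # Each entry records a hash of the previous entry, so deleting or
    # reordering a step in the provenance trail becomes detectable.
    prev_hash = None
    if chain:
        prev_payload = json.dumps(chain[-1], sort_keys=True).encode()
        prev_hash = hashlib.sha256(prev_payload).hexdigest()
    chain.append({"actor": actor, "action": action, "prev": prev_hash})
    return chain

# The workflow from the article, as a toy provenance chain.
chain = []
append_step(chain, "journalist", "captured")
append_step(chain, "Getty", "licensed")
append_step(chain, "BBC", "edited and published")
```

Walking the chain backwards gives a consumer exactly the information Parnall argues for: not a verdict on trustworthiness, but the history needed to decide for themselves.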

“It’s fundamental that we make no editorial judgment on the content other than giving the user the information to make their own decision. Knowing how and where that content comes from should be all the information they need to ascertain if they trust it or not.”

The IBC Accelerator project

This year’s IBC Accelerator project approached the challenge from opposite ends of the production chain to demonstrate key parts of an authenticity workflow. In one of the world’s first tests of the C2PA workflow with video content, BBC R&D worked with Sony and around 20 other industry leaders to enable Sony’s PXW-Z300 camera to embed digital signatures into video files, supporting the C2PA standard for content authentication at the point of capture. Content credentials were also added at the point of publishing, preserving integrity irrespective of whatever happens in the newsroom in between. The demonstration also highlighted the potential of AI-generated content to deceive viewers: using Runway’s Aleph model, BBC R&D combined real footage with AI-generated backgrounds, showing how easily synthetic content can be created.

IBC Accelerator project team

“Sony’s development is significant because the PXW-Z300 is possibly the first news-targeted camcorder capable of embedding digital signatures into video files, addressing the growing need for verifiable content,” says Parnall. “We’re all taking small steps at this point, but people have been watching this develop for some time and there was a lot of interest at the show. Our hope is to encourage more people to trial these credentials for themselves and see where it adds value.”

A major part of the IBC Accelerator programme was to give more organisations the tools to prove that value for themselves, and that value is key to wider adoption. The timing is right: trust is on everybody’s lips. Parnall says journalists are already asking about content credentials to establish whether content they have been sent was made using generative AI, and more governments are putting protections in place to safeguard consumers. The C2PA is not the only game in town, but it could be an effective part of the toolkit, and the existential threat of generative AI is global.

While more focused on the transparency of generative AI content, the California AI Transparency Act comes into force in January next year and calls for a visible label on AI-generated or altered content. Similar to the work that Parnall is doing with the C2PA, it states that content must carry embedded metadata, including the provider’s name, the AI system name and version, and a creation timestamp.
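As a rough illustration only: the Act specifies what information must be embedded, but the field names and JSON layout below are invented for this sketch, not taken from the legislation or from C2PA.

```python
import json

# Hypothetical serialisation of the disclosure data the California AI
# Transparency Act requires to be embedded in AI-generated or altered
# content: provider name, AI system name and version, and a timestamp.
# The key names and JSON format here are illustrative inventions.
disclosure = {
    "provider_name": "ExampleAI Ltd",
    "system_name": "ExampleGen",
    "system_version": "2.1",
    "created_at": "2026-01-15T09:30:00Z",
}
serialised = json.dumps(disclosure, sort_keys=True)
```

In practice such a record would travel inside the asset’s metadata, much like the content credentials described above, rather than as a standalone file.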

A global issue

“This issue affects everybody,” says Parnall. “The IBC Accelerator project proves it, with partners from Europe, Japan, the US and India; it is absolutely a global endeavour. There is a real need for it, and having some very easy tools and easy plugins for content providers is vital. The ideal scenario is for all equipment in our production chains to incorporate these content credentials, and then the certification almost comes for free.

“My hope is that people will start expecting to see these credentials because they’ve seen them on the bigger platforms, and so will gradually expect to see them elsewhere. I hope it will become a virtuous circle.”

On 14th October, Jatin Aythora, BBC R&D’s director, collected the Television Academy’s Philo T. Farnsworth Corporate Achievement Award on behalf of the department. The Emmy honours an agency, company or institution whose contributions over time have significantly impacted television technology and engineering. The Academy’s plaudits for BBC R&D included its work on C2PA.