Back in 2000, the UK’s Press Association unveiled Ananova, a computer-generated presenter who read short news bulletins and even interviewed Kylie Minogue. [Ed note: I actually interviewed Kylie and Ananova was superimposed over me]
The discussion around the use of virtual or AI-generated newsreaders has become louder with the development of generative AI, and last year broadcasters in India, Greece, Kuwait, Taiwan and South Korea all launched AI presenters.
During the AI in the newsroom webinar, hosted by TVBEurope and Avid, the panel considered the idea of AI newsreaders and how they would impact the trust between a news broadcaster and its viewers.
“AI has emphasised the conversation about credibility, because as a news provider, credibility is something that is very, very important to us,” said Christian Rohde, head of production at Denmark’s TV 2.
“AI has somehow restarted or emphasised that conversation, so we’ve become both more protective and more careful around that word. The idea of an AI presenter is [opening] Pandora’s box, [will it impact our] credibility as a news provider?”
Rohde revealed TV 2 has tested the idea of AI presenters, and could use the technology to launch a foreign language news channel where the presenters would speak the local language fluently. “But what would it do to our credibility? So there is something about being afraid to step into that area where we would actually jeopardise what we have built over all these years.”
Jon Roberts, ITN’s director of technology, production and innovation, agreed with Rohde’s thoughts on credibility but added the discussion around trust between the news broadcaster and the audience is not new.
“The more we think about these things, we realise we’ve had the ability to Photoshop images on our news programmes for a really long time,” he said. “We don’t do it for editorial, production, brand, trustworthiness, and all the reasons why it’s a bad idea.”
While there’s much debate about whether an AI newsreader is a good idea, Rowan de Pomerai, CEO of The DPP, believes it’s the grey areas of production that need the closest attention.
“Where does it become OK to use AI, and where does it become deceptive to use AI?” he asked. “If we use some kind of AI-powered colour grading algorithm, that’s probably fine. What if I use generative fill to remove an object from the background of a shot? Probably fine in some types of content, maybe not in the newsroom.

“There are well-trodden conversations about what is on the right side of the line when it comes to audience trust. The problem, actually, is the grey areas in between where we could get tripped up.”
Asked if viewers of ITN’s News at Ten would ever see an AI Julie Etchingham, Roberts said, “There’s certainly no live project here looking at that and we’re a long way from that at the moment, but there is something to be said for [the idea that] in some areas we will do things that allow us to reach more people.
“I’m not expecting an AI Julie Etchingham, and I’m sure Julie would want to have a conversation with us well in advance of that happening. That’s one of the reasons those are less likely paths. We have brilliant news readers, who are good at creating connections with audiences [and that] is a really valuable thing, particularly in a more fragmented media ecosystem. So I think it comes down to whether or not it’s a good idea, even though the tooling might make it easier to do it.”
Craig Wilson, product evangelist at Avid, described a recent discussion with an organisation in Central Europe that is developing an AI presenter to read news bulletins overnight. The organisation was weighing whether the presenter should look like a real person or be very obviously an avatar.
“They eventually came to the conclusion that what they would produce was something that was clearly an avatar. They wanted it to be obvious to viewers that this was something that was AI generated, and I think that’s an interesting debate about the use of AI, because if it can be labelled properly, then perhaps there are use cases where AI is appropriate.”
Wilson also cited a BBC programme which used AI representations of people instead of blurring them out or putting them in silhouette. “This was a really interesting way where you’re still telling the same story, but you’re perhaps making it more personal to people because you see someone,” he said. “That programme was clearly labelled and identified [as having] AI-generated images the audience was looking at, but the words were from a real person.
“It’s a really interesting way of using AI in a creative way, and you’re still telling the same story, but you’re being open about how you’re using it, and I think transparency is a big part of this whole debate as well.”
The full webinar is available to watch on demand.