Ethical considerations of AI in newsroom workflows

Professor Markus Kaiser, Director Consulting Expert at CGI, looks at how AI is finding its way into broadcasters' newsrooms, why media organisations need to understand the challenges it presents, and how those challenges can be met with equal speed

While it is tempting to think that artificial intelligence (AI) is a relatively new phenomenon in the broadcast industry, the reality is that it has been used in different capacities for nearly a decade. We tend to think of AI now as generative AI, exemplified by the likes of Google’s Bard and ChatGPT. But AI has been used to automate workflow processes within the industry for several years, and we have more experience with it than many realise.

From research to verification of information, production, and distribution, from accounting to workflow scheduling, AI supports routine tasks along the journalistic value chain. Indeed, it is highly likely that AI will have touched this article you are reading now at several points in its journey, whether that be via a new generation of sophisticated spellcheckers and translation engines, efficient routing of internet traffic, Search Engine Optimisation, assisting with magazine design processes, or elsewhere.

Nevertheless, recent technological developments raise new concerns about its implementation. We need to be aware of the potential ethical considerations around the use of AI, particularly where it is applied within newsroom environments.

Use cases and dilemmas

There are three main ways that generative AI can be used in the modern newsroom. All current use cases concern the manipulation of text:

  • Support with text generation. AI can précis existing text for shorter broadcasts, rewrite it for different audiences (e.g., for different social media platforms and demographics), and so on. This is already a popular, everyday use case.
  • Complete text generation. AI can be used to generate complete text from natural language prompts. This content must be checked by humans afterwards, preferably using the four-eye principle of two people being required for approval; a minimal code sketch of this workflow follows the list.
  • Automatic publishing. AI is used to create and publish complete stories with no human checks and balances.
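
To make the first two use cases concrete, the sketch below shows an assisted-summarisation step followed by a publication gate that enforces the four-eye principle. It is a minimal illustration only: it assumes the OpenAI Python client (openai 1.x) with an API key in the OPENAI_API_KEY environment variable, and the model name, approval check, and "CMS call" are hypothetical placeholders rather than a description of any particular newsroom system.

```python
# Minimal sketch (not a production system): an AI-assisted summary draft that
# cannot be published without two independent human approvals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_summary(article_text: str, target: str = "social media") -> str:
    """Ask the model for a shortened rewrite of an existing, verified article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Summarise the supplied article faithfully. "
                           "Do not add facts that are not in the source.",
            },
            {
                "role": "user",
                "content": f"Rewrite for {target}, in no more than 80 words:\n\n{article_text}",
            },
        ],
    )
    return response.choices[0].message.content


def publish(draft: str, approvals: list[str]) -> None:
    """Four-eye principle: refuse to publish without two distinct human approvers."""
    if len(set(approvals)) < 2:
        raise PermissionError("Two independent editorial approvals are required.")
    print("Published:", draft)  # stand-in for the real CMS call
```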

The third use case does not technically qualify as journalism, and the organisations that have ventured down this path have yet to do so successfully. But even the first two use cases throw up several ethical dilemmas.

For example, say an error makes its way into a news report regarding the age or nationality of a major figure. Even though it may appear inconsequential, it can cause reputational damage to any news organisation that prides itself on accurate reporting. And we have to ask ourselves: who is responsible for the error? Is it the programmer who wrote the original code? Is it the trainer who trained the model on the dataset? Is it the journalist who wrote the prompt that produced the false information? Is it the AI itself? AI models have been prone to what are termed ‘hallucinations’, where they effectively make up facts to fulfil the brief contained in the prompt.

The answer is all of the above. Within a media organisation, there needs to be a collective responsibility that acknowledges the complexity of AI implementations: many individuals and departments must work together to ensure that the productivity gains AI brings are not undermined by an erosion of the newsroom's primary truth-telling function.

Keeping up with progress

Media organisations also urgently need to understand that this is a fast-evolving field with many open questions. For example, where does current information come from? Generative AI is trained on datasets that have a cut-off point, even if they are collated from the open internet. So how is new information to be accessed?

With global events contributing to increasing amounts of misinformation across more and more social channels, how do we detect fake news? And how do we prevent it from being recycled, first to the public that trusts us for veracity and second to the next generation of AI models that are likely to treat it as absolute fact? The old adage of ‘garbage in, garbage out’ has never been more apt; we can even see misinformation as a virus within an AI system, propagating through it with unknowable consequences.

How do we train AIs to avoid bias, and how do we avoid bias when writing natural language prompts?

One of the central answers to that latter question is practical training. Notably, the media organisations that have had the most successful implementations of AI so far have robust codes of practice in place detailing its usage and limitations. Whether it is by writing more effective prompts that shorten the iteration cycle or understanding the technology’s limitations and how and where it can most appropriately assist in the newsroom, it is essential to have a detailed AI strategy rather than a succession of ad hoc responses.
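
As an illustration of how a code of practice can feed into more effective prompting, a newsroom might maintain a shared prompt template that encodes its editorial rules once, rather than leaving each journalist to restate (or forget) them in ad hoc prompts. The template below is a hypothetical sketch of that idea, not an excerpt from any organisation's actual guidelines.

```python
# Hypothetical example: house editorial rules encoded once in a shared prompt
# template so they are applied consistently across the newsroom.
NEWSROOM_PROMPT_TEMPLATE = """\
You are assisting a journalist. Follow these rules strictly:
1. Use only facts contained in the source material below.
2. If the source material does not answer the request, say so; do not guess.
3. Preserve all names, dates, figures and quotations exactly.
4. Mark any passage you are unsure about with [CHECK].

Task: {task}

Source material:
{source_text}
"""


def build_prompt(task: str, source_text: str) -> str:
    """Combine a journalist's request with the house rules and the verified source."""
    return NEWSROOM_PROMPT_TEMPLATE.format(task=task, source_text=source_text)
```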

It is also important to lean into expertise in the area. Newsroom vendors such as CGI have made significant investments not only in adding AI tools to the workflow but also in incorporating guardrails into their systems to ensure they can be used responsibly. These tools are designed to assist journalists, not replace them, and have been built to help them create better content in their day-to-day work.
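
What such a guardrail looks like varies from system to system, but the general idea can be shown with a simple, vendor-neutral check: any figure that appears in an AI draft but not in the verified source material is flagged and routed back to an editor before anything is published. The snippet below is an illustrative sketch of that idea only; it does not describe CGI's or any other vendor's actual implementation.

```python
# Vendor-neutral sketch of a simple output guardrail: numbers present in the
# AI-generated draft but absent from the verified source are flagged so a
# human editor reviews the draft before it goes any further.
import re


def unverified_numbers(source_text: str, draft_text: str) -> set[str]:
    """Return numeric tokens that appear in the draft but not in the source."""
    extract = lambda text: set(re.findall(r"\d[\d.,]*\d|\d", text))
    return extract(draft_text) - extract(source_text)


if __name__ == "__main__":
    source = "The station reached 1.2 million viewers in March."
    draft = "The station reached 2.1 million viewers in March."
    print(unverified_numbers(source, draft))  # {'2.1'} -> send back to an editor
```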

These tools are likely to evolve with dizzying speed over the coming months as organisations worldwide look to leverage the next generation of generative AI models. These will in turn throw up more ethical questions as the capabilities of the models and the variety of their use cases increase. Both AI-generated presenters and an increase in fake videos as part of the ongoing US election cycle are on the roadmap for 2024, and companies will need to understand the challenges they represent, and how they can be met, with equal speed.