Why AI can be too much of a good thing for media organisations

Stuart Almond, head of Intelligent Media Services at Sony Professional Solutions Europe, considers the responsible deployment of AI in the M&E sector

In today’s digital age, new technologies are making what was previously considered impossible a reality. From virtual assistants on our smartphones and automated transactions in financial services, to applications in the defence space, artificial intelligence (AI) is playing a key part in creating more efficient, effective and creative workflows across all sectors.

According to the World Economic Forum, the AI industry itself is expected to grow by 50 per cent each year until 2025, by which point it is set to be worth a staggering $127 billion. The effect will be felt heavily in the media and entertainment sector too, with PwC reporting that AI is set to contribute around $150 billion a year to the industry.

The potential of AI to transform the way media organisations source, edit and distribute content is huge. AI can create new efficiencies, reduce inconsistencies and add value in places it doesn’t today. Essentially, it combines great content with personalisation to build a more seamless and meaningful news experience for audiences worldwide.

As a future without AI seems impossible to imagine, it’s essential that we get AI right – starting with the ethics. The rush to develop or adopt the latest AI technologies is rapidly accelerating across every touchpoint – from content creation to content consumption – bringing a growing responsibility for both users and vendors of such technology to consider its ethical impact.

How do we ensure that the use of AI will benefit all, from content creators to content consumers, while staying true to today’s journalistic principles and values?

The fight against fake news

According to recent research by Reuters, almost three quarters of media organisations are currently looking into how AI can help them create and distribute content more efficiently, through applications like speech-to-text, facial recognition and edit corrections. News organisations such as Bloomberg already depend on automation to cover “commodity news” (i.e. reports on financial markets) to free up journalists’ time. There is an expectation that, by 2027, 90 per cent of all articles in the world will be written by AI.

The potential for AI to positively transform news and content production is clear and, over the next few years, it will take on a prominent role in deciding what content we see and read in our daily lives. But what level of power and control should be given to AI? Whilst technology which ‘thinks’ is rapidly becoming more useful, it cannot be given complete free rein and needs to adhere to some form of ethical principles. This is particularly important in the fight against fake news.

Machine learning (ML), defined as the science of computers learning from data, identifying patterns and making decisions with minimal human intervention, is central to AI’s ability to combat fake news. The idea is for machines to improve their performance over time and become progressively more autonomous. It is therefore no surprise that AI is being relied on to automatically generate and curate stories.

However, before we get to that point, ML algorithms must be trained and programmed by humans to improve the AI’s accuracy. This is key because machines without human input lack basic human skills such as common sense, understanding and the ability to put things into context – which makes it very difficult to correctly determine whether a piece of content is true or false. If news organisations leave AI to run its own course without any human input (i.e. context), they risk blurring the lines between what’s news and what’s opinion – potentially exacerbating rather than fighting fake news.
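
To make that idea more concrete, here is a deliberately simplified sketch of the kind of human-supervised learning described above, in which editors label example headlines and a simple classifier learns patterns from them. The headlines, labels and model choice are hypothetical, far simpler than any production newsroom system, and assume the widely used scikit-learn library.

```python
# Illustrative sketch only: human editors supply the labels (the "context"
# a machine cannot infer on its own), and a simple classifier learns from them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labelled training examples.
headlines = [
    "Central bank raises interest rates by 0.25 percentage points",
    "Quarterly results show revenue growth of 4 per cent",
    "Miracle cure discovered that doctors don't want you to know about",
    "Celebrity secretly replaced by body double, insiders claim",
]
labels = ["reliable", "reliable", "suspect", "suspect"]

# Train a basic text classifier on the editor-labelled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Score a new, unseen headline; a real workflow would route "suspect" or
# low-confidence items to a human journalist rather than acting automatically.
print(model.predict(["Shock claim: moon landing filmed in a studio"]))
```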

Personalisation without filter bubbles

The personalisation of content can create higher-quality experiences for consumers, as we’ve seen from streaming services such as Netflix, which build show recommendations from personal viewing history and behaviour. Media organisations are no exception, and they are already using AI to meet the demand for personalisation.

For example, a relatively new recommendation service called James, developed by News UK for The Times and The Sunday Times, will learn about individual preferences and automatically personalise each edition (by format, time and frequency). Essentially, its algorithms are programmed by humans but improve over time as the system works towards a set of agreed outcomes.

While algorithmic curation – the automated selection of what content should or shouldn’t be displayed to users and how it is presented – meets consumer demand for personalisation, it can go too far. What if consumers are only hearing and reading the news they want to hear rather than what is actually taking place around them and across the world?

This is most commonly known as the “filter bubble” concern: algorithms designed by platforms to keep users engaged can lead them to see only content that reinforces their existing beliefs and opinions. The onus is on media organisations to get the balance right between providing consumers with content tailored to their specific interests and needs, and ensuring they continue to hear both sides of a story.
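
As a rough illustration of what getting that balance right could mean in practice, the sketch below blends a personal-relevance score with a small bonus for under-represented viewpoints when ranking articles. The article data, scores and weighting are hypothetical and not based on any publisher’s actual system.

```python
# Illustrative sketch only: trade pure personalisation off against exposure
# to other viewpoints when ordering a reader's feed.
from typing import Dict, List


def rank_articles(articles: List[Dict], diversity_weight: float = 0.3) -> List[Dict]:
    """Blend personal relevance with a bonus for under-represented perspectives,
    so the feed is not filled only with reinforcing content."""
    def blended(a: Dict) -> float:
        return (1 - diversity_weight) * a["relevance"] + diversity_weight * a["perspective_novelty"]
    return sorted(articles, key=blended, reverse=True)


# Hypothetical candidates: 'relevance' reflects the reader's past behaviour,
# 'perspective_novelty' rewards viewpoints the reader rarely sees.
candidates = [
    {"title": "Analysis the reader usually agrees with", "relevance": 0.9, "perspective_novelty": 0.1},
    {"title": "Counter-argument from the other side", "relevance": 0.5, "perspective_novelty": 0.9},
    {"title": "Straight news report", "relevance": 0.7, "perspective_novelty": 0.5},
]

for article in rank_articles(candidates):
    print(article["title"])
```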

AI clearly holds great promise, but there are two sides to every coin. We must ensure the use of AI benefits all, from content creators to content consumers, in a way that is ethical and in line with journalistic principles and values. To do that, we need to hold media organisations to account and put ethics front and centre of AI’s deployment. This means humans must take every step needed to make sure AI is used for the right reasons, with the right ethics and controls in place – from training it correctly to collecting data transparently. Otherwise, in the long term, the use of AI might create more problems than benefits for media organisations.

This article was written by Stuart Almond, not a robot.