Artificial intelligence and machine learning technologies are gradually being deployed by media companies mainly to drive process efficiency through automation. As part of this year’s MediaTech Outlook report, TVBEurope invited Steve Sharman, director at Hackthorn Innovation, to guide us through the latest thinking.
Prior to the pandemic, what sorts of AI/machine learning deployments were we seeing by media companies?
We were seeing gradual adoption of AI and machine learning in the media industry, particularly around automated content analysis. We'd had a couple of years of high-profile public demonstrations: Sky's live tagging for the Royal Wedding, for example. Real deployments were also quietly starting to happen, typically to support the cataloguing and consolidation of archives so they could be exploited on streaming services, or to reduce the time spent on compliance. At high volumes, saving operators a few minutes per piece of content adds up to significant overall reductions.
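To make the "few minutes per piece of content" point concrete, here is a rough back-of-the-envelope sketch; the archive size and per-asset saving below are illustrative assumptions, not figures from the interview:

```python
# Illustrative estimate of operator time saved when automated tagging
# trims a few minutes per asset. All figures are assumptions.

def hours_saved(assets: int, minutes_saved_per_asset: float) -> float:
    """Total operator hours saved across an archive."""
    return assets * minutes_saved_per_asset / 60

# e.g. a 50,000-item archive with 4 minutes saved per item
total = hours_saved(50_000, 4)
print(f"{total:.0f} hours saved")  # 3333 hours, i.e. well over a person-year
```

Even modest per-asset savings translate into thousands of operator hours at archive scale, which is where the business case usually lies.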
Again pre-pandemic, what were the biggest areas of challenge in terms of technology development or deployment?
Boring but important stuff: legacy infrastructure. Using AI to process content at production scale typically requires hefty resources; it takes a lot of content and processing power to train the kind of deep neural networks typically used for video or audio processing, and then more compute, plus easy access to the content, to carry out the analysis. Typically you want your content on the public cloud, and that's been a big step for media companies. Legacy workflows assume that content is readily to hand, so companies are not just moving content to the cloud, but some pretty significant workflows as well. Transformative projects can be hard work in difficult commercial environments.
What effect has the pandemic period had in terms of accelerated technology adoption/deployment, or any other significant areas?
On the one hand, we've seen mass adoption of public cloud-based and remote workflows almost overnight. It became almost a matter of survival to move production and distribution workflows into the cloud, and once there, using AI to support archive, compliance and other workflows becomes an incremental transformation. We've also seen some AI-powered production solutions adopted far more quickly than the norm: for example, the automated sound mixing and virtual crowd solutions from Salsa Sound have seen rapid take-up, keeping a high-quality premium sports experience going in a world of empty stadia.
What would you say are the areas of AI that are going to have the most profound impact in the coming years, and why?
The non-flashy applications of AI will continue to have a significant impact; Google's use of AI to significantly reduce data-centre cooling costs is a great example. We're already seeing the impact of AI on compliance, archive, QC and some VFX workflows (e.g. rotoscoping), and these capabilities will become increasingly packaged and widely available. Alongside this, AI will move up the value chain to help optimise and automate parts of production workflows, making it possible for smaller and less experienced storytellers to compete on production values. The wide availability of AI technologies in the public cloud also means that our industry can take advantage of new techniques almost as soon as they are released, which is a refreshingly new pace!