
Why now is the time for story-centric media production to fulfil its promise

By Paul Shen, CEO, TVU Networks

Consumers are watching more video content than ever while moving from passive to choice-based consumption models. They have become the curators of their own content, building their own viewing experiences from a far wider range of sources and platforms than ever before. To attract and retain viewers, content providers have moved to deliver content across multiple platforms, including linear broadcast, online and mobile. However, the device or platform is only a means to an end; what consumers care about is accessing the story they want to engage with on their preferred screen.

While audiences have moved away from a linear approach to engaging directly with content, the majority of broadcasters and media organisations still think of and produce content in a linear, programme-centric fashion that requires a high degree of human intervention. To meet the demands of content-hungry viewers, broadcasters and media organisations increasingly have to deliver device- and platform-specific versions of content, making it common for larger broadcasters to produce over 50,000 video files an hour. That is a major challenge at a time when broadcasters are facing downward pressure on production budgets.

Story-centric media production aims to solve these issues by enabling production teams to spend more of their time on the creative decisions that make great TV and video, and by simplifying production for multiple devices and formats.

However, story-centric techniques have so far failed to deliver the needed step-change in efficiency. The missing link that has hampered progress is poor metadata. Without rich metadata, the assets stored in centralised MAM systems are difficult to access and utilise. In the current production workflow model, when files are ingested, the only metadata usually attached to a file is the title. This makes searching for existing assets a time-consuming and often fruitless task.
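To make the gap concrete, the following minimal sketch (in Python, with entirely hypothetical field names) contrasts a title-only ingest record with the kind of enriched record that makes an asset findable:

```python
from dataclasses import dataclass, field

@dataclass
class IngestRecord:
    """What many MAMs hold today: little more than a title."""
    title: str

@dataclass
class EnrichedRecord:
    """Hypothetical record an AI indexing pass could attach instead."""
    title: str
    transcript: str                                               # speech-to-text of the audio track
    objects: dict[int, list[str]] = field(default_factory=dict)  # frame -> detected objects
    speakers: dict[int, str] = field(default_factory=dict)       # frame -> identified speaker
    tags: list[str] = field(default_factory=list)                # scene / topic labels

# With only a title, "find the clip where the minister mentions the budget"
# means scrubbing through footage; with a transcript and per-frame labels,
# it becomes a simple query against the MAM.
```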

The metadata issue is now being solved with AI. AI-based tools can automatically generate enriched metadata, and new workflows enable MAM solutions to handle this metadata correctly. Highly granular metadata combined with highly accurate voice and object recognition enables video clips to be indexed, located and then shared with single-frame precision. This automation makes it possible to repurpose assets for different audiences, as well as to sell them or share them with business partners and customers.
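As a rough illustration of what single-frame precision implies (a sketch under assumed inputs, not a description of any vendor's implementation), suppose a recognition pass emits (frame, label) pairs; locating a clip then reduces to an inverted-index lookup:

```python
from collections import defaultdict

# Hypothetical output of a voice/object recognition pass: (frame, label) pairs.
detections = [
    (1200, "goal celebration"),
    (1201, "goal celebration"),
    (4815, "press conference"),
]

# Build an inverted index from each label to the exact frames where it occurs.
index = defaultdict(list)
for frame, label in detections:
    index[label].append(frame)

def locate(label: str) -> list[int]:
    """Return every frame matching a label, with single-frame precision."""
    return index.get(label, [])

print(locate("goal celebration"))  # [1200, 1201]
```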

The combination of AI, real-time search engine and scriptable production engine technology makes it possible to create a true story-centric workflow that allows production groups to focus on producing the right content for all audiences. Story-centric workflows will allow thumbnails of assets to be accessed as the script is being written, vastly speeding up the creative process, with re-editing handled in the scripting application. This approach is a key enabler of the ‘smart studio’, which will enable media organisations to maximise the value of each individual piece of content.
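One way to picture script-driven asset lookup is the sketch below, where a hypothetical search_assets() helper stands in for the real search engine and surfaces candidate clips as each script line is written:

```python
# Sketch of script-driven lookup: search_assets() is a hypothetical stand-in
# for a real centralised search engine over enriched metadata.

CATALOGUE = {
    "budget": ["clip_0042.mxf", "clip_0117.mxf"],
    "stadium": ["clip_0201.mxf"],
}

def search_assets(script_line: str) -> list[str]:
    """Return candidate clips whose indexed labels appear in a script line."""
    line = script_line.lower()
    return [clip for label, clips in CATALOGUE.items() if label in line for clip in clips]

script = [
    "The minister defends the new budget.",
    "Fans gather outside the stadium.",
]

# As each line is written, candidate assets (thumbnails, in a real UI)
# can be offered to the writer immediately.
for line in script:
    print(line, "->", search_assets(line))
```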

The integration of TVU MediaMind with MAM systems puts metadata at the heart of video workflows. By creating one centralised search engine for all raw materials, live or recorded, that feeds all distribution channels, the barriers to efficiency erected by multiple content production workflows are removed.

In today’s approach, media production is passive rather than active. Processes are driven by human interaction, with producers hand-crafting the multiple versions required for a TV show. Today, around 95 per cent of video captured in live productions is never used at all, and because it has historically been easier to re-shoot than to find and re-use existing assets, very little of the 5 per cent that is used is ever re-used. In the story-centric ‘smart studio’, AI engines will deliver a step-change in efficiency by making existing videos instantly searchable down to the exact frame, and even available during the scripting process. AI tools will also automate the production of multiple versions and optimise content delivery for each target audience segment.
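Automated versioning of this kind can be pictured as one master feeding a table of rendition profiles; the sketch below uses illustrative profile values, not any broadcaster's real delivery spec:

```python
# Sketch of automated versioning: one master, many platform renditions.
# Profile values are illustrative only.

PROFILES = {
    "linear_broadcast": {"resolution": "1920x1080", "aspect": "16:9"},
    "web":              {"resolution": "1280x720",  "aspect": "16:9"},
    "mobile_vertical":  {"resolution": "1080x1920", "aspect": "9:16"},
}

def render_jobs(master: str) -> list[str]:
    """Emit one render job per target platform instead of hand-crafting each cut."""
    return [
        f"{master} -> {platform} @ {spec['resolution']} ({spec['aspect']})"
        for platform, spec in PROFILES.items()
    ]

for job in render_jobs("show_master.mxf"):
    print(job)
```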

The cloud-based video and AI-powered voice and object recognition technologies needed to make this transition are available now, enabling the ‘smart studio’ to begin revolutionising the way video is produced, distributed and consumed.