
The future of video quality has AI written all over it

Yoann Hinard, COO at Witbe, looks at how AI has the potential to transform the video industry in various ways, including real-time video quality evaluation and third-party app performance testing, ensuring that users enjoy uninterrupted and flawless streaming experiences

In today’s digital media landscape, audiences have come to expect nothing less than flawless streaming experiences. As a result, the highest priority for content and service providers is the delivery of pristine video quality. To meet their customers’ discerning expectations, it’s imperative to conduct regular testing to assess and enhance the quality of video streams. Artificial intelligence (AI) has emerged as a powerful tool in delivering this capability. 

However, while AI has already made significant strides in ensuring exceptional video quality for content providers like Netflix and Hulu, its full potential has yet to be fully harnessed for service providers that deliver linear TV and offer over-the-top apps on their set-top boxes (STBs).

The role of machine learning neural networks

Currently, the predominant AI technology being employed for assessing video quality is machine learning (ML) neural networks. These neural networks play a pivotal role in identifying defects that emerge during the content compression process by measuring the degradation in quality between the source content and its compressed counterpart. This method, in its various iterations, is rapidly gaining traction among content providers. They have been harnessing AI tools to enhance their encoding processes, particularly for premium, high-demand content. 

Neural networks allow for the fine-tuning of encoding parameters and enable per-title encoding. They find extensive use in video-on-demand content, where various encoding parameters can be tested and iteratively adjusted to obtain the highest video quality at the lowest bit rate, ultimately lowering distribution costs. This methodology has evolved into an industry standard, exemplified by VMAF, an algorithm pioneered by Netflix and now widely adopted across the industry at the encoding stage.

Taking a device-based approach

While the use of neural networks in assessing video quality is helpful for content providers, it comes with a notable limitation for service providers — it’s unsuitable for any content that lacks a reference stream to compare against. Consequently, it cannot be employed for testing live content, since live streams by their nature cannot be preprocessed. So, for service providers delivering live linear content like sporting events, the primary hurdle in harnessing AI lies in the ability to evaluate content quality in real time, including ad breaks.

A promising solution currently under development involves evaluating video quality from the perspective of an end-user after the stream has been decoded by a viewing device. This approach requires a higher level of AI sophistication, since it entails evaluating content without prior knowledge and assessing it in a manner akin to how a human would. It also mandates the utilisation of real physical devices with distinct operating systems, which introduces its own set of complexities. 
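A device-based, no-reference check of the kind described above starts from statistics computed on the decoded frame itself, with no source to compare against. The sketch below, under the simplifying assumption that a frame is a flat list of 8-bit luma values, flags a suspiciously dark frame as a candidate defect; a real probe would analyse video captured from the device's output, and the threshold is illustrative.

```python
# Minimal no-reference frame check, as a device-based probe might run it.
# A frame is modelled as a flat list of 8-bit luma values (0-255); the
# 16.0 threshold is an illustrative assumption, not a production value.

def frame_stats(luma: list[int]) -> tuple[float, float]:
    """Mean and variance of the frame's luma values."""
    n = len(luma)
    mean = sum(luma) / n
    var = sum((v - mean) ** 2 for v in luma) / n
    return mean, var

def looks_black(luma: list[int], threshold: float = 16.0) -> bool:
    """Flag a frame whose average brightness is near zero — a candidate defect."""
    mean, _ = frame_stats(luma)
    return mean < threshold
```

A single flagged frame proves nothing on its own — which is exactly why the harder problem, discussed below, is deciding whether a candidate defect is genuine.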

Navigating the interface of physical devices necessitates an approach that mirrors human interaction. This is a capability best achieved through AI, as scripting every conceivable action to access every piece of content is unfeasible. Therefore, the process of content retrieval becomes dynamic, calling for adaptable AI solutions capable of handling diverse scenarios. 

Once the content is reached, a further challenge arises in distinguishing between genuine defects and non-defective scenarios. AI must differentiate between blurriness resulting from compression and deliberate artistic choices, or between a black screen indicative of a defect and one that occurs during a transition. The ability to accurately identify true defects is paramount to improve the provider’s service and enhance the user experience.
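The black-screen case above has a natural temporal formulation: a dark frame during a transition is normal, but a run of dark frames that outlasts any plausible transition is a defect. The sketch below encodes that rule; the frame rate and the one-second transition allowance are illustrative assumptions, not production thresholds.

```python
# Hedged sketch: a black screen counts as a defect only if it persists
# longer than a typical transition. fps and max_transition_s are
# illustrative assumptions.

def confirm_black_defect(black_flags: list[bool], fps: float = 25.0,
                         max_transition_s: float = 1.0) -> bool:
    """True if any run of consecutive black frames outlasts a normal transition."""
    limit = int(fps * max_transition_s)       # longest acceptable black run, in frames
    run = 0
    for is_black in black_flags:
        run = run + 1 if is_black else 0
        if run > limit:
            return True
    return False
```

Distinguishing intentional blur from compression artefacts calls for the same pattern — combining a per-frame signal with context — but typically requires a trained model rather than a fixed rule.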

Ensuring the functionality of third-party apps

For service providers offering third-party apps like Disney+ or Amazon Prime Video on their STBs, ensuring their proper functionality presents a unique challenge. The responsibility for these apps and their configuration doesn’t fall squarely on service providers’ shoulders, and they may be reluctant to take it on since it falls outside their core services of linear TV, DVR functionality, and on-demand portals. However, neglecting to ensure the proper performance of third-party apps, which account for a substantial portion of customer viewing time on their platform, is not in their best interests.

If an app fails to perform satisfactorily on a service provider’s STB, there’s a good chance viewers will opt for an alternative device. At this point, there’s no shortage of options: from smart TVs to streaming boxes, gaming consoles, mobile devices, and more. At best, this results in a reduction in viewership on the service provider’s platform, with viewers eventually returning to their STBs to consume other content. At worst, viewers don’t return at all and cancel their subscription with the service provider. This issue is of particular concern in the United States, where the high cost of cable subscriptions frequently drives “cord-cutting”: viewers abandon traditional service providers and instead rely on streaming devices and services like YouTube TV for their live television needs.

Service providers must embrace the responsibility of delivering superior performance for third-party apps on their STBs, as it directly affects viewing time, customer retention, and their bottom line. Employing AI with a device-based approach is essential for achieving this goal. To replicate end users’ behaviours and preferences, AI-powered navigation, content access, and video quality evaluation are imperative. It is only through the detailed evaluation offered by AI that service providers can sustain user engagement and protect their audience from shifting to alternative platforms.

Conclusion

While ML neural networks have played a pivotal role in enhancing video quality, the scope of AI’s capabilities extends much further. AI has the potential to transform the industry in various ways, including real-time video quality evaluation and third-party app performance testing, ensuring that users enjoy uninterrupted and flawless streaming experiences. As technology continues to evolve, we can look forward to even more significant advancements in AI-powered video quality assurance, enhancing the viewer’s journey in the digital realm.