2021 will be a pivotal year for video streaming services. For the first time in history, more people will pay for online streaming services than for traditional pay-TV. However, the reality is that the streaming market is reaching the point of saturation as more and more services launch. In recent years, high-quality original programming has been the primary way streaming providers enhance and differentiate their services from competitors. Netflix is estimated to house around 1,500 TV series and 4,000 films, Amazon Prime Video is home to almost 20,000 titles, and a subscription to Disney Plus adds around 7,000 more TV episodes and 500 films for viewers to choose from.
High-quality programming alone is no longer enough to keep consumers subscribed to a service. In fact, one of the most common problems today’s audiences face is simply finding something they actually want to watch.
As recently as 2017, viewers were spending almost an hour a day searching for content. This daily dilemma often results in endless scrolling before the consumer settles on something that they’ve already watched before or something that only vaguely interests them, because they do not want to waste more time searching for something that truly captures their interest. The challenge video streaming providers must now overcome is offering a superior user experience that can serve viewers the content that not only interests them, but also reflects the mood they are in at that specific moment. The only way a streaming service can achieve this is by using AI and machine learning to gain a deep understanding of its content’s emotional data and serve recommendations that match the viewer’s mood.
Creating recommendations that are based on users’ emotions is something that’s already being used by music streaming services. Spotify recently revealed that it has patented technology that will allow it to analyse the user’s voice and suggest songs based on “emotional state, gender, age, or accent”. With this technology, Spotify will be able to play and recommend music that reflects the user’s mood or social setting. For example, a user could say “I’m feeling happy and I’m hosting a big party” and Spotify can then play or recommend classic, upbeat party anthems. Video streaming services need to take note of Spotify’s approach of using mood to provide recommendations, because it provides users with greatly enhanced personalisation that will help boost brand loyalty and reduce churn.
What lies beneath the metadata?
However, before streaming services can start categorising content by mood, they need to better understand their content. Currently, many content discovery systems rely on basic metadata, which broadly labels content based on data points such as genre, the actors starring in it, or even just keywords in content titles. Metadata doesn’t give streaming platforms enough knowledge about their content, so it’s no surprise that the resulting recommendations are often poor. To take recommendation systems to the next level, streaming providers need to harness AI and machine learning technologies to gain a deep understanding of the content in a scalable way by analysing the audio and video file itself.
Content analysis based on AI and machine learning can employ multiple neural networks to identify patterns in colour, audio, pace, stress levels, positive and negative emotion, camera movement and many other characteristics. It can then evaluate how similar each asset is to every other asset and combine this information with an AI engine that analyses a household’s watchlist, drawing together a more advanced and nuanced understanding of the content asset and its relevance at any particular time.
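As a minimal sketch of the similarity step: once each asset has an emotional profile (a vector of scores the neural networks might produce), assets can be compared with cosine similarity. The titles, feature names and numbers below are purely illustrative assumptions, not real platform data.

```python
import math

# Hypothetical emotional profiles per asset, e.g. scores a neural network
# might assign for [pace, tension, positivity], each on a 0-1 scale.
# Titles and values are illustrative only.
ASSETS = {
    "fast_thriller": [0.9, 0.8, 0.2],
    "light_comedy":  [0.5, 0.1, 0.9],
    "slow_drama":    [0.2, 0.6, 0.3],
    "family_comedy": [0.6, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Compare two emotional profiles; 1.0 means an identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(title):
    """Rank every other asset by emotional similarity to the given one."""
    ref = ASSETS[title]
    scored = [(t, cosine_similarity(ref, v)) for t, v in ASSETS.items() if t != title]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# The upbeat comedy ranks closest to the other upbeat comedy.
print(most_similar("light_comedy")[0][0])  # family_comedy
```

In practice the profiles would have far more dimensions and the comparison would feed into a larger engine, but the principle is the same: similarity is computed from the content’s own emotional data, not from genre labels.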
A user who watches a fast-paced thriller on a Friday night may well want something more light-hearted immediately after, and a recommendation system that is being fed this type of detailed content data can offer this level of intuition. Over time, it can analyse each viewer’s consumption patterns and data points – not just each device, but each individual user profile – and perfectly tailor recommendations for their watch preferences, suggesting the right content, at the right time.
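One way to capture that “something more light-hearted immediately after” intuition is a scoring rule that rewards contrast with the last title watched. The rule and data below are an illustrative assumption, not a description of any real provider’s system.

```python
# Hypothetical emotional profiles: [pace, tension, positivity], 0-1 scale.
CATALOGUE = {
    "fast_thriller": [0.9, 0.8, 0.2],
    "light_comedy":  [0.5, 0.1, 0.9],
    "slow_drama":    [0.2, 0.6, 0.3],
    "family_comedy": [0.6, 0.2, 0.9],
}

def recommend_after(last_watched, catalogue):
    """Rank candidates by how well they offer a change of pace.

    'Intensity' here is the average of pace and tension; the score rewards
    contrast with the last title plus the candidate's intrinsic positivity.
    """
    pace, tension, _ = catalogue[last_watched]
    last_intensity = (pace + tension) / 2

    def score(title):
        p, t, positivity = catalogue[title]
        intensity = (p + t) / 2
        return abs(intensity - last_intensity) + positivity

    candidates = [t for t in catalogue if t != last_watched]
    return sorted(candidates, key=score, reverse=True)

# After a tense Friday-night thriller, the light comedy comes out on top.
print(recommend_after("fast_thriller", CATALOGUE)[0])  # light_comedy
```

A production system would weigh many more signals per user profile, but the sketch shows how detailed emotional data makes this kind of contrast-aware suggestion possible at all.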
Last night, a mood category saved my life.
Understanding the content itself goes way beyond just measuring similarity; it opens the door to a whole range of new use cases that traditional metadata won’t allow you to tap into. With the emotional data of the content coming from the audio/video file itself, streaming providers can automatically curate entire mood categories and channels for viewers. The type of content we want to watch is often strongly related to how we feel at that particular moment, so grouping content by mood makes the user experience more intuitive. With an advanced AI engine, we can analyse the intrinsic emotional profile of each content asset to create nuanced categories. For example, parents with young children who want to watch a film on a Friday night will likely click on a mood category or channel that says, “family friendly comedies”. Additionally, in group settings when there’s a lot of debate over what to watch, it’s much easier to find something that interests everyone by asking, “what is everyone in the mood for?” and then using that to find the appropriate category and narrow down the options.
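The curation step can be sketched as a simple mapping from each asset’s emotional profile to a mood bucket. The category names, thresholds and titles below are illustrative assumptions; a real engine would learn far more nuanced groupings.

```python
from collections import defaultdict

# Hypothetical emotional profiles: [pace, tension, positivity], 0-1 scale.
CATALOGUE = {
    "fast_thriller": [0.9, 0.8, 0.2],
    "light_comedy":  [0.5, 0.1, 0.9],
    "slow_drama":    [0.2, 0.6, 0.3],
    "family_comedy": [0.6, 0.2, 0.9],
}

def mood_category(profile):
    """Assign a mood bucket from an emotional profile (illustrative rules)."""
    pace, tension, positivity = profile
    if positivity >= 0.7 and tension <= 0.3:
        return "feel-good"
    if tension >= 0.7:
        return "edge-of-your-seat"
    return "slow-burn"

# Curate channels automatically from the content's own emotional data.
channels = defaultdict(list)
for title, profile in CATALOGUE.items():
    channels[mood_category(profile)].append(title)

print(dict(channels))
```

Because the buckets come from the audio/video analysis rather than hand-applied tags, every new asset lands in the right mood channel the moment it is analysed.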
Right now, there are a huge number of streaming platforms vying for consumers’ attention. Forward-thinking players that want to differentiate themselves from competitors and boost brand loyalty among consumers need to place a greater focus on providing a unique and personalised user experience. AI and machine learning are the best tools to help streaming providers gain a better understanding of their content so they can not only deliver personalised recommendations, but recommendations that perfectly reflect the user’s mood and keep them watching rather than searching.