With artificial intelligence (AI) experimentation and implementations in the video industry now commonplace, we’re all pretty familiar with AI use cases aimed at everything from enhancing efficiency to improving user experiences. The conversation, however, has moved on again, from AI and Gen AI to Agentic AI, which introduces autonomous AI agents that decide how to act to accomplish a specific goal. Agentic AI is undoubtedly the AI du jour; the question is, how will this new breed of AI bring real value to the video industry?
From traditional software to Agentic AI
To understand what Agentic AI is and why it matters, it is useful to look at both how traditional software is built and how AI has evolved. By its very nature, software is rigid and prescriptive; it follows explicit, rule-based instructions laid out in its code about how it should behave in predetermined scenarios as well as how it can communicate with other software components via APIs. This structure forces developers to anticipate every possible scenario in advance and hardwire responses into the system.

AI works differently to traditional software because it learns patterns from data, can adapt its behaviours, and can make decisions without predefined, explicit instructions. It can quickly and easily analyse huge volumes of complex data to provide useful insights, which can, for example, be used to power recommendation engines. Gen AI, which adds yet another layer, provides the ability to create new content based on a user request. Crucially, it also allows users to interact with software using natural language and is context-aware.
Agentic AI moves beyond all of this. Instead of prescribing every step and specifying an action for each eventuality, as traditional software does, or generating a response to a request, as Gen AI does, Agentic AI gives individual AI agents a mission to detect, analyse and act, leaving humans free to design, govern and create. Agents decide how best to achieve their mission, can adapt based on feedback, and can improve over time. Crucially, they’re not just reactive; they are proactive collaborators within a system and can communicate with one another directly. This creates a flexible ecosystem in which multiple agents pursue interconnected goals, each optimising towards outcomes rather than simply following instructions.
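The detect–analyse–act loop described above can be sketched in a few lines of code. This is a minimal, illustrative sketch only; all class names, method names, and thresholds here are hypothetical, not a real agent framework.

```python
# Minimal sketch of a mission-driven agent: detect -> analyse -> act,
# then learn from feedback. All names and thresholds are illustrative.

from dataclasses import dataclass, field


@dataclass
class Agent:
    """An agent given a mission rather than step-by-step instructions."""
    mission: str
    feedback_log: list = field(default_factory=list)

    def detect(self, signals: dict) -> bool:
        # Decide whether the observed signals are relevant to the mission.
        return signals.get("anomaly_score", 0.0) > 0.5

    def analyse(self, signals: dict) -> str:
        # Choose an action; a real agent would reason over far richer context.
        return "escalate" if signals["anomaly_score"] > 0.8 else "mitigate"

    def act(self, action: str) -> None:
        print(f"[{self.mission}] taking action: {action}")

    def learn(self, outcome: float) -> None:
        # Adapt from feedback so behaviour can improve over time.
        self.feedback_log.append(outcome)

    def run_once(self, signals: dict) -> None:
        if self.detect(signals):
            self.act(self.analyse(signals))


agent = Agent(mission="stabilise playback quality")
agent.run_once({"anomaly_score": 0.9})
agent.learn(outcome=1.0)
```

The point of the sketch is the inversion of control: the developer specifies the mission and the feedback channel, while the agent chooses the action.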
Agents at work
This new approach could see video services deploy whole armies of agents working to achieve specific business outcomes, such as combating churn, stabilising quality, or increasing customer lifetime value. For example, one agent might be tasked with monitoring a customer’s journey, identifying issues with onboarding or playback and acting to resolve them, while another might be given the mission of detecting declining engagement at an individual level and acting to prevent that customer from churning. Agents could also monitor and optimise UX by adjusting layout and messaging dynamically based on context, such as time of day or frustration signals. On the monetisation side, agents could surface upsell offers precisely when users are most likely to convert.

The key thing is that each of these agents works not only independently but also in collaboration with other agents, bridging gaps between different areas of expertise and departments while keeping the customer experience at its heart. This unlocks a compound impact that traditional development cycles struggle to match. In this model, success is measured not by whether the software followed the right steps, but by whether agents have achieved their missions, which are closely aligned with key business objectives. For the video industry, as for others, this model has the potential to be genuinely transformational.
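To make the collaboration concrete, here is a hypothetical sketch of two agents passing signals directly: an engagement agent spots a viewer whose watch time has collapsed and messages a retention agent, which decides its own intervention. The viewer ID, threshold, and intervention are all illustrative assumptions, not a real product’s logic.

```python
# Two agents collaborating via a shared message channel. The engagement
# agent detects declining viewing; the retention agent decides how to act.
# All identifiers and thresholds below are illustrative.

from queue import Queue

messages = Queue()  # simple in-process channel between the two agents


def engagement_agent(weekly_watch_hours):
    # Mission: spot declining engagement for an individual viewer.
    # Illustrative rule: flag if watch time has at least halved.
    if len(weekly_watch_hours) >= 2 and weekly_watch_hours[-1] < 0.5 * weekly_watch_hours[0]:
        messages.put({"viewer": "user-123", "signal": "engagement_drop"})


def retention_agent():
    # Mission: prevent churn; chooses its own intervention.
    if messages.empty():
        return None
    event = messages.get()
    if event["signal"] == "engagement_drop":
        return f"offer personalised content row to {event['viewer']}"
    return None


engagement_agent([10.0, 7.0, 4.0])  # viewing has more than halved
action = retention_agent()
print(action)
```

Neither agent calls the other’s internals; each only reads and writes signals, which is what lets agents from different teams, or even different vendors, cooperate.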
An Agentic future
Another big change ahead will come when agentic AI communication protocols, such as the newly developed MCP (Model Context Protocol), are widely adopted, enabling agents to access external data sources, tools and services. This is set to have a huge impact because currently, while video service providers can control what happens inside their own service, many remain dependent on external services for aspects such as advertising, acquisition, or personalisation, with little visibility into or control over how those systems behave.
Agentic AI has the potential to shift this balance. Theoretically, MCP will allow agents from different vendor solutions or services to collaborate directly, bypassing the need for hard-coded APIs. In practice, this could mean that advertising campaigns can be matched with the right audiences in real time, without exposing sensitive datasets or building complex integrations. It also makes hyper-personalisation, which has, to a degree, been more of an aspiration than a reality, far more achievable.
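As a simplified illustration of the style of exchange MCP standardises, an agent can ask a server which tools it exposes and then invoke one with structured arguments over JSON-RPC, rather than through a hard-coded, vendor-specific integration. The tool name and argument fields below are hypothetical, not any real server’s API.

```python
# Simplified sketch of MCP-style JSON-RPC messages. The "tools/list" and
# "tools/call" methods follow the protocol's general shape; the tool name
# and arguments are purely illustrative.

import json

# An agent could first discover which tools a server exposes...
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...then invoke one, passing structured arguments instead of relying on
# a bespoke, hard-coded API integration.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "match_audience_segment",  # hypothetical advertising tool
        "arguments": {"campaign_id": "spring-launch", "max_results": 5},
    },
}

print(json.dumps(call_request, indent=2))
```

Because the agent only ever sees the tool’s declared interface, sensitive datasets can stay on the provider’s side of the boundary, which is what makes the advertising example above plausible without complex integrations.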
Crucially, Agentic AI will give video providers the ability to take on challenges that were previously beyond reach. Many frustrations in streaming have less to do with lack of ambition and more to do with human bandwidth. Personalisation, optimisation, and audience engagement all involve too many moving parts for individuals or teams to manage in real time. Agentic systems offer a way to extend that capacity, opening the door to adaptive, collaborative systems capable of reshaping how video services operate and innovate.