Michael Lantz, CEO, Accedo
Macroeconomic recovery has unfortunately been slower than expected, and consumer demand did not see the significant pickup that had been hoped for. Although the industry as a whole has experienced growth, this has come largely from major investment in data centres and other infrastructure as part of the AI gold rush. Streaming ad revenue has grown over the course of the year, but not at a fast enough rate to plug the gap left by declining advertising revenues from traditional TV.
There’s been a lot of industry focus and innovation around agentic AI, the specific flavour of AI in which AI agents work autonomously to complete specific missions. These agents operate across multiple layers, such as UX, onboarding and monetisation flows, to complete their assigned tasks. AI agents will enable providers to deliver an improved video experience while operating more efficiently, reducing the resources needed to run a video service.

While there has been much innovation around agentic AI, the industry is really only at the very earliest stages of implementing it and seeing an impact. In my humble opinion, agentic AI is going to be truly transformational for the video industry, enabling a quality of customer experience and levels of efficiency previously unimaginable.
If the last few years have been the era of generative AI, 2026 will be very much the era of agentic AI. We’ll see the rapid expansion of agentic AI across the video service, with AI agents first handling basic tasks and then moving on to more complex ones. 2026 will also see the launch of Accedo’s agentic AI solution, Accedo Compose, which has been designed to help streaming providers operate more efficiently and transform their customer journeys from static paths to continuously adaptive experiences. I personally am very excited to see where agentic AI takes the industry over the next 12 months.
André Rosado, head of product for AgileTV
In 2025, the trends that stood out most were the acceleration of TVaaS adoption, the surge of short-form content, and the increasing importance of AI and secure delivery. Operators leaned heavily on TVaaS platforms like AgileTV to stay competitive, favouring modular, scalable solutions that handle everything from linear to VoD, while the booming wave of short, bite-sized video experiences reshaped audience behaviour. At the same time, AI moved from experimentation to core infrastructure, powering automated playout, subtitling, metadata enrichment and smarter operations that reduce complexity and improve personalisation.

In 2026, these trends will accelerate rather than stabilise. TVaaS will evolve into fully modular, multi-tenant ecosystems, allowing operators to mix premium, FAST, short-form and hyper-local content with far more flexibility. Short-form will move beyond companion content into true premium storytelling, requiring platforms to support vertical video, adaptive UI modes and personalised short-form rails across devices. AI will shift from assisting operations to orchestrating them—powering predictive monitoring, automated page layouts, dynamic pricing and even AI-generated content variations tailored to user context. On the delivery side, security and multi-CDN strategies will become standard as operators face more piracy, higher traffic peaks and greater regulatory pressure. CDN orchestration will grow more policy-aware, factoring in rights, location, QoE and cost. Altogether, 2026 will push the industry toward smarter, automated, highly personalised TV experiences built for scale.
We expect several new trends to take hold in 2026. AI-native content packaging will expand rapidly, with platforms automatically generating multiple versions of promos, previews and layouts based on user context. The convergence of long-form, live and creator-driven content will accelerate too, pushing operators to adopt more flexible ingestion and rights-management models that can handle everything from premium sports to hyper-local events. We’ll also see a rise in context-aware UIs that adapt dynamically to device, time of day or viewing habits, an evolution many TVaaS platforms, including AgileTV, are already preparing for. And as streaming matures, QoE-based delivery guarantees will become a defining standard, increasing the importance of multi-CDN orchestration, anti-piracy protection and real-user monitoring. Overall, these trends point toward a more adaptive, AI-driven and experience-optimised TV ecosystem in 2026.
Croi McNamara, global lead, studios & creators, media & entertainment, games & sports, AWS
At a high level, AI innovation transformed how films, series, and events were produced, edited, delivered, and monetised this past year, with the cloud instrumental to scaling efforts. Content providers began leveraging AI more to enhance the consumer viewing experience. Sony Group Corporation recently announced that it would be using AWS AI services to build an engagement platform and create deeper connections between fans and creators. Many other companies, such as the National Football League and Formula One, also used AI powered by AWS to deliver unique, standout viewing experiences.
One immediate benefit we’ve seen with generative AI is that it is helping content owners revive their libraries and seize new monetisation opportunities. Once assets are more visible and organised, it becomes easier for them to map their resources to viewer demand. Metadata tagging and searchability are crucial to achieving this, and generative AI makes the process simpler and more affordable.

AI-powered tools and features are now table stakes, at least for operational efficiency, and are increasingly important across all aspects of production. As an example, Weta FX recently revealed a plan to develop AI tools for visual effects in collaboration with AWS, with an eye on improving efficiency and creative iteration. In a similar vein, Adobe is leveraging AWS infrastructure to help their teams innovate faster, from generative AI model training to AI agent deployment. Agentic AI adoption and experimentation are also quickly on the rise. M&E companies are exploring how they can tap AI agents to manage petabytes of content and prepare those assets for distribution in a faster, more streamlined way. They’re also leveraging the technology to help automate previously complex features, such as detecting ad breaks and semantic search, enabling companies to refocus resources on other areas.
Signs point to 2026 being a big year for agentic AI. I anticipate it will work its way into more studio pipelines, and it’s not too far-fetched to envision billions of AI agents working alongside humans in the near future. As part of a cloud-based ecosystem, they’re poised to accelerate innovation and help professionals work more efficiently and effectively. Imagine creatives empowered to bring their boldest visions to life in days or even minutes, with unprecedented freedom to iterate, experiment, and explore their craft. This acceleration amplifies human creativity, giving artists and storytellers more time to focus on what they do best—crafting compelling narratives and pushing creative boundaries further than ever before. The possibilities are enormous.
Michael Thielen, VP consulting services – media solutions (Radio), and Alan Dickie, VP consulting services at CGI, contributing jointly
In 2025, three major industry trends have stood out. First, AI has moved firmly from experimentation into production, yet many organisations are struggling to realise value at scale. At the DPP Leaders’ Briefing it was noted that up to 70 per cent of AI initiatives still fall short of expectations, highlighting the need for realistic ROI models and a focus on where AI truly adds value.
Second, the biggest obstacles are increasingly people-centred. Cultural alignment and skills development now outweigh technical challenges, with change management consistently cited by CEOs and CIOs as a top concern. At CGI, we see daily that without structured, well-supported change programmes, even the strongest technologies underdeliver.

Finally, the decline of traditional broadcasting has accelerated as audiences shift decisively toward digital, on-demand and mobile platforms.
Together, these trends show an industry undergoing rapid transformation, where success relies on aligning technology, people and evolving audience behaviours.
AI continues to influence newsrooms, shaped by trust, transparency and the need for oversight, but what strikes us most is the balance of curiosity and caution. Editorial teams’ use of AI-enabled journalism is growing, yet audience experience and editorial integrity still dominate decision-making. As operations become more content-centric, it’s increasingly evident that strong storytelling remains the one constant, regardless of platform.
In 2026, AI adoption will become far more disciplined as organisations apply early lessons and focus on realistic ROI. Our hope is that we’ll finally see fewer “AI for AI’s sake” projects and more targeted deployments that prove their worth.
Wider use of agentic AI is also expected. We foresee more autonomous and semi-autonomous agents supporting tasks such as localisation, metadata generation and workflow optimisation. Our company expects these tools to become trusted co-workers rather than experimental add-ons, enabling more efficient production and new creative possibilities.

AI agents will also begin shaping interactions in new ways. Beyond simplifying tasks, they will multiply content output, helping teams produce more variations, more formats and richer, context-aware storytelling. For us, this is likely to be where some of the most exciting creative experimentation happens.
Finally, sustainability is likely to return to the forefront. Rising energy costs, cloud expansion and tighter regulation mean broadcasters will scrutinise the environmental impact of their technology choices. Our expectation within the group is that sustainability will shift from obligation to innovation, pushing the industry toward smarter automation, secure content chains and more responsible production.
Fraser Jardine, director, global business development for Dot Group
2025 saw AI move into real workflows, a shift towards data-driven operations, pragmatic hybrid-cloud adoption, and intensified pressure to monetise content more efficiently. These trends stood out because they directly impact how broadcasters manage, move, and exploit their content.
What’s been particularly revealing is watching organisations realise that visibility drives everything else–you can’t optimise workflows, accelerate monetisation, or reduce operational footprint without first understanding what’s happening within your infrastructure.
There seems to be an acceleration in workflow optimisation, which exposes legacy bottlenecks and pushes organisations to treat metadata and rights as commercial assets. We’re also seeing tighter alignment between tech, operations and revenue teams as automation becomes revenue-critical.
The interesting shift is that operational data is becoming strategic currency–it informs purchasing decisions, infrastructure planning, and increasingly, how organisations demonstrate efficiency to stakeholders. Cross-pollination has become key and teams that rarely spoke to each other are now sharing dashboards.

AI will become embedded end-to-end in 2026, data unification will become a board priority, and hybrid infrastructure the norm. Technology investments in 2026 will be judged on their ability to reduce waste and speed up content monetisation. But “waste” now means more than delayed revenue – it’s redundant storage, inefficient rendering, over-provisioned infrastructure. The organisations instrumenting their operations are discovering margin trapped in places they’d never looked before. Operational intelligence reveals where money and resources are leaking simultaneously.
But here’s the new trend that will surprise people: operational sustainability becoming a competitive advantage rather than a compliance requirement. Organisations with instrumented infrastructure can prove to partners, advertisers, and stakeholders that they run lean operations – which means both cost-efficient and carbon-efficient. The broadcast operations that can demonstrate this dual efficiency will have a commercial edge. It’s not about reporting obligations; it’s about operational excellence that happens to be measurable in both financial and environmental terms.
Mathieu Mazerolle, director of product – new technology at Foundry
AI continues to be a dominating trend, with most companies experimenting with generative AI and many even using it in production. We’ve quickly moved from it being a novelty to a widely accepted practice. The technology itself and its applications have evolved rapidly this past year.
In VFX specifically, AI advancements have already led to more tools supporting innovative features like Gaussian splats, GenAI nodes, and smart roto. Interestingly, we’re seeing compositing take on a defining role in AI-powered workflows. While generative AI can be immensely beneficial for initial concepts or big-picture ideas, it’s not as useful for final pixel outputs. Human artistry remains key to the creative process, and compositing is where productions are leveraging the best of both worlds, with artists taking AI-enhanced assets and polishing them into production-ready deliverables.

I think companies will become more intentional about their development and adoption of AI resources. Implemented appropriately to solve specific challenges, AI can be a powerful workflow accelerator. Having the right foundational technology in place to support sustainable AI tools is also important.
I anticipate AI will continue reshaping traditional production processes and workflows. We may also see a resurgence in virtual production as the release of more purpose-built technologies makes the filmmaking approach accessible to productions on a range of budgets, instead of just the high end.
Jean Macher, senior director, Global SaaS Solutions at Harmonic
In 2025, three trends stood out as especially transformative for the media and entertainment industry: the shift from 24/7 channels to event-based streaming for live sports, the evolution toward impression-based monetisation and the growing adoption of AI in video workflows.
After years of experimentation, AI is becoming operational and a natural part of the workflow. A few of the use cases we are seeing include automated sports highlights, real-time translation, automated metadata and video compression efficiency. These are no longer prototypes — they’ve become a central part of video streaming and broadcast workflows.
Furthermore, AI is having a measurable impact on video streaming and broadcast delivery. Service providers are leveraging AI for everyday tasks that historically required significant manual effort, such as real-time highlights, speech-to-text, dynamic brand insertion, multilingual audio and advanced compression. These capabilities are improving efficiency, lowering distribution costs and enabling stronger viewer engagement.
An emerging trend is the accelerating rise of AI for live event streaming, where AI tools help operators automatically assemble pop-up channels, provide multilingual versions of a live event based on audience interest and more. As AI frameworks mature, this kind of dynamic event packaging will become far more practical.
Anupama Anantharaman, VP of product management, Interra Systems
In 2025, AI became deeply integrated into video workflows—from encoding and upscaling to content analysis—both to improve perceptual quality and to automate operational decisions. Vendors are increasingly using machine learning for denoising, adaptive bitrate ladder optimisation, automated subtitling and captioning, logo and content classification, and deep codec analysis across HEVC, AV1 and legacy formats. This acceleration is driven by the rise of higher-resolution formats (4K/8K, HDR), the push to reduce bitrates without sacrificing quality, and the need for leaner operations teams to manage growing content volumes.
Another major trend is the transition toward hybrid processing models, which blend cloud, on-prem, and edge compute for cost, latency, and regulatory reasons. While cloud remains essential for elasticity and global reach, particularly for AI/ML workloads, certain functions are staying on-prem or at the edge. These include dense transcoding, contribution handling, and portions of playout and monitoring, which help control OPEX and ensure deterministic performance requirements are met. The expansion of 5G and edge CDNs is also pushing more low-latency and interactive processing closer to viewers, redistributing workloads from centralised cloud to distributed edge nodes.

These trends are pushing media companies toward a data-driven, “smart-software” approach that reshapes cost structures and organisational design and redefines where value is ultimately created. Meanwhile, hybrid infrastructure is becoming foundational. This shift is prompting studios, streamers, and broadcasters to rethink not only how they balance technology and content investments but also how they collaborate across increasingly global teams and workflows.
In 2026, AI-driven video processing will advance from enhancing workflows to orchestrating them. Not only will AI become the intelligence layer guiding how content is encoded, optimised and validated, but it will also make autonomous decisions on quality-versus-bitrate tradeoffs, tuning encoding ladders, fixing quality issues, generating metadata, and improving caption accuracy in real time. Perceptual and context-aware quality models will allow systems to evaluate video the way humans do, detecting subtle visual defects, ad-marker inconsistencies, and scene-level issues that traditional QC often misses. Multi-modal AI combining video, audio and text will further strengthen compliance checks, highlight creation, and content classification, enabling operations teams to process more content with less manual intervention.
Hybrid architectures will also evolve from being a transitional model to the default for both live and linear workflows. Cloud will remain essential for elasticity, packaging, multi-region delivery, and large-scale analytics, while dense transcoding, contribution workflows, deterministic latency tasks, and rights-sensitive operations will stay on-prem or at edge locations to control cost and risk. The continued expansion of 5G and edge CDNs will push low-latency and interactive processing closer to viewers, requiring monitoring systems to extend beyond traditional data centres and cloud regions into distributed edge nodes.
Together, AI and hybrid models will transform video monitoring into a predictive, distributed, and data-driven discipline. Monitoring will evolve from reactive alarms to anticipatory insights, correlating issues across ingest, cloud, CDN, and device in real time. Operators gain unified QoS/QoE visibility across all environments, enabling automated workload placement, faster root-cause analysis, and more resilient service delivery. In short, the combination of AI and hybrid infrastructure will become foundational to achieving high-quality, cost-efficient streaming at global scale.
Vinayak Shrivastav, CEO, Magnifi
In 2025, the most defining shift was the maturity of Generative AI and the immediate operational gains it delivered. Once GenAI became embedded in production workflows, content moved faster and costs dropped sharply. This created a clear imbalance. Organisations enjoyed stronger returns on the same or smaller budgets, yet the volume of content in the market rose so quickly that competition for engagement intensified.
To stand out, companies had to lean heavily into personalisation and localisation, using more precise algorithms to deliver content that felt relevant at the moment of consumption. The year was ultimately shaped by this combination of dramatic efficiency gains and the strategic need to tailor every output with far greater intent.

The impact of these trends on the media and entertainment industry has been both clear and significant. On the business side, companies can now scale creation and distribution at a pace that was not possible before. The cost savings have allowed many to redirect budgets into premium assets like top talent and live sports rights, which in turn expands their competitive advantage. This has created a widening gap between companies that successfully implement AI-driven efficiencies and those that don’t.
For consumers, the shift is all about experience. With more advanced personalisation in play, audiences now expect content that feels precise and timely, whether it is a streaming recommendation or a perfectly formatted highlight delivered seconds after a big moment. This expectation forces every content provider to operate with greater speed, accuracy, and relevance. The bar has been raised for the entire industry.
In 2026, basic automation will evolve into intelligent agents capable of making complex decisions across the entire chain. Instead of completing one task at a time, these agents will manage the full lifecycle of a video. They will create, package, distribute and optimise it for monetisation, all within a single streamlined process. Within Magnifi’s ecosystem, the focus is on building this unified workflow so stakeholders can operate on one intelligent layer rather than coordinating multiple vendors.
Another development that will accelerate in 2026 is the need for fast and accurate metadata. High-quality viewer experiences rely on it. Whether it is search, recommendations, highlights or monetisation, nothing works well without the right tags at the right moment. To meet this demand, we will see broader adoption of automated metatagging powered by micro LLMs that run directly on production consoles. Instead of waiting for cloud processing, these models can tag players, actions, and key moments instantly as the content is being created. That same metadata can then power features like richer descriptions or shoppable cues wherever the viewer is watching.
We will also see the rise of editorial AI maturity, where models move past generic templates and are rigorously trained on a publisher’s specific tone, legal standards, and style guide. This allows for trusted automated content at scale.
Ivan Verbesselt, chief strategy and marketing officer, Mediagenix
AI is cautiously making its way from lab experimentation to business operations, but still slower than is often touted, and automation is the real low-hanging fruit at this point.
While GenAI is for sure the most disruptive change agent (and awareness vehicle) in AI land, it is not the panacea for every use case, and great attention will need to be given to selecting the right (AI) tool for the job at hand—horses for courses.
Smart automation is the real low-hanging fruit and may have GenAI for lunch. As opposed to automating the incumbent processes (and just doing the same thing faster), there is a real opportunity here to improve both audience engagement and catalogue monetisation by effectively closing the loop and informing content strategy, content bundling, and scheduling in a self-optimising way.
This goes hand-in-hand with cases where GenAI will be the undisputed efficiency engine, like the generation, enrichment, and quality assurance of metadata, where it will become the real game changer (for non-scripted and scripted alike).
While there is a huge potential in agentic orchestration of collaborative AI tools, both the architecture and deployment models need further work to achieve predictable reliability at scale. Human-in-the-loop will be an essential guardrail for quite a while still.
The audience-centric supply chain will become the real growth engine – it calls for smart content curation all along the content lifecycle which crucially depends on smart content discovery, not just for consumers but even more so for internal teams (and the agents supporting them as a digital workforce) in their endeavour to match editorial intent with demographics and available rights to optimise the monetisation of bundles.
Heidi Shakespeare, CEO, Memnon
In 2025, AI breathed completely new life into the content industry. It not only expanded what content can do, it also created an entirely new demand for it. On one side, AI-driven discovery has made it easier than ever for people to find, search and enjoy content. In turn, this is raising audience expectations for deeper context and personalised viewing experiences. On the other side, content owners are beginning to unlock new revenue streams by licensing their archives for AI training. This shift is providing a powerful commercial incentive to digitise, preserve and modernise materials that previously had no clear ROI.
AI has reshaped–and continues to transform–how content is valued, discovered and monetised. The industry is only just beginning to understand the scale of this opportunity.

Archives are no longer being viewed as cost centres; they’re now being recognised as revenue-generating assets with significant strategic importance. Content libraries that once sat quietly below the profit and loss line are now considered valuable investments with clear opportunities for ROI. This shift marks a promising development for media and entertainment organisations. It’s also driving a new urgency to digitise and preserve legacy materials before they are lost forever.
As more and more companies recognise the revenue potential of their content, we are expecting to see a significant increase in investment in content preservation, with leaders placing strategic importance on digitisation. However, with finite content-processing capacity and specialist expertise, we’ll also begin seeing bottlenecks become a challenge. Organisations that delay may find themselves at the end of a very long queue, making early planning essential.
As the value of content continues to grow, we can expect to see a resurgence of roles and new opportunities across the industry. As organisations adapt to evolving technologies, entirely new job categories and areas of expansion are expected to surface. One of the most notable shifts will be the rise of hybrid creative/technical roles. With AI becoming embedded into everyday workflows, the industry will increasingly need professionals who can simultaneously blend deep content expertise with data literacy and technological fluency. These capabilities will be critical to unlocking the expanding value of archives and content.
Jonas Michaelis, CEO at qibb
In 2025, the most striking trend was the shift from experimentation to practical execution. Media and broadcast companies are no longer asking whether technologies like AI, automation or open systems are viable; they’re learning how to deploy them meaningfully at scale.

These trends are fundamentally reshaping how value is created. Automation is beginning to unlock growth rather than just cost savings, enabling teams to expand capabilities, reach brand new audiences and tailor content more precisely without additional output needed. Additionally, AI-driven tools are improving reliability and speed across workflows, from content handling to distribution decisions. We’re seeing a more flexible, responsive industry that can move faster and operate at greater scale with existing resources.
In 2026, these capabilities will become more deeply embedded and less visible as additional ‘features’. Agentic AI and automation will increasingly operate in the background to improve accuracy, efficiency and responsiveness in real time. We’ll also see stronger pressure toward modular, interoperable systems, as companies realise that adaptability is impossible with rigid, closed architectures.
Rob Chandler, Starting Pixel
AI has been a major trend throughout 2025. It was a year of innovation and movement for sure. But the real innovation sits behind the scenes, focusing on productivity and speed instead of the headline-grabbing footage that can be produced. Areas such as location capture show how AI supports virtual production lighting and setups, allowing for rapid capture-to-shoot of existing locations and sets. The use of AI prompts to construct 3D environments is starting to emerge, along with a bunch of cool applications that will increase what’s possible, whilst also streamlining production pipelines.
The traditional M&E industry is under pressure from a vast number of factors, let alone the AI revolution. AI allows the industry to pivot in new ways, helping it compete with new platforms and formats. Vertical short dramas, video podcasting and AI slop will only grow in their competition for our attention, so it’s key that the art of film and HETV becomes far more efficient without compromising exceptional creativity.
2026 will see a major lift in proven AI tools as they come to market and deliver faster and smarter ways to work. We’ll also see the true financials behind these tools, rather than the artificially low prices that have so far hidden the real cost of AI. Expect to see far more integration with existing aspects of virtual production, location scanning, digital humans and gaming development platforms.
Once creatives can fully understand the art of the possible with the new tech convergence – and it really needs them to understand the entire stack, not just what one AI tool can do – then we will start to see very exciting new formats, and ways to bring creative visions to life, at speed and quality the audience expects. Very interesting times ahead!
Gwendal Simon, senior director of technology at Synamedia
2025 was a consolidation year: with no major sports events, platforms were able to focus on preparing for 2026. Sports continued to drive innovation, and resilience became a priority after recent outages.
Cloud-native architectures made automated multi-CDN switching and elastic scaling essential. Content Steering and CMCD/CMSD standards gained wide adoption. MOQ entered early production as the new foundation for low-latency, scalable distribution.

AI hype was everywhere, but real breakthroughs were seen in operations, where MCP-enabled, LLM-driven automation has improved reliability and reduced manual intervention. Social media continued to reshape sports engagement with the growth of AI-generated short-form clips, which increase reach but often reduce long-form viewing.
Operational AI will become standard, handling more workflow tasks and improving reliability. Multi-CDN and hybrid IP-first models will become more embedded in operational strategy. Short-form clips and social feeds will continue to shape sports engagement, maintaining high reach but with the risk of cannibalising long-form viewing. During the FIFA World Cup in 2026, we’ll see how an abundance of personalised data drives tailored fan engagement.
We’ll see AI innovation in production in 2026: perceptual compression is emerging as region-of-interest (ROI) based encoding allows providers to render different parts of the video stream according to perceived visual value – for example, prioritising a player with the ball while rendering spectators at lower quality.
Backstage, machine-to-machine (M2M) systems will become more prevalent: for example, compression will transform multimedia streams into formats AI can process quickly, and we’ll see the growth of LLM agents handling large parts of operations, even between vendors, with humans as collaborators.
In the US, broadcasters will continue migrating to IP contribution using MOQ to support efficient, scalable, automated workflows.
Michael Demb, VP product strategy, TAG Video Systems
Three major trends defined 2025: sports rights consolidation, AI moving from experiments to real use, and automated live operations. We’re all being asked to scale without adding headcount, often while cutting it, which demands smarter workflows. Data is king: whoever has the clearest operational visibility wins. I think we’re finally accepting that manual control doesn’t scale for live, global, event-driven content.

These trends are pushing M&E companies to focus on their ROI in the short term, while investing in new technologies for the longer term. AI-driven triage and predictive monitoring will be major contributors to reducing outage time and reshaping operator roles. The line between contribution, playout, and streaming is blurring, with workflows becoming far more unified.
2026 will take these trends from “nice-to-have” to “must-have”. AI will handle more supervised-but-automated actions, especially around switching, QC, and anomaly detection. Ecosystem partnerships will accelerate: no one can do this alone, and the platform model will dominate. Workflows will become dynamic rather than static, reacting and adapting in real time.
I expect that data-sharing alliances will form because isolated data isn’t competitive anymore.
Dirk Noy, partner, general manager Europe, WSDG
In 2025, standout trends included the rise of ‘everybody as a broadcaster’, more intelligent and AI-driven automation, and a stronger focus on the seamless integration and usability of complex systems. Additionally, immersive experiences, from entertainment and education to museums and experiential installations, are becoming central, reflecting the industry’s push towards more engaging, multi-sensory environments.
These trends are reshaping the media and entertainment industry by enabling more creators and organisations to produce high-quality content, often with smaller teams and smarter workflows. AI-driven automation and improved system integration are streamlining complex operations, while immersive audio-visual experiences are raising audience expectations for engagement, interactivity and emotional impact.

In 2026, WSDG expects these trends to deepen: more individuals and organisations will act as broadcasters; AI and automation will become more sophisticated and context-aware; and system integration will continue to improve. Immersive experiences will expand across entertainment, education and experiential spaces, with a stronger focus on seamless, intuitive interaction and holistic design.
In 2026, we expect emerging trends around personalised experiences, where content and environments adapt dynamically to individual users. There may also be a stronger emphasis on sustainability in AV and media installations, and further convergence of AV, broadcast, and IT infrastructures to create fully networked, intelligent ecosystems.