André Rosado, head of product, Agile Content
The last 12 months have seen a range of important trends emerge that have shaped the media and entertainment (M&E) industry. They include the rise of ad-supported streaming, especially FAST and AVoD models. Their growing popularity is being driven by factors including consumer cost sensitivity, platform diversification and increased content availability on ad-supported platforms. Ad-supported streaming also allows for a diversification of monetisation options, which is especially critical in low-ARPU markets.
Elsewhere, we are seeing an increasing emphasis on more efficient content repurposing, particularly the transformation of Video-on-Demand (VoD) content into linear channels, which is really important for developing new markets. This also includes the use of automated speech recognition and AI-driven metadata enrichment to enhance content accessibility, especially for foreign-language audiences.
Efficient content repurposing has delivered broader audience engagement, particularly because it gives M&E businesses the option to localise content and make it more accessible through automatic subtitling and metadata generation.
In 2025, we will see ad-supported streaming benefit from deeper personalisation, with the use of more sophisticated AI-driven targeting enabling service providers to tailor ads according to individual viewer preferences and behaviours. This will help drive the global expansion of ad-based models and solutions, which will increasingly target new markets with affordable services.
As far as content repurposing is concerned, technologies that transform VoD content into a continuous, linear viewing experience will become more important. In addition, AI-driven metadata enrichment will play a bigger role in making content accessible to new and niche audiences. The use of automation across repurposing workflows will streamline these processes, helping to reduce costs while enabling faster time-to-market for localised content. Integration with other video add-on features, such as personalised content scheduling, automatic metadata extraction and subtitling/CC generation, will enable new use cases.
In 2025, we are likely to see the industry double down on advanced personalisation, with AI and machine learning driving tailored content delivery and targeted advertising. In this respect, automatic content tagging will also enable new use cases, such as smart bookmarks or more accurate credit markers.
Consumers should also benefit from enhanced content security driven by anti-piracy technologies. The multi-CDN control plane will also become a cornerstone of media delivery, combining next-generation technologies such as AI/ML models to further simplify routing policies and active switching mechanisms. As noted, content security is going to drive a lot of investment in the industry, and diversification of network security approaches is going to be mandatory.
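A multi-CDN control plane of the kind described above can be sketched as a scoring policy over live delivery metrics, with unhealthy CDNs excluded and new sessions routed to the best performer. This is an illustrative sketch only: the CDN names, weights, and error threshold below are assumptions for the example, not any vendor's actual policy, and a production system might learn the weights from historical QoE data with an ML model rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class CdnMetrics:
    """Rolling delivery metrics reported for one CDN."""
    name: str
    error_rate: float   # fraction of failed segment requests (0..1)
    latency_ms: float   # median time-to-first-byte
    cost_per_gb: float  # commercial delivery rate

def score(m: CdnMetrics, max_error: float = 0.05) -> float:
    """Lower is better; CDNs over the error threshold are excluded."""
    if m.error_rate > max_error:
        return float("inf")
    # Illustrative weighting: favour low latency, penalise errors
    # heavily, and account for delivery cost.
    return m.latency_ms + 1000 * m.error_rate + 20 * m.cost_per_gb

def pick_cdn(candidates: list[CdnMetrics]) -> str:
    """Active switching: route new sessions to the best-scoring CDN."""
    best = min(candidates, key=score)
    if score(best) == float("inf"):
        raise RuntimeError("no healthy CDN available")
    return best.name

metrics = [
    CdnMetrics("cdn-a", error_rate=0.01, latency_ms=80, cost_per_gb=0.02),
    CdnMetrics("cdn-b", error_rate=0.12, latency_ms=40, cost_per_gb=0.01),
    CdnMetrics("cdn-c", error_rate=0.02, latency_ms=60, cost_per_gb=0.05),
]
print(pick_cdn(metrics))  # cdn-b is excluded despite its low latency
```

Here cdn-b's error rate disqualifies it even though it has the lowest latency, which is the point of an active control plane: routing decisions track delivery health, not a static preference list.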
Sri Hari Thirunavukkarasu, SVP sales, EMEA, Amagi
Media companies are deploying artificial intelligence and machine learning to streamline workflows, optimise costs, and enhance the efficiency of content creation and delivery. 2025 will be a year of transformation for the M&E industry.
With the popularity of OTT content showing no sign of abating, media companies will continue to focus on expanding their streaming portfolios to boost streaming revenues. As a result, we’ll see increased investment in content, personalisation, and audience acquisition strategies. AI is practically tailor-made for just this purpose. We can expect AI’s role to expand in two critical areas:
- Efficiency Gains: Enhancements in media supply chain processes – such as content creation, post-production, and distribution – can accelerate and/or automate these tasks for greater efficiency.
- Revenue Generation: AI-driven tools for hyperpersonalisation, audience engagement, and dynamic advertising models improve content discoverability and enable precise ad targeting to increase revenue.
Besides the continued use of AI to increase efficiency and generate revenue, we predict that media companies will also implement AI to enhance creativity. The adoption of generative AI for creating dynamic and interactive content could redefine storytelling and user engagement. Another new trend will be the emergence of niche streaming platforms. Specialised streaming services catering to hyperfocused audiences (e.g., specific genres, cultures, or age groups) could see growth alongside mainstream platforms.
Finally, as AI adoption grows, media companies will likely face scrutiny over data privacy, requiring new strategies to address consumer trust concerns. Therefore, there will be a new focus on data privacy and ethical AI.
Stephanie Lone, global leader of solutions architecture, media, entertainment, games and sports industry business unit, Amazon Web Services (AWS)
M&E organisations continue to lean into generative AI to support everything from monetisation to personalisation. AWS customers are using generative AI to support highlights clip generation, assist with video summarisation and search, extract archive metadata to inform FAST channel creation, and localise content. Generative AI is helping take away some of the undifferentiated heavy lifting and accelerate processes that previously proved cost prohibitive. For instance, it’s now easier than ever to create slow-motion clips for a broader range of sporting events, whereas before, these might only be reserved for tier 1 broadcasts.
Generative AI has also helped speed up the creation of FAST channels, allowing content providers to easily pull and prepare content from their archives to better monetise libraries. Other customers have begun using AI to mine their content and identify the most natural places to insert ad breaks that augment rather than detract from the viewer experience. As experimentation with generative AI continued in 2024, one thing became clear: organisations need a solid data infrastructure strategy to enable successful AI solutions.
As 2025 kicks off, we expect to see more customers embrace a mixed entertainment model, wherein live events, film, TV, and streaming content are more interconnected with games and immersive experiences, and vice versa. The opportunities are near infinite, and it’s exciting to think about what’s to come.
As for generative AI, I think we’re going to see more M&E companies consider their use of the technology for things like automating video formatting. Imagine being able to create one stream and use AI to automatically reformat it to any broadcast, streaming, or device specification; that could soon be possible on a large scale. I also see AI playing a bigger role in automating video highlights and summarisation packages that an editor can quickly review, tweak, and send out for social publishing. These are all common challenges facing content providers today that involve a lot of manual effort that generative AI can solve.
Sam Peterson, COO, Bitcentral
Generative AI has begun to expedite news production workflows by automating content discovery, tagging, and repurposing, enabling faster and more efficient editorial processes and giving news producers more time to focus on creative storytelling.
Generative AI, with its ability to analyse and enrich metadata, is making vast content archives more searchable and accessible. This allows media companies to surface relevant clips quickly, repurpose content for different platforms, and focus more on creative storytelling rather than labour-intensive cataloguing tasks.
In 2025, generative AI will continue to evolve, becoming even more integral to workflows. Metadata-driven tools will enhance real-time content creation capabilities and support increasingly personalised direct-to-consumer models. By leveraging richer and more contextually relevant metadata, broadcasters will gain deeper insights into audience behaviour, enabling them to offer highly targeted recommendations and engagement strategies. As the cost of more advanced visual analysis continues to fall, the gains from metadata creation will only grow.
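The archive-search workflow described above ultimately rests on a simple data structure: an inverted index from AI-generated metadata tags to clip identifiers, so that surfacing relevant clips is a lookup rather than a scan. The sketch below assumes the tags have already been produced by an upstream speech-recognition or vision model; the clip IDs and tags are invented for illustration.

```python
from collections import defaultdict

# Hypothetical clips with AI-generated metadata tags (an upstream
# ASR/vision model would normally produce these).
archive = [
    {"id": "clip-001", "tags": ["election", "interview", "studio"]},
    {"id": "clip-002", "tags": ["weather", "flood", "field-report"]},
    {"id": "clip-003", "tags": ["election", "debate", "field-report"]},
]

# Inverted index: tag -> clip ids. This is what turns "surface
# relevant clips quickly" into a dictionary lookup.
index: dict[str, set[str]] = defaultdict(set)
for clip in archive:
    for tag in clip["tags"]:
        index[tag].add(clip["id"])

def search(*tags: str) -> set[str]:
    """Return the ids of clips matching every requested tag."""
    results = [index.get(t, set()) for t in tags]
    return set.intersection(*results) if results else set()

print(sorted(search("election")))                  # both election clips
print(sorted(search("election", "field-report")))  # narrower match
```

The richer and more contextually relevant the tags, the more precise the intersection queries become, which is where the metadata quality gains discussed above pay off directly.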
Jan Weigner, CTO of Cinegy
AI has dominated discussions, but the focus is shifting from hype to practical applications. We’re seeing organisations concentrate on specific, measurable AI implementations rather than broad, unfocused initiatives.
Fixed-function AI will gain traction over generative AI, particularly in broadcast applications, due to clearer legal frameworks and more predictable outcomes. The industry will need to carefully navigate copyright and liability issues surrounding AI-generated content. We’ll see increased focus on practical implementations that solve specific problems rather than broad AI initiatives.
Andrea Marini, CEO, Deltatre
The whole industry has been experimenting with AI with results that have increased everyone’s appetite for more and raised expectations for its potential to move us forward. 2025 will be the year when AI further reveals its impact on efficiency and, therefore, affordability.
AI will change how user data is processed in real time, personalising streaming and digital experiences for users, delivering unique content, advertising, and user interfaces that go beyond what was possible historically. Dynamic factors such as weather, current events, news, and other contextual elements will be considered to make every interaction relevant and timely. Additionally, AI-enabled features will unlock the potential of deep content libraries, reducing user fatigue through fresh, tailored discoveries and driving retention and revenue.
Andy Waters, head of studios, dock10
One of the big trends we’ve seen emerge during 2024 is the industry’s deepening interest in AI. As this new technology starts to really impact our wider lives, we’ve seen organisations from across the industry exploring how they can use AI to improve efficiencies. There’s certainly a lot of excitement and expectation for this new technology.
Despite all the interest and anticipation around AI, productions are not yet coming in and asking for AI cameras or kit. There are lots of ideas, conversations, and even experiments in using AI to help make entertainment television shows more efficient, and with so many people exploring the new technology, it’s only a matter of time before AI becomes part of the production process in some way.
While we don’t expect our customers to be demanding AI in 2025 (it’s just too soon for that), we do expect the interest around AI to continue. As it touches ever more areas of our everyday lives, the expectation will grow that AI can bring cost and efficiency benefits to television production. But I think the industry will start to scrutinise the costs much more closely to ensure that an investment in whatever AI tech emerges is more than a technology trend and is genuinely financially viable and worthwhile.
Rob Delt, CEO of Fabric x Xytech
2024 marked a pivotal year for generative AI tools in scriptwriting, animation, and VFX. Studios leveraged AI to accelerate production pipelines while maintaining creative integrity. The rise of tools like ChatGPT and Stable Diffusion adapted to industry needs allowed creators to experiment with storylines, characters, and visual designs at a fraction of traditional costs. AI reduced barriers to entry for smaller creators and independent studios, democratising high-quality content creation.
While AI enabled faster production, it also sparked debates about intellectual property, plagiarism, and the role of human creativity. Studios had to establish clear guidelines for AI usage to maintain audience trust.
By 2025, AI tools could become more participatory, allowing audiences to influence story outcomes in real-time. This evolution may redefine fan engagement and content personalisation.
Entire franchises developed by AI—spanning TV series, games, and merchandise—could challenge traditional notions of creativity and intellectual property.
Platforms might employ AI to analyse audience data and predict trends, allowing real-time adjustments to storylines or even crowdsourced green-lighting of projects.
Jean-Christophe Perier, chief marketing officer, Globecast
What many have referred to as artificial intelligence (AI) has more accurately been a growing trend towards greater intelligence. Across the live production industry, people have begun to embrace greater intelligence for improved quality control (QC) and operational efficiency. Embedding this intelligence in monitoring tools and automating repetitive human tasks to create space for more human creativity is a trend that will keep growing in the new year.
Integrating AI capabilities at Globecast has enabled our customers to boost their revenues and engagement and to launch new products and services. While production will – or at least should – always have human components to it, delivery, particularly in the monitoring and control environment, is being transformed by AI. The ROI extends to the automated tagging and categorising of content, which not only makes content easier to manage and retrieve but, in doing so, reduces operational costs. People are not being replaced; rather, their previous workloads are being reduced by automated tasks, freeing up time for more creative and engaging decisions.
The integration of AI and Generative AI in OTT platforms will continue to streamline the process of content archiving, indexing and retrieval. Its cost-effectiveness will have an impact across the media supply chain by automating the allocation of computational resources based on anticipated or forecasted demand. This will be particularly useful for live events and new content releases that typically see spikes in viewer numbers. Here, AI systems will allocate additional resources in advance to ensure a smooth streaming experience without the need for manual intervention.
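The predictive allocation described above reduces, at its core, to translating a demand forecast into provisioned capacity ahead of time. The sketch below assumes the per-interval viewer forecast is supplied by an ML model trained on historical audience data; the node size, headroom factor, and forecast numbers are illustrative assumptions, not a real platform's figures.

```python
import math

def capacity_plan(forecast_viewers: list[int],
                  viewers_per_node: int = 5000,
                  headroom: float = 1.25) -> list[int]:
    """Translate a per-interval viewer forecast into node counts,
    provisioned ahead of demand with a fixed headroom margin so a
    spike never has to wait on reactive scaling."""
    return [math.ceil(v * headroom / viewers_per_node)
            for v in forecast_viewers]

# Hypothetical forecast around a live-event kick-off
# (concurrent viewers per 15-minute interval).
forecast = [20_000, 45_000, 180_000, 160_000, 60_000]
print(capacity_plan(forecast))
```

Because the plan is computed from the forecast rather than from observed load, the spike at kick-off is already covered when it arrives, which is exactly the "resources allocated in advance, without manual intervention" behaviour described above.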
Jay Hajeer, CEO and Founder of ioMoVo
2024 saw significant strides in generative AI adoption, where platforms went beyond special effects to revolutionise entire production pipelines. With ioMoVo’s AI-powered Digital Asset Management (DAM), we’ve witnessed how AI can optimise metadata tagging, automate content recommendations, and offer advanced insights into media assets.
In 2025, ioMoVo envisions AI at the asset level becoming standard, driving efficiency across the M&E value chain. ioMoVo’s tools, like AI-driven content linking and personalised asset recommendations, will enable organisations to innovate in content creation and distribution.
As audience preferences diversify, ioMoVo’s AI insights extraction will empower companies to analyse audience behaviour and craft tailored content. From metadata-driven insights to personalised media curation, ioMoVo will lead the shift toward hyper-targeted storytelling.
Innovations in AR/VR and generative AI will drive demand for immersive experiences. ioMoVo’s AI-led enhancements at the asset level will help media companies reimagine storytelling in ways that deeply connect with users. ioMoVo is also committed to aiding eco-friendly media production, minimising resource-intensive workflows through efficient asset management and AI automation. In 2025, greener DAM solutions will set new standards for ethical media creation.
Narayanan Rajan, CEO of Media Excel
AI has moved from concept to production. Even as LLM token pricing has dropped by orders of magnitude, AI-related revenue has grown by orders of magnitude, highlighting the velocity of technology uptake. The promise of AI as a tool for efficiency in an industry struggling to reduce costs makes the conclusion almost inevitable. Areas of focus in 2024 have been media workflow optimisation and the reduction of bandwidth costs through intelligent compression, but the future promises faster inroads of AI into the creative side.
AI-powered tools continue to lower the barriers for smaller teams and independent creators, enabling them to produce high-quality content that was once the preserve of larger studios. In tandem, the larger platforms are also starting to integrate more diverse content, and we’re seeing regions like Korea, China and India become content exporters. This is broadening the diversity of voices in the industry, allowing for a more varied and innovative range of content.
As generative AI tools become more integrated into the content production pipeline, we’ll see a wider range of creative applications – from animation and VFX to localised dubbing and real-time personalisation of content. The financial incentive to reduce content costs is large enough to ensure that AI will make steady inroads into the creative side. As media workflows become both more automated and more flexible, more niche content will be integrated into regional and global platforms, enabling even more unique and personalised experiences. Immersive technologies such as augmented and virtual reality may move beyond niche applications into the mainstream as devices become more accessible and a more common part of daily media consumption. Interactive gaming and the rise of AI as a tool in different industry verticals will continue to drive hardware like VR and AR headsets. As these devices continue to improve and get cheaper, these technologies will become more integrated into media, industrial, and retail platforms, leading to richer and more interactive experiences.
As we look to 2025, we will see content-on-demand marketplaces emerging, allowing businesses and brands to source ready-made, hyper-targeted media assets generated by AI. These platforms will dramatically reduce production timelines for advertising, enabling brands to stay ahead of trends and quickly adapt their content strategies. We also anticipate the continued rise of autonomous synthetic personalities. Some will be extensions of real personalities, and some will be uniquely synthetic. These AI-driven figures will move beyond their traditional role as brand representatives and will become dynamic, interactive entities capable of developing real-time personas that adapt based on audience interactions. This will have far-reaching implications for social media, marketing, and even entertainment.
Chris Wilson, head of product marketing, MediaKind
The evolution of AI continued to dominate in 2024, sparking widespread debate around its future impact on the media and entertainment industry. AI is already transforming processes like audio generation, closed captioning and automated highlight creation, driving significant operational efficiencies. 2024 also saw advances in Natural Language Processing (NLP), enhancing workflows with intuitive user navigation and real-time system updates for complex systems.
Another notable trend has been the broader adoption of machine learning (ML) for operational enhancements. Innovations like up-conversion of formats and sponsor logo detection are maximising sponsorship value and streamlining workflows, particularly in sports. These technologies reflect a broader focus on tangible, user-centric solutions that emphasise operational efficiencies, automation and personalisation.
AI-driven technologies like NLP are driving significant changes in how the industry operates, by simplifying navigation and enabling multi-language support to reach global audiences. On the personalisation front, advancements such as multiview streaming are transforming audience engagement by offering tailored experiences, such as customisable camera angles or real-time analytics during live events. For example, these tools enable dynamic interaction with high-stakes content like sports events or election-night coverage through access to multiple concurrent streams.
AI is also opening up new monetisation routes, including personalised sponsorships and targeted advertising. Tools such as sponsor logo detection and viewer analytics optimise advertising strategies, while encoder bitrate management improves video quality at reduced data costs. These developments both enhance audience experiences and create scalable opportunities for content providers to refine their offerings.
In 2025, the focus will likely intensify on practical AI applications, particularly those that refine workflows and interfaces to improve operational efficiency and user satisfaction. AI’s role in automation and content repurposing will also expand significantly. Automated workflow tools, such as AI-assisted routing and configuration systems, will likely become more sophisticated, enabling content creators to streamline processes without extensive investments in custom solutions. AI-driven upscaling of older content from SD to HD will modernise legacy media, unlocking new revenue streams for streaming-first audiences.
The need for AI-driven analytics and scalable infrastructure will be essential to enabling these advancements. Seamless entitlement systems will address growing demands for personalisation and adaptability.
2025 will likely see further strides in personalisation, with AI-driven insights enabling hyper-targeted content delivery tailored to individual preferences. Real-time audience data will enable greater interactivity and customisation. Automation and adaptive systems will also gain prominence. AI-powered flow control APIs will increasingly handle complex workflows, reducing costs and improving efficiency.
Dror Mangel, VP of Products and Services at Viaccess-Orca
In 2024, a significant trend was the rapid adoption of personalised content experiences powered by AI and machine learning. We also saw an acceleration in the deployment of hybrid cloud solutions, which is reshaping how media companies handle scalability and operational efficiency. The convergence of AI-based recommendation systems with content distribution has transformed the way viewers interact with platforms, driving deeper engagement and improving satisfaction.
These trends are significantly reshaping the M&E landscape, where smaller players are finding it increasingly difficult to compete. What used to require unique technologies for personalisation and video tagging is now available ‘out-of-the-box’ through the infrastructure provided by cloud giants — making differentiation harder. However, companies that can master these technologies, and optimise their cloud usage to reduce costs, will find new opportunities to prosper. AI-driven personalisation still allows for enhanced viewer engagement, and those who leverage it effectively will stand out amongst competitors in terms of user retention and monetisation.
In 2025, these trends will mature, becoming mainstream across media enterprises of all sizes. AI-powered personalisation will increasingly be used for purposes beyond content recommendations, playing a role in content creation, editing, and dynamic storytelling. In addition, we expect that hybrid cloud adoption will become the default architecture, with more focus on optimising workloads between on-prem and cloud to drive cost-efficiency. This evolution will pave the way for better integration between content production, distribution, and monetisation, creating more cohesive ecosystems.
We expect 2025 to be a breakthrough year for generative AI, both in content creation and user engagement. Generative AI tools will enable content providers to create short-form content automatically, which will enhance personalisation further and allow for quick adaptation to viewer preferences. Additionally, immersive experiences like augmented reality (AR) and virtual reality (VR) will become more tightly integrated with traditional content platforms, offering new layers of interaction for viewers — especially in live sports and entertainment.
Josh Pine, CRO, XL8
The M&E industry has clearly shifted from treating localisation as an ancillary step to embedding it as an essential part of the content creation process. This proactive approach not only aligns with the fast-paced, global reach of streaming platforms but also allows for strategic planning. The integration of AI-powered machine translation into the production phase can achieve up to 80-90 per cent of localisation efforts before the final product is complete. This saves time, optimises costs, and improves operational efficiency. Furthermore, it democratises access, enabling smaller studios to distribute globally without incurring prohibitive expenses.
Real-time AI translation technologies, such as XL8’s EventCAT, have redefined live broadcasting and event coverage, exemplified by their use in international sports and political events. This allows audiences to stay connected and engaged by understanding not only the content but also the emotional and cultural undertones. The democratisation of content through real-time AI-powered solutions means that global audiences can now enjoy simultaneous access, closing the gap between regional and international viewers.
In 2025, AI-powered translation tools will likely advance to include more sophisticated emotion and context analysis. These tools will go beyond basic translation, using AI trained on specific cultural and linguistic nuances to deliver content that feels human and localised. This shift will make real-time, context-aware translations a cornerstone in global content strategies. Innovations in natural language processing (NLP) and AI customisation will help capture regional dialects and humour, paving the way for more culturally aligned content.
Emerging trends are set to include AI-powered tools that translate live, culturally rich events such as political speeches, debates, and sports. These tools will evolve to provide more than just translated words; they will interpret and convey the emotional intensity and context, crucial for viewer trust and connection. We can anticipate this technology being pivotal in high-profile global events, such as major elections or international tournaments, to create authentic, real-time experiences for audiences worldwide.