
Delivering a new era of automated graphics production

Billed as the broadcast industry’s ‘first agentic and multimodal AI platform’ for graphics production, Highfield AI is intended to reduce time-consuming editorial tasks, writes David Davies

There has been a steady stream of notable artificial intelligence product launches throughout 2025, but the recent announcement by Highfield AI of an eponymous graphics production platform feels especially significant for its potential to streamline everyday workflows. Through direct integration with newsroom and graphics systems, the platform can take on repetitive, time-consuming tasks, such as populating story templates and sourcing visual assets, thereby allowing journalists to spend more time on storytelling and other key tasks.

With AI being such a fast-moving area of technology, clarity of terminology is especially important. Invited to unpick Highfield AI’s description of itself as “the broadcast industry’s first agentic and multimodal AI platform for automating graphics production”, co-founder Ofir Benovici replies that it is “built around the concept of agentic and multimodal AI – meaning it’s not just a tool, but a system of intelligent agents that can reason, decide and act across various data types. In short, we created a set of AI agents that are tuned specifically for the production needs of media. ‘Agentic’ refers to AI agents that can autonomously handle tasks, make decisions and collaborate. ‘Multimodal’ means the system understands and processes different types of inputs – text, images, video and structured data – making it ideally suited for the rich, fast-paced environment of media production.”

And it is specifically the repetitive aspects of graphics production that Highfield AI is targeting with its platform: “Highfield AI dramatically streamlines the graphics creation process, focusing specifically on the tedious task of populating graphics templates with content,” states Benovici. “Currently, this process is handled manually either by journalists or graphics operators, or outsourced to external companies. Our solution is to extract the story from the newsroom system, choose the right templates, match media assets, and generate fully assembled graphics ready for editorial review. It’s important to mention that our solution is not a generative AI model, meaning we will not create content that is not there, but we will help broadcasters monetise their own content.”
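The workflow Benovici describes – extract the story, choose a template, match existing media assets, and assemble a graphic for editorial review – can be sketched in outline. This is a hypothetical illustration only; the function and field names are assumptions, not Highfield AI’s actual API, and note that (as Benovici stresses) nothing is generated, only the broadcaster’s own assets are reused:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A story extracted from the newsroom (NRCS) system."""
    headline: str
    body: str
    tags: list = field(default_factory=list)

def choose_template(story, templates):
    # Pick the template whose tags overlap most with the story's tags.
    return max(templates, key=lambda t: len(set(t["tags"]) & set(story.tags)))

def match_assets(story, asset_library):
    # Reuse only assets the broadcaster already owns -- nothing is generated.
    return [a for a in asset_library if set(a["tags"]) & set(story.tags)]

def assemble_graphic(story, templates, asset_library):
    template = choose_template(story, templates)
    assets = match_assets(story, asset_library)
    # The result is queued for editorial review, not sent straight to air.
    return {
        "template": template["name"],
        "headline": story.headline,
        "assets": [a["id"] for a in assets],
        "status": "awaiting_editorial_review",
    }
```

The key design point mirrored here is that the pipeline ends in a review state rather than a playout command, reflecting the editorial-control principle discussed later in the article.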

Of the cited potential 75 per cent reduction in manual work related to graphics production, Benovici elaborates: “In discussion with our customers, the average time it takes to populate a template with content is around 30 minutes, which includes selecting the appropriate template, copying and pasting information, searching for media, formatting, approvals, and so on. The 75 per cent efficiency improvement comes from automating all of those steps and reducing them to seconds, giving journalists various options to approve before going on air. This adds up quickly in daily workflows with dozens or hundreds of graphics, and the result is much higher efficiency in news but, equally important, better and more tuned content delivery.”
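The figures quoted above make the scale of the saving easy to check. The arithmetic below uses only the numbers from the interview (30 minutes per graphic, a 75 per cent reduction); the 60-graphics-per-day volume is an assumed example within the “dozens or hundreds” range Benovici mentions:

```python
# Illustrative arithmetic only, based on the figures quoted in the article.
MANUAL_MINUTES_PER_GRAPHIC = 30   # average manual time per template, per the interview
REDUCTION = 0.75                  # cited efficiency improvement

def daily_minutes(graphics_per_day, minutes_per_graphic):
    """Total minutes spent on graphics per day."""
    return graphics_per_day * minutes_per_graphic

manual_total = daily_minutes(60, MANUAL_MINUTES_PER_GRAPHIC)  # 1800 min, i.e. 30 hours
automated_total = manual_total * (1 - REDUCTION)              # 450 min, i.e. 7.5 hours
saved_hours = (manual_total - automated_total) / 60           # 22.5 hours saved per day
```

At that assumed volume, the claimed reduction frees the equivalent of roughly three full working days of effort across a single day’s output, which is why Benovici says the savings “add up quickly”.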

Interaction with broadcasters has been a major element of the development process, and is still ongoing. “When designing the solution we interviewed and brainstormed with editorial, graphics and engineering teams from key broadcasters to understand their pain points and workflow intricacies. Early on, we thought the value was primarily in time savings, but user feedback showed just how critical it is to also preserve editorial control and context. This shaped our agent design – they assist, but always keep the journalist or producer in the driver’s seat.”

This collaborative ethos has also ensured the platform has hit the ground running, with integrations already confirmed with leading graphics and newsroom solutions from Vizrt, Unreal Engine, CGI OpenMedia, Avid iNews, ENPS and Saga.

Core editorial canvas

Benovici cites a number of scenarios in which Highfield AI could make a major difference to graphics production, of which perhaps the most intriguing is its potential to optimise the use of studio video walls so that they can move “from occasional use to core visual storytelling”.

He explains: “In many 24/7 newsrooms, large studio video walls are high-potential storytelling tools but often go underused. Why? Because creating custom visuals for these screens – such as dynamic explainers, data dashboards or contextual backdrops – is time-consuming and requires a dedicated design team. As a result, they’re often limited to generic loops or reused graphics. With Highfield AI, producers can now generate tailored, high-quality visuals for video walls in real time as part of the editorial workflow.”

Cited examples include AI pulling live market data and generating a multi-part visual during a segment on rising energy prices, including a price chart, a map depicting affected regions, and a comment from a government source, all formatted for the video wall resolution. Or a broadcaster might have a health bulletin in which the system builds a dynamic explainer showing the spread of a virus over time, employing animated maps and statistics panels that update automatically with new data. For political coverage, it could create interactive backgrounds with candidate profiles, polling trends and timelines, allowing presenters to provide “deeper narratives” without additional preparation time.

Benovici says: “What used to take a full day of design work can now be produced in minutes, directly from the editorial script, enabling the newsroom to visually enhance more stories throughout the day–even ones that previously wouldn’t have justified the effort. This transforms the video wall from a static background to a core editorial canvas, elevating the viewer experience with context-rich, real-time storytelling.”

Freeing human creativity

In terms of getting journalists up to speed with the platform, Benovici says: “When designing our solution, it was critical for us not to disrupt the current workflow. Therefore, training is minimal because the system integrates into existing workflows – NRCS, graphics systems, and media management. Users interact with Highfield AI through tools they already know. We do offer onboarding and optimisation sessions for editorial and tech teams to get the most out of the platform.”

Whilst it’s not difficult to see how the Highfield AI platform might be applied to other aspects of broadcast production, it seems that the focus will remain on graphics for the foreseeable future. “Our current plans for both the short and mid-term are focused on graphics. We want to make sure that we deliver excellence, and that requires focus. We have big plans to continue evolving our graphics solution, further enhancing the value we deliver to our customers,” says Benovici. “That said, our platform architecture is modularly designed, so we can expand our solution and automate other repetitive or time-consuming tasks in the media production chain, with agents handling them and freeing human creativity for journalism.”