Dynamic 3D content has always had a scaling problem. Broadcasters and media companies have spent years exploring real-time 3D to create more immersive experiences, from virtual studio environments to live performances delivered to XR (Extended Reality) devices. Capture and rendering have advanced significantly, but volumetric 3D has struggled to move beyond trials and controlled demonstrations.
The core limitation is technical: while tracked mesh compression works well for predictable, animation-like motion, the industry has lacked an efficient way to handle non-tracked dynamic meshes. In simple terms, natural, real-world 3D content does not stay structurally consistent from one moment to the next, making it difficult to compress and stream reliably at scale.

Why dynamic meshes broke existing pipelines
Each captured moment in dynamic 3D content generates a 3D mesh made up of thousands, or even millions, of vertices. When capturing people or complex, dynamic environments, those vertices do not remain consistent. A person walking does not move as one fixed object: posture changes, clothing crumples, and surface details shift continuously. With no stable correspondence from frame to frame, each frame effectively becomes a new, irregular 3D shape.
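To make that concrete, here is a minimal sketch (toy numbers, illustrative only, not any particular capture system) of why non-tracked captures defeat the delta-style prediction that tracked, animation-like meshes allow:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MeshFrame:
        vertices: np.ndarray  # shape (N, 3); N changes from frame to frame
        faces: np.ndarray     # shape (M, 3); indices into vertices

    def can_delta_encode(prev: MeshFrame, curr: MeshFrame) -> bool:
        # Tracked, animation-like content keeps the same topology, so only small
        # per-vertex position deltas need to be coded. Non-tracked captures do not.
        return (prev.vertices.shape == curr.vertices.shape
                and np.array_equal(prev.faces, curr.faces))

    rng = np.random.default_rng(0)
    # Two consecutive captured frames of a moving person: vertex counts differ,
    # so there is no stable correspondence to predict against.
    frame_0 = MeshFrame(rng.random((120_000, 3)), rng.integers(0, 120_000, (240_000, 3)))
    frame_1 = MeshFrame(rng.random((123_417, 3)), rng.integers(0, 123_417, (246_800, 3)))
    print(can_delta_encode(frame_0, frame_1))  # False: each frame is a new, irregular shape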
For broadcasters and content creators, this creates a clear operational problem. Compression becomes inefficient and unpredictable, with no control over bitrate, and large volumes of data must be transferred repeatedly between the CPU and GPU so that GPU-based AI algorithms can generate the 3D meshes in real time. That transfer overhead increases processing cost, introduces latency, and makes real-time streaming difficult to sustain at scale. These constraints explain why most volumetric content in broadcast and XR contexts is pre-recorded, heavily simplified, or restricted to controlled environments.
A new ISO standard to solve the problem
Video-based Dynamic Mesh Compression (V-DMC) changes how dynamic 3D data is delivered. Instead of transmitting a full, detailed mesh for every moment, the standard converts original meshes to a simplified base version, with time-varying detail – such as motion or fine surface change – mapped to 2D video frames. This approach works with the strengths of the existing video ecosystem. Video decoding is already fast, widely supported, and designed to operate at scale. With V-DMC, decoding can take place directly on the GPU using hardware-accelerated video pipelines. This reduces decoding complexity and minimises the need to move large amounts of data between the CPU and GPU.
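As a rough illustration of the idea, and not the V-DMC reference decoder itself, the following sketch uses a toy base mesh and a synthetic displacement image to show how per-frame surface detail can be carried in a small 2D image of exactly the kind that video codecs and GPU hardware decoders already handle:

    import numpy as np

    # A coarse base mesh: a flat 4x4 grid of 3D vertex positions. In this scheme
    # the simplified base geometry is shared across frames rather than re-sent in full.
    gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 4), np.linspace(0.0, 1.0, 4))
    base_vertices = np.stack([gx, gy, np.zeros_like(gx)], axis=-1)  # shape (4, 4, 3)

    # Time-varying detail packed as a small "displacement image". In a real pipeline
    # this would be one frame of a video stream, decodable by hardware video decoders.
    displacement_frame = 0.02 * np.sin(np.linspace(0.0, np.pi, 16)).reshape(4, 4)

    # Playback-side reconstruction: offset each base vertex (here simply along z)
    # by the decoded displacement value for this frame.
    reconstructed = base_vertices.copy()
    reconstructed[..., 2] += displacement_frame

    print(reconstructed.shape)  # (4, 4, 3): the topology stays fixed every frame;
                                # only the compact displacement image changes over time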
This shift delivers two major benefits. Firstly, it enables high-quality, real-time rendering while preserving the detail and fluidity required for immersive XR, volumetric media, and digital twins. Secondly, because V-DMC is built on established video coding technologies, it ensures broad compatibility and can run on existing devices without specialised hardware.
By leveraging technologies designed for hardware acceleration and large-scale delivery, V-DMC offers a practical route to making live and on-demand volumetric media deployable across broadcast, streaming, and XR platforms. For real-time 3D workflows, this shift is crucial. It lowers latency, improves responsiveness, and makes dynamic volumetric content far more practical to stream and render.
V-DMC also delivers a significant improvement in compression efficiency, reducing multi-gigabyte 3D mesh sequences to just a few megabytes, with typical compression ratios in the range of 250:1 to 300:1.
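As a back-of-the-envelope check (the 3 GB capture size below is an assumed example, not a figure from the standard):

    # Applying the quoted 250:1 to 300:1 ratios to an assumed 3 GB mesh sequence.
    raw_size_gb = 3.0
    for ratio in (250, 300):
        compressed_mb = raw_size_gb * 1024 / ratio
        print(f"{ratio}:1 -> {compressed_mb:.1f} MB")  # ~12.3 MB and ~10.2 MB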
Why standards unlock deployment
Until now, volumetric content creation projects have relied on bespoke pipelines, specialist hardware, and custom software stacks. While effective for experimentation, this approach does not fit mainstream broadcast operations, where repeatability, cost control and long-term support are critical.
V-DMC removes much of this friction. Built on existing video coding technologies, it can be integrated into existing production, delivery, and playback workflows with fewer changes. Volumetric content that is encoded once can be decoded across a wide range of devices, improving interoperability and reducing deployment risk. Standardisation increases confidence for broadcasters and service providers. It enables planning around a specification designed for broad adoption and ongoing development, rather than isolated or proprietary implementations.
Driving value
For media organisations, the implications are significant. By aligning more closely with video-based operational models, volumetric content can move beyond demonstrations and into real-world deployments. Broadcasters can begin to use live or on-demand 3D content for XR platforms, special events, and interactive viewing experiences, allowing audiences to move around and engage spatially. The same technology is valuable in other domains, such as digital twins, simulation, and real-time monitoring. Reduced decoding complexity and lower processing overhead improve responsiveness, which is essential for time-sensitive and mission-critical applications. Across these use cases, the common outcome is the same: fewer technical bottlenecks and a clearer path to scaling dynamic 3D content.
What’s next for dynamic 3D content?
While V-DMC addresses a major technical barrier that has held dynamic 3D back, widespread adoption depends on how quickly the ecosystem matures. Interoperable implementations, reference tooling, and integration into existing multimedia frameworks, engines, and production pipelines will determine how fast and efficiently broadcasters can work with volumetric formats at scale.
As with earlier transitions in broadcast technology, success depends on shared standards rather than isolated deployments. With V-DMC, real-time 3D is closer to operational reality than ever before. The challenge now is to translate this technical foundation into production-ready workflows that broadcasters and content creators can rely on.