
Dimension develops ‘world-first’ robotic solution for virtual production

Dimension CTO Callum Macmillan tells TVBEurope about the company's new Vectored Imaging Volume (ViV) and how it can be used in virtual production

Dimension has announced the launch of Vectored Imaging Volume (ViV), a robotics-powered virtual production system that aims to make it easier to capture “impossible” shots.

ViV has been developed by Dimension Futures alongside the company's CTO Callum Macmillan, in collaboration with industry robotics expert Dickon Mitchell.

The solution uses two robotic arms: one mounted with a camera, the other with a scalable 2m x 3m LED wall. Together, the two motion-controlled components form a dynamic LED volume, moving in precise synchronisation with the virtual environment in real time, the company said.

TVBEurope spoke to Macmillan to find out more about the system.

How has ViV been developed, and how long did it take?

The concept of ViV – or the Vectored Imaging Volume – emerged early on as Dimension established itself as a pioneer in virtual production, around mid-2019. That said, our thinking about how it could be done, in various incarnations, goes back many more years.

The idea was motivated by the fact that, to do more with virtual production and open it up to the largest possible addressable market, VP's constituent technologies needed to evolve.

Firstly, we knew it needed to become a useful solution for every part of the production process from script to screen, including all the visualisation stages, post-production and rendering. And secondly, it needed to become as adaptable and flexible as possible at the physical production stage of filmmaking; that is where virtual production can most benefit the shooting of live action content.

And it was this second point—to make physical production using VP as adaptable and flexible as possible—which motivated the thinking around ViV.

ViV in use on The Cliff

In early 2024, one of the co-founders of Dimension, Steve Jelley, introduced me to motion control specialist and physical production supervisor, Dickon Mitchell. This was the point where we really realised the potential of developing ViV as a system. We were subsequently joined by James Dinsdale, one of Dimension's virtual production supervisors, who has a strong grounding in production technologies and hardware.

Together we set up some proof of concept tests, which were conducted at Dickon's studio, Robomoco. Our first objective was to establish that micron-accurate alignment could be achieved between a live action camera mounted on one robot and a section of LED wall mounted on a corresponding robot, with the two moving completely in unison.

In addition, real-time 3D image data was presented to the LED wall from an nDisplay render cluster, whilst positional data from the camera was fed back to the same nDisplay cluster, which rendered a perfectly aligned, view-dependent representation of the 3D content on the LED wall. The good news is that it worked really well, which gave us confidence to move forward and develop the solution.
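For readers curious about the underlying maths, below is a minimal sketch in Python of the kind of view-dependent (off-axis) projection a camera-tracked LED wall relies on, following the well-known generalised perspective projection construction. The function and example values are ours for illustration; Dimension's actual pipeline runs on Unreal Engine's nDisplay, which handles this internally.

```python
import numpy as np

def off_axis_frustum(eye, pa, pb, pc, near=0.1):
    """Generalised perspective projection (Kooima-style): frustum
    extents for a camera at `eye` viewing a rectangular screen whose
    lower-left, lower-right and upper-left corners are pa, pb, pc
    (all in world-space metres)."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)    # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)    # screen up axis
    vn = np.cross(vr, vu)
    vn /= np.linalg.norm(vn)                    # screen normal

    va, vb, vc = pa - eye, pb - eye, pc - eye   # eye-to-corner vectors
    d = -np.dot(va, vn)                         # eye-to-plane distance

    # Frustum extents, scaled onto the near plane
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top

# A 2m x 3m wall standing in the z=0 plane, camera 2 m in front of it
pa = np.array([-1.5, 0.0, 0.0])   # lower-left corner
pb = np.array([ 1.5, 0.0, 0.0])   # lower-right corner
pc = np.array([-1.5, 2.0, 0.0])   # upper-left corner
eye = np.array([0.0, 1.6, 2.0])   # tracked camera position, updated per frame
print(off_axis_frustum(eye, pa, pb, pc))
```

The key point is that the frustum is recomputed every frame from the tracked camera position; with ViV, the panel corners are moving too, so both sides of the calculation update in unison.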

How easy is it to use?

Like any technology which has to operate in the high-demand realm of physical production, we aim to make solutions like ViV, along with our broader virtual production offerings, as versatile, functional and stable as possible.

Robotics—like filming on an LED wall—provides a great solution for specific use cases. We want to broaden those use cases by combining robotics and virtual production, expanding who virtual production is a viable option for and unlocking its potential to further enhance media and entertainment production.

That means we take our learnings from VP in terms of how we are agile, how we get involved early on in the pre-production stage to help de-risk shots, and how we visualise the setups and camera moves as carefully as possible prior to any physical production taking place. What's great about visualising those robotics is that we get a very accurate virtual representation. When you subsequently translate that into the real world as actual shoot days, you can have high confidence that what you visualised will be what you capture in camera.

You also get the best possible camera tracking data turned over for any further VFX post-production, and, due to the repeatable nature of the camera path, the option to shoot high-quality clean plates and other lighting passes if so desired.
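To illustrate why repeatability pays off here, a hypothetical sketch: record the per-frame camera pose on the hero take, then replay the identical path to drive a clean-plate or lighting pass. The file format and field names below are invented for the example; real motion-control systems have their own export formats.

```python
import json

def save_camera_path(poses, path):
    """Record per-frame camera poses (position in metres, rotation in
    degrees) captured during the hero take."""
    with open(path, "w") as f:
        json.dump(poses, f, indent=2)

def load_camera_path(path):
    """Reload the identical path to drive a clean-plate or lighting
    pass, and to hand over as match-move data for VFX."""
    with open(path) as f:
        return json.load(f)

# A 10-second dolly-in at 24 fps, 1 cm per frame
hero_take = [
    {"frame": i, "pos": [0.0, 1.5, 2.4 - 0.01 * i], "rot": [0.0, 0.0, 0.0]}
    for i in range(240)
]
save_camera_path(hero_take, "hero_take_camera.json")
clean_plate = load_camera_path("hero_take_camera.json")
assert clean_plate == hero_take   # identical path, shot for shot
```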

To help with ease of use, we're using the term 'robotics at the brain bar' – essentially we're integrating Dickon Mitchell's robotics team into the brain trust of operators which make up our virtual production brain bar. This allows us to be adaptable, react to directorial input on the day, and develop a comprehensive data and control pipeline for dynamic LED volume management.

Who do you envision using ViV, and can they use it off the shelf, or do they need to work with Dimension?

We have a few different target audiences who we see as users of ViV. These include filmmakers and production teams already using a large static LED volume, where ViV sits alongside it as a complementary setup for specific shots that require a level of camera or performer movement which would usually be 'impossible' using a virtual production setup.

The Dimension team worked on two proofs of concept, including The Office

Then there are filmmakers who don’t have a need for a large volume LED setup, or don’t have access or budgets for a full VP setup. There are the same sorts of ‘impossible’ shots which these filmmakers will need to complete, or a handful of shots where a small LED panel can replace a green screen. This is where we see ViV playing a key role in making virtual production more accessible. 

And then beyond the filmmakers mentioned, we think there’s a lot of opportunity for brand and digital content producers to use ViV as an introduction to virtual production. For brand content, or advertising, there again might not be a need for a large static volume, but ViV can play a key role in achieving some shots that are hard to pull off. We even see online content creators as a group who would use the system. It’s clear many with an online platform would be looking to introduce more high-end technology to their workflows. 

We see the introduction of ViV—at least in the short-term—as a delivered service from Dimension. Like any virtual production setup, there needs to be upfront planning, and we’ll work with clients and Dickon Mitchell, our robotics partner, to ensure the system delivers what it needs to once on set. 

Can you give us details of some of the POCs that you’ve undertaken?

Our first test was done back in August. The primary objective of this was to prove we could get micron-accurate alignment between camera and screen, to ensure there was no drift or slip as the dynamic LED volume moved in 3D space.

With this being the objective, the panel used was a ROE CB5, which we had available, and the artwork for the volume was pretty basic in terms of design. When we saw how solid the lock was between the different robotics systems and the content, we were satisfied that our hypothesis was correct and we could proceed with a more advanced round of testing and development.
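As a rough illustration of what checking for drift or slip can look like, here is a sketch of our own construction (not Dimension's test code) that measures how far the panel robot strays from the offset it should hold relative to the camera robot over a move. It checks position only; a full test would also verify orientation.

```python
import numpy as np

def max_drift(cam_positions, panel_positions, offset):
    """Worst-case deviation (in metres) of the panel from the pose it
    should hold relative to the camera across a motion path. Inputs
    are (N, 3) arrays of world-space positions plus the fixed offset
    the panel robot is meant to maintain."""
    expected = cam_positions + offset              # where the panel belongs
    drift = np.linalg.norm(panel_positions - expected, axis=1)
    return drift.max()

# Example: a 2 m lateral camera move, panel meant to stay 1.5 m ahead
t = np.linspace(0.0, 2.0, 500)
cam = np.stack([t, np.full_like(t, 1.6), np.zeros_like(t)], axis=1)
offset = np.array([0.0, 0.0, 1.5])
panel = cam + offset                               # perfectly locked here
print(max_drift(cam, panel, offset))               # -> 0.0
# Anything much above micron scale (1e-6 m) would read as slip on camera.
```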

The next step was to take the proof of concept and develop it into a minimum viable product. We chose two test setups with virtual environments which we considered strong candidates to highlight the system's unique capabilities: The Cliff and The Office. This next phase also included a larger LED volume: a 2m x 3m, final pixel quality, 1.9 millimetre dot pitch ROE Ruby panel.

The Cliff represented an extreme location, which would be difficult to film whilst maintaining a decent level of both creative and cost control. We had our art teams work up an expansive vista of close-up cliff face and distant mountain ranges, with jungle canopy in a valley below. We conducted multiple dynamic camera moves where the LED volume traversed 180 degrees either vertically, up and over, or horizontally, panning from the cliff face into the vista. Shot without the robotics, both types of shot would require a large quantity of LED panel and additional visual effects work to compensate for seams when panning from floor to ceiling. We were really happy with the results, especially the vertical movement of both the panel and the virtual environment. It really felt seamless as the camera moved up and over.

The Office provided a more contained location and a test that demonstrated lateral movement, with a performer getting up from a desk and walking through an office. It illustrates the versatility of ViV in that we could shoot quite an immersive scene with virtually no physical set, traversing multiple metres of travel horizontally whilst using only a minimal amount of high quality LED panel. In reality, the performer was moving against a 2m x 3m panel that was tracking their movement as they walked, and again, the dynamic movement achieved in the scene really did work as a proof of concept for ViV.

The moment when the performer sits on the sofa is one of my favourites; she's actually just sitting on the empty flight case for the camera. It was also this setup that showed us just how good the 1.9mm dot pitch ROE Ruby panel is. Sometimes the camera was less than 2 metres away from the volume, and we encountered zero moiré; the fidelity of the images held up entirely. This is a scenario that we certainly would not be confident in attempting with a static wall using a coarser LED panel.
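To put those distances in context, a widely quoted LED industry rule of thumb (an approximation on our part, not a figure from Dimension) is that the minimum comfortable viewing distance in metres roughly equals the pixel pitch in millimetres, which is why shooting a 1.9mm panel from under 2 metres with no moiré is notable:

```python
# Rule-of-thumb check; the 1:1 pitch-to-distance ratio is an industry
# approximation, not a figure from Dimension.
pitch_mm = 1.9                        # ROE Ruby dot pitch used in the test
min_view_distance_m = pitch_mm * 1.0  # ~1.9 m minimum before moire risk rises
camera_distance_m = 2.0               # "less than 2 metres" on The Office
margin_m = camera_distance_m - min_view_distance_m
print(f"margin: {margin_m:.2f} m")    # ~0.10 m: right at the limit
```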