‘Navigating a new methodology’: Painting Practice on the challenge of creating the VFX for Doctor Who’s Wild Blue Yonder

VFX supervisor Dan May tells TVBEurope about the challenge of previsualising the VFX featured in the Doctor Who 60th anniversary special

When it was broadcast in November, the second of the Doctor Who 60th anniversary specials was praised for the work of the VFX team in creating “the longest corridor the Doctor has ever run down”.

Wild Blue Yonder saw the Doctor (David Tennant) and Donna Noble (Catherine Tate) dealing with a giant, slowly self-destructing spaceship and some monsters who looked eerily familiar.

Much of the VFX work on the episode was carried out by RealTime VFX, with design studio Painting Practice overseeing the pre-visualisation and VFX. Dan May, VFX supervisor and co-founder of Painting Practice, talks TVBEurope through the process of creating that super-long corridor.

How exactly did the workflow work?

The workflow breaks down as follows:

Two separate versions of the corridor were required: a virtual camera asset with simple lighting data, and a high-resolution version with an identical layout for the final shots.

I worked with production designer Phil Sims and director Tom Kingsley to conceptualise, design and then build the detailed Ghost Ship corridor scenes in the Unreal Engine for camera testing on set.

Once the set design concept had been settled, we started to work up a more detailed version of the corridor set in Unreal which we could share as v1 with Mo-Sys and Realtime VFX for the camera test.

Once it had been decided what the art department was going to build, and ultimately what was best to build in terms of matching with lighting, we had to lock in our plan for the test. So we were effectively testing a low-res version of the main plan a few weeks before shooting the corridor for the episode with David Tennant and Catherine Tate.

One of the main objectives for the test was figuring out an affordable lighting rig that could best match the virtual lights and make the real actors feel bedded into the plates.

We had to ensure our real floor on set matched the size and scale of our digital set exactly. We also had to work out inventive ways of shooting the actors and stunt performers in a relatively small area compared with what it was supposed to represent in the story world. This involved a lot of testing with rigs and a travelator to get all the shots that Tom needed to tell Russell’s story.

We filmed with Mo-Sys’ complex camera tracking system. At the time of shooting, the camera had to be physically wired to the video village. Similar to LED volume work, you have to optimise the sets and lighting so they play at around 30fps, ideally more, so you have some wriggle room. The beauty of this method over full-blown virtual production (other than budget) is that, when shooting, the UE4/5 project does not have to be final-pixel ready. It’s still a shooting reference, with all the benefits of being able to frame up the shots on a green screen. Actors and crew can understand the bigger picture rather than acting and shooting in a vacuum; a big green screen stage often has that problem. You want the project to be as close to final as possible so you have a better chance of matching the shoot lighting with the VFX backgrounds.
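
For context on that framerate target, playing at around 30fps simply means the real-time corridor scene has to render each frame inside a fixed time budget. The short Python sketch below is purely illustrative (it is not Unreal or Mo-Sys code, and the sample per-frame costs are invented) but shows the arithmetic behind optimising the sets and lighting until there is headroom.

```python
# Illustrative only: the per-frame time budget behind a ~30fps real-time target.
# The 30fps figure comes from the interview; the profiled costs are invented.

TARGET_FPS = 30
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~33.3 ms available per frame

# Hypothetical per-frame costs (in ms) for the on-set corridor scene.
measured_ms = {
    "geometry": 9.5,
    "lighting": 12.0,
    "post_effects": 6.0,
    "camera_tracking_overhead": 3.0,
}

total_ms = sum(measured_ms.values())
headroom_ms = FRAME_BUDGET_MS - total_ms

print(f"Frame cost: {total_ms:.1f} ms of a {FRAME_BUDGET_MS:.1f} ms budget")
print(f"Headroom: {headroom_ms:.1f} ms "
      f"({'fine' if headroom_ms > 0 else 'optimise the set and lighting'})")
```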

Myself and RealTime VFX’s James Coore supervised the physical green screen shoot with David Tennant, Catherine Tate and the robot character. We used the digital corridor asset with Mo-Sys for the on-set camera tracking, creating instant post-vis and an essential visual guide for both the actors and production for the shoot. The challenge in planning and post was ensuring the corridors blended with the real sets.

This hybrid virtual production workflow enabled incredibly efficient planning and execution for the large green screen sequences on set where full-blown virtual production would have been cost-prohibitive. It enabled the episode to be locked quickly with minimal notes.

Painting Practice also previsualised all the big VFX shots in and around the more standard in-camera two-handers in the sets, so Tom and the editor had a full deck to cut the story with upon wrap. That meant the execs were able to review the episode very early on and judge the VFX costs and how the story was working.

The other major advantage of the hybrid virtual VFX method was that behind the on-set renders sat another Unreal project in the cloud, containing the high-resolution version of the corridor. Once we had the camera data, which was a very small file, we could upload it to the server, and it then spat out high-resolution renders super fast to a Frame.io project for review. This was crucial, as it kept the shot cost competitive for the volume of shots we had to complete. The goal was to keep compositing time to a minimum, so a lot of the look development was done between Painting Practice, Realtime and Mo-Sys. Once we had locked the look of the lighting and the render settings we could literally smash through hundreds of shots with fairly minimal compositing time.
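
The paragraph above describes the shape of that render loop: a small per-take camera-tracking file goes up to a cloud-hosted Unreal project containing the high-resolution corridor, and the finished renders land in a review project. The sketch below is a hypothetical outline of that flow in Python; every class and function name in it is invented for illustration and none of it corresponds to a real Mo-Sys, Unreal or Frame.io API.

```python
# Hypothetical sketch of the per-take flow described above: upload the small
# camera-tracking file, render against the cloud-hosted high-res corridor,
# publish the result for review. All names here are invented stand-ins.

from dataclasses import dataclass


@dataclass
class Take:
    slate: str                 # e.g. "corridor walk-and-talk, take 3" (invented)
    camera_track_bytes: bytes  # the small per-take tracking file from set


class FakeCloudRenderer:
    """Stand-in for the cloud-hosted high-resolution Unreal corridor project."""

    def render(self, take: Take) -> str:
        # A real service would return finished frames; here we just label the clip.
        return "hires_" + take.slate.replace(" ", "_").replace(",", "")


class FakeReviewUploader:
    """Stand-in for the review platform (Frame.io in the production above)."""

    def upload(self, clip_name: str) -> None:
        print(f"uploaded {clip_name} for review")


def process_take(take: Take, renderer: FakeCloudRenderer,
                 uploader: FakeReviewUploader) -> None:
    clip = renderer.render(take)  # the heavy lifting happens off set, in the cloud
    uploader.upload(clip)         # editorial and VFX can review the result quickly


if __name__ == "__main__":
    take = Take(slate="corridor walk-and-talk, take 3",
                camera_track_bytes=b"\x00" * 64)
    process_take(take, FakeCloudRenderer(), FakeReviewUploader())
```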

As with all new technology there were some kinks to iron out along the way, mainly with syncing frame rates and getting the render quality right in Unreal 4.27. With the improvements in Unreal 5, a lot of those render problems are going to be easier to solve.

Ultimately, I found this hybrid approach to be a very good workflow for large green screen set sequences where full-blown virtual production isn’t an option because of cost or blocking.

How did that help the Doctor Who production team?

Firstly, it helped the producers in terms of the cost of the physical art department set-builds and the VFX shots. It became apparent early on that it would have been cost-prohibitive to build a bespoke large-scale sci-fi physical set for the infinite corridor, and it would also have created lighting problems for us. And, due to the sheer volume of VFX shots, we needed to find a way to maximise the budget; we could only do this if we got a lot of the tracking work and assets done by the time we shot it. The combination of Painting Practice, Realtime VFX and Mo-Sys working on the asset before the shoot enabled us to achieve this.

Secondly, our approach enabled the cast, the director Tom Kingsley and the camera team to see what all the shots were going to look like on the monitor, on the day of the physical shoot. The real-time corridor asset had a good amount of detail and texture work done to it and we tried to lock the lighting, which enabled the DoP to match the physical set and cast with our complex digital background asset. It also allowed for much nicer and more ambitious camera moves, as the camera and grip team wanted to see the lovely set even though it was virtual. Often on green screen shoots with small set pieces, the tendency is to shoot what looks pretty and is real, so it later becomes a problem to extend shots if you don’t have the right plates to work with.

Thirdly, we had instant postvis and layout once we wrapped the physical shoot. All the shots were uploaded to a Frame.io cloud and had time stamps for each live take. This enabled editorial to lock the edit more quickly, as they weren’t having to figure out all the coverage against green screen. It sped up VFX production too, as a lot of the per-shot prep, layout and lighting work was more or less pre-done on every shot.
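
As a rough illustration of why those per-take time stamps help editorial, the sketch below (plain Python, with invented take names and times, not any real shoot log or Frame.io API) shows how a moment in the cut could be traced back to the live take whose postvis covers it.

```python
# Illustrative only: matching a moment in the edit back to the live take that
# was rolling at that time. Take names, dates and durations are invented.

from datetime import datetime, timedelta

# Hypothetical shoot log: when each take started and how long it ran.
take_log = [
    ("corridor_take_01", datetime(2023, 5, 10, 10, 15), timedelta(minutes=2)),
    ("corridor_take_02", datetime(2023, 5, 10, 10, 22), timedelta(minutes=3)),
    ("corridor_take_03", datetime(2023, 5, 10, 10, 31), timedelta(minutes=2)),
]


def find_take(moment: datetime) -> str | None:
    """Return the take that was rolling at the given wall-clock moment."""
    for name, start, duration in take_log:
        if start <= moment <= start + duration:
            return name
    return None


print(find_take(datetime(2023, 5, 10, 10, 23)))  # -> corridor_take_02
```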

How did it help the actors?

This approach enabled the actors to understand the space they were in from the outset which helped with eye lines and the general sense of scale they were acting in. It was especially useful for them to react to the corridor changes – when the walls of the corridor moved and shifted in the storyline.

What was the production team’s reaction to the workflow?

Most loved it and found it quite refreshing. We did a full camera and lighting test a few weeks before the shoot to work out some of the issues, such as how to tackle the Doctor and Donna’s long walk and talk sequences. This allowed the crew to get to grips with how it would work.

We also had to do a full pre-light to balance our digital set extensions with the real set elements and the cast. For me it is a more cost-competitive, and in some cases more useful, workflow than conventional virtual production with LED volumes. You can still programme interactive lighting, it’s not as expensive, and you get all the on-set camera and post-production benefits of seeing the VFX worlds before locking your cut in the online.

How likely is it that the workflow will be used on the show again?

If there were to be an episode with a significant amount of large-scale or complex world-building work, we would propose a similar workflow. I would also propose using it for interior car or spaceship work, especially if there were the possibility of working on the background plates upfront, so you could drive better on-set lighting on the shoot day.

What was the biggest challenge of the project?

Bringing everyone on board and navigating a new methodology like this always has its challenges. Real-time and virtual camera work is still relatively new technology and comes with problems to iron out, but I really enjoy working on projects that have lots of these creative challenges and need new solutions to resolve them. Especially when they pay off!

What are you most proud of achieving?

Delivering it! This was by far the weirdest and funniest script I’ve worked on in a long time and I enjoyed the challenge of helping to bring it to life.