
Everything we know about: NeRFs and why they’ll be crucial for production and post in 2024

Addy Ghani, VP of virtual production at disguise, takes TVBEurope through a key AI technology to watch out for in 2024

In recent years, we’ve seen so many companies highlight artificial intelligence, but the truth is that many AI technologies aren’t ready to stand up to the rigorous demands of production and post. There is one AI technology, however, that has already stood out to me as a game-changer: radiance fields. The technology comes in many flavours and, as I write this, it is still iterating rapidly. You may have heard of NeRFs, MERFs and SMERFs.

Below, I’ll take you through what NeRFs are, why they’re important, and how they will revolutionise our workflows in 2024 and beyond.

What are NeRFs?

If you’ve not yet heard of them, NeRFs (neural radiance fields) are essentially an AI-driven alternative to building 3D meshes from scratch. Anyone can create a NeRF by taking a series of pictures from different viewpoints and feeding them to a neural network tool such as Volinga. The network is trained to predict what the object or environment looks like in 3D; that is, it can synthesise a novel camera viewpoint as the observer moves. Remarkably, it generates that new perspective without any traditional 3D framework such as meshes, shaders or vertices.

This means it can generate impressive 3D ‘digital twins’ of the real-world objects or scenes captured in 2D images. For creatives, this will help generate realistic 3D scenes in a fraction of the time it would take a team of artists to model, texture, rig, light, and render them from scratch.
At this time, a NeRF is only a still representation of the real world; it does not capture motion. However, advances in motion NeRFs are coming.
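For the technically curious, the novel-view step described above can be sketched in a few lines. This is a toy illustration of the volume rendering at the heart of a NeRF, not any real product’s code: `toy_field` stands in for the trained neural network, and the ray settings are arbitrary.

```python
import numpy as np

def render_ray(field, origin, direction, near=0.0, far=4.0, num_samples=64):
    # Sample points along one camera ray and composite colour and density
    # front to back: the volume-rendering step at the heart of a NeRF.
    t = np.linspace(near, far, num_samples)
    points = origin + t[:, None] * direction            # (num_samples, 3)
    rgb, sigma = field(points, direction)               # query the radiance field
    delta = t[1] - t[0]
    alpha = 1.0 - np.exp(-sigma * delta)                # opacity of each sample
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * transmittance                     # contribution per sample
    return (weights[:, None] * rgb).sum(axis=0)         # final pixel colour

def toy_field(points, direction):
    # Stand-in for the trained network: a uniform red "fog" everywhere.
    n = len(points)
    return np.tile([1.0, 0.0, 0.0], (n, 1)), np.full(n, 0.5)

pixel = render_ray(toy_field, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

In practice a trained network replaces `toy_field`, and because its output can depend on the viewing direction, rendered highlights and reflections shift naturally as the camera moves.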

NeRFs vs. Photogrammetry

NeRFs might sound similar to photogrammetry, the technology modern mapping tools already use to generate street views we can virtually walk through, but NeRFs are far more advanced. With photogrammetry, you convert a series of images into a polygonal 3D mesh with materials baked in. The generated meshes are therefore static: all the lighting and reflections present at capture are baked into the final 3D view. This makes photogrammetry capture an often lengthy process, complicated by reflections that usually need to be carefully edited out by an artist.

With NeRFs, all this can be avoided thanks to machine learning, which means the final model isn’t locked to baked-in lighting. You can move around the 3D model and watch highlights, textures and reflections change with your point of view.

NeRFs can also use machine learning to infer more about a space than the source material shows, helping to identify and fix reflection errors, for example. If you’re capturing a scene and notice crew members accidentally reflected in a mirror, machine learning can automatically paint the crew out of the final 3D scene.

Introducing MERFs and SMERFs

If you’re working on a live television show or a real-time virtual production stage, NeRFs might seem dauntingly large files, making them an unrealistic solution. But lately, neural radiance field technology has developed impressive new ways to achieve smaller file sizes while still allowing anyone to generate 3D scenes from 2D images using machine learning.

Last year, for instance, a paper was published introducing Memory-Efficient Radiance Fields, or MERFs, which tackles the memory strain NeRFs place on a device, reducing it to the point of enabling real-time rendering in a web browser. In the demo video, an RTX 3090 GPU displays a MERF at 60fps without issue.
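To get a feel for where that saving comes from: MERF swaps a heavy volumetric representation for a combination of a coarse 3D grid and three 2D feature planes. The resolutions and feature counts below are illustrative values I’ve picked for a back-of-the-envelope comparison, not figures from the paper.

```python
def mib(num_values, bytes_per_value=2):
    # Memory in MiB, assuming 16-bit (2-byte) values.
    return num_values * bytes_per_value / 2**20

res, feats = 512, 8  # hypothetical scene resolution and features per sample

dense_grid = mib(res**3 * feats)            # a full 3D voxel grid
triplanes = mib(3 * res**2 * feats)         # three 2D planes (MERF-style)
sparse_grid = mib((res // 4)**3 * feats)    # plus a coarse 3D grid

print(f"dense grid: {dense_grid:.0f} MiB")               # 2048 MiB
print(f"MERF-style: {triplanes + sparse_grid:.0f} MiB")  # 44 MiB
```

Dropping from gigabytes to tens of megabytes is what makes streaming a full scene to a browser plausible.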

Then in December, SMERFs (Streamable Memory-Efficient Radiance Fields) were announced. These achieve Zip-NeRF-level quality while running at a remarkable 60fps on everyday devices like smartphones and laptops.

How NeRFs will change production and post in 2024 and beyond:
  1. Faster 3D Workflows

This one is obvious: if you can recreate real-world objects and locations within minutes, you’ll reach usable results much faster than you would by building them digitally in Maya, Unreal Engine or any other tool. That will make everything from pre-production through to final delivery of visual effects, games, television shows, and VR/AR worldbuilding much faster and cheaper to produce. As the quality of NeRFs and their successors increases, they will become more and more appealing for ICVFX (in-camera visual effects) applications.

  2. More Cost-Effective Virtual Production

With NeRFs, filmmakers and broadcasters can generate plug-and-play virtual production environments without spending weeks building them from scratch. Plus, because the user input required is minimal, all you’d need to do to capture your next location is walk through it with a 360° camera.

This opens up virtual production technology to indie studios and smaller broadcasters, as it makes environments much less expensive to produce. You could be shooting a location thousands of miles away without needing to travel there or hire a team of artists to recreate it. Think of a NeRF as a supercharged version of a video plate.

  3. Encouraging AI Blending

Tools like Midjourney have already changed the way images can be generated with AI. The only problem? Images without a basis in reality can look unrealistic or fall into the uncanny valley.

NeRFs, by contrast, are representations of real life. You’ll be able to use them to instantly create 3D models based on the real world, then use other AI tools to modify those models with completely AI-generated imagery while still retaining a realistic final result.

In the future, if you wanted to swap a New York street in summer for a winter one, all you’d need to do is create a video of the real summer location and then turn it into a NeRF. You could then use AI text-to-image tools to add snow to the scene in minutes. 

Already, big stock image sites like Shutterstock are working to make this a reality by partnering with Luma Labs AI, RECON Labs, and Volinga AI to create an AI tool that makes NeRFs approachable for anyone to use. The site even offers an opt-out for artists who don’t want their work included in AI training or used by others.

But this is just a small example of the power NeRFs have to adapt their 3D scenery. Large objects can be taken out of a set using the same principle: capture a street scene, turn it into a NeRF, and with AI processing every car can be automatically removed. You could then use AI image generators to introduce whatever the script calls for, whether that’s creeping vines for a post-apocalyptic setting or vintage cars for a period production.

The immediate usability of NeRFs is very promising, and with fast iteration and improvement, the future of this technology for media and entertainment looks bright. We’re only scratching the surface, and I’m excited for the journey ahead.