XPLOR at Production Park in West Yorkshire has created what’s believed to be the first-ever fashion campaign made in virtual production using AI-generated locations.
Fashion house MKI worked with XPLOR to launch their latest collection, using a series of virtual environments portraying a sun-drenched marble quarry.
The shoot took place in XPLOR’s virtual production Studio 004 at Production Park, where the team created an imaginary Mediterranean backdrop using generative AI and displayed it on a 12m x 4m LED volume built from more than 250 LED panels.
The team at XPLOR opted to generate the virtual production background with AI because it allowed them to create the backdrop without sending a photography crew to an actual marble quarry.
“It was about being able to dial in the environment directly towards the briefs that we’d been given from the client,” Richard Blair, head of production services at XPLOR, tells TVBEurope. “More broadly, it’s also about bringing the cost of entry down for virtual production.”
However, Blair is keen to stress that while AI helped create the backdrop, XPLOR is still very focused on using human production crews.
“We definitely don’t want to go down the route where absolutely everything is in the virtual, but for this case, it was the ideal way for us to create a series of backgrounds without having to spend ages going and sourcing locations,” he adds. “Whatever we end up doing with AI, we are not looking to remove the human element.
“Being able to blend that with virtual production, I think is something that could be really valuable for the future but we have no intention of doing everything in AI. I think ultimately we will be able to tell stories better and we’ll be able to end up with better projects if we have this blend, rather than doing everything in software.”
XPLOR chose Cuebric to generate the background because of its direct integration with disguise, the media server platform the team uses to display content on the LED screen.
“It’s also really excellent at creating landscapes,” adds Blair. “We tried to do the same thing with DALL·E and Midjourney and the results that we were getting just weren’t as good. They weren’t as photorealistic.
“Cuebric allows you to change little elements in the scene as well. Once you’ve generated an image, you can take certain elements out or add other bits in, and you can change lighting conditions on the existing image. If you do that through Midjourney, it’s almost like you have to roll the dice every single time and hopefully you get the result that you want.”
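Cuebric’s own tooling is proprietary, but the element-swapping and relighting workflow Blair describes corresponds closely to diffusion-model inpainting, where a mask marks the region to regenerate while the rest of the frame is preserved. Below is a minimal sketch of that technique using the open-source diffusers library; the model choice, file names and prompt are illustrative assumptions, not details from the MKI shoot.

```python
# Illustrative inpainting sketch, not Cuebric's actual pipeline:
# regenerate only the masked region of a backdrop, leaving the
# rest of the image untouched.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed model choice
    torch_dtype=torch.float16,
).to("cuda")

backdrop = Image.open("quarry_backdrop.png").convert("RGB")  # hypothetical file
mask = Image.open("quarry_mask.png").convert("RGB")          # white pixels = area to replace

edited = pipe(
    prompt="late-afternoon sunlight, warm golden hour",  # e.g. relight the masked area
    image=backdrop,
    mask_image=mask,
    num_inference_steps=40,
).images[0]

edited.save("quarry_backdrop_relit.png")
```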
The team also preferred Cuebric because it allowed them to use both positive and negative prompting. For example, the positive prompt for this project was “European white marble quarry, cinematic 4K photoreal”.
“In the negative prompting, we would put things like ‘cartoon’, ‘people’ and any of those other things that we didn’t want to be represented in the image,” explains Blair. “You put things like cartoon or animation because then it automatically discounts anything in its training data that would make the image look like it was made in Unreal Engine or like it was a cartoon. That’s one of the main reasons why we use Cuebric: it allows us just that level of control that we wouldn’t get if we used one of the other models.”
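Open diffusion models expose the same control through a negative prompt passed alongside the main prompt. Here is a hedged sketch with the diffusers library, reusing the prompt terms quoted above; the model and sampler settings are assumptions, not what Cuebric runs.

```python
# Minimal positive/negative prompting sketch with an open model;
# Cuebric's implementation isn't public, so this is analogous only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model choice
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="European white marble quarry, cinematic 4K photoreal",  # positive prompt from the shoot
    negative_prompt="cartoon, animation, people",  # terms Blair cites as unwanted
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]

image.save("marble_quarry.png")
```

In classifier-free guidance the negative prompt replaces the unconditional embedding, so at every denoising step the sampler is actively steered away from the listed terms rather than merely ignoring them.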
Blair adds that XPLOR is currently working with the Alan Turing Institute to build its own model so that in the future the team can use a proprietary system rather than one that’s off the shelf.
The background for the MKI project was static, but Blair believes the ability to use AI to create moving backgrounds isn’t far away thanks to the launch of OpenAI’s text-to-video tool, Sora.
“We’re really interested in the idea of text-to-video because most of what we do in virtual production isn’t shooting stills, it’s shooting video content, and the way to get the best out of virtual production is by having camera tracking and being able to move around the scene and get that parallax effect and that depth of field. Certainly going forward, having moving images is definitely the plan.”