Creative software developer Foundry has added a machine learning (ML) toolset to the latest iteration of its node-based compositor, Nuke.
Developed by Foundry’s Artificial Intelligence research team, the toolset covers applications including upscaling, motion blur removal, tracking marker removal, beauty work, garbage matting, and more.
The toolset’s key components include:
- CopyCat – takes an effect the artist has created on a small number of frames in a sequence and trains a neural network to replicate it. This shot-specific approach enables high-quality, bespoke models to be created relatively quickly within Nuke, without custom training environments, complex network permissions, or sending data to the cloud.
- Inference – runs the neural networks produced by CopyCat, applying the trained model to the original sequence or to other sequences; a scripting sketch of this train-then-apply workflow follows the list.
- Upscale and Deblur – two new tools for common compositing tasks, built with the ML methodology behind CopyCat and the open-source MLServer. Beyond their primary uses of resizing footage and removing motion blur, the ML networks behind these nodes can be refined with CopyCat to create shot- or studio-specific versions.
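For pipeline TDs, the train-then-apply workflow described above can also be wired up with Nuke's Python API. The sketch below is illustrative only: the CopyCat and Inference node classes ship with Nuke 13.0, but the knob names and file paths used here ("dataDirectory", "modelFile", the /shots paths) are assumptions and should be checked against the knob list in your own install.

```python
# Illustrative sketch of the CopyCat -> Inference workflow via Nuke's Python API.
# Must be run inside Nuke 13.0+ (the `nuke` module only exists there).
# Knob names below are assumptions; verify with `print(copycat.knobs().keys())`.

import nuke

# Read node pointing at the plate to be processed (hypothetical path).
plate = nuke.createNode("Read")
plate["file"].setValue("/shots/sq010/plate.####.exr")

# Read node holding the handful of frames the artist has already worked on,
# which serve as the "ground truth" examples CopyCat trains against.
ground_truth = nuke.createNode("Read")
ground_truth["file"].setValue("/shots/sq010/paint_examples.####.exr")

# CopyCat trains a network to replicate the effect from those example frames.
copycat = nuke.createNode("CopyCat")
copycat.setInput(0, plate)
copycat.setInput(1, ground_truth)
copycat["dataDirectory"].setValue("/shots/sq010/copycat_training")  # assumed knob name

# Once training has written out a .cat model, Inference applies it to the
# original sequence or to any other sequence.
inference = nuke.createNode("Inference")
inference.setInput(0, plate)
inference["modelFile"].setValue("/shots/sq010/copycat_training/model.cat")  # assumed knob name
```

The same pattern would apply to the Upscale and Deblur nodes, since they are driven by networks that CopyCat can refine.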
Nuke 13.0 also introduces Hydra support in Nuke’s 3D viewport, offering an image much closer in quality to the Scanline Renderer’s output and enabling users to work closer to their final image.
Speaking about the latest release, Christy Anzelmo, senior director of product at Foundry, said: “Nuke 13.0 combines what we’ve learned from studios over the last year by introducing new technologies that expand what’s possible within Nuke while maintaining the creative workflows and technical control that artists love. With the new machine learning toolset, we are putting the power of machine learning directly into the hands of artists as they can now create bespoke tools to enable them to stay creative, while also addressing the most common VFX challenges for creating high-quality shots.”