disguise, Move.ai partner on high-fidelity motion capture

The two companies are developing a custom AI technology based on real-time markerless motion capture software, Invisible

disguise is working with markerless motion capture technology company Move.ai to “democratise virtual production and metaverse experiences”.

The partnership will marry advanced markerless motion capture with graphics processing for film and episodic TV, broadcast and extended reality studios around the world, said the companies.

To achieve this, the two companies are developing a custom AI technology based on real-time markerless motion capture software, Invisible.

The software will be integrated into the disguise platform with the aim of removing the need for mo-cap suits. The technology extracts natural human motion from video using AI, computer vision, biomechanics and physics, then automatically retargets the data to a character rig, creating a virtual character that mirrors human motion in real time, said the companies.
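Move.ai’s Invisible and its disguise integration are proprietary, so the following is only a minimal sketch of the general markerless approach the companies describe: estimating a 3D skeleton from ordinary video and mapping it onto named joints of a character rig. It uses the open-source MediaPipe Pose model as a stand-in for Invisible’s AI; the RIG_JOINTS mapping and retarget_to_rig() helper are hypothetical.

```python
# Illustrative only: MediaPipe Pose stands in for Move.ai's proprietary
# Invisible software; the rig mapping below is a hypothetical example.
import cv2
import mediapipe as mp

# Hypothetical mapping from MediaPipe pose landmarks to rig joint names.
RIG_JOINTS = {
    "hips": mp.solutions.pose.PoseLandmark.LEFT_HIP,
    "left_wrist": mp.solutions.pose.PoseLandmark.LEFT_WRIST,
    "right_wrist": mp.solutions.pose.PoseLandmark.RIGHT_WRIST,
    "head": mp.solutions.pose.PoseLandmark.NOSE,
}

def retarget_to_rig(world_landmarks):
    """Map estimated 3D joint positions onto named rig joints."""
    rig_pose = {}
    for joint, idx in RIG_JOINTS.items():
        lm = world_landmarks.landmark[idx]
        rig_pose[joint] = (lm.x, lm.y, lm.z)  # metres, hip-centred
    return rig_pose

capture = cv2.VideoCapture(0)  # an ordinary camera feed, no mo-cap suit
with mp.solutions.pose.Pose(model_complexity=1) as pose:
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # Extract a 3D skeleton from plain video...
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_world_landmarks:
            # ...and retarget it to a character rig in real time.
            rig_pose = retarget_to_rig(result.pose_world_landmarks)
capture.release()
```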

“When it comes to virtual characters and real-time effects, movement needs to be translated into data,” said disguise chief product and technology officer, Raed Al Tikriti. “Motion capture and recreation are key pieces of the puzzle. We want to make this technology as accessible and scalable as possible, enabling our community of partners to enhance shared experiences, entertainment and storytelling.”

Invisible runs on the scalable processing capabilities of disguise hardware, feeding motion capture data directly into creative workflows in the disguise Designer software. Meanwhile, disguise’s RenderStream protocol transfers skeleton data across the disguise Unreal Engine rendering cluster, allowing tighter synchronisation of content and tracking data across the production workflow and the merging of the physical and virtual worlds.
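RenderStream itself is proprietary, but the underlying pattern is familiar: stamp each skeleton frame with a shared frame index and timestamp so every render node in the cluster draws the same pose on the same frame. A generic sketch of that idea follows; the node addresses, port and packet layout are assumptions for illustration, not the actual protocol.

```python
# Illustrative only: RenderStream is disguise's proprietary protocol, so this
# sketch shows the general idea with plain UDP and JSON. The addresses, port
# and frame layout are assumptions, not RenderStream's wire format.
import json
import socket
import time

RENDER_NODES = [("192.168.1.21", 9000), ("192.168.1.22", 9000)]  # hypothetical

def broadcast_skeleton(rig_pose, frame_index):
    """Send one frame of skeleton data to every node in the render cluster."""
    packet = json.dumps({
        "frame": frame_index,       # shared frame index keeps nodes in step
        "timestamp": time.monotonic(),
        "joints": rig_pose,         # e.g. {"left_wrist": [x, y, z], ...}
    }).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for node in RENDER_NODES:
        sock.sendto(packet, node)
    sock.close()
```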

The combined solution can be used to power:

  • Avatars in metaverse experiences
  • Digital characters in virtual productions and AR players in broadcasts
  • Realistic shadow casting for talent onstage
  • Gesture-triggered 3D graphics and scene changes
  • Movement-triggered particle effects, such as smoke and fire