The shape of things to come

Could advanced 3D scanning techniques hold the key to the way we create and interact with virtual three-dimensional images? The question is under investigation at Digicave, where CEO Callum Rex Reid believes the answer amounts to a new art form.

“Currently 3D scanning is used as a tool for computer-games creation, CGI and to augment 2D video content, but we are interested in how it can be used to capture the world not just in image but also in shape,” says Rex Reid. “Instead of using flat light, which is how most 3D scanning techniques work, we are interested in artistically lighting subjects and scenes to recreate natural environments. If we do start to capture and deliver 3D as shape or form, then perhaps we should be talking about the development of a new art and a new language applicable to lighting three-dimensional scenes.”

Digicave is working with camera-array specialists Timeslice Films to devise a real-time scanning process using two or more cameras combined with photometrics: a computational technique that takes data from the captured viewpoints and recreates the scene from a new viewpoint. The three images show artistic lighting on a 3D scan, capturing both the light and the form at the same time and making it possible to record a ‘real’ moment.

Rex Reid sees initial applications in experiential attraction films, augmented reality and e-commerce, with distribution over the web to 3D-ready smartphones, PCs and tablets, or to viewers wearing head-tracking devices such as Hasbro’s My3D.
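Digicave has not published the details of its process, but the multi-camera step it rests on is standard computer vision: once two cameras are calibrated, any point seen in both views can be triangulated into 3D. The sketch below is a minimal illustration of that step only, assuming NumPy and known projection matrices; the function name and inputs are hypothetical, not Digicave's pipeline.

```python
# Minimal sketch of two-view triangulation (direct linear transform),
# the basic geometric step behind multi-camera 3D reconstruction.
# Assumes calibrated cameras; illustrative only, not Digicave's code.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices for the two viewpoints.
    x1, x2 : (u, v) pixel coordinates of the same point in each view.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenise to (x, y, z)
```

With the geometry recovered this way from two or more viewpoints, the photometric data from each camera can then be reprojected to synthesise the scene from a new, unseen viewpoint.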
“3D scanning is to CGI what photography was to painting,” he says. “We have sculpture, painting, photography, motion pictures, TV … Why not also have sculptural photography?”

A related approach has been adopted by a team at Microsoft Research's UK lab, with members at Microsoft Research Cambridge, Imperial College London, Newcastle University and Lancaster University, together with the University of Toronto.

In August, the team demonstrated in very practical terms how the infra-red depth-sensing technology in the Kinect controller could map a user's surroundings and make them part of the live on-screen experience. KinectFusion uses the Kinect as a depth camera, measuring the distance to the objects in its view; with that information the 3D data can be given absolute measurements, producing a static map of the room.
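How a depth image becomes 3D data with absolute measurements comes down to the first step such systems perform: back-projecting each depth pixel through the pinhole camera model into a metric point cloud. The sketch below illustrates this in NumPy; it is not Microsoft's code, and the intrinsics are typical Kinect-era values used purely for illustration.

```python
# Hedged sketch: convert a depth image into a metric 3D point cloud
# via the pinhole camera model. Intrinsics are illustrative defaults.
import numpy as np

def depth_to_points(depth, fx=585.0, fy=585.0, cx=320.0, cy=240.0):
    """Convert an HxW depth image (metres) to an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading
```

Because the Kinect reports depth in real units, the resulting cloud carries absolute scale, which is what lets the system build a measurable map of the room rather than a relative one.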
According to Microsoft Research, ‘it allows the user to scan a whole room and its contents within seconds. As the space is explored, new views of the scene and objects are revealed and these are fused into a single 3D model. The system continually tracks the six degrees of freedom of the camera and rapidly builds a volumetric representation of arbitrary scenes.’
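The ‘single 3D model’ in that description is, in KinectFusion's published design, a truncated signed distance function (TSDF) stored on a voxel grid: each incoming depth frame is averaged into the grid, and the running average smooths sensor noise away. The sketch below shows that fusion step in simplified form, assuming the six-degrees-of-freedom camera pose has already been tracked (the real system estimates it with ICP on the GPU); the function name and parameters are illustrative.

```python
# Simplified sketch of TSDF fusion, the volumetric integration step
# described in the KinectFusion work. Assumes the camera pose is given.
import numpy as np

def fuse_frame(tsdf, weights, voxel_centers, depth, K, cam_pose, trunc=0.03):
    """Integrate one depth frame into the TSDF volume.

    tsdf, weights : flat arrays, one entry per voxel.
    voxel_centers : Nx3 voxel positions in world coordinates.
    depth         : HxW depth image in metres.
    K             : 3x3 camera intrinsics.
    cam_pose      : 4x4 world-to-camera transform for this frame
                    (supplied by the 6-DoF tracker in the full system).
    """
    # Transform voxel centres into the camera frame and project them.
    pts = (cam_pose[:3, :3] @ voxel_centers.T + cam_pose[:3, 3:4]).T
    z = pts[:, 2]
    front = z > 1e-6                      # voxels in front of the camera
    u = np.round(K[0, 0] * pts[front, 0] / z[front] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts[front, 1] / z[front] + K[1, 2]).astype(int)
    h, w = depth.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.where(front)[0][inside]      # voxels that project into the image
    d = depth[v[inside], u[inside]]
    valid = d > 0                         # pixels with a depth reading
    idx = idx[valid]
    # Signed distance along the ray, truncated to [-trunc, +trunc].
    sdf = np.clip(d[valid] - z[idx], -trunc, trunc)
    # Weighted running average fuses this frame with all previous ones.
    w_old = weights[idx]
    tsdf[idx] = (tsdf[idx] * w_old + sdf) / (w_old + 1)
    weights[idx] = w_old + 1
```

Calling a routine like this once per frame, with poses from the tracker, is what lets the model sharpen continuously as the user explores the space.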
Among the applications suggested by the research team are: extending multi-touch interactions to arbitrary surfaces; advanced features for augmented reality; real-time physics simulations of the dynamic model; and novel methods for segmentation and tracking of scanned objects.

www.digicave.com/
http://blogs.technet.com/b/next/archive/2011/08/11/microsoft-research-shows-kinectfusion-it-s-jaw-dropping.aspx