3D is moving from the preserve of big budgets and bespoke technology into the mainstream, but crucial lessons need to be learned. Adrian Pennington reports.
“One of the things that disturbs me, and disconcerts many of my colleagues, is that 3D presents an opportunity for people to proclaim themselves experts,” says Michael Bravin, chief technologist for equipment dealer Band Pro Film & Digital. “Part of the problem is that there’s lots of misinformation and people walking around with parts of the jigsaw. Training is a big problem in all areas of digital capture, but the knowledge gaps are being exacerbated with 3D. There really isn’t any sort of training programme.”
Headquartered in LA with a European outpost in Munich, Band Pro’s involvement in 3D is limited, he admits, but it sees a role in helping to bring together technology, techniques and ideas “to help move to a position where the industry can have a single workflow, a single set of off-the-shelf technologies.”
In a day-long symposium on 3D best practice held at Technicolor’s Burbank facility, guest speaker Howard Postley, COO/CTO of 3ality Digital Systems agreed that “3D is hard to do right and it would help if there were a lot more people trained to understand what it takes.
“Essentially 3D boils down to two things: a lot of expertise and a lot of precision,” he explained. “Every problem from eye strain to disorientation is caused by human error, not by the equipment. Producing good or bad 3D is not about the gear per se – but the person in charge of it had better know what they are doing.”
Jeff Blauvelt, who manages rental house HD Cinema, suggested that the high-end kit market was locked up by Pace, 3ality and Paradise FX, “but there’s nothing at the lower budget end. At first it seemed simple – you get a 45-degree mirror, put a couple of cameras on it and record – which we did for one project using EX3s and an Inition StereoBrain (for live 3D monitoring), recording to HDCAM SR. By far the hardest problem to solve was the rig. Very few mechanical rigs are optically aligned, and if you’re designing one yourself it will never work.”
The critical importance of the initial stereo capture was emphasised by Postley. “First of all make sure the cameras are genlocked. That’s basic, but I see those errors being made. Secondly, do absolutely everything you can to get it right at the point of acquisition.”
A key source of problems, he said, was a ‘fix it in post’ mentality – people assuming they can get away with location mistakes as they would in 2D.
“This mentality is caused by a fundamental misunderstanding that the 3D effect is governed solely by parallax (the on-screen disparity produced by the interocular – the distance between the two cameras). The 3D effect is influenced by that, but also by some 20 other depth cues which take on a different prominence depending on the geometry, lighting, subject position, focal length and so forth, shot by shot. If you have to rotate an image in post because the cameras were misaligned in the first place, you will not only lose resolution and colour information but also impact the other depth cues within that frame.”
Parts of the workflow
There was wide agreement that ‘gentle 3D’ (acting as a window into the picture) as opposed to ‘extreme 3D’ (which breaks out of the frame to assault the audience) was the preferable editorial technique.
“To be gentle you have to choose the parallax very carefully,” noted Postley. “Calculating the correct distance is not simple because it changes depending on parameters determined by the camera lens, distances in the set and the screen size it will ultimately be displayed on.”
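The dependence Postley describes can be made concrete with the standard back-of-envelope stereography formula relating interaxial separation, focal length, convergence distance, subject distance and screen size to on-screen parallax. The function name and all figures below are illustrative assumptions for this sketch, not anything presented at the event:

```python
def screen_parallax_mm(interaxial_mm, focal_mm, sensor_width_mm,
                       screen_width_mm, convergence_m, subject_m):
    """Approximate horizontal screen parallax for a converged stereo pair.

    Positive parallax places the subject behind the screen plane,
    negative parallax in front of it.
    """
    # How much the image is blown up from sensor to screen.
    magnification = screen_width_mm / sensor_width_mm
    # Disparity recorded on the sensor for a point at subject distance,
    # with the two cameras converged at convergence distance.
    sensor_disparity_mm = interaxial_mm * focal_mm * (
        1.0 / (convergence_m * 1000.0) - 1.0 / (subject_m * 1000.0))
    return magnification * sensor_disparity_mm


if __name__ == "__main__":
    # Assumed example: 65 mm interaxial, 35 mm lens, 24 mm-wide sensor,
    # 10 m-wide cinema screen, converged at 5 m, subject at 20 m.
    p = screen_parallax_mm(65, 35, 24, 10_000, 5, 20)
    print(f"{p:.1f} mm positive parallax")  # roughly 142 mm on this screen
```

Note how the same shot is harmless on a living-room display but, on the 10 m screen assumed above, yields parallax well beyond a typical 65 mm eye spacing, forcing the eyes to diverge – one of the ‘human errors’ behind the eye strain Postley mentions.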
The event’s audience of imaging technicians and cinematographers recognised that the choices for buying or renting 3D gear are often expensive and include prototype equipment, specialised crews and a rigid post path.
“The problem with 3D is that it is a very project-oriented technology,” argued Band Pro founder and CEO Amnon Band. “We’re trying to commercialise it by packaging the hardware and software together and making it accessible to everyone. Our customers want to pick up a camera, turn it on and go shoot so we’re trying to simplify that process.”
Key parts of that workflow (as presented at the event) include Element Technica’s new range of 3D rigs. It has built both beamsplitter and parallel configured camera platforms intended to be as simple to use as current 2D systems.
“We want to make it possible for any filmmaker to create 3D content using their regular 2D crew,” explained 3D Technician Chris Burket, demonstrating the full-size Quasar platform that can accommodate digital imagers like Red or Sony 1500s. “We are developing a series of hardware/software tools to automate stereo calculation. These tools, available in modules for the core systems, enable the director or DP to control how much or how little the subject comes off the screen without requiring complex IO (interocular) and convergence calculation techniques.”
The Quasar can be configured as either a beamsplitter for close work with wide lenses or alternatively set up as a converging side-by-side system for use at live broadcast events with extreme focal lengths.
Element Technica’s mid-sized Pulsar mounts box-style cameras such as the Scarlet, Epic and the SI-2K. The forthcoming Neutron is designed for 2/3-inch or 1/3-inch cameras and weighs just 13lb for handheld or steadicam work.
“All the rigs require only a set of Allen wrenches and a mirror gauge for complete camera/lens installation and precise alignment, which can be completed in less than 15 minutes as opposed to the hours required with other professional 3D systems,” claimed Burket.
Silicon Imaging showed off its integrated 3D camera and stereo visualisation system. The SI-3D shoots uncompressed raw imagery from two synchronised cameras and encodes directly to a single stereo CineFormRAW QuickTime file, along with 3D LUT colour and convergence metadata. The stereo file can be instantly played back and edited in 3D on a Final Cut timeline, without the need for proxy conversions.
“Combining two cameras into a single control, processing and recording platform enables shooting and instant playback like a traditional 2D camera with the added tools needed on-set to analyse and adjust the lighting, colour, flip orientation and stereo depth effects,” noted David Taylor, CEO of CineForm. “In post, a unified stereo file plus associated metadata can be immediately graded for dailies, edited, and viewed in either 2D or 3D.”
IRIDAS’s colour management technology (via a Speedgrade interface) is also integrated into the recording software which allows cinematographers to create, save and embed looks right into the output file, so they don’t have to be re-created in post.
A final panel session speculated that the fundamentals of all this emerging technology have a limited life-span.
“Binocular stereo is an artefact of the way we use cameras today to mimic the eyes,” noted Tim Sassoon, creative director, Sassoon Film Design. “Yet there are an infinite number of viewpoints in the real world. In twenty years we’ll be using a completely different model, driven by the type of display, in which the 3D image will move relative to the viewer as they move around the room. Certainly we’ll see images become more computational, with more basic imaging tasks shifted to post.”
Multiview video, in which sequences are recorded simultaneously from multiple cameras to allow the viewer to observe a scene from any viewpoint, has already been incorporated into MPEG’s standardisation activity. “Binocular capture is a thing of the moment,” agreed Postley. “It may not be here in five years’ time. But we need to get to grips with this before advancing further.”