Notes for preparing to shoot 3D
20 June 2011
Dave Blackham, 3D consultant and managing director of Esprit Film and Television (pictured), offers some advice for producers embarking on a new stereo documentary.
· Considerations for multiple screen sizes: always shoot with an interaxial (IA) appropriate for the larger screen. A shot with very small disparity that works on the large screen may not work on the smaller screen, as there won't be much depth. You may choose to re-shoot an alternative version with an IA suitable for the smaller screen too. Do it at the time and it won't take long to achieve.
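A quick sketch of why one IA rarely suits both screens: the same disparity, expressed as a fraction of image width, translates into very different physical parallax on different screen sizes. The screen widths below are assumptions for illustration only.

```python
# Assumed figures for illustration: the same fractional disparity yields
# very different physical separations on a cinema screen and on a TV.

def physical_parallax_cm(disparity_fraction: float, screen_width_cm: float) -> float:
    """Physical separation between left- and right-eye points on screen."""
    return disparity_fraction * screen_width_cm

CINEMA_WIDTH_CM = 1000.0  # assumed 10 m-wide cinema screen
TV_WIDTH_CM = 110.0       # assumed ~50in 16:9 TV (roughly 110 cm wide)

for name, width in [("cinema", CINEMA_WIDTH_CM), ("TV", TV_WIDTH_CM)]:
    p = physical_parallax_cm(0.01, width)  # a modest 1% disparity
    print(f"1% disparity on the {name} screen = {p:.1f} cm of parallax")
```

On the assumed sizes, the cinema screen shows roughly nine times the physical parallax of the TV for the same shot, which is why a small-disparity shot that reads well in the cinema can look nearly flat at home.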
· When gripping for 3D, it's essential that movements are smooth and fluid, with no jerky motion that may cause the viewer to fail to fuse the 3D image. Consider jibs, cranes and sliders first, and Steadicam and shoulder mounts thereafter; some operators are very good off the shoulder, some less so. Use movement as a tool to extend the shot and move into an object of interest rather than cutting to it with an edit – every edit forces the viewer to re-fuse the 3D image.
· Always prepare as best you can for filming, with a good idea of the storyboard in mind so you can create a depth script chart. That's not always possible in some genres, however. Good documentary 3D cameramen shooting unscripted sequences are those who can assemble edited sequences in their head on the fly, working out suitable edit points to preserve depth continuity in 3D. It's important that shot depth isn't changed unintentionally across cuts – or, if it is changed, that it's changed to an appropriate IA and convergence setting so the edit works to help tell the story, not against it.
· Know your depth budget. Let's say it's 3%. If your on-set monitor doesn't have disparity grids, mark the monitor with gaffer tape: two lines spaced 3% of the screen's horizontal width apart. It's a rough guide but very useful.
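The tape-mark spacing is simple arithmetic. A minimal sketch, assuming a ~19in on-set monitor (the width figure is an assumption, not from the text):

```python
# Rough guide from the tip above: two gaffer-tape lines spaced 3% of the
# monitor's horizontal width apart stand in for a disparity grid.
# The monitor width is an assumed example value.

def tape_mark_spacing_cm(depth_budget_fraction: float, monitor_width_cm: float) -> float:
    """Distance between the two gaffer-tape lines for a given depth budget."""
    return depth_budget_fraction * monitor_width_cm

MONITOR_WIDTH_CM = 42.0  # assumed width of a ~19in 16:9 on-set monitor
spacing = tape_mark_spacing_cm(0.03, MONITOR_WIDTH_CM)
print(f"Tape marks {spacing:.2f} cm apart")
# Any on-screen disparity wider than the gap between the marks is over budget.
```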
· 3D tends to work best shot on lenses wider than 30mm on a 35mm camera. Underwater, objects appear about a third closer when using a flat port, which is needed because dome ports can't be used with 3D camera systems. Ultra-wide lenses are unworkable in a beam-splitter, as the rig would have to be huge due to the size of the mirror box. Some underwater housings use side-by-side rigs with wide lenses, but at the expense of a large IA, so objects close to camera can't be filmed as the disparity would be too great. This leads some operators to converge the cameras on the object of interest, which can result in huge background disparity. One case where the technique may work: if the object of interest is on the seabed and you are looking down at it, there's effectively no background to diverge. This also applies to normal surface filming.
· When working underwater, 3D monitoring is difficult to achieve. Consider using a 2D display showing a mix of the L and R images. Better still, set the display to anaglyph and view it in 2D: this provides an immediate way of checking how much disparity there is. You can also insert a small cut gel, red and cyan, into your diver's mask at the bottom of the lens so it covers only about 1/4 of the eyepiece. This gives a rough but effective means of viewing the image in 3D, just by looking at the display through the lower 1/4 of your mask. For safety, and for viewing in 2D, just look normally through the clear top of the eyepiece.
· Most aerials for a 3D show are shot in 2D. If you think about it, there's virtually no disparity presented to each eye in natural vision beyond about 130 feet, so no 3D perception.
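To see why distant scenery carries so little depth, you can compute the convergence angle the eyes subtend at a point. This is a back-of-envelope sketch assuming an average human eye separation of 6.5cm (an assumed figure, not from the text):

```python
import math

# Assumed average human interocular separation; the angle between the two
# eyes' lines of sight shrinks rapidly with distance, so distant scenery
# presents almost no binocular disparity -- hence aerials in 2D.
INTEROCULAR_M = 0.065

def convergence_angle_deg(distance_m: float) -> float:
    """Angle subtended between the two eyes' lines of sight to a point."""
    return math.degrees(2 * math.atan((INTEROCULAR_M / 2) / distance_m))

for d in (1, 10, 40):  # 40 m is roughly the ~130 ft quoted above
    print(f"{d:>3} m: {convergence_angle_deg(d):.3f} deg")
```

On these assumptions the angle at 40m is under a tenth of a degree, a small fraction of what it is at arm's length.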
· There's a current move towards larger-sensor cameras, particularly for features work, but the larger sensor can work against us in 3D. You need an extended depth of field to create shot volume, and there's about 2.5 stops' difference between a 2/3in sensor and a S35mm sensor for equivalent depth of field at a given distance to the subject. So for 3D, a 2/3in camera may be more appropriate for the medium – or even cameras with smaller sensors, if they are of high enough quality for the production. This also raises the question of light: light is lost through the beam-splitter mirror, and the smaller the aperture required, the more light is needed.
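The stops figure can be sanity-checked from the sensor geometry: matching depth of field on the larger sensor means stopping down by the crop factor, which costs crop² in light. The sensor widths below are assumed round numbers, and the exact result depends on which dimensions you take; this is a sketch of the reasoning, not a definitive figure.

```python
import math

# Assumed sensor widths (16:9 active area): ~9.6 mm for 2/3in, ~24.9 mm
# for Super 35. Equivalent depth of field requires stopping the larger
# sensor down by the crop factor, costing crop**2 in light,
# i.e. 2 * log2(crop) stops.

SENSOR_23_MM = 9.6    # assumed 2/3in sensor width
SENSOR_S35_MM = 24.9  # assumed S35 sensor width

crop = SENSOR_S35_MM / SENSOR_23_MM
stops = 2 * math.log2(crop)
print(f"crop factor {crop:.2f} -> roughly {stops:.1f} stops more light on S35")
```

With these assumed widths the result comes out near the "about 2.5 stops" quoted above.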
· If you are using a camera such as the Si2K shooting for a 16:9 (1.78:1) 1920 x 1080 deliverable, the camera can shoot at 2048 horizontal pixels, leaving an additional 6% in post to adjust convergence without losing resolution on the deliverable if shooting parallel. This technique is also applicable to emerging 4K and 5K cameras.
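The margin above is straightforward arithmetic on the two frame widths:

```python
# The reframing margin described above: capturing 2048 px wide for a
# 1920 px deliverable leaves spare width for convergence adjustment in post.

CAPTURE_WIDTH = 2048
DELIVERABLE_WIDTH = 1920

spare_px = CAPTURE_WIDTH - DELIVERABLE_WIDTH
margin = spare_px / CAPTURE_WIDTH
print(f"{spare_px} spare pixels = {margin:.1%} of capture width")  # ~6%
```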
· The key question when converting 2D into 3D is whether the shot is irreplaceable and can't be reshot – the first man on the moon, perhaps, or the Royal Wedding. If so, it's clearly worth a go, but it may succeed to varying degrees and can be very expensive. If it's a repeatable event, it may be cheaper to reshoot, which will also yield a much better result.