How to shoot stereo 3D

3DTV Primer - Understanding where to place the action within the 3D frame is vital to successful television broadcasts. Quantel Director of Marketing, Steve Owen, is your guide to shooting parallel or converged.

The question of whether to shoot stereo3D ‘parallel’ or ‘converged’ is best left to stereographers; results are what count, not theory. However, at the risk of stirring up a hornet’s nest, let’s look at some of the shooting concepts that have a direct bearing on how stereo3D is handled in broadcast production and post.
In human eyesight, convergence is the ability of our eyes to rotate their optical axes horizontally inward. In a stereo rig, convergence can be simulated by ‘toeing in’ the cameras: aiming both at a depth point in the scene, either in front of, behind, or at the point of interest. The ‘convergence point’ is where the optical axes of the toed-in cameras cross on the Z-axis; it can also be adjusted in post by horizontal image movement.
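The toe-in geometry can be sketched in a few lines of Python. This is a simplified model assuming a symmetric rig (both cameras toed in by the same angle); the function name is ours:

```python
import math

def convergence_distance(interaxial_mm, toe_in_deg):
    """Distance (mm) from the camera baseline to the convergence point,
    i.e. where the two optical axes cross on the Z-axis, assuming both
    cameras are toed in by the same angle."""
    # Each camera sits half the interaxial off the centre line; its axis
    # meets the centre line at a distance set by the toe-in angle.
    return (interaxial_mm / 2.0) / math.tan(math.radians(toe_in_deg))

# A 65 mm interaxial with 1 degree of toe-in per camera converges
# roughly 1.86 m in front of the rig.
```

Note how sensitive the convergence point is to tiny toe-in angles, which is one reason rigs need precise mechanical adjustment.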
The placement of objects of interest in a shot, either on, behind or in front of the screen plane, is a matter of artistic judgment. Placing objects behind the screen (positive parallax) gives a ‘window’ effect, i.e. the viewer looks through a frame at a scene behind it. Placing objects in front of the screen (negative parallax) gives the appearance of the action happening in the viewer’s room.
Zero — or neutral — parallax (sometimes referred to as the point of convergence) is often used for the object of interest in the shot, so establishing zero parallax is an important creative decision in guiding the audience. For example, the director may choose to place text at zero parallax, which locates it on the screen plane.
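The relationship between object distance, convergence distance and parallax sign can be summarised in a small helper (a hypothetical function of our own, purely for illustration):

```python
def parallax_sign(object_dist, convergence_dist):
    """Classify where an object appears relative to the screen plane,
    given its distance and the convergence distance (same units).
    Beyond the convergence point -> behind the screen; nearer -> in
    front of it; at the convergence point -> on the screen plane."""
    if object_dist > convergence_dist:
        return "positive (behind the screen)"
    if object_dist < convergence_dist:
        return "negative (in front of the screen)"
    return "zero (on the screen plane)"
```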
Some stereographers prefer to shoot converged, i.e. with the cameras toed in, so that they can choose where an object sits in depth at the time of shooting. This reduces the need for adjustment either in a truck (for live events) or in post (for recorded material). However, shooting converged can introduce a keystoning effect, as each sensor views the scene at a slight angle. This can be corrected with corner pinning or skew (most DVEs can do this).
Other stereographers shoot parallel, which places all objects in front of the screen plane and leaves the placing of convergence to someone else, either in the truck or in post. Parallel is in one sense ‘safer’ because the depth positions are not already baked in; however, converging in post means repositioning a whole programme’s worth of shots. That’s fine on a drama or anything not on a very short deadline, but very problematic for close-to-air content. Either way, it means work for someone. Shooting parallel with the cameras set at normal eye distance can also be tricky for close-ups.
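Converging in post amounts to a horizontal image translation of the two eyes in opposite directions. A minimal NumPy sketch (our own illustration, not any particular product’s implementation):

```python
import numpy as np

def converge_in_post(left, right, shift_px):
    """Shift the left eye left and the right eye right by shift_px
    columns, moving the whole scene back behind the screen plane --
    the usual correction for parallel-shot material, where everything
    starts in front of the screen. Edge columns revealed by the shift
    are blacked out and would normally be cropped or masked."""
    l = np.roll(left, -shift_px, axis=1)
    r = np.roll(right, shift_px, axis=1)
    if shift_px > 0:
        l[:, -shift_px:] = 0  # right edge of left eye now undefined
        r[:, :shift_px] = 0   # left edge of right eye now undefined
    return l, r
```

Applied naively to every shot in a programme, this is exactly the bulk repositioning work described above.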
When shooting parallel or converged you can also choose to increase or decrease the distance between the centres of the camera lenses (the interaxial, or ‘interocular’, distance). This is normally set at or slightly below the average distance between the centres of an adult’s eyes (approx. 65 mm).
If the interocular distance, relative to the focal length of the camera lenses, is much less than that of adult human eyesight (e.g. less than 50 mm), it can cause gigantism, in which figures appear outsized. If standard cameras are used side by side, the minimum interocular distance is limited by the width of the camera bodies, so a mirror or beam-splitter rig is often used, enabling interoculars down to a few millimetres.
However, special camera set-ups with interoculars of 5 mm or less have been used (hypostereo) to shorten the distance over which stereo effects can be seen and to give the viewpoint of a smaller observer (a nice trick for showing how a child sees the world). Moving the cameras further apart (hyperstereo, beyond a 70 mm interocular) extends the distance over which stereo effects can be seen and gives a ‘giant’s-eye’ view. It is also a nice trick in wildlife and scientific documentaries for adding stereo3D depth to distant scenes.
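A common stereographer’s rule of thumb (an approximation, not an exact model) is that the world appears scaled by the ratio of human eye spacing to the interaxial used:

```python
def apparent_scale(interaxial_mm, human_ipd_mm=65.0):
    """Rough scale factor for how large the world appears relative to
    normal eyesight: a narrow interaxial makes the world look bigger
    (gigantism / hypostereo); a wide one makes it look smaller
    (miniaturisation / hyperstereo). Rule of thumb only."""
    return human_ipd_mm / interaxial_mm

# apparent_scale(5)   -> 13.0  (hypostereo: a child's-eye, outsized world)
# apparent_scale(130) -> 0.5   (hyperstereo: a giant's-eye, half-size world)
```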
Negative parallax is problematic if objects intersect the edge of frame, as contradictory depth cues are sent to the viewer: the parallax cue says the object is in front of the screen, while the frame edge cutting the object off (an occlusion cue) says the object is behind it.
This problem can be reduced in post by a technique known as a ‘floating window’. This involves applying partially transparent masks to the left and/or right-eye signals. The masks have the visual effect of repositioning the stereo window without changing the displayed depth of the scene. A similar issue is caused by objects moving back and forth over the edge of frame, which can also be addressed with the floating-window method.
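A floating window can be sketched as edge masks applied to each eye. This NumPy fragment is a simplified illustration (the mask width and opacity values are illustrative, and real systems feather the mask edges):

```python
import numpy as np

def floating_window(left, right, width_px, opacity=1.0):
    """Darken the outer edge of each eye: the left edge of the left eye
    and the right edge of the right eye. Because the masked edge then
    sits further right in the left eye than in the right eye, the
    window itself takes on negative parallax, visually pulling the
    screen edge toward the viewer without moving the scene in depth."""
    l = left.astype(float).copy()
    r = right.astype(float).copy()
    l[:, :width_px] *= 1.0 - opacity   # mask left edge of left eye
    r[:, -width_px:] *= 1.0 - opacity  # mask right edge of right eye
    return l, r
```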
Objects breaking the frame aren’t necessarily a problem. It happens in IMAX all the time and is also common in conventional stereo films; the audience is encouraged to concentrate away from such objects by well-thought-out shooting.
Who is right? In our own experience, one argument that sometimes gets overlooked is the ‘fourth dimension’: time. Parts of the converged-versus-parallel debate go back to 19th-century stills photography, and other parts come out of the 1950s film boom, when stereo post production was slow, expensive and not always accurate. Now we have tools to prevent, detect or fix excessive background parallax during a shoot, which is a common criticism that parallel advocates level against converged shooting.
The reality is that both techniques can produce great stereo3D when the people involved know what they are doing. For live or close-to-air events there isn’t really any choice anyway, as there is no time to adjust shots in post.