
Expert timing: Meeting one of the key challenges in the move to IP for live broadcasting

SipRadius founder and chief scientist Sergio Ammirata explains why timing isn't necessarily everything in the move to remote production

As soon as we started using more than one camera to shoot live television, timing became critical. The downstream system expected a new picture precisely every 50th of a second, so all the sources had to be synchronous to ensure smooth switching between them.

As analogue television architectures developed, so did the idea of a station clock creating a master time reference, which was then distributed using sync pulse generators to every device in the system. Analogue became digital, and the nature of the signal changed, but not the fundamental structure of a master time source distributed everywhere.

Then came the move to IP-connected infrastructures, which drove transformational shifts in the way that media is created and distributed. One of the most significant changes was in the matter of timing.

In an analogue or SDI infrastructure, a signal goes from one device to the next in the chain over a dedicated cable with no processing, and so introduces no latency. In an IP environment, signals travel multiplexed on network cables via switches, which are fundamentally non-deterministic. Timing information has to be carried over the same network – it is clearly impractical to move all signals to IP but retain an SDI network just for the SPG – so the timing control data has to allow for the nature of the transmission.

Out in the wider IT world there was also a need for precise timing information. So the industry set to work on the challenge, and in 2002 published the first version of the IEEE 1588 standard, which introduced PTP, the Precision Time Protocol.

There is no space here to go into the detail of how PTP achieves its accuracy, but for our purposes the structure is reassuringly familiar: a grandmaster clock generates the reference, with as many “ordinary” clocks – receivers – as necessary.
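To give a flavour of the mechanism, here is a simplified sketch (not the full IEEE 1588 message set) of the core calculation: the ordinary clock exchanges timestamped messages with the grandmaster and uses the four resulting timestamps to estimate both the network delay and its own offset, assuming the path is symmetric. The timestamp values below are invented for illustration.

```python
# Simplified PTP-style offset calculation (illustrative sketch only).
# t1: grandmaster sends Sync; t2: ordinary clock receives it.
# t3: ordinary clock sends Delay_Req; t4: grandmaster receives it.
# All values in seconds; assumes a symmetric network path.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (offset, one_way_delay) of the ordinary clock
    relative to the grandmaster."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    one_way_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, one_way_delay

# Invented example: ordinary clock runs 5 us fast, path delay 40 us each way.
t1 = 100.000000
t2 = t1 + 40e-6 + 5e-6    # path delay plus clock offset
t3 = 100.001000           # stamped by the (fast) ordinary clock
t4 = t3 - 5e-6 + 40e-6    # remove offset, add path delay

offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset * 1e6, delay * 1e6)  # recovered offset and delay, in microseconds
```

The key assumption – that the delay is the same in both directions – is exactly what breaks down on uncontrolled networks, which is one reason PTP struggles outside well-managed infrastructure.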

All is fine in broadcast and production centres, where PTP – adopted by the media industry as SMPTE ST 2059 – acts just the way the old master clock system used to work.

However, as the industry moves away from monolithic broadcast centres, remote production has become increasingly common, especially for high-profile events like major sporting occasions. Engineers have found that distributing PTP over uncontrolled networks, such as those used for feeds from multiple venues, is challenging. Setting up PTP in these environments often demands significant engineering time and resources on-site, with the added risk of introducing considerable latency into the production environment.

While PTP’s origins in the IT industry have made it ambitious in scope — offering sub-microsecond accuracy — this level of precision requires a well-managed network, something that is not always feasible in media production, especially when using public internet infrastructure. The media industry, aiming to make use of widely available and affordable network solutions, has found that such precision can sometimes be excessive for the real-world needs of live broadcasting.

For a 50p production, where a new picture is produced every 20 milliseconds, microsecond-level accuracy is sufficient. We do not need picosecond precision. Timecode sources that offer microsecond accuracy are widely available. GPS transmits them, and cellular networks receive them. These freely accessible time references are often “good enough” for live production.
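The arithmetic is worth making explicit. In this back-of-envelope check (the error figures are illustrative assumptions, not guarantees), a one-microsecond timing error is a vanishingly small fraction of a 50p frame period:

```python
# Back-of-envelope: how much timing error does a 50p frame tolerate?
frame_rate = 50.0                   # frames per second (50p)
frame_period_us = 1e6 / frame_rate  # microseconds per frame

# Illustrative accuracy figures (assumptions for comparison):
gps_error_us = 1.0      # GPS-derived time, roughly microsecond level
ptp_error_us = 0.001    # well-managed PTP, sub-microsecond territory

print(frame_period_us)                 # 20,000 us per frame
print(frame_period_us / gps_error_us)  # margin with GPS-level timing
print(frame_period_us / ptp_error_us)  # margin with lab-grade PTP
```

Even at the looser GPS-level figure, the timing error is four orders of magnitude smaller than the frame period – comfortably within what clean switching between sources requires.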

However, setting up a GPS antenna in a remote location can present a real-world challenge of its own. To achieve nanosecond accuracy, a typical PTP grandmaster must link with at least 20 satellites and have an atomic clock source to hold the time to that accuracy for the many hours or days when it cannot see the satellites, as is often the case with a remote production.

The idea of moving away from a single, centralised source of reference timecode to localised PTP grand masters may seem counterintuitive. To the traditional engineer, it feels wrong to rely on a remote and unrelated source of timing information rather than something under the broadcaster’s direct control.

However, by adopting timecode sources that are accurate enough for most media production needs, broadcasters can reduce complexity and focus on delivering reliable, synchronised production across diverse, decentralised environments. This approach not only simplifies the transition to IP-based systems but also offers a practical, cost-effective way forward in a rapidly evolving industry.

Ultimately, the goal is to strike a balance between precision and practicality. While PTP provides sub-microsecond accuracy, live production environments often do not require this level of exactness. Practical, localised solutions like those incorporating GPS and cellular timecode can maintain high-quality production without overengineering the timing infrastructure, easing one of the key challenges in the move to IP for live broadcasting.