The recent internet blackout caused by a software bug in Fastly’s code led to some of the internet’s biggest websites going offline.
While the likes of Netflix, Hulu and Spotify were hit, more traditional broadcasters escaped unharmed. But, as the broadcast industry moves closer to IP workflows, what kind of impact could such an outage have?
TVBEurope spoke to three experts to get their views on how broadcasters and streaming companies can arm themselves against global internet outages, and how much damage such a thing could cause.
The best way for media companies to arm themselves against a global internet outage is to understand what causes the outage, and therefore what alternatives might survive it, says Bruce Devlin, SMPTE standards vice president. “The most recent outage was caused by an error in the supply capabilities of a single large supplier. Obviously, one solution to this kind of outage might be the simplistic one: don’t allow your business to rely on a single large supplier,” he continues. “That might also equate to a cost surcharge to ensure you have a redundant supply chain that reaches all your customers, which will impact your business. So, in the end, it comes down to a financial and business-risk decision for each company. Those long-term risks might be mitigated by promoting large multi-vendor networks with standards-based interfaces rather than mono-vendor solutions.”
“Each segment of transport, from contribution through distribution and delivery, must consider the use of multiple paths and multiple providers,” agrees Mahmoud J. Al-Daccak, CTO and EVP product development, Haivision. “Techniques such as network path redundancy must be used to provide for service continuity if any single link fails. Path redundancy in combination with using multiple providers at the transport, datacentre, and CDN level is the practical approach to continuity.”
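The path-redundancy idea Al-Daccak describes can be sketched in a few lines. This is a hypothetical, simplified illustration (the names and data structures are invented for this example, not any vendor's API) of the principle behind schemes such as SMPTE ST 2022-7 seamless protection switching: every packet is duplicated onto two independent paths, and the receiver keeps the first-arriving copy of each sequence number, so a loss on either single path costs nothing.

```python
# Illustrative sketch of dual-path packet redundancy: duplicate each
# (sequence number, payload) packet onto two independent paths, then
# merge at the receiver, discarding duplicates.

def send_dual_path(packets, path_a, path_b):
    """Duplicate each (seq, payload) packet onto both paths."""
    for seq, payload in packets:
        path_a.append((seq, payload))
        path_b.append((seq, payload))

def receive_merged(path_a, path_b):
    """Merge both paths, keeping one copy of each sequence number."""
    seen = {}
    for seq, payload in sorted(path_a + path_b):
        seen.setdefault(seq, payload)  # first copy wins, duplicates dropped
    return [seen[seq] for seq in sorted(seen)]

# Simulate independent loss: path A drops packet 2, path B drops packet 0.
packets = [(i, f"frame-{i}") for i in range(4)]
path_a, path_b = [], []
send_dual_path(packets, path_a, path_b)
path_a = [p for p in path_a if p[0] != 2]
path_b = [p for p in path_b if p[0] != 0]

recovered = receive_merged(path_a, path_b)
# All four frames survive, because no packet was lost on *both* paths.
```

The stream only breaks if the same packet is lost on both paths simultaneously, which is why the technique pairs naturally with using physically separate providers.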
So, as the industry moves to IP workflows, how much damage could an internet outage actually cause?
“That’s difficult to answer without knowing the dependencies on the network,” says Devlin. “An internet outage may stop a football match from being live-streamed, but if local-capture is available, then the content still has value for later showing. In the end, it all comes down to what level of redundancy the industry is willing to pay for to preserve immediacy/interactivity.”
“When you look at live events in particular – where many people are relying on feeds delivered via IP, or consumers are watching internet-delivered services – the damage to a user’s quality of experience can be pretty significant,” adds Johan Bolin, chief technology officer, Agile Content. “But such issues weren’t non-existent in a pre-IP world, and they can actually be mitigated faster and more easily if they’re being managed over an IP network than they would have been using more traditional production workflows. Especially with the right tools in place.”
Al-Daccak points out that there have been interruptions to TV services in the past, and viewers learned how to mitigate and survive them. “That said, we must invest ahead of the curve in the reliability and multi-tier redundancy of the internet network and data centres to avoid a catastrophic interruption over wide areas for a prolonged period of time,” he says.
What other redundancies can broadcasters put in place if such an event occurs? “Again, if the internet outage is global, broadcasters must maintain some parallel emergency networks where they can still reach their audience,” states Al-Daccak. “This adds to costs, but until the internet is fully mature, parallel systems must be maintained at least in the mid-term. Beyond that, multi-tier routing redundancy strategies are important – over different network paths for failover when transporting point-to-point live video, and across multiple CDNs to avoid delivery challenges. These approaches must be considered and become an integral part of any system design. Relying on a single vendor or a single outlet must be avoided. A standardised and seamless way to switch between vendors or providers must be established and instituted.”
Bolin cites the need for monitoring that can provide data and insight into a network’s performance and automatically move affected users to available capacity if there are any quality issues, acting before anyone notices anything is wrong. “Automated and real-time switching to another CDN to avoid quality-impacting issues is key, since broadcasters and service providers will not know when and where an issue will occur, and automated processes can spot them faster than if they were to be done manually.”
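The automated switching Bolin describes can be reduced to a simple steering decision. The sketch below is purely illustrative (the threshold, CDN names, and function are assumptions for this example, not any provider's actual product): a steering function watches a per-CDN error rate from monitoring and fails sessions over to the healthiest CDN once the current one degrades.

```python
# Hypothetical multi-CDN steering sketch: stay on the current CDN while
# it is healthy, otherwise fail over to the lowest-error alternative.

ERROR_THRESHOLD = 0.05  # assumed policy: switch if >5% of segment requests fail

def pick_cdn(metrics, current):
    """Return the CDN to use, preferring the current one while healthy."""
    if metrics.get(current, 1.0) <= ERROR_THRESHOLD:
        return current  # no switch: avoid churn while quality is fine
    # Current CDN has degraded: steer to the lowest observed error rate.
    return min(metrics, key=metrics.get)

# Simulated monitoring samples: segment-request error rates per CDN.
metrics = {"cdn-a": 0.21, "cdn-b": 0.01, "cdn-c": 0.04}
chosen = pick_cdn(metrics, current="cdn-a")
# "cdn-a" has degraded past the threshold, so traffic steers to "cdn-b".
```

Running this loop continuously against real-time monitoring is what lets the switch happen before viewers see buffering, which is Bolin's point about automation outpacing manual intervention.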
Finally, how can CDNs work with the media industry to ensure services are not affected?
“There may need to be cooperation between CDNs to mitigate against attacks in the network layer; cooperation between media companies to mitigate against attacks at the media layer; and collaboration between all companies to find working practices that protect against and mitigate the impact of attacks between layers,” states Devlin.
“It’s possible to have a good, multipurpose CDN delivering different kinds of media and information, but for the contribution and delivery of high-resolution video content, it’s really important to understand the specifics of this type of content,” says Bolin. “The monitoring and management tools that we’ve spoken about are designed specifically for companies for whom delivering high-quality video is a key part of their business. This means broadcasters and content providers can better mitigate any potential quality issues for their end customers.
“Internet outages aren’t just a media problem, but an internet problem. So a similar approach should be taken for any CDN deployment, whether it’s for publishers’ news sites like the New York Times or The Guardian, application software updates or the BBC’s iPlayer services.”