Intelligent Monitoring


By Robert Rowe, managing director, Live Television at Snell

As the number of channels and consumer demand for new services continues to grow, there is increased pressure on the cost of operations to maintain high output quality. This has led to the need for greater levels of automation in channel delivery both in playout and in the monitoring of services. Advances in playout automation allow more channels to be run by a single operator. However, this introduces the challenge of one person having to monitor the status of many playlists, devices, and transmission feeds. Conventional monitoring solutions often rely on an operator having to spot an error amongst a vast array of status information, and possibly having to identify a root cause when a single fault triggers multiple alarms.

As all broadcasters and broadcast service providers face the drive to be more efficient and cost-effective, more sophisticated monitoring technologies can provide more informative alarms, resulting in faster fault identification and resolution. The addition of ‘schedule aware’ intelligence within the detection of fault conditions can further enhance the role of automated monitoring technology.

One of the primary ways to help operators is to reduce the amount of information they are presented with. If fault detection systems are sufficiently sophisticated, then it becomes unnecessary to show an operator all status reports. Added sophistication enables the system to highlight when something needs looking at – and only when a potential fault condition is identified. This approach is known as ‘monitoring by exception’. With increasing monitoring intelligence comes the ability to auto-correct faults whilst raising an alarm, so that the fault condition can be investigated without service interruption.

Monitoring by exception
While monitoring by exception involves some initial challenges, it delivers real benefits to the monitoring process. The initial challenge in monitoring by exception is to collect comprehensive information on the content of playlists, equipment, and signals. Control and monitoring systems need to be able to exploit a range of data sources from the media and device status within the overarching automation system, to content analysis tools within signal processing systems.

By monitoring all of this status data, it’s possible to ensure that alarms are consolidated into customised GUI displays with logic applied to the alarms to assist with fault resolution and root cause analysis.

This facilitates a level of intelligence in alarm-based systems. Logic can be applied such that upstream alerts are prioritised over alerts downstream in the signal path, helping the operator identify the root cause of a problem. In the context of monitoring by exception, the combination of logic and flexible GUIs means that an operator can be shown a detailed visual illustration of the problem.
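
The upstream-over-downstream prioritisation described above can be sketched as follows. This is a minimal illustration, not a real monitoring product: the device names and the simple ordered-path model are assumptions made for the example.

```python
# Hypothetical sketch of root-cause prioritisation: when several devices
# along one signal path are in alarm, show only the most upstream one,
# since alarms further down the path are assumed to be consequences.

def consolidate_alarms(signal_path, active_alarms):
    """signal_path: device names ordered upstream -> downstream.
    active_alarms: set of device names currently in alarm.
    Returns the alarms to display, likely root cause first."""
    shown = []
    for device in signal_path:
        if device in active_alarms:
            shown.append(device)
            break  # everything downstream is treated as consequential
    # alarms on devices outside this path are shown unchanged
    shown += sorted(active_alarms - set(signal_path))
    return shown

path = ["playout_server", "branding_mixer", "encoder", "transmitter"]
alarms = {"branding_mixer", "encoder", "transmitter"}
print(consolidate_alarms(path, alarms))  # ['branding_mixer']
```

Instead of three alarms, the operator sees one: the branding mixer, where the fault actually originated.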

Advanced algorithms for detecting visual impairments mean that faults which once required subjective human judgement to recognise can now be detected by an automated system. Consider a ‘still-ish’ picture: a frozen image may oscillate between two fields, or have an animated logo superimposed on it, either of which provides enough movement to deceive a simple ‘still’ detection algorithm. Algorithms designed to enable monitoring by exception can discern these types of failure.
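
One way such a detector might work is sketched below, under stated assumptions: frames are modelled as flat lists of luma values, frames are compared two apart (so field oscillation cancels out), and a known logo region is masked. The thresholds and the frame model are invented for illustration only.

```python
# Illustrative "still-ish" detector. A naive freeze check compares
# consecutive frames, so field oscillation or an animated logo defeats
# it. Comparing frames at stride 2 (same field phase) and ignoring a
# masked logo region catches the frozen picture anyway.

def is_frozen(frames, logo_pixels=frozenset(), threshold=0.01):
    """frames: sequence of equal-length luma lists.
    Returns True if the picture is effectively static once field
    oscillation and the masked logo region are discounted."""
    diffs = 0
    total = 0
    for prev, cur in zip(frames, frames[2:]):  # stride 2: same field phase
        for i, (a, b) in enumerate(zip(prev, cur)):
            if i in logo_pixels:
                continue  # animated logo: excluded from the motion measure
            total += 1
            if abs(a - b) > 4:  # small tolerance for noise
                diffs += 1
    return total > 0 and diffs / total < threshold

# Frozen picture with an animated "logo" at pixel 0:
clip = [[10, 50, 50, 50], [20, 50, 50, 50],
        [30, 50, 50, 50], [40, 50, 50, 50]]
print(is_frozen(clip, logo_pixels={0}))  # True: frozen despite logo motion
print(is_frozen(clip))                   # False: logo motion fools the check
```

The same stride-2 comparison also ignores a two-field oscillation, since every frame is compared against one with the same field phase.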

A range of error conditions, from missing media and scheduling errors to peripheral device failures, can be detected. A monitoring by exception system can be designed to give operators a view of all playlists within an alarm screen, providing additional context for an error condition.

Along with the ability to detect faults and apply logic to the error condition, there is also the possibility of a system enacting or suggesting corrective action. This is most commonly implemented in a main/guard failover situation where the primary feed is backed up with a secondary feed using intelligent monitoring switches and systems. Increasingly fine granularity is becoming possible, with sub-components of the media, such as audio and metadata, protected and switched individually.

A layer of intelligence built into signal protection systems makes it possible to detect a failure on a primary feed and at the same time assess if the backup feed is valid. At this point, it can be configured to automatically switch to the backup or, in the context of monitoring by exception, alert an operator to a specific problem so that they can manually make the switch or investigate the cause of an automatic switch.
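
The failover decision described above can be sketched in a few lines. This is a hedged illustration, not a real changeover controller: the feed states and alarm strings are assumptions for the example.

```python
# Minimal sketch of intelligent main/guard switching: on a primary-feed
# fault, switch automatically only if the backup is itself valid, and in
# every case raise an alarm so the cause can be investigated without
# interrupting the service.

def evaluate_failover(primary_ok, backup_ok, on_air="primary"):
    """Returns (feed now on air, list of alarms raised)."""
    alarms = []
    if on_air == "primary" and not primary_ok:
        if backup_ok:
            on_air = "backup"
            alarms.append("AUTO-SWITCH: primary failed, now on backup")
        else:
            alarms.append("CRITICAL: primary failed and backup invalid")
    return on_air, alarms

print(evaluate_failover(primary_ok=False, backup_ok=True))
# ('backup', ['AUTO-SWITCH: primary failed, now on backup'])
```

Note that checking `backup_ok` before switching is the key point: blindly switching to an invalid backup would replace one fault with another.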

Signal analysis parameters defining an error can be set based on the type of content being broadcast. For example, for live sports, continuous audio and video would be expected, whereas for TV drama programming, periods of stationary video or silent audio may be entirely normal.
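
In practice this amounts to per-genre thresholds. The sketch below shows the idea; the genre names and duration values are invented for illustration and would come from the broadcaster's own configuration.

```python
# Content-dependent error thresholds: the silence and freeze durations
# that count as faults differ by programme genre.

ERROR_PROFILES = {
    # genre: (max tolerated silence in s, max tolerated frozen video in s)
    "live_sport": (2.0, 1.0),    # continuous sound and motion expected
    "drama":      (30.0, 15.0),  # pauses and static shots are normal
}

def is_error(genre, silence_s, frozen_s):
    """True if the observed silence/freeze durations exceed the
    tolerances configured for this genre."""
    max_silence, max_frozen = ERROR_PROFILES[genre]
    return silence_s > max_silence or frozen_s > max_frozen

print(is_error("live_sport", silence_s=5, frozen_s=0))  # True
print(is_error("drama", silence_s=5, frozen_s=0))       # False
```

The same five seconds of silence is a fault during a match but unremarkable in a drama.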

Schedule aware monitoring
No matter how sophisticated the system and signal analysis, or the associated business logic to define an error, there are always likely to be cases where false error conditions are reported because the context of the broadcast content is not known by the monitoring system. This can be addressed by linking key schedule information with the status from equipment and signals to bring new levels of capability to a monitoring system; we call this ‘schedule aware monitoring’.

Time-bound metadata from a traffic system, when translated into secondary events, can be used to set the parameters for monitoring. For example, if a piece of media contains a period of intentional silence, secondary events can be scheduled so that the alarm system does not flag an error during that part of the broadcast. However, such detailed time-bound metadata is rarely available from a traffic/scheduling system; ideally, it would be generated automatically in the upstream media preparation workflow.
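
The suppression mechanism can be sketched as a list of time windows keyed by detector. The event format below is an assumption made for illustration; a real system would derive these windows from the traffic system's secondary events.

```python
# Sketch of schedule-aware alarm suppression: secondary events declare
# windows (e.g. scripted silence) during which a given detector must
# not raise an alarm.

suppression_events = [
    # (detector, start_s, end_s) relative to the start of the media item
    ("audio_silence", 120.0, 135.0),  # scripted 15 s silence in the media
]

def should_alarm(detector, t, events=suppression_events):
    """True unless the detector's condition is expected at time t."""
    for name, start, end in events:
        if name == detector and start <= t <= end:
            return False  # condition is scheduled, not a fault
    return True

print(should_alarm("audio_silence", 125.0))  # False: silence is scheduled
print(should_alarm("audio_silence", 300.0))  # True: unexpected silence
```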

A new dawn in monitoring systems
New levels of content aware monitoring are now being enabled, using a combination of algorithmic techniques to create and read signatures for video and audio streams. These signatures can be used to identify a media stream, and associated content such as branding.
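
The principle of signature matching can be illustrated very simply: reduce each frame to a coarse value, treat the sequence as the signature, and compare the off-air feed against the reference within a tolerance. Real fingerprinting systems are far more robust than this toy; it is only meant to show the idea.

```python
# Toy signature-based stream identification: mean luma per frame forms
# the signature, and two signatures match if they agree within a
# tolerance that absorbs encoding noise.

def signature(frames):
    """frames: sequence of luma-value lists; returns one value per frame."""
    return [sum(f) / len(f) for f in frames]

def matches(ref_sig, live_sig, tol=2.0):
    return len(ref_sig) == len(live_sig) and all(
        abs(a - b) <= tol for a, b in zip(ref_sig, live_sig))

ref = signature([[10, 20], [30, 40], [50, 60]])
live = signature([[11, 21], [29, 41], [50, 59]])  # same content, slight noise
print(matches(ref, live))  # True
```

A mismatch against the reference signature then indicates that the wrong content, or no content, is on air.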

Various manufacturers have historically deployed systems that make use of fingerprinting technology; however, slow reaction times and limitations on the types of suitable content have produced less than ideal results. Snell has been working to resolve these issues and is now launching systems that promise to deliver on the reality of monitoring by exception, and to make facility-wide media monitoring an achievable primary goal rather than a secondary afterthought. True ‘Intelligent Monitoring’ is just around the corner.
