Iridix optimiser simulates human retina

A live video version of the Iridix image optimiser found in professional stills cameras made its European début at the recent Broadcast LIVE & VideoForum show in London. First unveiled at Japan’s InterBEE last November, the ‘retina-morphic’ D-Rex LCP-100 uses an algorithm based on research into human visual perception orchestrated by Leicester University eight years ago, writes Richard Dean.

The self-contained video processing unit is claimed to simulate the human retina’s capability to variably adapt dynamic range across a scene to optimise highlights and shadows, without the usual compromise of one at the expense of the other.

The system will launch first in Japan, where pre-sales units are already being used for sports OB by Japanese broadcaster and co-developer Fuji TV. Manufactured by StoreNet of Japan, the unit uses two powerful FPGAs (Field Programmable Gate Arrays) programmed by UK technology specialist Apical. “Processing live video has been made possible by dramatic improvements in the power of FPGAs and the higher dynamic range of modern video image sensors,” noted Michael Tusch, MD of the London-based company.

The Iridix algorithm at the heart of the technology works by first analysing the image region around every pixel in the input frame, eschewing the traditional image-processing practice of dividing the image into squares. The colour and brightness statistics are then used to construct an optimal tone curve for each pixel – while preserving the contrast of edges and other textures – with curve shapes governed by overall parameters set by simple user controls.
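Apical has not published the Iridix algorithm itself, but the description above matches a well-known local tone-mapping pattern: split the frame into a smooth base layer and a detail layer, derive a per-pixel tone curve from the local brightness statistic, and add the detail back unchanged so edge contrast survives. The Python sketch below illustrates only that general pattern; every function and parameter name is invented for illustration and is not Apical’s.

    # Illustrative sketch only: the real Iridix algorithm is proprietary,
    # and every name and parameter below is an assumption.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_tone_map(luma, sigma=25.0, strength=0.6):
        """Per-pixel tone mapping driven by local brightness.

        luma:     2-D luminance array normalised to [0, 1]
        sigma:    neighbourhood scale in pixels (assumed parameter)
        strength: 0.0 leaves the image unchanged; higher values adapt more
        """
        # Base/detail split: the base layer carries neighbourhood
        # brightness, the detail layer carries edges and texture.
        base = gaussian_filter(luma, sigma=sigma)
        detail = luma - base

        # One tone curve per pixel: a gamma driven by the local statistic.
        # Dark neighbourhoods (base near 0) get gamma < 1, lifting shadows;
        # bright ones (base near 1) get gamma > 1, compressing highlights.
        gamma = (0.5 + base) ** strength

        # Tone-map the smooth base only, then restore the detail layer so
        # the contrast of edges and textures is preserved.
        return np.clip(base ** gamma + detail, 0.0, 1.0)

On real 10-bit video this would run frame by frame on the luma channel, with chroma rescaled to match; the unit’s simple user controls would correspond to knobs like strength here.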

The resulting image is claimed to closely resemble the way the human eye would see the scene under ideal viewing conditions, a result previously unachievable in a realtime video system. Processing is completed on-the-fly within six video lines, with a small built-in delay keeping audio in sync.
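The six-line figure suggests a pipeline that works on a narrow rolling window of the picture rather than buffering whole frames, which is what keeps latency to a fraction of a frame. The hardware details are not public, but as a hedged sketch of the streaming idea only (names again invented), each output line can be emitted as soon as its vertical context has arrived:

    # Sketch of line-based streaming, not the actual D-Rex pipeline.
    from collections import deque
    import numpy as np

    WINDOW = 6  # vertical context in lines, per the latency figure above

    def stream_process(lines_in, process_window):
        """lines_in: iterable of 1-D numpy arrays, one video line each.
        process_window: function mapping a (WINDOW, width) array to one
        filtered output line. Output lags input by roughly WINDOW lines;
        border handling at the top and bottom of the frame is omitted.
        """
        buf = deque(maxlen=WINDOW)
        for line in lines_in:
            buf.append(line)
            if len(buf) == WINDOW:
                yield process_window(np.stack(buf))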

“The best results are achieved with highest-quality 10-bit input material, but even sources effectively below 8 bits are suitable,” said Tusch. “For lower-quality sources, the unit’s temporal and spatial noise reduction capabilities are more important. Very highly compressed sources – much more compressed than DV – may present limitations, but even consumer-quality MPEG video sources can be processed,” he added. Both input and output are via industry-standard SDI connections.

“The controls are designed to be intuitive, so it is possible to operate the device fully by following the instruction manual,” claimed Tusch. “We do provide on-site training, but it’s not so complicated that an operator needs to attend a multi-day course.”

In the future, Apical hopes to incorporate the technology directly into a video camera, and is also working on a closed-loop encode/decode variant in which a downstream decoder is controlled by metadata, possibly for DI post and d-cinema distribution markets.

In the meantime, European supplies of the standalone unit are expected to ship by Q2 this year at less than £10,000 apiece in the UK, followed by a rapid rollout across Europe.