Towards motion control with eMotion
5 May 2010
Serial post production industry entrepreneur MC Patel explains the key technology behind his new signal processing venture eMotion. Interview by Adrian Pennington.
The ability to estimate motion is vital for the high quality manipulation of digital video media. A new software tool is claimed to create far more accurate motion information than conventional methods. The company behind the breakthrough, eMotion Engines, is also notable because its co-founder is a recognised serial entrepreneur in the post industry.
MC Patel was a driving force behind Abekas in the 1980s, going on to establish Alpha Image (which he later sold to Dynatech). Patel moved to Discreet as director of hardware technology before leaving in 1999 to set up HD workstation developer Post Impressions. Since selling PI to Snell & Wilcox in 2001 he has provided business development consultancy to high tech companies in the industry including a stint as VP sales at Digital Vision.
The core technology behind eMotion is the brainchild of Dr Anil Kokaram, an image processing expert and professor at Dublin’s Trinity College, who won a Scientific and Engineering Academy Award in 2006 for his part in the design and development of The Foundry’s visual effects suite Furnace. MC Patel met Kokaram in 2004 and began to think of other ways Kokaram’s algorithms could be applied.
“My first thought was to use the technology to create new frames for super slow motion capture from regular cameras and therefore speed up the turnaround times,” says Patel. “If you can create new frames from 24 to 25fps or 25 to 30fps you can also create a standards converter.”
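The frame-creation idea can be illustrated with a naive retimer that blends the two nearest source frames at each output timestamp. This is only a toy sketch (eMotion's motion-compensated interpolation synthesises genuinely new frames along motion vectors), but the resampling arithmetic from 24 to 25fps is the same; frames are assumed here to be NumPy greyscale arrays.

```python
import numpy as np

def retime(frames, src_fps, dst_fps):
    """Resample a clip to a new frame rate by linearly blending the two
    adjacent source frames at each output timestamp. A naive stand-in
    for motion-compensated frame interpolation."""
    duration = len(frames) / src_fps           # clip length in seconds
    n_out = int(round(duration * dst_fps))     # output frame count
    out = []
    for i in range(n_out):
        t = i * src_fps / dst_fps              # position in source frames
        lo = min(int(t), len(frames) - 1)
        hi = min(lo + 1, len(frames) - 1)
        w = t - lo                             # blend weight toward `hi`
        out.append((1 - w) * frames[lo] + w * frames[hi])
    return out

# one second of 24fps material becomes 25 output frames
clip = [np.full((4, 4), float(i)) for i in range(24)]
print(len(retime(clip, 24, 25)))  # 25
```

The same resampling, run in the other direction or at non-integer ratios, is what makes a retimer double as a standards converter.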
The duo established eMotion two years later with a three-point brief. “We wanted to go for highest possible picture quality and make the product very easy to use – which meant creating a file-based platform, not hardware,” says Patel.
The original slo-mo device didn’t gain commercial traction but it did prove a trigger for the pair to focus on applying the technology elsewhere. “If you can identify areas of an image that have moved between frames you could make a noise reducer,” he says.
According to Patel, the standard process of Block Matching (BM), used for motion estimation in MPEG-2 compression, is prone to errors and cannot correctly estimate motion where the image material has no texture. “The underlying idea in Block Matching is to directly compare patches of picture in two frames until the lowest picture difference for a patch is found,” he explains.
“The resulting displacement is then assigned as the motion vector. This explains why, in flat areas of the picture, the presence of noise can generate spurious matches and hence poor quality motion vectors. In addition, in regions with uni-directional features such as a vertical edge, BM generates motion estimates that are correct in one direction but not the other. This is a well-known problem called the aperture effect.”
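Patel's description of Block Matching can be sketched directly: exhaustively compare a patch against displaced patches in the previous frame and keep the displacement with the lowest difference. In a flat or edge-like region, many displacements score almost identically, which is exactly where the spurious matches and the aperture effect he describes arise. A minimal illustration, not production code:

```python
import numpy as np

def block_match(prev, curr, y, x, block=8, search=4):
    """Exhaustive block matching: slide an 8x8 patch from `curr` over a
    +/-4 pixel window in `prev` and return the displacement with the
    lowest sum of absolute differences (SAD). In textureless regions
    many displacements tie at near-identical SAD, so the 'winner' is
    effectively arbitrary -- the failure mode described above."""
    patch = curr[y:y+block, x:x+block]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + block > prev.shape[0] or px + block > prev.shape[1]:
                continue
            sad = np.abs(prev[py:py+block, px:px+block] - patch).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best  # motion vector (dy, dx)

# a bright square shifted 2 pixels right between frames
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
curr = np.zeros((32, 32)); curr[8:16, 10:18] = 1.0
print(block_match(prev, curr, 8, 10))  # (0, -2): the patch came from 2 pixels left
```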
eMotion’s technology first applies a global motion estimator to measure the apparent camera motion in the scene, deciding which areas of the picture are undergoing global motion and which are not. Second, a pixel-resolution motion algorithm directly quantifies the motion in the remaining areas. “Our technology is able to combine motion information across a wide area of the picture,” he says. “This allows it to solve the aperture effect and also be more robust to noise levels in the image.”
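One well-known way to estimate global (camera) motion, offered purely as an illustration and not as eMotion's actual method, is phase correlation: for a pure translation, the inverse FFT of the normalised cross-power spectrum of two frames peaks at the dominant shift.

```python
import numpy as np

def global_translation(prev, curr):
    """Estimate a single global translation by phase correlation:
    the peak of the inverse-FFT of the normalised cross-power
    spectrum sits at the dominant shift between the two frames."""
    Fp, Fc = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = Fc * np.conj(Fp)
    cross /= np.abs(cross) + 1e-9          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame to negative values
    if dy > prev.shape[0] // 2: dy -= prev.shape[0]
    if dx > prev.shape[1] // 2: dx -= prev.shape[1]
    return int(dy), int(dx)

prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
curr = np.roll(prev, 3, axis=1)            # camera pan of 3 pixels
print(global_translation(prev, curr))      # (0, 3)
# areas whose residual stays high after compensating for this shift
# are candidates for the separate pixel-resolution local estimator
```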
eMotion’s algorithm is able to do this because it exploits a probabilistic view of image sequence processing, he says. The process is not realtime — although multiple engines can be ganged together to speed work in the background. “It avoids hard decision making but instead combines soft decisions at several pixels to yield a more intuitive result. In addition, it takes a fully volumetric view of motion estimation and directly incorporates spatial and temporal motion smoothness. This means that it can estimate both occlusion and uncovering. This is crucial for handling picture behaviour at the edges of moving regions.”
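The “soft decision” idea can be shown in miniature: rather than committing to the single best-matching vector, weight every candidate by its match quality and average. This is only a toy analogue of the probabilistic machinery Patel describes, with a made-up temperature parameter, but it conveys why soft combination is more stable than a hard argmin when candidates are nearly tied.

```python
import numpy as np

def soft_vector(candidates, errors, temperature=1.0):
    """Instead of picking the single lowest-error candidate (a hard
    decision), weight every candidate motion vector by how well it
    matches and take a weighted average -- a 'soft' choice that does
    not snap arbitrarily between near-tied candidates."""
    errors = np.asarray(errors, dtype=float)
    w = np.exp(-(errors - errors.min()) / temperature)
    w /= w.sum()
    return np.average(np.asarray(candidates, dtype=float), axis=0, weights=w)

# two near-tied candidates, e.g. along a vertical edge: the soft
# estimate lands between them instead of snapping to either one
print(soft_vector([(0, -2), (0, -1)], [1.0, 1.1]))
```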
eMotion’s current suite features Pure, which provides noise reduction, image enhancement and deflicker; DigiCrank, a retiming application allowing variable speed output; and Transformer, a standards converter for any format and in any direction. The tools could be applied to reduce flicker in images produced by high resolution electronic cameras like SI-2K and Red. “Because they capture at such high frame rates the lighting is not synchronised to the shutter and can often cause inadvertent underexposure and strobing effects,” says Patel. “So we could offer this service as a background process for footage which may have noise and flicker.”
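Global brightness flicker of the kind Patel describes can be sketched as follows: scale each frame so that its mean luminance tracks a moving average over neighbouring frames. A real deflicker tool such as Pure would work on motion-compensated, spatially local statistics; this hypothetical sketch shows only the core idea.

```python
import numpy as np

def deflicker(frames, radius=2):
    """Suppress frame-to-frame brightness flicker by scaling each frame
    so its mean luminance follows a moving average over neighbouring
    frames. Purely global; a practical deflicker would use
    motion-compensated, spatially local statistics instead."""
    means = np.array([f.mean() for f in frames])
    out = []
    for i, f in enumerate(frames):
        lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
        target = means[lo:hi].mean()           # smoothed brightness
        out.append(f * (target / means[i]))    # rescale toward it
    return out

# alternating bright/dark frames come out with much steadier means
frames = [np.full((2, 2), v) for v in [1.0, 2.0, 1.0, 2.0, 1.0]]
print([round(f.mean(), 2) for f in deflicker(frames)])
```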
The software has been incorporated into Quantel’s Pablo, iQ and eQ products and used at the coincidentally named E-Motion facility in Genoa to degrain and retime several 4K Red One shots for a Bollywood feature. Can Communicate employed Pure, on Pablo, to restore 3D archive footage for Channel 4 documentary The Queen in 3D. Patel says he is talking with the BFI, owner of multiple 3D film reels from the 1950s, about restoring a further 15-20 short films.
In addition, Kokaram is researching ways to apply the motion estimation technology to 2D-to-3D conversion. “When you perform 2D to 3D conversion, the simplistic method is to automate any changes you make in the left eye to the right. But when objects move between frames, perhaps revealing objects behind the moving object, the process of matching geometry becomes more challenging,” adds Patel.
Patel is also CEO of European operations for Venera Technologies, an Indian-headquartered file-based test and measurement specialist. Patel hopes to help introduce its lead product Pulsar into the European market.
“Converting file-based content to video is not economically viable,” he explains. “An increasing number of file formats, wrappers, codecs and metadata need to be supported and integration with broadcast automation and media asset management systems is essential.”