
Germany’s NDR standardises workflow

IT Broadcast Workflow 2010: Standardised workflows for all productions; installing a test system for news; using BID for tracking at NDR

Norddeutscher Rundfunk (NDR), Hamburg, has been working with network- and file-based production for more than a year, with a single technical system being used for all workflows and all programme genres. NDR not only plays out content but produces it too, including news, sports, feature films, and magazine production. “The network production for such different workflows was the major challenge of this project,” said Sascha Molina, NDR’s head of post production.

NDR also needed to simplify and streamline its workflows, “because when you have such different workflows, like news, sports, feature films, and all these workflows are very, very different, it is very hard to design a system with the same components. So we said we wanted to standardise our workflows to make it possible to work for these different types of programme.”

NDR also wants to eliminate multiple work steps, as it has to deliver TV, radio and internet content with a single workflow. “We want to produce more output [in] less time. Maybe in the same time would be fine too.”

Another aim was to have parallel working on content by using the same video material, not only for news production but for feature films and other productions, all of which have time constraints. Digital archive integration was a further necessity, as were staff cost savings. Although this project is only SD, NDR is also preparing for a move to HD in the next year.

NDR initially installed a test system for news. “We learned much about network production from this test system, and much about metadata,” Molina added. “We defined the process to work with metadata, and our first step was not to make a data model for our company.”

What was more important was to look at the metadata workflow. It made a roadmap for this project and those to come, defining how it wants to work with metadata. It wanted to approach this without dogma, but “that is very hard” because of the different workflows. “All the journalists say: ‘My workflow is the most important workflow’,” he said, and it took a “very open discussion with the journalists” to get it agreed. To achieve a business model that worked, it was important to have a single technical concept for all production processes, with no area of production having its own special systems.

“We want to design a very slight system,” said project manager Michael Tißen. Its core is an Omneon MediaGrid server with about 5,000 hours of capacity, although mirroring reduces this to 2,500 usable hours. It also has Omneon Spectrum ingest and playout servers: ingest has about 16 ports, while playout shares two Spectrums, plus a back-up server, across its four studios, with about 12 ports. A Gigabit network and a 10Gb Cisco switch link it with post production. A low-res mirror of the high-res server holds MPEG-1 proxies for about 2,000 journalist stations. All of this is under the control of the VPMS system from S4M.

One way to get the journalists using the system was to take away their options. So NDR closed the old ingest room about a month after installing the system and reduced any tape workflows.

It began the project in 2007 by setting up a five-person project team (one person from each department), which discussed and decided the workflows in consultation with staff. “We wanted an equitable partnership between the journalists, the production, the technical and the business people.” NDR did not use a systems integrator, doing everything itself.

“We don’t want to make the best system for every workflow, but one slight, standardised technical system,” said Molina. “We use open systems with APIs and web services on each system and — very, very important — a uniform file format for all workflows. That’s a very hard discussion, because the MXF OP1a is not very good for each process. But we discuss the performance for the actual news production and for fast production it is very necessary to reduce the need for transcoding processes.”

To track everything it has BID: the Beitrags (contribution) ID number. When the journalists had tapes, they didn’t think about the media, but about the content, the stories. “They have many tapes for one story: tapes from the acquisition, from the archives, from the line feeds, from the file transfers, and these tapes they put in a kind of post box,” which they took to the edit. It contained everything, including music CDs and scripts, and meant that journalists knew where everything was for their story.

Having taken that away and replaced it with a server, which holds all the material for all the journalists, they had a problem. “What we need is not a material ID. Material IDs we have a lot. What we need is clustering, like this post box, so we get the story ID to group this material.”

For the journalists this ID goes through the whole process — not just for the material, but also the despatch process, for satellite time, etc. Once they type in the BID, they get access to all the metadata and other related material. It also makes it easier to make a structured search, especially as story titles can change between planning and archive. It also has an advantage for rights clearance, as the relationship between the original material and the finished story is easier to track.
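The clustering idea behind the BID can be sketched in a few lines. This is a hypothetical illustration only, not NDR's actual VPMS implementation; all names (`Asset`, `Story`, `attach`, the sample BID) are invented for the example. The point is that every clip already has its own material ID, and the BID is an extra story-level key that groups them, like the old post box:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the BID idea: one story ID clusters every asset,
# whatever its source, so a single lookup retrieves the whole "post box".

@dataclass
class Asset:
    material_id: str   # each clip already has its own material ID
    source: str        # e.g. "acquisition", "archive", "line feed", "file transfer"
    duration_s: float = 0.0

@dataclass
class Story:
    bid: str                      # the Beitrags-ID shared by all related assets
    title: str                    # titles may change; the BID stays stable
    assets: list = field(default_factory=list)

registry: dict[str, Story] = {}

def attach(bid: str, title: str, asset: Asset) -> None:
    """File an asset under its story's BID, creating the story on first use."""
    story = registry.setdefault(bid, Story(bid=bid, title=title))
    story.assets.append(asset)

attach("NDR-2010-0042", "Harbour report", Asset("MAT-001", "acquisition", 1800))
attach("NDR-2010-0042", "Harbour report", Asset("MAT-002", "archive", 600))

# One BID lookup returns everything for the story, whatever its origin.
print(len(registry["NDR-2010-0042"].assets))  # 2
```

Because the title lives on the story record rather than being retyped into each system, a later title change touches one field while the BID, and everything filed under it, stays stable.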

Molina hadn’t initially thought this would be so important, but one goal for the project was to avoid manual typing, and in the past a journalist might have had to type the title of the story 15 or 16 times in the various IT systems. With BID they only have to put the number. “A very strong driver for change management is the laziness of people, so they don’t have to type so often as in the past.” Journalists can generate a BID very quickly, and only have to add new metadata as they go through each process.

It also uses BID to reduce the storage problem on the central server. Each programme has a storage budget and it is possible to control the ratio between raw material and stories, “because when you are searching for a BID and you see that the story is two minutes and used 10 hours of raw material, you see that the journalist has a problem. Now it is possible to know this,” said Molina.

“Deleting is a very hard thing. Our idea is that the journalist makes it on their own.” They have a budget, say 100 hours per editorial office, and if they have too much material they decide what to delete. It works better for some offices than others.

NDR had wanted to move to a network system and learned, in the process, that the main theme of its discussions was the reorganisation of the process itself. In file-based production there are many ways to do something. “It is not only becoming more flexible, but more complicated.” This can require greater supervision at both journalistic and production levels.

There are about 2,000 journalists editing low-res proxies from the production server and digital archive, using VPMS, as well as three Final Cut Pro editing suites in the newsroom, which are able to work directly with the server. There are also 24 Sony XPRi editing systems (outside the existing network), but these have to import and export material, “and that’s not good.” It also has five Fairlight systems outside the network for audio post production, “and they work very well.” It has IMX tape-based camcorders, with ingest via the craft editors or office VTRs.

Its next step will be to extend the VPN, and move to tapeless acquisition using XDCAM HD, during 2011. It will move to FCP for all of its craft editing, with an additional 24 systems, “so import and export is not necessary at all.”

NDR wants to simplify the workflow in the field, so it intends to put planning metadata on the XDCAM media via despatch. It discussed adding GPS, but the unions objected. Having the metadata already on the stories means that camera ingest can be done fully automatically. “So then it is possible to go VPN over the whole of the process.” The last problem is browsing in the field, which it hopes to address by taking proxies from the camera on a USB stick, browsing them on a laptop, and making a batch list for ingest.
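The planned metadata-driven ingest can be sketched as follows. This is a guess at the shape of the workflow, not NDR's implementation: the field names (`planning`, `bid`) and the clip records are invented for illustration. The idea is that despatch writes the BID onto the media before the shoot, so ingest can file each clip automatically with no manual typing:

```python
# Hypothetical sketch of metadata-driven ingest: despatch has already put
# planning metadata (including the BID) onto the camera media, so incoming
# clips can be grouped by story automatically. All field names are invented.

def auto_ingest(clips: list[dict]) -> dict[str, list[str]]:
    """Group incoming clips by the BID found in their planning metadata."""
    ingested: dict[str, list[str]] = {}
    for clip in clips:
        bid = clip.get("planning", {}).get("bid")
        if bid is None:
            continue  # no planning metadata: falls back to manual handling
        ingested.setdefault(bid, []).append(clip["name"])
    return ingested

card = [
    {"name": "CLIP001.MXF", "planning": {"bid": "NDR-2010-0042"}},
    {"name": "CLIP002.MXF", "planning": {"bid": "NDR-2010-0042"}},
    {"name": "CLIP003.MXF"},  # shot without despatch metadata
]
print(auto_ingest(card))  # {'NDR-2010-0042': ['CLIP001.MXF', 'CLIP002.MXF']}
```

Clips shot without despatch metadata simply drop out of the automatic path, which matches the article's framing: full automation applies only when the planning metadata is already on the media.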

At the moment FCP doesn’t have project administration, so it will be working with S4M to address this, and to improve the handover of metadata between the NLE and other systems. There will also be a need for more training, for journalists and production staff. “Network production never ends. We enhance and optimise the process, every year, every month, every week,” said Tißen. “The MXF flavours are very different, so we need a test centre,” which it is building this month. – David Fox