
Red Bee target live subtitling with Subito

Red Bee Media has built a new live subtitling platform integrating speech recognition technology, writes Adrian Pennington. It is also participating in a new EU-funded project aimed at developing automatic transcription and translation technology.

The company, which provides closed captioning services for the BBC’s entire output including iPlayer, is to introduce a new live subtitling platform, Subito (Italian for immediately or real time), this summer. The system, which is for internal use only, introduces the ability to auto-align pre-prepared text.

“In a news scenario, for example, quite a high proportion of words are pre-scripted or repeated from previous half hours, so it’s possible to use audio and other metadata to automatically repurpose and transmit text that already exists,” explained David Padmore, Director of Red Bee’s Access and Editorial Services (pictured). “This will reduce the delay in delivering subtitles on the screen and increase the accuracy of realtime captioning.”

Live captioning systems now use fewer stenographers, relying instead on re-speakers: experts who listen to the live programme and re-speak its words into a speech recognition engine, typically Nuance’s Dragon NaturallySpeaking or IBM’s ViaVoice. Such systems are a clear step up from the error-strewn live closed captioning of previous years, but they are still not perfect.

“The Holy Grail is to get to a point where we can generate text automatically, directly from a soundtrack, for live and pre-recorded programming,” said Padmore. “The problem is that this may work with very clear audio, but as soon as there are any acoustic issues, such as people talking over each other, background noise or ungrammatical speech, then the text becomes pretty unintelligible.”

Red Bee is part of an EU-funded project, which kicked off at the beginning of February, looking at developing functional automatic speech recognition and realtime translation in multiple languages.
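The auto-align idea described above can be illustrated with a small sketch. This is not Red Bee’s implementation, which is proprietary; it is a hypothetical Python example showing one way to decide whether words recognised from live audio match a pre-prepared script closely enough to reuse the script’s already-corrected text instead of the raw recogniser output:

```python
import difflib

def align_script(recognized_words, script_words, threshold=0.6):
    """Compare live-recognised words against a pre-prepared script.

    If the two word sequences are similar enough, emit the script text
    (already punctuated and corrected) rather than the raw recogniser
    output. The 0.6 threshold is an illustrative assumption.
    """
    matcher = difflib.SequenceMatcher(None, recognized_words, script_words)
    ratio = matcher.ratio()
    if ratio >= threshold:
        return " ".join(script_words), ratio
    return " ".join(recognized_words), ratio

# A pre-scripted news line, and a slightly garbled live recognition of it.
script = "the prime minister arrived in brussels this morning".split()
live = "the prime minster arrived in brussels this morning".split()

text, score = align_script(live, script)
# The sequences match well, so the clean script text is emitted.
```

A real system would align against audio timing and other metadata as well, but the core decision, reuse existing text when it matches the live feed, is the same.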
Project partners include the Karlsruhe Institute of Technology, the Hong Kong University of Science and Technology, Alcatel-Lucent, the University of Edinburgh and the Polish-Japanese Institute of IT. The project will provide streaming technology that can convert speech from lectures, meetings and telephone conversations into text in another language, making it useful for translated captioning of broadcasts, lectures and European Parliament debates, and for communication between mobile devices.

The project website further describes the goals as: “We will advance spoken language technologies so they process and transmit human information content from one language to another, in situations that could so far not be handled by automatic techniques. This includes specialised but varied topics (lectures, seminars, presentations), highly disfluent, conversational, accented and noisy speech (meetings, telephone calls).”

With consumer electronics manufacturers like LG and Samsung building voice control into their TVs and tablets, and Apple rumoured to be incorporating its iPhone voice recognition system Siri into a new TV product, speech is fast becoming a mainstream way of interacting with technology. Google even offers a free automatic captioning tool for YouTube videos, although the technology is far from perfect.

www.redbeemedia.com