Enco’s enCaption3 R4, the fourth generation of its software-defined speech-to-text captioning engine, adds the ability to distinguish between multiple speakers, reducing the labour of live captioning in broadcast workflows.
Like previous generations, enCaption3 R4 needs no re-speaking, voice training, supervision, or real-time captioners, thereby eliminating human error. Building on this foundation, enCaption3 R4 adds an algorithm that, according to Enco, can manage complex captioning situations in which multiple subjects are speaking at once. It does this by isolating each speaker’s microphone throughout the live programme.
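To illustrate the general idea of captioning from isolated microphone feeds, the following is a minimal, hypothetical sketch; the names (MicFeed, transcribe_chunk, caption_stream) and the round-robin interleaving are assumptions for illustration only and do not represent Enco’s implementation or API.

```python
# Hypothetical sketch: tag live captions with the speaker whose isolated mic
# produced the audio. All names and behaviour here are illustrative assumptions,
# not Enco's actual software.
from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class MicFeed:
    speaker: str           # label for the person on this microphone
    chunks: Iterable[str]  # stand-in for audio chunks from one isolated mic


def transcribe_chunk(chunk: str) -> str:
    """Placeholder for a speech-to-text call on a single audio chunk."""
    return chunk  # a real engine would return recognised text here


def caption_stream(feeds: list[MicFeed]) -> Iterator[str]:
    """Interleave captions from isolated mic feeds, tagging each with its speaker."""
    iterators = [(feed.speaker, iter(feed.chunks)) for feed in feeds]
    active = True
    while active:
        active = False
        for speaker, chunks in iterators:
            chunk = next(chunks, None)
            if chunk is not None:
                active = True
                yield f"[{speaker}] {transcribe_chunk(chunk)}"


if __name__ == "__main__":
    feeds = [
        MicFeed("Anchor", ["Good evening.", "Over to you."]),
        MicFeed("Reporter", ["Thanks.", "Here at the scene..."]),
    ]
    for line in caption_stream(feeds):
        print(line)
```

Because each caption is derived from a single, known microphone, speaker attribution in this sketch needs no acoustic diarisation; it simply carries the mic label through to the caption text.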
Multi-lingual support is also built into the algorithm, including personalised and localised spelling capabilities for greater accuracy.
Ken Frommert, general manager of Enco, said: “With our new multi-speaker identification feature, hearing-impaired viewers will not only know what is being said, but also who is saying it.”