The Associated Press has announced that it will enrich descriptions and enhance search for video within its ENPS broadcast news production system using TVEyes' speech-to-text technology.
In the large-scale production environments common to creating video for television broadcast and the web, TVEyes argues, video without a thorough description loses value, or can be lost entirely. Automated processes that improve descriptions increase the value of content, allowing it to be located, produced and repurposed more easily.
"Integration of TVEyes speech-to-text within ENPS will add value to any customer's workflow, allowing journalists to more easily locate relevant video through ENPS integrated searches," said Lee Perryman, deputy director, broadcast services, and director of broadcast technology at AP. "When a journalist can quickly and easily find relevant video, the value of the entire production chain grows."
TVEyes will automatically generate timecode-aligned transcriptions for all video stored on MOS v2.8.3 compliant video servers which provide proxy images. These transcripts are then incorporated into the description of the video clip stored in ENPS. ENPS users are then able to search all video content for specific words or phrases. Once the video is located, users may not only view the transcript associated with the video, but also jump to the location where a specific word was spoken.
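The idea of timecode-aligned transcription can be sketched in a few lines: each recognized word carries an offset into the clip, so a word-level search can return not just the matching clip but the exact moment to seek to. The data structures and function below are illustrative assumptions, not the actual TVEyes or ENPS API.

```python
# Minimal sketch (assumed data model) of timecode-aligned transcript search.
from dataclasses import dataclass

@dataclass
class TranscriptWord:
    text: str       # the recognized word
    start: float    # offset into the clip, in seconds

@dataclass
class Clip:
    clip_id: str
    words: list     # TranscriptWord objects, in spoken order

def find_occurrences(clips, query):
    """Return (clip_id, timecode) pairs where the query word was spoken."""
    query = query.lower()
    hits = []
    for clip in clips:
        for word in clip.words:
            if word.text.lower() == query:
                hits.append((clip.clip_id, word.start))
    return hits

# Example: two indexed clips
clips = [
    Clip("clip-001", [TranscriptWord("mayor", 12.4), TranscriptWord("budget", 13.1)]),
    Clip("clip-002", [TranscriptWord("budget", 4.8), TranscriptWord("vote", 5.3)]),
]
print(find_occurrences(clips, "budget"))  # [('clip-001', 13.1), ('clip-002', 4.8)]
```

A player integrated with the search results could then seek each clip directly to the returned timecode, which is what lets a journalist jump to the spoken word rather than scrub through the clip.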
Additional integration with ENPS will allow TVEyes to automatically segment off-air recordings into individual story clips for use on websites. Previously, live newscasts were manually cut into story segments after the news programme was complete. ENPS and TVEyes now automate this process by aligning scripts in the ENPS system with the spoken word in the recorded newscast. TVEyes then automatically cuts the newscast into story segments and makes them available, with descriptions and transcripts, for publication.
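The segmentation step described above can be illustrated with a simple alignment sketch: match the opening words of each ENPS script against the timecoded transcript of the recording, and use the matched timecodes as cut points. The naive first-occurrence matching here is an assumption for illustration; a production system would use far more robust alignment.

```python
# Hypothetical sketch of script-to-transcript alignment for cutting a
# recorded newscast into story segments. Names and matching strategy
# are assumptions, not the ENPS/TVEyes implementation.

def segment_newscast(transcript, scripts):
    """transcript: list of (word, timecode) pairs in spoken order.
    scripts: list of (story_id, opening_words) from the rundown.
    Returns (story_id, start, end) tuples covering the recording."""
    boundaries = []
    pos = 0
    for story_id, opening in scripts:
        first = opening.split()[0].lower()
        # Scan forward for the first spoken occurrence of the story's opening word.
        while pos < len(transcript):
            word, timecode = transcript[pos]
            pos += 1
            if word.lower() == first:
                boundaries.append((story_id, timecode))
                break
    # Each segment runs from its own start to the next story's start
    # (or to the final word of the recording).
    segments = []
    for i, (story_id, start) in enumerate(boundaries):
        end = boundaries[i + 1][1] if i + 1 < len(boundaries) else transcript[-1][1]
        segments.append((story_id, start, end))
    return segments

transcript = [("good", 0.0), ("evening", 0.4), ("fire", 10.2), ("crews", 10.6),
              ("in", 30.0), ("sports", 30.3), ("tonight", 30.7)]
scripts = [("open", "Good evening"), ("fire", "Fire crews"), ("sports", "In sports tonight")]
print(segment_newscast(transcript, scripts))
# [('open', 0.0, 10.2), ('fire', 10.2, 30.0), ('sports', 30.0, 30.7)]
```

Once the cut points are known, each segment can be exported with its portion of the transcript attached as the description, which is what makes the clips immediately publishable to the web.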