This project will use natural language and visual content processing technologies to facilitate the production of concise video stories: sliding news. The prototype will allow news and photos to be combined into slideshows with automatically generated captions. It will exploit natural language processing methods, including named entity extraction and syntactic parsing, to extract informative yet concise textual content and to retrieve relevant visual content from a labelled photo database. This will provide reporters and editors with a self-service facility for generating and post-editing sliding video news with a few mouse clicks.
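The entity-driven photo retrieval described above can be sketched in miniature. This is an illustrative toy only: the project's actual NLP components are not specified here, so the gazetteer lookup stands in for a real named entity extractor, and the photo index, its labels, and the filenames are all hypothetical.

```python
# Toy sketch of entity-based photo retrieval for sliding news.
# The gazetteer below is a hypothetical stand-in for a real named entity
# extractor; the labelled photo index and filenames are invented examples.

# Hypothetical labelled photo database: label -> photo filenames.
PHOTO_INDEX = {
    "Riga": ["riga_skyline.jpg"],
    "Saeima": ["saeima_building.jpg"],
}

# Hypothetical gazetteer standing in for a trained NER model.
GAZETTEER = {"Riga", "Saeima"}


def extract_entities(text: str) -> list[str]:
    """Return known named entities found in the text (toy NER)."""
    return [tok.strip(".,") for tok in text.split() if tok.strip(".,") in GAZETTEER]


def retrieve_photos(text: str) -> dict[str, list[str]]:
    """Map each extracted entity to matching photos from the labelled index."""
    return {ent: PHOTO_INDEX.get(ent, []) for ent in extract_entities(text)}


caption = "The Saeima convened in Riga on Tuesday."
print(retrieve_photos(caption))
# → {'Saeima': ['saeima_building.jpg'], 'Riga': ['riga_skyline.jpg']}
```

In a production system, the gazetteer would be replaced by a statistical named entity recognizer and the dictionary lookup by search over the photo archive's metadata, but the editor-facing flow stays the same: text in, candidate photos out.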
The media environment has changed: consumers prefer social networks over traditional news sources, and visual content over text. Concise video stories are the new content that LETA partners and customers demand. However, producing visual news takes considerable manual effort: the process involves text summarization, photo selection, caption creation, and more. This project will create a software prototype that helps editors produce such content semi-automatically from LETA news and photo data sources. It is expected to reduce production time from the current several hours per minute of video to less than an hour per minute.