TailoredMedia - Tailored and Agile enrichment and Linking for semantic Description of multiMedia

Short Description

Film, television and video have become the dominant means of communication in today's world. However, video content must be accurately described so that it can be searched and reused, for example for reruns or as illustrative material for other purposes. This task is currently performed manually by information specialists, which is very time-consuming and only feasible for a limited amount of content. Despite this considerable effort, it is still not possible to describe content in detail, e.g., to indicate in which sections of a film, or where in the frame, a specific politician appears.

This situation is further aggravated by the fact that contextual information related to a particular piece of content is often distributed across different sources with different structures (e.g., a script, a journalist's notes) and/or different modalities (e.g., notes as text, an audio recording of an interview). Although most content and information sources are already available in digital form, they are often processed independently. As a result, information from different sources can be combined only partially or not at all, leaving descriptions incomplete.

The project aims to use state-of-the-art artificial intelligence methods for the automatic analysis of audiovisual content, and to combine them with user interfaces. This enables efficient processes for detailed semantic description and content search, while also addressing the above problems.

The results of the automatic analysis will be used to develop novel methods for fusing multimodal information and storing the results in a knowledge graph. This makes it possible to enrich content descriptions with semantic metadata, both automatically and with the involvement of information specialists, in an intuitive, simple and efficient manner.
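As a rough illustration only, the sketch below shows how a single analysis result, such as "a specific person appears between seconds 93 and 102 of this video", might be recorded in an RDF knowledge graph and queried again. It uses the rdflib library, and all class, property and resource names are assumed for illustration; this is not the project's actual data model.

```python
# Minimal sketch (hypothetical data model, not the project's): store one
# automatic-analysis result -- a person recognised in a video segment --
# as semantic metadata in an RDF knowledge graph using rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/tailoredmedia/")  # assumed namespace

g = Graph()
g.bind("ex", EX)

video = EX["video/orf-news-2021-05-04"]   # illustrative identifiers
segment = EX["segment/42"]
person = EX["person/jane-doe"]

# Which video the segment belongs to, when it starts and ends, who appears
# in it, and how confident the automatic analysis was.
g.add((segment, RDF.type, EX.VideoSegment))
g.add((segment, EX.partOf, video))
g.add((segment, EX.startSeconds, Literal(93.2, datatype=XSD.decimal)))
g.add((segment, EX.endSeconds, Literal(101.7, datatype=XSD.decimal)))
g.add((segment, EX.depicts, person))
g.add((segment, EX.detectionConfidence, Literal(0.91, datatype=XSD.decimal)))

# "In which segments does this person appear?" becomes a simple SPARQL query.
results = g.query(
    """
    SELECT ?segment ?start ?end WHERE {
        ?segment ex:depicts ?person ;
                 ex:startSeconds ?start ;
                 ex:endSeconds ?end .
    }
    """,
    initNs={"ex": EX},
    initBindings={"person": person},
)
for row in results:
    print(row.segment, row.start, row.end)
```

In practice, established metadata vocabularies would typically take the place of the ad-hoc terms used in this sketch.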

The multidisciplinary composition of the consortium led by JOANNEUM RESEARCH ensures the expertise required to achieve the project goal. The project involves two different media organisations, the Austrian Media Library (Österreichische Mediathek) and the Austrian Broadcasting Corporation (ORF). While the Media Library focuses on archiving, the ORF covers the entire media life cycle. Both partners can thus contribute a broad range of workflows, best practices and technical requirements to the project.

St. Pölten University of Applied Sciences will use these inputs to create a user-centred design process for developing user interfaces and workflows. Redlink and JOANNEUM RESEARCH will contribute expertise in semantic technologies and audiovisual content analysis.

The first six months of the project were dedicated to identifying current practices and requirements and exploring methods for automatic analysis together with the two media organisations. These results will be used as a basis to collect and flesh out ideas for user interfaces in a series of co-creation workshops over the next few months.

Project Partners

Consortium lead

JOANNEUM RESEARCH Forschungsgesellschaft mbH

Project coordinator

Dipl.-Ing. Georg Thallinger

Other consortium partners

  • St. Pölten University of Applied Sciences
  • Austrian Broadcasting Corporation (ORF)
  • RedLink GmbH
  • Vienna Museum of Science and Technology with Austrian Media Library

Contact Address

JOANNEUM RESEARCH Forschungsgesellschaft mbH
Dipl.-Ing. Georg Thallinger
Tel.: +43 (316) 876-1240
E-Mail: georg.thallinger@joanneum.at