Network of Excellence Peer-to-Peer Tagged Media

MTV 2.0

This use scenario is about enabling a user to find and consume multimedia content while minimizing interaction with devices such as TVs and MP3 players.

 

Research Challenges 

Personalized content is selected and recommended to a user who provides little explicit input. The recommendation is based on a personalized analysis of the user's implicit physiological reactions to the multimedia content being presented, and of their patterns of interaction with the interface. In this research, we focus on the recommendation of music videos based on the user's physiological state.
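As a minimal illustration of this idea, the sketch below (in Python; the names and the fusion weight are hypothetical, not the project's actual design) blends a conventional preference score with an implicit affect score derived from physiological reactions:

    # Sketch: fusing explicit preference with implicit affect (hypothetical
    # names and weight; both scores assumed normalized to [0, 1]).
    def fuse_scores(preference, affect, weight=0.3):
        return (1 - weight) * preference + weight * affect

    def recommend(candidates, preference_scores, affect_scores, top_k=5):
        """Rank candidate music videos by the fused score."""
        ranked = sorted(
            candidates,
            key=lambda v: fuse_scores(preference_scores[v], affect_scores[v]),
            reverse=True,
        )
        return ranked[:top_k]

In a deployed system, the preference score would come from collaborative filtering and the affect score from the physiological analysis described below.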

Data: Music Videos; professionally generated content.

Use Scenario: I am sitting on the couch, tired after a long day at work. Content should be recommended semi-automatically, based on my measured physiological state and my (affective) reactions to the multimedia being displayed.

Example PetaMedia Research:

  • Personalizing recommendations based on affective reactions.
  • Exploiting brain signals to create a user taste profile.
  • How can EEG analysis be used to semantically tag multimedia content? (See the sketch after this list.)
  • Which labels can be reliably predicted?
  • Can we tag more reliably by using multiple taggers?
  • How can self-assessment from multiple participants with different backgrounds be utilized for the evaluation of implicit tagging methodologies and/or for recommendations?
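As a rough sketch of how EEG-based tagging and multiple-tagger fusion might be approached (feature choices, shapes, and the sampling rate are assumptions, not the project's actual pipeline):

    # Sketch: band-power EEG features for implicit tagging, plus a simple
    # multiple-tagger fusion (assumed shapes; real pipelines would add
    # artifact rejection and per-subject calibration).
    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC

    BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

    def band_power_features(eeg, fs=128):
        """eeg: (channels, samples) -> log band power per channel and band."""
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        feats = []
        for lo, hi in BANDS.values():
            idx = (freqs >= lo) & (freqs < hi)
            feats.append(np.log(psd[:, idx].mean(axis=1)))
        return np.concatenate(feats)

    def majority_vote(tag_predictions):
        """Fuse binary tag predictions from several taggers (participants)."""
        votes = np.asarray(tag_predictions)
        return (votes.mean(axis=0) >= 0.5).astype(int)

    # Per tag: train a binary classifier on features from labelled trials,
    # e.g. clf = SVC(kernel="rbf").fit(X_train, y_train)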

Evaluation Questions:

  • Does analysis of affective reactions improve personalized recommendation?
  • Can multimedia content analysis (MCA) predict affective reactions reliably enough to improve recommendation?
  • Can analysis of implicit feedback improve recommendation, compared with recommendation based on content analysis alone?

Existing Technology:

  • Non-intrusive user behavior observation methods
  • Methods for measuring brain activity and physiological reactions
  • Techniques for affective multimedia content analysis
  • Algorithms for collaborative filtering and recommendation (a minimal sketch follows this list)
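To make the last item concrete, here is a minimal item-based collaborative filtering sketch (the ratings matrix is a hypothetical toy example):

    # Toy item-based collaborative filtering (hypothetical data; 0 = unrated).
    import numpy as np

    ratings = np.array([  # rows = users, columns = music videos
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [0, 1, 5, 4],
    ], dtype=float)

    def cosine_sim(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return (a @ b) / denom if denom else 0.0

    def predict(user, item):
        """Similarity-weighted average of the user's ratings on other items."""
        rated = np.nonzero(ratings[user])[0]
        sims = np.array([cosine_sim(ratings[:, item], ratings[:, j])
                         for j in rated])
        return (sims @ ratings[user, rated]) / sims.sum() if sims.sum() else 0.0

    print(predict(user=0, item=2))  # predicted rating for an unseen video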

Participants

  • EPF Lausanne: Ashkan Yazdani (coordinator), Jong-Seok Lee, Touradj Ebrahimi
  • QMUL: Sander Koelstra (coordinator), Ioannis Patras
  • TU Berlin: Engin Kurutepe, Kai Clüver
  • University of Geneva: Mohammad Soleymani, Thierry Pun
  • University of Twente: Christian Mühl, Anton Nijholt

Datasets

DEAP Dataset

We present a multimodal dataset for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute long excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimuli selection was used, utilising retrieval by affective tags from the last.fm website, video highlight detection and an online assessment tool. The dataset is made publicly available at:

http://www.eecs.qmul.ac.uk/mmv/datasets/deap/
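The sketch below shows one way to load a participant from the preprocessed Python release of the dataset; the file naming and array shapes are assumptions based on that release and should be verified against the documentation shipped with the data:

    # Sketch: loading one DEAP participant (assumed preprocessed-Python
    # layout: sXX.dat pickle with 'data' and 'labels' arrays; verify
    # against the dataset documentation).
    import pickle

    with open("s01.dat", "rb") as f:
        subject = pickle.load(f, encoding="latin1")  # pickled with Python 2

    data = subject["data"]      # (40 trials, 40 channels, 8064 samples @ 128 Hz)
    labels = subject["labels"]  # (40 trials, 4 ratings: valence, arousal,
                                #  dominance, liking)
    print(data.shape, labels.shape)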

 

Demos

Personal MTV

This demo searches the metadata and recently played songs of a last.fm online radio user, then recommends songs based on the listening history and a hidden Markov model of all songs available on last.fm.
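As a minimal sketch of the recommendation idea, the code below uses a visible first-order Markov chain over song-to-song transitions in place of the demo's full hidden Markov model (song names and histories are hypothetical):

    # Toy next-song recommender: first-order Markov chain over observed
    # transitions (a simplification of the demo's hidden Markov model).
    from collections import Counter, defaultdict

    def build_transitions(histories):
        """histories: iterable of play sequences -> transition counts."""
        trans = defaultdict(Counter)
        for seq in histories:
            for a, b in zip(seq, seq[1:]):
                trans[a][b] += 1
        return trans

    def recommend_next(trans, last_song, k=3):
        """Most frequent follow-ups to the last played song."""
        return [song for song, _ in trans[last_song].most_common(k)]

    histories = [["song_a", "song_b", "song_c"],
                 ["song_a", "song_b", "song_d"]]
    print(recommend_next(build_transitions(histories), "song_b"))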

 

Implicit Emotional Tagging using BCI

This demo classifies a subject's EEG signals into six emotion categories: happiness, sadness, anger, disgust, surprise, and fear.
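A minimal sketch of such a classifier, assuming pre-extracted EEG features (the shapes, placeholder data, and choice of classifier are assumptions; the demo's actual pipeline may differ):

    # Sketch: six-class emotion classification from EEG features
    # (placeholder data stands in for real recordings).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    EMOTIONS = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 32))           # placeholder feature matrix
    y = rng.integers(0, len(EMOTIONS), 120)  # placeholder labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[:100], y[:100])
    print([EMOTIONS[i] for i in clf.predict(X[100:105])])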

Publications

  • A. Yazdani, J.-S. Lee, and T. Ebrahimi. Implicit emotional tagging of multimedia using EEG signals and brain computer interface. In Proc. SIGMM Workshop on Social Media, pages 81–88, New York, NY, USA, 2009. ACM.
  • S. Koelstra, C. Mühl, and I. Patras. EEG analysis for implicit tagging of video data. In Proc. Workshop on Affective Brain-Computer Interfaces (ACII), 2009.
  • S. Koelstra, A. Yazdani, M. Soleymani, C. Mühl, J.-S. Lee, A. Nijholt, T. Pun, T. Ebrahimi, and I. Patras. Single trial classification of EEG and peripheral physiological signals for recognition of emotions induced by music videos. To be published by Springer in the Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (LNCS/LNAI) series.
