Video2Reaction committed
Commit d8d8368 · verified · 1 parent: cf63751

Update README.md

Files changed (1): README.md (+1, -10)
README.md CHANGED
@@ -72,13 +72,4 @@ configs:
   path: data/test-*
 ---
 
-In this paper, we introduce **Video2Reaction**, a large-scale dataset consisting of over 10,000 movie clips sourced from the licensed MovieClips YouTube channel. Each video is paired with audience comments, allowing for a precise mapping between visual content and the emotional reactions it induces. Unlike perceived emotions, which are typically modeled as unimodal (single-label), induced emotions can be either unimodal or split across multiple emotions. This distinction makes it more important to learn the distribution of reactions, rather than simply predicting single or multi-class labels. To address this, we frame audience emotion recognition as a **label distribution learning** (LDL) problem. Rather than classifying a single dominant reaction, we model the distribution of emotional responses from the population for each video, enabling us to capture the diverse and nuanced nature of audience reactions.
-
-Our work departs from prior efforts in two key ways. First, rather than predicting the emotion portrayed in a scene, we focus on the _induced_ audience reactions—capturing how viewers emotionally respond to the content. Second, we move beyond single-label or top-k classification by modeling the full _distribution_ of audience reactions, acknowledging the inherently diverse and multimodal nature of human emotional responses.
-
-Our main contributions are as follows:
-
-* We introduce **Video2Reaction**, a high-quality dataset that maps movie scenes to distributions over audience reactions, grounded in large-scale real-world viewer comments.
-* We develop a scalable two-stage automatic annotation pipeline that enables cost-effective, extensible reaction labeling—paving the way for future dataset updates as new content and evolving audience perspectives emerge.
-* We propose a novel benchmark task: predicting audience reaction distributions from multimodal video content, and design a comprehensive evaluation framework that captures both distributional alignment and dominant emotional salience.
-* We benchmark a diverse range of approaches—including classical LDL algorithms, adapted multimodal emotion recognition models, and zero-shot foundation vision-language models—highlighting their strengths and limitations in forecasting human reactions towards movie content.
+**Video2Reaction**
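
The removed README text frames audience emotion recognition as label distribution learning (LDL): per-clip reaction counts are normalized into a distribution, and predictions are judged on both distributional alignment and the dominant emotion. A minimal sketch of that framing is below; the emotion vocabulary, comment counts, and model prediction are illustrative placeholders, not values from the dataset.

```python
import numpy as np

# Illustrative per-clip reaction counts aggregated from viewer comments
# (made-up numbers; the real dataset's vocabulary and counts differ).
emotions = ["joy", "sadness", "fear", "surprise"]
counts = np.array([120.0, 30.0, 10.0, 40.0])

# LDL target: normalize counts into a reaction distribution.
target = counts / counts.sum()

# A hypothetical model prediction over the same vocabulary.
pred = np.array([0.55, 0.20, 0.05, 0.20])

# Distributional alignment: KL divergence KL(target || pred),
# with a small epsilon for numerical safety.
eps = 1e-12
kl = float(np.sum(target * np.log((target + eps) / (pred + eps))))

# Dominant emotional salience: does the top-1 emotion match?
top1_match = emotions[int(target.argmax())] == emotions[int(pred.argmax())]
print(kl, top1_match)
```

This mirrors the evaluation framing described above: a single score for how close the predicted distribution is to the observed one, plus a check on the dominant reaction.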