Video2Reaction
committed on
Update README.md
README.md
CHANGED
```diff
@@ -72,13 +72,4 @@ configs:
     path: data/test-*
 ---
 
-
-
-Our work departs from prior efforts in two key ways. First, rather than predicting the emotion portrayed in a scene, we focus on the _induced_ audience reactions—capturing how viewers emotionally respond to the content. Second, we move beyond single-label or top-k classification by modeling the full _distribution_ of audience reactions, acknowledging the inherently diverse and multimodal nature of human emotional responses.
-
-Our main contributions are as follows:
-
-* We introduce **Video2Reaction**, a high-quality dataset that maps movie scenes to distributions over audience reactions, grounded in large-scale real-world viewer comments.
-* We develop a scalable two-stage automatic annotation pipeline that enables cost-effective, extensible reaction labeling—paving the way for future dataset updates as new content and evolving audience perspectives emerge.
-* We propose a novel benchmark task: predicting audience reaction distributions from multimodal video content, and design a comprehensive evaluation framework that captures both distributional alignment and dominant emotional salience.
-* We benchmark a diverse range of approaches—including classical LDL algorithms, adapted multimodal emotion recognition models, and zero-shot foundation vision-language models—highlighting their strengths and limitations in forecasting human reactions towards movie content.
+**Video2Reaction**
```
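The retained config lines point at a `test` split materialized under `data/test-*`, the standard Hugging Face datasets layout, so the dataset stays loadable in the usual way. A minimal sketch, assuming a hub id of the form `Video2Reaction/Video2Reaction` (a placeholder, not confirmed by this commit; substitute the dataset's actual repository id):

```python
from datasets import load_dataset

# The repo id below is a placeholder; the README config in this commit
# only tells us that a "test" split exists at data/test-*.
ds = load_dataset("Video2Reaction/Video2Reaction", split="test")
print(ds)  # inspect the features and number of examples
```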
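The removed benchmark bullet pairs two kinds of scoring: distributional alignment between predicted and ground-truth reaction distributions, and recovery of the dominant reaction. Below is a minimal sketch of that pairing, assuming reaction distributions are plain probability vectors; the function name, the choice of KL divergence, and the four-way vocabulary are illustrative assumptions, not the dataset's official evaluation code.

```python
import numpy as np

def evaluate_reaction_prediction(pred: np.ndarray, target: np.ndarray, eps: float = 1e-12):
    """Score a predicted audience-reaction distribution against the ground truth.

    Both inputs are 1-D probability vectors over the same reaction vocabulary.
    Returns a distributional-alignment score (KL divergence, lower is better)
    and whether the dominant reaction was recovered (top-1 match).
    """
    pred = np.clip(pred, eps, None)
    pred = pred / pred.sum()        # renormalize after clipping away zeros
    target = np.clip(target, eps, None)
    target = target / target.sum()

    kl = float(np.sum(target * np.log(target / pred)))   # KL(target || pred)
    top1_match = bool(np.argmax(pred) == np.argmax(target))
    return kl, top1_match

# Example with a hypothetical 4-way vocabulary, e.g. [amused, scared, moved, bored]
target = np.array([0.50, 0.30, 0.15, 0.05])  # empirical viewer-comment distribution
pred = np.array([0.40, 0.35, 0.20, 0.05])    # model output
kl, hit = evaluate_reaction_prediction(pred, target)
print(f"KL(target || pred) = {kl:.4f}, dominant reaction recovered: {hit}")
```

KL divergence stands in here for any distributional-alignment measure; the paper's actual evaluation framework may use different metrics.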