Sidong Zhang
committed on
Update README.md
README.md
CHANGED
@@ -3,7 +3,7 @@ language:
 - en
 license: cc-by-nc-sa-4.0
 size_categories:
--
+- 10B<n<100B
 task_categories:
 - other
 pretty_name: ' Video2Reaction '
@@ -81,4 +81,4 @@ Our main contributions are as follows:
 * We introduce **Video2Reaction**, a high-quality dataset that maps movie scenes to distributions over audience reactions, grounded in large-scale real-world viewer comments.
 * We develop a scalable two-stage automatic annotation pipeline that enables cost-effective, extensible reaction labeling—paving the way for future dataset updates as new content and evolving audience perspectives emerge.
 * We propose a novel benchmark task: predicting audience reaction distributions from multimodal video content, and design a comprehensive evaluation framework that captures both distributional alignment and dominant emotional salience.
-* We benchmark a diverse range of approaches—including classical LDL algorithms, adapted multimodal emotion recognition models, and zero-shot foundation vision-language models—highlighting their strengths and limitations in forecasting human reactions towards movie content.
+* We benchmark a diverse range of approaches—including classical LDL algorithms, adapted multimodal emotion recognition models, and zero-shot foundation vision-language models—highlighting their strengths and limitations in forecasting human reactions towards movie content.
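The README's contributions mention an evaluation framework covering both distributional alignment and dominant emotional salience. A minimal sketch of those two kinds of metrics, assuming reaction labels are discrete probability distributions; the function names, class count, and example distributions here are illustrative, not the dataset's actual API:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Distributional alignment: KL(p || q) between two discrete
    reaction distributions (lower is better, 0 when identical)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def dominant_match(p, q):
    """Dominant emotional salience: 1 if the argmax (dominant) reaction
    class of p and q agree, else 0."""
    return int(max(range(len(p)), key=p.__getitem__) ==
               max(range(len(q)), key=q.__getitem__))

# Illustrative true vs. predicted reaction distributions over 3 classes.
true_dist = [0.6, 0.3, 0.1]
pred_dist = [0.5, 0.4, 0.1]
print(kl_divergence(true_dist, pred_dist))   # small positive value
print(dominant_match(true_dist, pred_dist))  # prints 1 (same dominant class)
```

A full benchmark would aggregate both scores over the test set; the key design point is that neither metric alone suffices, since a prediction can match the dominant class while badly misestimating the rest of the distribution, or vice versa.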