Sidong Zhang committed · commit cf63751 (verified) · 1 parent: d69a4cd

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
```diff
@@ -3,7 +3,7 @@ language:
 - en
 license: cc-by-nc-sa-4.0
 size_categories:
-- 1B<n<10B
+- 10B<n<100B
 task_categories:
 - other
 pretty_name: ' Video2Reaction '
@@ -81,4 +81,4 @@ Our main contributions are as follows:
 * We introduce **Video2Reaction**, a high-quality dataset that maps movie scenes to distributions over audience reactions, grounded in large-scale real-world viewer comments.
 * We develop a scalable two-stage automatic annotation pipeline that enables cost-effective, extensible reaction labeling—paving the way for future dataset updates as new content and evolving audience perspectives emerge.
 * We propose a novel benchmark task: predicting audience reaction distributions from multimodal video content, and design a comprehensive evaluation framework that captures both distributional alignment and dominant emotional salience.
-* We benchmark a diverse range of approaches—including classical LDL algorithms, adapted multimodal emotion recognition models, and zero-shot foundation vision-language models—highlighting their strengths and limitations in forecasting human reactions towards movie content.
+* We benchmark a diverse range of approaches—including classical LDL algorithms, adapted multimodal emotion recognition models, and zero-shot foundation vision-language models—highlighting their strengths and limitations in forecasting human reactions towards movie content.
```
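The benchmark described in the contributions evaluates both distributional alignment and dominant emotional salience. A minimal sketch of what such metrics could look like, assuming KL divergence for alignment and top-1 reaction agreement for salience (the reaction labels and distributions below are illustrative, not taken from the dataset):

```python
import math

# Illustrative reaction vocabulary; the real dataset defines its own label set.
REACTIONS = ["amusement", "fear", "sadness", "excitement"]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two reaction distributions (lower = better aligned).

    eps guards against log(0) when a category has zero probability.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def dominant_match(p, q):
    """1 if both distributions peak at the same reaction, else 0."""
    return int(max(range(len(p)), key=p.__getitem__) ==
               max(range(len(q)), key=q.__getitem__))

# Toy example: true vs. predicted audience reaction distributions.
true_dist = [0.5, 0.2, 0.1, 0.2]
pred_dist = [0.4, 0.3, 0.1, 0.2]
print(f"KL divergence: {kl_divergence(true_dist, pred_dist):.4f}")
print(f"dominant reaction match: {dominant_match(true_dist, pred_dist)}")
```

A full evaluation framework would aggregate these per-clip scores across the test split; this sketch only shows the per-example computation.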