pretty_name: MuSaG
size_categories:
- n<1K
---

# MuSaG: A Multimodal German Sarcasm Dataset with Full-Modal Annotations

## Abstract
Sarcasm is a complex form of figurative language in which the intended meaning contradicts the literal one. Its prevalence in social media and popular culture poses persistent challenges for natural language understanding, sentiment analysis, and content moderation. With the emergence of multimodal large language models, sarcasm detection extends beyond text and requires integrating cues from audio and vision.

We present MuSaG, the first German multimodal sarcasm detection dataset, consisting of 33 minutes of manually selected and human-annotated statements from German television shows. Each instance provides aligned text, audio, and video modalities, annotated separately by humans, enabling evaluation in unimodal and multimodal settings.

We benchmark nine open-source and commercial models, spanning text, audio, vision, and multimodal architectures, and compare their performance to human annotations. Our results show that while humans rely heavily on audio in conversational settings, models perform best on text. This highlights a gap in current multimodal models and motivates the use of MuSaG for developing models better suited to realistic scenarios.

We release MuSaG publicly to support future research on multimodal sarcasm detection and human–model alignment.

## Licence
The dataset is released under the Creative Commons Attribution-NonCommercial 2.0 (CC BY-NC 2.0) licence.

## Dataset Statistics
MuSaG comprises 214 instances, each featuring aligned audio, video, and human-reviewed, automatically generated transcripts. All instances are labeled for sarcasm by human annotators via majority vote. We also release the independent annotations from each annotator, as well as modality-specific annotations collected for instances presented in isolated text, audio, and video form, enabling analysis and comparison across human perception and model performance.
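
Since the gold labels are derived by majority vote over the released per-annotator annotations, they can be recomputed from the individual votes. A minimal sketch (the label strings are illustrative, not the dataset's actual schema):

```python
from collections import Counter

def majority_vote(annotations):
    """Return the label chosen by the most annotators for one instance.

    `annotations` is a list of per-annotator labels, e.g.
    ["sarcastic", "not_sarcastic", "sarcastic"]. The label names
    here are hypothetical placeholders.
    """
    counts = Counter(annotations)
    label, _count = counts.most_common(1)[0]
    return label

# Two of three annotators vote "sarcastic", so the gold label is "sarcastic".
print(majority_vote(["sarcastic", "not_sarcastic", "sarcastic"]))
```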

## Citation
If you use this dataset in your research, please cite the associated paper: