---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
pretty_name: Werewolf Among Us
viewer: false
---
# Werewolf Among Us: Multimodal Resources for Modeling Persuasion Behaviors in Social Deduction Games

### ACL Findings 2023

[Project Page](https://bolinlai.github.io/projects/Werewolf-Among-Us/) | [Paper](https://aclanthology.org/2023.findings-acl.411.pdf) | [Code](https://github.com/SALT-NLP/PersuationGames)

[Bolin Lai*](https://bolinlai.github.io/), [Hongxin Zhang*](https://icefoxzhx.github.io/), [Miao Liu*](https://aptx4869lm.github.io/), [Aryan Pariani*](https://scholar.google.com/citations?hl=en&user=EnC_6s0AAAAJ), [Fiona Ryan](https://fkryan.github.io/), [Wenqi Jia](https://vjwq.github.io/), [Shirley Anugrah Hayati](https://www.shirley.id/), [James M. Rehg](https://rehg.org/), [Diyi Yang](https://cs.stanford.edu/~diyiy/)

<img src='https://bolinlai.github.io/projects/Werewolf-Among-Us/figures/teaser.png'/>
## Introduction

Werewolf Among Us is the first multimodal (video + audio + text) dataset for social scenario interpretation. It consists of videos and transcripts of 199 social deduction games, sourced from YouTube (151 games) and the [Ego4D](https://ego4d-data.org/) dataset (48 games). In most of the videos, participants are playing One Night Werewolf; in a handful of videos, they are playing Avalon.

The dataset provides three kinds of annotations:

- Persuasion strategy of each utterance, from 6 pre-defined categories: identity declaration, interrogation, evidence, accusation, defense, and call for action.
- Voting outcome of each player at the end of each game (only for One Night Werewolf).
- Start role of each player at the beginning of the game and end role of each player at the end of the game.
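For a concrete picture of these annotation types, here is a purely illustrative sketch in Python. The field names are hypothetical placeholders, not the released schema, which is defined by the json files described in the subset sections below.

```python
# Purely illustrative -- field names are hypothetical placeholders, not the
# schema of the released json files (see the subset sections below).
annotated_utterance = {
    "speaker": "Player3",
    "timestamp": 257.4,  # seconds into the game clip
    "utterance": "I looked at the center cards and one of them was a Werewolf.",
    "persuasion_strategy": "Evidence",  # one of the 6 pre-defined categories
}

game_level_annotation = {
    "voting_outcome": {"Player1": "Player3", "Player2": "Player3"},  # who each player voted for
    "start_roles": {"Player1": "Villager", "Player2": "Robber"},
    "end_roles": {"Player1": "Villager", "Player2": "Werewolf"},  # roles can change during the night
}
```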
Hence, our dataset is designed for two tasks:

- Multimodal persuasion strategy recognition.
- Human intention (voting) prediction.

We also provide extended annotations to support various tasks in our follow-up works. Please check out their websites for more details:
- (CVPR 2024, Oral) [Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations](https://sangmin-git.github.io/projects/MMSI/)
  - New tasks: speaking target identification, pronoun coreference resolution, mentioned player prediction.
  - New annotations: keypoints (face and body) of each player.

- (CVPR 2025) [SocialGesture: Delving into Multi-person Gesture Understanding](https://www.irohxucao.com/SocialGesture/)
  - New tasks: social gesture temporal detection, social gesture spatial localization, social gesture understanding VQA.
  - New annotations: timestamps of social gestures, social question-answer pairs.
## YouTube Subset

1. All videos were downloaded from YouTube. We provide all video ids in `Youtube/youtube_urls_released.json`. Each video can be accessed and downloaded through https://www.youtube.com/watch?v={video_id}. For example, if the video id is `MjnGiInji-U`, the corresponding video url is https://www.youtube.com/watch?v=MjnGiInji-U. All videos we used are at a resolution of 640x320.

2. Players might play multiple games in one long video, so we trimmed each video into segments, where each segment corresponds to exactly one game. The start and end timestamps can be found under `Youtube/vote_outcome_youtube_released`. The trimmed videos are in `Youtube/videos`.

3. The transcripts are available under `Youtube/transcripts`. We annotated the speaker and timestamp of every utterance.

4. The train/val/test split of the dataset is in `Youtube/split`. To make things easier, we integrated both the voting outcomes and the persuasion strategy of each utterance into these json files. **You can obtain all annotations we provided by simply loading these files** (see the loading sketch after this list).

5. We also annotated the voting outcomes, start roles and end roles of all players, which can be found in `Youtube/vote_outcome_youtube_released`. Players' names are listed in order from left to right in the videos.

6. We also released the video features used in our experiments, in `Youtube/mvit_24_k400_features.zip`. In each numpy file, the features have the shape `(N, 3, 768)`. `N` corresponds to the `N` utterances in each game. For each utterance, we cropped the frame on the left, center and right, so there are `3` features per utterance. `768` is the length of the flattened feature vector. A 24-layer multiscale vision transformer (MViT) pretrained on Kinetics-400 was used for video feature extraction. **Note that we froze the video encoder during training.** We encourage users to explore other video encoders to get better video representations.
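Below is a minimal loading sketch under the file layout described above. The split filename and the per-game feature filename are assumptions (adjust them to the released files), and the snippet inspects the json structure rather than assuming specific keys.

```python
import json
import numpy as np

# Illustrative paths -- adjust to the released split file names and to wherever
# you unzip Youtube/mvit_24_k400_features.zip.
split_file = "Youtube/split/train.json"
feature_file = "mvit_24_k400_features/example_game.npy"

# The split json integrates the transcripts with per-utterance persuasion
# strategies and per-game voting outcomes, so it carries all annotations for
# its games. Inspect the keys rather than assuming a fixed schema.
with open(split_file) as f:
    split_data = json.load(f)
print(type(split_data), len(split_data))

# Each per-game numpy file stores features of shape (N, 3, 768):
# N utterances x 3 crops (left, center, right) x 768-dim frozen MViT feature.
feats = np.load(feature_file)
print(feats.shape)
```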
## Ego4D Subset

1. All videos were downloaded from the [Ego4D official website](https://ego4d-data.org/).

2. Players might play multiple games in one long video, so we trimmed each video into segments, where each segment corresponds to exactly one game. The start and end timestamps can be found under `Ego4D/game_timestamp`. The trimmed videos are in `Ego4D/videos`. **Note: Players were asked to complete some surveys during the game. The script removes all footage during the surveys, which splits the video of one game into 2-3 segments. Each clip is named `{uid}_{gameid}_{part}`. Clips with the same `{uid}_{gameid}` and different `{part}` belong to one game, so they share one voting outcome** (see the grouping sketch after this list).

3. The transcripts are available under `Ego4D/transcripts`. We annotated the speaker and timestamp of every utterance.

4. The train/val/test split of the dataset is in `Ego4D/split`. To make things easier, we integrated both the voting outcomes and the persuasion strategy of each utterance into these json files. **You can obtain all annotations we provided by simply loading these files.**

5. We also annotated the voting outcomes, start roles and end roles of all players, which can be found in `Ego4D/vote_outcome_ego4d`. Players' names are listed in order from left to right in the videos.

6. We also released the video features used in our experiments, in `Ego4D/mvit_24_k400_features.zip`. In each numpy file, the features have the shape `(N, 3, 768)`. `N` corresponds to the `N` utterances in each game. For each utterance, we cropped the frame on the left, center and right, so there are `3` features per utterance. `768` is the length of the flattened feature vector. A 24-layer multiscale vision transformer (MViT) pretrained on Kinetics-400 was used for video feature extraction. **Note that we froze the video encoder during training.** We encourage users to explore other video encoders to get better video representations.
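Since one Ego4D game can be split into 2-3 clips, it is convenient to regroup clips by `{uid}_{gameid}` before attaching the single per-game voting outcome. A minimal sketch, assuming the trimmed clips sit in `Ego4D/videos` as `.mp4` files (adjust the directory and extension to your local copy):

```python
from collections import defaultdict
from pathlib import Path

# Regroup trimmed clips named {uid}_{gameid}_{part}: clips sharing the same
# {uid}_{gameid} belong to one game and share one voting outcome.
clips_by_game = defaultdict(list)
for clip in sorted(Path("Ego4D/videos").glob("*.mp4")):  # directory/extension are assumptions
    uid_gameid, _, part = clip.stem.rpartition("_")  # strip the trailing {part}
    clips_by_game[uid_gameid].append(clip.name)

for game, clips in clips_by_game.items():
    print(game, clips)  # typically 2-3 clips per game
```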
## BibTeX

Please cite our paper if the dataset is helpful to your research.

```BibTex
@inproceedings{lai2023werewolf,
  title={Werewolf among us: Multimodal resources for modeling persuasion behaviors in social deduction games},
  author={Lai, Bolin and Zhang, Hongxin and Liu, Miao and Pariani, Aryan and Ryan, Fiona and Jia, Wenqi and Hayati, Shirley Anugrah and Rehg, James and Yang, Diyi},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2023},
  pages={6570--6588},
  year={2023}
}
```