
DEBATE Benchmark

This repository contains CSV files from the DEBATE project: large-scale human conversation experiments on controversial and opinion-based topics. The data consists of multi-round conversations between participants discussing political, social, and belief-related questions, following the protocol described in:

Yun-Shiuan Chuang, Ruixuan Tu, Chengtao Dai, Smit Vasani, You Li, Binwei Yao, Michael Henry Tessler, Sijia Yang, Dhavan Shah, Robert Hawkins, Junjie Hu, & Timothy T. Rogers. (2025). DEBATE: A large-scale benchmark for evaluating opinion dynamics in role-playing LLM agents (arXiv:2510.25110) [Preprint].

Paper Link: https://arxiv.org/abs/2510.25110

Directory Structure

.
├── raw/                       # Raw exports
│   ├── depth/                 # Topic Set 1: Depth topics (fewer topics, more conversations each)
│   │   ├── [topic_name]/
│   │   │   ├── *.csv
│   │   │   └── ...
│   │   └── ...
│   └── breadth/               # Topic Set 2: Breadth topics (many topics, fewer conversations each)
│       ├── [topic_name]/
│       │   ├── *.csv
│       │   └── ...
│       └── ...
├── golden/                    # Curated golden subsets
│   ├── depth/
│   └── breadth/
├── README.md
└── VERSION_LOG.md

Data Organization

Depth Topic Set vs Breadth Topic Set

  • Depth Topics (7 Topics): Focused exploration of a smaller set of topics with multiple conversation sessions per topic

  • Breadth Topics (100 Topics): Broad coverage across many different topics with fewer sessions per topic

  • For the full list of topics, see the Appendix of the paper.

File Naming Convention

Each CSV file follows this naming pattern:

YYYYMMDD_HHMMSS_TOPIC_NAME_UNIQUE_ID.csv

Where:

  • YYYYMMDD: Date (Year/Month/Day)
  • HHMMSS: Time (Hour/Minute/Second)
  • TOPIC_NAME: Underscored topic description
  • UNIQUE_ID: 26-character unique identifier
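A filename following this pattern can be split into its parts with a short regex. This is a sketch only: the example filename below is made up, and the assumption that the 26-character identifier is alphanumeric is ours, not stated in this README.

```python
import re
from datetime import datetime

# Hypothetical filename built to match the documented pattern;
# the topic and ID are invented for illustration.
fname = "20240115_134502_GUN_CONTROL_01HQ3V8Z9XKJ4M2N6P7R8S9T0A.csv"

# 8-digit date, 6-digit time, underscored topic, 26-char ID (assumed alphanumeric).
m = re.match(r"^(\d{8})_(\d{6})_(.+)_([A-Za-z0-9]{26})\.csv$", fname)
date_part, time_part, topic, unique_id = m.groups()

timestamp = datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")
print(timestamp)  # 2024-01-15 13:45:02
print(topic)      # GUN_CONTROL
```

The greedy `(.+)` for the topic still leaves exactly 26 characters for the trailing ID because the ID group is anchored before `.csv`.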

Data File Structure

Each CSV file contains conversation data with the following key columns:

  • Event tracking: event_order, event_type
  • Participants: worker_id, sender_id, recipient_id
  • Content: text (messages, opinions, slider values)
  • Conversation flow: chat_round_order, message_id
  • User interaction: is_slider_changed (opinion rating changes)
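The columns above can be read with the standard `csv` module. The sketch below uses a tiny synthetic in-memory file (the rows and worker IDs are invented); real files would be opened with `open(path, newline="")` instead of `StringIO`.

```python
import csv
import io

# Two synthetic rows using the column names documented above.
sample = io.StringIO(
    "event_order,event_type,worker_id,sender_id,recipient_id,text,"
    "chat_round_order,message_id,is_slider_changed\n"
    "1,Initial Opinion,w1,w1,,[SLIDER_VALUE=4] I mostly agree.,0,m1,True\n"
    "2,tweet,w2,w2,w1,I see it differently.,1,m2,False\n"
)

rows = list(csv.DictReader(sample))

# Filter by event type, e.g. pull out each participant's starting position.
opinions = [r for r in rows if r["event_type"] == "Initial Opinion"]
print(len(rows), len(opinions))  # 2 1
```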

Event Types

  • Initial Opinion: Participant's starting position on the topic
  • tweet: Short messages during conversation
  • message_sent/message_received: Direct messages between participants

Special Notation

  • [SLIDER_VALUE=X]: Indicates participant's opinion rating (typically 1-5 scale)
  • [AUTOSUBMISSION DUE TO TIME LIMIT]: System-generated due to timeout
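Both notations can be detected in the `text` field with simple string checks; the example messages below are invented for illustration.

```python
import re

# Hypothetical text values showing the two special notations.
texts = [
    "[SLIDER_VALUE=4] I lean toward stricter rules.",
    "[AUTOSUBMISSION DUE TO TIME LIMIT]",
    "Just a plain message.",
]

SLIDER = re.compile(r"\[SLIDER_VALUE=(\d+)\]")

parsed = []
for t in texts:
    m = SLIDER.search(t)
    rating = int(m.group(1)) if m else None          # opinion rating, if any
    auto = "[AUTOSUBMISSION DUE TO TIME LIMIT]" in t  # timeout marker
    parsed.append((rating, auto))

print(parsed)  # [(4, False), (None, True), (None, False)]
```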

Data Usage

This dataset is suitable for research on:

  • Opinion dynamics and persuasion
  • Human-AI conversation patterns
  • Political and social belief systems
  • Argumentation and debate analysis
  • Consensus building on controversial topics

Data Quality

  • Files contain real human conversation data
  • Some conversations may be incomplete due to participant dropout
  • Time limits may have caused automatic submissions
  • Processed data may contain empty rows: consecutive messages from the same user are concatenated into a single message, which can leave empty rows behind in the processed dataset
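If the empty rows left by message concatenation are unwanted, they can be filtered out while reading. This sketch assumes an empty row is one whose fields are all blank; the sample data is synthetic.

```python
import csv
import io

# Synthetic file with one empty row of the kind described above.
sample = io.StringIO(
    "event_order,event_type,text\n"
    "1,tweet,Hello there\n"
    ",,\n"
    "2,tweet,Second point\n"
)

# Keep only rows where at least one field is non-blank.
rows = [r for r in csv.DictReader(sample)
        if any(v.strip() for v in r.values())]
print(len(rows))  # 2
```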

License & Usage Restrictions

This dataset is released under the DEBATE Dataset Research-Only License (Non-Commercial, v1.0); see the LICENSE file in this repository.

Citation

Please cite the following work when using this dataset in your research:

@article{chuang2025debate,
  title   = {DEBATE: A Large-Scale Benchmark for Evaluating Opinion Dynamics in Role-Playing LLM Agents},
  author  = {Chuang, Yun-Shiuan and Tu, Ruixuan and Dai, Chengtao and Vasani, Smit and Li, You and Yao, Binwei and Tessler, Michael Henry and Yang, Sijia and Shah, Dhavan and Hawkins, Robert and Hu, Junjie and Rogers, Timothy T.},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.25110},
  doi     = {10.48550/arXiv.2510.25110},
  url     = {https://arxiv.org/abs/2510.25110}
}