---
license: cc-by-4.0
task_categories:
- text-classification
- token-classification
- text-generation
task_ids:
- multi-class-classification
- sentiment-analysis
language:
- en
size_categories:
- 10K<n<100K
pretty_name: CreateDebate Arguments Dataset
tags:
- debate
- argumentation
- argument-mining
- stance-detection
- user-generated-content
---

# CreateDebate Arguments Dataset

## Dataset Description

### Dataset Summary

The CreateDebate Arguments Dataset is a comprehensive collection of 84,872 argumentative statements extracted from 500 curated debates on the CreateDebate platform. This dataset captures natural argumentative discourse in a structured, hierarchical format, making it valuable for research in argument mining, stance detection, argumentation theory, and computational linguistics.

Each entry represents a single argument within a debate, complete with metadata about the argument itself, its author, its position in the debate tree, and contextual information about the broader debate. The dataset preserves the conversational structure of online debates, including parent-child relationships between arguments and replies.

### Supported Tasks and Leaderboards

This dataset can be used for various NLP and computational argumentation tasks:

- **Stance Detection**: Classifying arguments according to their declared side in binary debates
- **Argument Quality Assessment**: Predicting argument scores based on community voting
- **Argument Mining**: Identifying and extracting argumentative components
- **Topic Classification**: Categorizing debates and arguments by topic
- **Argument Generation**: Training models to generate coherent argumentative text

### Languages

The dataset is primarily in English (en).

## Dataset Structure

### Data Instances

Each instance represents a single argument posted in a debate. Here's an example:

```json
{
  "argumentId": "arg997904",
  "parentId": -1,
  "userPoints": 17,
  "argumentBody": "The executive is more powerful than legislation because of the president's power to make executive orders. Although executive orders aren't written in the constitution, it is an implied power from Article 2. With this power, the president can make order which is similar to a law without having the check of congress, allowing him to work faster than congress. The president also has more power in his ability to reverse an executive order without review.",
  "date": 1572462800000,
  "argumentSide": "The executive is more powerful",
  "argumentPoints": 7,
  "argumentTag": "",
  "debateUrl": "https://www.createdebate.com/debate/show/4th_Block_Which_is_more_powerful_the_executive_or_legislative_branch",
  "debateTopic": "general",
  "debateTitle": "4th Block: Which is more powerful, the executive or legislative branch?",
  "debateDescription": "Make a claim and support your claim. Take intellectual risks and see what arguments you can come up with to support your position.",
  "debateStats": "{\"score\": 252, \"arguments\": 92, \"totalVotes\": 279}",
  "debateTags": "{\"The executive is more powerful\": 38, \"The legislative is ++ powerful\": 52}",
  "debateSides": "[{\"id\": 0, \"title\": \"The executive is more powerful\", \"points\": 104}, {\"id\": 1, \"title\": \"The legislative is ++ powerful\", \"points\": 148}]",
  "topic": "executive branch/legislative branch",
  "sides": "[\"The executive branch is more powerful than the legislative branch\", \"The legislative branch is more powerful than the executive branch\"]",
  "characterCount": 455,
  "context": "",
  "depth": 0
}
```

### Data Fields

#### Argument-Level Fields

- `argumentId` (string): Unique identifier for the argument
- `parentId` (int64): ID of the parent argument (-1 for top-level arguments)
- `userPoints` (int64): Total points accumulated by the user on the platform
- `argumentBody` (string): The full text content of the argument
- `date` (int64): Unix timestamp (milliseconds) when the argument was posted
- `argumentSide` (string): The side/stance the argument supports
- `argumentPoints` (int64): Score of the argument based on upvotes/downvotes
- `argumentTag` (string): Optional tag assigned to the argument
- `characterCount` (int64): Number of characters in the argument body
- `context` (string): Text of the parent argument (empty for top-level arguments)
- `depth` (int64): Position in the debate tree (0 = top-level, 1 = first reply, etc.)
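
The `depth` and `context` fields can be recomputed from `parentId` alone. The sketch below uses toy records rather than real dataset rows, and assumes `parentId` holds the parent's `argumentId` with `-1` marking top-level arguments:

```python
# Illustrative sketch: recompute `depth` and `context` for a set of
# arguments using only `argumentId` and `parentId`.
# Assumption: parentId == -1 marks a top-level argument.

def annotate_tree(arguments):
    """Add `depth` and `context` to each argument dict."""
    by_id = {a["argumentId"]: a for a in arguments}

    def depth_of(arg):
        parent = by_id.get(arg["parentId"])
        return 0 if parent is None else depth_of(parent) + 1

    for arg in arguments:
        parent = by_id.get(arg["parentId"])
        arg["depth"] = depth_of(arg)
        arg["context"] = parent["argumentBody"] if parent else ""
    return arguments

# Toy records (not actual dataset content)
args = annotate_tree([
    {"argumentId": "a1", "parentId": -1, "argumentBody": "Top-level claim."},
    {"argumentId": "a2", "parentId": "a1", "argumentBody": "A direct reply."},
    {"argumentId": "a3", "parentId": "a2", "argumentBody": "A counter-reply."},
])
```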

#### Debate-Level Fields

- `debateUrl` (string): URL of the debate
- `debateTopic` (string): Category/topic assigned by the platform
- `debateTitle` (string): Title of the debate
- `debateDescription` (string): Description or prompt provided by the debate creator
- `debateStats` (string): JSON string containing debate statistics (score, total arguments, total votes)
- `debateTags` (string): JSON string mapping sides to argument counts
- `debateSides` (string): JSON string containing side information (id, title, points)
- `topic` (string): Manually assigned topic keywords for research organization
- `sides` (string): JSON string containing expanded side descriptions
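
Because `debateStats`, `debateTags`, `debateSides`, and `sides` are stored as JSON strings, they must be decoded before use. A minimal sketch using Python's standard `json` module, with values copied from the example instance:

```python
import json

# `debateStats` and `debateSides` are JSON *strings*, not nested objects;
# decode them with json.loads before accessing their members.
record = {
    "debateStats": "{\"score\": 252, \"arguments\": 92, \"totalVotes\": 279}",
    "debateSides": "[{\"id\": 0, \"title\": \"The executive is more powerful\", \"points\": 104}]",
}

stats = json.loads(record["debateStats"])   # dict with score/arguments/totalVotes
sides = json.loads(record["debateSides"])   # list of side dicts
```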

### Data Splits

The dataset as released contains all 84,872 arguments in a single collection. Users may wish to create their own train/validation/test splits based on their specific research needs. Potential splitting strategies include:

- Random split of arguments
- Temporal split based on posting date
- Debate-level split (ensuring all arguments from a debate stay in the same split)
- Topic-based split for domain adaptation studies
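
For example, a debate-level split can be made deterministic by hashing `debateUrl`, so every argument from one debate lands in the same partition. A minimal sketch (field names are from this dataset; the URLs below are placeholders):

```python
import hashlib

def split_of(debate_url, test_fraction=0.2):
    """Assign a debate to 'train' or 'test' deterministically by URL hash."""
    digest = hashlib.md5(debate_url.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return "test" if bucket < test_fraction * 100 else "train"

# Placeholder records: a1 and a2 share a debate, so they must share a split.
arguments = [
    {"argumentId": "a1", "debateUrl": "https://example.org/debate/1"},
    {"argumentId": "a2", "debateUrl": "https://example.org/debate/1"},
    {"argumentId": "a3", "debateUrl": "https://example.org/debate/2"},
]
splits = {a["argumentId"]: split_of(a["debateUrl"]) for a in arguments}
```

Hashing the URL (rather than shuffling rows) keeps the split reproducible across runs and prevents leakage of debate context between train and test.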

## Dataset Creation

### Curation Rationale

Online debate platforms provide a rich source of natural argumentative discourse, but most existing argument mining datasets are either small-scale, domain-specific, or artificially constructed. The CreateDebate platform offers:

1. **Hierarchical structure**: Arguments are organized in tree-like conversations with replies and counter-arguments
2. **Explicit stance labeling**: Users declare their position when posting
3. **Community evaluation**: Voting mechanisms provide quality signals
4. **Topical diversity**: Debates span politics, science, philosophy, entertainment, and more
5. **Natural language**: User-generated content reflects authentic argumentative strategies

This dataset was curated to support research in computational argumentation while ensuring quality, diversity, and ethical considerations.

### Source Data

#### Initial Data Collection and Normalization

Data was collected from CreateDebate (www.createdebate.com) through a two-phase automated process:

**Phase One: Debate Discovery**
- Used a Selenium-based web crawler to navigate CreateDebate's debate listings
- Configured filters to include all topical categories
- Restricted to "for/against" format debates for clearer stance structure
- Sorted by number of arguments to prioritize substantive discussions
- Included both open (ongoing) and closed (finished) debates
- Collected 7,511 unique debates from the first 100 listing pages

**Phase Two: Argument Extraction**
- Visited each debate URL to extract the complete argument tree
- Scraped argument text, timestamp, stance, score, and hierarchical position
- Collected debate-level metadata (title, description, statistics)
- Preserved parent-child relationships for tree reconstruction

#### Manual Curation

From the 7,511 debates initially collected, 500 were manually curated:

1. **Content filtering**: Removed debates on inappropriate or sensitive topics
2. **Deduplication**: Eliminated near-duplicate debates on the same issue
3. **Quality verification**: Ensured debates were coherent and substantive
4. **Topic assignment**: Manually assigned topic keywords for organization
5. **Structure enhancement**: Added `depth` and `context` fields to capture conversational structure

The curation process resulted in 84,872 high-quality arguments across 500 diverse debates.

### Annotations

#### Annotation Process

The primary annotations in this dataset are user-generated:

- **Stance labels**: Users select a side when posting arguments
- **Quality scores**: Community members vote on argument quality
- **Topics**: Platform categorization plus manual topic assignment during curation

Additional structural annotations (`depth`, `context`) were computed programmatically based on the debate tree structure.

### Personal and Sensitive Information

All user-identifying information has been removed from the public release to preserve privacy. The dataset contains only the text of arguments and metadata about debates. Some arguments may reference public figures or discuss sensitive topics, as is typical in political and philosophical debates.

## Considerations for Using the Data

### Social Impact of Dataset

**Positive Impacts:**
- Advances research in computational argumentation and NLP
- Enables development of tools to analyze and understand online discourse
- Supports education and training in critical thinking and argumentation
- Provides insights into how people reason about complex issues

**Potential Risks:**
- Models trained on this data may reflect biases present in online debate communities
- Arguments may contain false information or flawed reasoning that models could learn
- Debate platform users may not be demographically representative

### Discussion of Biases

**Platform Bias**: CreateDebate users are self-selected and may not represent broader populations. The platform attracts users interested in debate and argumentation.

**Topical Bias**: While curated for diversity, some topics are more heavily represented than others. Political debates are particularly common.

**Quality Variance**: User-generated content varies significantly in quality, from well-researched arguments to informal opinions.

**Temporal Bias**: Arguments reflect the time periods when they were posted and may reference contemporary events.

**Structural Bias**: The "for/against" format simplifies complex issues into binary positions, which may not capture the full nuance of many topics.

### Other Known Limitations

- Some arguments may be off-topic or tangential to the debate question
- Sarcasm, humor, and rhetorical devices may be difficult to detect automatically
- The tree structure can be complex, with some debates having very deep nesting
- Duplicate or near-duplicate arguments may exist within debates
- Some debates may be abandoned or have very few arguments
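
As one simple mitigation for the duplicate-argument issue, exact duplicates of case- and whitespace-normalized text can be dropped within each debate. An illustrative sketch on toy records (this is not part of the released dataset's processing):

```python
# Illustrative sketch: drop exact duplicates of normalized argument text
# within each debate. Near-duplicates with different wording are not caught;
# a fuzzier similarity measure would be needed for those.

def deduplicate(arguments):
    seen, kept = set(), []
    for a in arguments:
        # Normalize case and collapse whitespace before comparing.
        key = (a["debateUrl"], " ".join(a["argumentBody"].lower().split()))
        if key not in seen:
            seen.add(key)
            kept.append(a)
    return kept

# Toy records: the second entry duplicates the first after normalization.
args = deduplicate([
    {"debateUrl": "u1", "argumentBody": "Dogs are great."},
    {"debateUrl": "u1", "argumentBody": "dogs are  great."},
    {"debateUrl": "u2", "argumentBody": "Dogs are great."},
])
```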

### Licensing Information

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Users are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material for any purpose, even commercially

Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made

<!-- ### Citation Information

If you use this dataset in your research, please cite:

```bibtex
@dataset{createdebate_arguments_2025,
  title={CreateDebate Arguments Dataset: A Curated Collection of Hierarchical Argumentative Discourse},
  author={[Your Name/Organization]},
  year={2025},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/datasets/[your-repo-name]}}
}
``` -->

### Contributions

We welcome contributions and feedback! If you find issues with the dataset or have suggestions for improvements, please open an issue or pull request in the dataset repository.

### Acknowledgments

- CreateDebate platform and its community for providing the source content
- The Selenium project for enabling web scraping capabilities
- HuggingFace for dataset hosting infrastructure