Topics Winning Arguments (TWA)

Topics Winning Arguments (TWA) is a topic-organized extension of the Winning Arguments (WA) dataset, designed to support topic-aware analysis of persuasion in online discussions.

TWA is derived from Reddit's r/ChangeMyView (CMV) subreddit and preserves the original structure, splits, and annotations of WA, while introducing explicit topic assignments obtained via neural topic modeling.


📌 What is in the dataset?

Each data point corresponds to:

  • a CMV original post (OP), and
  • one or more argument pairs, where:
    • one argument successfully persuaded the OP (received a Δ),
    • the other is a closely matched unsuccessful argument.

Arguments are grouped by:

  • topic
  • train / test split
  • CMV thread

🧠 Topics

Arguments are clustered into four high-level topics using BERTopic:

  1. Food and Culture
  2. Religion and Ethical Debates
  3. Economics and Politics
  4. Gender, Sexuality, and Minority Rights

Topic modeling details, statistics, and examples are provided in the associated paper.


πŸ“ Dataset structure


TWA/
├── train/
│   ├── topic_<id>_<name>/
│   │   ├── <doc_id>/
│   │   │   ├── op.json
│   │   │   └── pairs.json
├── test/
│   └── ...
├── metadata/
│   ├── document_assignments.csv
│   └── topic_info.csv

File descriptions

op.json

Contains the original CMV post:

{
  "doc_id": "...",
  "op_user": "...",
  "op_title": "...",
  "op_text": "...",
  "pair_ids": [...],
  "topic_id": "...",
  "topic_name": "...",
  "split": "train"
}

pairs.json

A list of persuasive / non-persuasive argument pairs:

[
  {
    "pair_id": "p_XXXX",
    "success": "...",
    "unsuccess": "..."
  }
]
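Given this layout, a small helper can iterate over every document in a split. The sketch below is illustrative: the `iter_documents` name is ours, and it assumes only the directory structure and file names shown above.

```python
import json
from pathlib import Path

def iter_documents(root):
    """Yield (op, pairs) tuples for every document under a TWA split tree.

    `root` is assumed to point at a split directory such as TWA/train/,
    laid out as <root>/topic_<id>_<name>/<doc_id>/{op.json, pairs.json}.
    """
    root = Path(root)
    for topic_dir in sorted(root.iterdir()):
        if not topic_dir.is_dir():
            continue
        for doc_dir in sorted(topic_dir.iterdir()):
            op_file = doc_dir / "op.json"
            pairs_file = doc_dir / "pairs.json"
            # Skip anything that does not match the documented layout
            if not (op_file.exists() and pairs_file.exists()):
                continue
            with open(op_file, encoding="utf-8") as f:
                op = json.load(f)
            with open(pairs_file, encoding="utf-8") as f:
                pairs = json.load(f)
            yield op, pairs
```

Because the helper yields lazily, a full split can be scanned without loading everything into memory at once.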

🚀 How to use the dataset

Example (Python)

import json
from pathlib import Path

# Point at a single document directory: <split>/<topic folder>/<doc_id>
doc_dir = Path("TWA/train/topic_2_Religion_and_Ethical_Debates/t3_2ro9ux")

# Original CMV post
with open(doc_dir / "op.json", encoding="utf-8") as f:
    op = json.load(f)

# Matched persuasive / non-persuasive argument pairs
with open(doc_dir / "pairs.json", encoding="utf-8") as f:
    pairs = json.load(f)

print(op["op_text"])
print(pairs[0]["success"])

The structure is compatible with:

  • Hugging Face datasets
  • PyTorch / TensorFlow pipelines
  • custom evaluation scripts
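For classification-style pipelines, each pair can be flattened into two labeled examples. This is a minimal sketch: the `pairs_to_examples` name and the 1/0 labelling scheme are our illustrative choices, not part of the dataset.

```python
def pairs_to_examples(pairs):
    """Flatten TWA argument pairs into (text, label) tuples.

    Label 1 marks the argument that persuaded the OP (earned a Δ);
    label 0 marks its closely matched unsuccessful counterpart.
    """
    examples = []
    for pair in pairs:
        examples.append((pair["success"], 1))
        examples.append((pair["unsuccess"], 0))
    return examples
```

The resulting list of (text, label) tuples can be fed directly to a tokenizer or wrapped in a PyTorch / TensorFlow dataset object.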

📚 Credits

Original datasets

  • Winning Arguments: Tan et al., "Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions", WWW 2016.

  • Change My View (CMV) Reddit community: https://www.reddit.com/r/changemyview/

TWA fully acknowledges and builds upon the original annotations and data collection efforts of these works.


📜 License

This dataset is released under the Creative Commons Attribution–NonCommercial–NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.

You are free to:

  • Share – copy and redistribute the material in any medium or format

Under the following terms:

  • Attribution – You must give appropriate credit to the authors of TWA, provide a link to the license, and indicate if changes were made.
  • NonCommercial – You may not use the material for commercial purposes.
  • NoDerivatives – You may not distribute modified versions of the dataset.

No additional restrictions apply beyond those described in the license.

🔗 Full license text: https://creativecommons.org/licenses/by-nc-nd/4.0/


🔎 Note on original data sources

TWA is derived from:

  • the Winning Arguments dataset, and
  • Reddit's r/ChangeMyView (CMV) content.

Users of TWA are responsible for complying with the original terms and conditions of these sources where applicable.


πŸ“ Citation

If you use TWA, please cite:

@inproceedings{labruna2026detecting,
  title     = {Detecting Winning Arguments with Large Language Models and Persuasion Strategies},
  author    = {Labruna, Tiziano and Modzelewski, Arkadiusz and
               Satta, Giorgio and Da San Martino, Giovanni},
  booktitle = {Proceedings of the 19th Conference of the European Chapter
               of the Association for Computational Linguistics (EACL)},
  year      = {2026}
}

📬 Contact

For questions, issues, or suggestions, please open a discussion on the Hugging Face dataset page or contact the authors of the paper at tiz.labruna@gmail.com.
