license: mit
configs:
  - config_name: Mono
    default: true
    data_files:
      - split: train
        path: Mono/train.csv
      - split: train_rules
        path: Mono/train_rules.csv
      - split: validation
        path: Mono/val.csv
      - split: validation_rules
        path: Mono/val_rules.csv
      - split: validation_CoT_rules
        path: Mono/val_rules_cot.csv
      - split: test
        path: Mono/test.csv
      - split: test_rules
        path: Mono/test_rules.csv
      - split: test_CoT_rules
        path: Mono/test_rules_cot.csv
  - config_name: Meta
    data_files:
      - split: train
        path: Meta/train.csv
      - split: train_rules
        path: Meta/train_rules.csv
      - split: validation
        path: Meta/val.csv
      - split: validation_rules
        path: Meta/val_rules.csv
      - split: validation_CoT_rules
        path: Meta/val_rules_cot.csv
      - split: test
        path: Meta/test.csv
      - split: test_rules
        path: Meta/test_rules.csv
      - split: test_CoT_rules
        path: Meta/test_rules_cot.csv

Rail2Country


This is the dataset accompanying the paper "ActivationReasoning: Logical Reasoning in Latent Activation Spaces". Rail2Country is a novel benchmark designed to test the ability of Large Language Models (LLMs) to perform deductive reasoning over concepts when those concepts are expressed indirectly or abstractly. Inspired by earlier train-based reasoning tasks, each instance requires mapping a train's car-color sequence to its country of origin, based on a set of flag color-to-country rules encoded in the logic component.

The dataset includes two distinct variants to evaluate generalization:

  • R2C-Mono: In this setting, colors are explicitly stated in the train descriptions (e.g., "red").
  • R2C-Meta: This variant replaces explicit color mentions with similes and descriptive proxies (e.g., "colored like a tomato"), forcing the model to resolve implicit cues by integrating contextual phrasing and world knowledge.

This dataset enables researchers to investigate how well LLMs can generalize from explicit lexical concepts to abstract, meta-level descriptions.

Dataset usage

from datasets import load_dataset

# Mono variant: explicit color mentions
ds_mono = load_dataset("AIML-TUDA/Rail2Country", "Mono")

# Meta variant: similes and descriptive proxies
ds_meta = load_dataset("AIML-TUDA/Rail2Country", "Meta")

Dataset subsets

The dataset is structured into two main subsets, Mono and Meta, designed to evaluate concept generalization and reasoning robustness.

R2C-Mono (Monosemantic)

This configuration focuses on explicit reasoning where the colors are clearly and lexically stated (e.g., red). This typically results in stable, robust Sparse Autoencoder (SAE) activations for concept detection.

R2C-Meta (Meta-level)

This configuration challenges models to generalize beyond explicit terms by replacing colors with similes and descriptive proxies, such as colored like a tomato. Successfully identifying the color cue requires integrating contextual phrasing and world knowledge, a task that scatters latent activations and introduces noise for standard SAEs.

Splits and Rule Formats

Each subset provides three standard splits (train, validation, and test), along with additional files that support different experimental configurations:

  • Standard Splits (train, validation, test): These contain the main task prompt and the ground-truth country label.
  • Rules Splits (suffixed with _rules): These files encode the context information (the country-to-color-sequence mapping) as logical rules, designed for symbolic reasoning modules such as the one used in the ActivationReasoning (AR) framework.
  • Chain-of-Thought (CoT) Splits (suffixed with _CoT_rules): Available for validation and test only, these files pair the rules with a Chain-of-Thought instruction, so that LLM baselines can be tested with the same structured information the AR framework receives.
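Under this naming scheme, the split string passed to load_dataset can be derived mechanically. A minimal sketch (the split_name helper below is illustrative, not part of the dataset):

```python
def split_name(base: str, variant: str = "standard") -> str:
    """Build a Rail2Country split name from a base split and rule variant.

    base:    "train", "validation", or "test"
    variant: "standard" (task prompts), "rules" (logical rule format),
             or "cot_rules" (rules plus a Chain-of-Thought instruction)
    """
    if variant == "cot_rules" and base == "train":
        # Per the dataset config, CoT rules files exist only for
        # the validation and test splits.
        raise ValueError("CoT rules splits exist only for validation and test")
    suffix = {"standard": "", "rules": "_rules", "cot_rules": "_CoT_rules"}
    return base + suffix[variant]


# e.g. load_dataset("AIML-TUDA/Rail2Country", "Meta",
#                   split=split_name("test", "cot_rules"))
print(split_name("validation", "cot_rules"))  # validation_CoT_rules
```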

Dataset Columns

| Column Name | Type | Description |
|-------------|------|-------------|
| prompt | str | The task prompt containing the train description (including car colors/similes, chassis details, and cargo). |
| label | str | The correct country of origin for the prompt's train, corresponding to the flag color sequence. |
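A hypothetical row, sketched with pandas to show the two-column shape (the prompt text and label below are invented for illustration; real rows come from the dataset's CSV files):

```python
import pandas as pd

# Invented example row mirroring the schema above; actual prompts
# describe a full train (car colors/similes, chassis, cargo).
sample = pd.DataFrame(
    {
        "prompt": [
            "The train has a car colored like a tomato followed by a "
            "snow-white car. Which country does this train come from?"
        ],
        "label": ["Poland"],  # hypothetical: a red-white flag sequence
    }
)

print(list(sample.columns))  # ['prompt', 'label']
```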

🔍 Citation

If you use this dataset or the accompanying code, please cite:

@inproceedings{helff2025activationreasoning,
  title={ActivationReasoning: Logical Reasoning in Latent Activation Spaces},
  author={Lukas Helff and Ruben Härle and Wolfgang Stammer and Felix Friedrich and Manuel Brack and Antonia Wüst and Hikaru Shindo and Patrick Schramowski and Kristian Kersting},
  booktitle={NeurIPS 2025 Workshop on Foundations of Reasoning in Language Models},
  year={2025},
}