---
license: mit
task_categories:
- multiple-choice
- question-answering
language:
- en
tags:
- spatial-reasoning
- embodied-cognition
- benchmark
- evaluation
size_categories:
- 1K<n<10K
---
# Tractatus-Eval: Spatial Embodied Logic Benchmark
A benchmark for evaluating whether LLMs can reason about hard physical constraints (walls, obstacles, grid boundaries), a task that requires embodied spatial understanding.
Each sample is a 5×5 grid navigation problem with A*-computed ground truth and physics-engine-validated distractors.
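The ground truth for each sample is the optimal path computed by A* search. A minimal sketch of that computation on a 5×5 grid is below; the `astar_len` helper and the `#`-for-wall grid encoding are illustrative assumptions, not the benchmark's actual code.

```python
import heapq

def astar_len(grid, start, goal):
    """Shortest 4-connected path length on a grid; '#' cells are walls.
    Returns None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan distance: admissible heuristic for 4-connected moves.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best_g = {start: 0}
    while open_set:
        _, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# Example 5x5 grid: two horizontal walls force the path along the left edge.
grid = [
    ".....",
    ".###.",
    ".....",
    ".###.",
    ".....",
]
print(astar_len(grid, (0, 0), (4, 4)))  # → 8
```

A distractor choice would then be any path length or move sequence that this check rejects, e.g. one that walks through a `#` cell.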
- 1,000 samples | 4 choices each | 0% data contamination
- Compatible with EleutherAI lm-evaluation-harness
GitHub: AlexFlanker/tractatus-eval