---
license: gpl-3.0
language:
- en
tags:
- world-models
- arc
- abstraction-reasoning-corpus
- reasoning
- abstract-reasoning
- logical-reasoning
- causal-reasoning
- counterfactual-reasoning
- evaluation
- benchmark
- structural-causal-models
- causal-discovery
- test-time-training
- few-shot-learning
- in-context-learning
task_categories:
- question-answering
pretty_name: CausalARC – Abstract Reasoning with Causal World Models
---

<p align="center">
<img src='https://jmaasch.github.io/carc/static/images/header.png' width="100%" class="center">
<!--
Jacqueline R. M. A. Maasch<sup>1</sup>, John Kalantari<sup>2</sup>, Kia Khezeli<sup>2</sup>

<sup>1</sup> Cornell Tech, New York, NY

<sup>2</sup> YRIKKA, New York, NY -->
</p>

<p align="center">
<b>NeurIPS 2025 LAW Workshop ★ Spotlight Paper</b>
<br>
<b>Amazon AGI Trusted AI Symposium 2026 ★ Poster</b>
<br>
<b>See our official project page here: <a href="https://jmaasch.github.io/carc/" target="_blank">https://jmaasch.github.io/carc/</a></b>
</p>

# Overview

On-the-fly reasoning often requires adaptation to novel problems under limited data and distribution shift. This work introduces CausalARC: an experimental testbed for AI reasoning in low-data and out-of-distribution regimes, modeled after the Abstraction and Reasoning Corpus (ARC). Each CausalARC reasoning task is sampled from a fully specified <i>causal world model</i>, formally expressed as a structural causal model (SCM). Principled data augmentations provide observational, interventional, and counterfactual feedback about the world model in the form of few-shot, in-context learning demonstrations. As a proof of concept, we illustrate the use of CausalARC in four language model evaluation settings: (1) abstract reasoning with test-time training, (2) counterfactual reasoning with in-context learning, (3) program synthesis, and (4) causal discovery with logical reasoning. Within- and between-model performance varied heavily across tasks, indicating significant room for improvement in language model reasoning.

<p align="center">
<img src='https://jmaasch.github.io/carc/static/images/pch.png' width="35%" class="center">
<i>Pearl Causal Hierarchy: observing factual realities (L1), exerting actions to induce interventional realities (L2), and imagining alternate counterfactual realities (L3) [<a href="https://dl.acm.org/doi/pdf/10.1145/3501714.3501743?casa_token=hJAJZQLNGbEAAAAA:exuQk37fuXGMkpOVJEKACgnupjkP-adDQGhv2YzfBN9MfoERkAcHQRDgT3myWfccfqucQd8h63Q" target="_blank">1</a>]. Lower levels generally underdetermine higher levels.</i>
</p>

This work extends and reconceptualizes the ARC setup to support causal reasoning evaluation under limited data and distribution shift. Given a fully specified SCM, all three levels of the Pearl Causal Hierarchy (PCH) are well-defined: any observational (L1), interventional (L2), or counterfactual (L3) query can be answered about the environment under study [2]. This formulation makes CausalARC an open-ended playground for testing reasoning hypotheses at all three levels of the PCH, with an emphasis on abstract, logical, and counterfactual reasoning.
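
To make the three levels concrete, below is a minimal sketch of a fully specified SCM in plain Python, with one query answered at each level of the PCH. The variables, mechanisms, and function names are hypothetical and chosen purely for illustration; this is not the CausalARC implementation.

```python
import random

# Toy SCM (hypothetical, for illustration only): exogenous noise U = (u1, u2),
# endogenous variables X and Y with mechanisms
#   X := u1
#   Y := X + u2

def sample_exogenous(rng):
    """Draw one exogenous context u = (u1, u2)."""
    return {"u1": rng.randint(0, 5), "u2": rng.randint(0, 5)}

def solve(u, do=None):
    """Evaluate the structural equations under an optional intervention, e.g. do(X=3)."""
    do = do or {}
    x = do.get("X", u["u1"])      # X := u1 unless overridden by do(X=x)
    y = do.get("Y", x + u["u2"])  # Y := X + u2 unless overridden by do(Y=y)
    return {"X": x, "Y": y}

rng = random.Random(0)
u = sample_exogenous(rng)

# L1 (observational): a factual sample from the unmutated model M.
factual = solve(u)

# L2 (interventional): a sample from the submodel M' induced by do(X=3),
# drawn under a fresh exogenous context.
interventional = solve(sample_exogenous(rng), do={"X": 3})

# L3 (counterfactual): reuse the SAME exogenous context u as the factual
# sample, then intervene: what would Y have been had X been 3?
counterfactual = solve(u, do={"X": 3})

print(factual, interventional, counterfactual)
```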

<p align="center">
<img src='https://jmaasch.github.io/carc/static/images/demo.png' width="100%" class="center">
<i>The CausalARC testbed. <b>(A)</b> First, SCM <i>M</i> is manually transcribed in Python code. <b>(B)</b> Input-output pairs are randomly sampled, providing observational (L1) learning signals about the world model. <b>(C)</b> Sampling from interventional submodels <i>M'</i> of <i>M</i> yields interventional (L2) samples (x', y'). Given a pair (x, y), performing multiple interventions while holding the exogenous context constant yields a set of counterfactual (L3) pairs. <b>(D)</b> Using L1 and L3 pairs as in-context demonstrations, we can automatically generate natural language prompts for diverse reasoning tasks.</i>
</p>
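
As a rough sketch of steps (B) through (D) above, the snippet below reuses the same toy mechanisms to sample observational pairs, derive a set of counterfactual pairs by holding one exogenous context fixed under several interventions, and assemble the result into a minimal in-context prompt. The helper names, intervention set, and prompt template are all hypothetical; the actual CausalARC task format is defined by the dataset itself.

```python
import random

# Compact restatement of the toy mechanisms above (hypothetical):
#   X := u1,  Y := X + u2
def sample_exogenous(rng):
    return {"u1": rng.randint(0, 5), "u2": rng.randint(0, 5)}

def solve(u, do=None):
    do = do or {}
    x = do.get("X", u["u1"])
    y = do.get("Y", x + u["u2"])
    return x, y

def observational_pairs(rng, n):
    """(B) Randomly sample n factual (x, y) pairs: L1 learning signals."""
    return [solve(sample_exogenous(rng)) for _ in range(n)]

def counterfactual_pairs(u, interventions):
    """(C) Hold the exogenous context u constant and apply each intervention,
    yielding a set of counterfactual (L3) pairs for one factual pair."""
    return [solve(u, do=do) for do in interventions]

def build_prompt(demos, query_x):
    """(D) Format demonstrations plus a held-out query as a text prompt."""
    lines = [f"Input: {x} -> Output: {y}" for x, y in demos]
    lines.append(f"Input: {query_x} -> Output: ?")
    return "\n".join(lines)

rng = random.Random(0)
u = sample_exogenous(rng)
demos = observational_pairs(rng, 3) + counterfactual_pairs(u, [{"X": 1}, {"X": 4}])
print(build_prompt(demos, query_x=2))
```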