---
license: gpl-3.0
language:
- en
tags:
- world-models
- arc
- abstraction-reasoning-corpus
- reasoning
- abstract-reasoning
- logical-reasoning
- causal-reasoning
- counterfactual-reasoning
- evaluation
- benchmark
- structural-causal-models
- causal-discovery
- test-time-training
- few-shot-learning
- in-context-learning
task_categories:
- question-answering
pretty_name: "CausalARC: Abstract Reasoning with Causal World Models"
---



<p align="center">
  <img src='https://jmaasch.github.io/carc/static/images/header.png' width="100%" class="center">
  <!--
  Jacqueline R. M. A. Maasch<sup>1</sup>, John Kalantari<sup>2</sup>, Kia Khezeli<sup>2</sup>
  <sup>1</sup> Cornell Tech, New York, NY
  <sup>2</sup> YRIKKA, New York, NY -->
</p>

<p align="center">
  <b>NeurIPS 2025 LAW Workshop ★ Spotlight Paper </b>
  <br>
  <b>Amazon AGI Trusted AI Symposium 2026 ★ Poster</b>
  <br>
  <b>See our official project page here: <a href="https://jmaasch.github.io/carc/"
       target="_blank">https://jmaasch.github.io/carc/</a> </b>
</p>

# Overview

On-the-fly reasoning often requires adaptation to novel problems under limited data and distribution shift. 
This work introduces CausalARC: an experimental testbed for AI reasoning in low-data and out-of-distribution regimes, 
modeled after the Abstraction and Reasoning Corpus (ARC). Each CausalARC reasoning task is sampled from a fully specified 
<i>causal world model</i>, formally expressed as a structural causal model. 
Principled data augmentations provide observational, interventional, and counterfactual feedback about the world model in 
the form of few-shot, in-context learning demonstrations. As a proof-of-concept, we illustrate the use of CausalARC for four 
language model evaluation settings: (1) abstract reasoning with test-time training, (2) counterfactual reasoning with 
in-context learning, (3) program synthesis, and (4) causal discovery with logical reasoning. Within- and between-model 
performance varied substantially across tasks, indicating considerable room for improvement in language model reasoning.
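To make the setup concrete, here is a minimal sketch of a toy SCM with observational (L1) and interventional (L2) sampling. This is illustrative only and not the dataset's actual API; all names (`scm`, `sample_exogenous`, the variables `x` and `y`) are hypothetical:

```python
import random

def sample_exogenous(rng):
    # Exogenous context U: independent noise terms, one per endogenous variable.
    return {"u_x": rng.random(), "u_y": rng.random()}

def scm(u, do=None):
    # Structural equations of a toy SCM M over binary (x, y).
    # An intervention do={"x": v} replaces x's equation with the constant v,
    # defining the interventional submodel M' = M_{do(x=v)}.
    do = do or {}
    x = do.get("x", int(u["u_x"] < 0.5))
    y = do.get("y", (x + int(u["u_y"] < 0.3)) % 2)
    return {"x": x, "y": y}

rng = random.Random(0)

# L1: an observational sample from M.
obs = scm(sample_exogenous(rng))

# L2: a sample from the interventional submodel M_{do(x=1)}.
interv = scm(sample_exogenous(rng), do={"x": 1})
assert interv["x"] == 1
```

Because the world model is fully specified in code, arbitrarily many such samples can be drawn to build few-shot demonstrations.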

<p align="center">
  <img src='https://jmaasch.github.io/carc/static/images/pch.png' width="35%" class="center">
  <i>Pearl Causal Hierarchy: observing factual realities (L1), exerting actions to 
    induce interventional realities (L2), and imagining alternate counterfactual realities (L3) 
    [<a href="https://dl.acm.org/doi/pdf/10.1145/3501714.3501743?casa_token=hJAJZQLNGbEAAAAA:exuQk37fuXGMkpOVJEKACgnupjkP-adDQGhv2YzfBN9MfoERkAcHQRDgT3myWfccfqucQd8h63Q"
     target="_blank">1</a>]. Lower levels generally underdetermine higher levels.
  </i>
</p>

This work extends and reconceptualizes the ARC setup to support causal reasoning evaluation under limited data and distribution shift. 
Given a fully specified SCM, all three levels of the Pearl Causal Hierarchy (PCH) are well-defined: any observational (L1), interventional 
(L2), or counterfactual (L3) query can be answered about the environment under study [2]. This 
formulation makes CausalARC an open-ended playground for testing reasoning hypotheses at all 
three levels of the PCH, with an emphasis on abstract, logical, and counterfactual reasoning. 
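The caption's note that lower levels underdetermine higher levels can be verified exactly on a toy pair of SCMs (hypothetical example, not drawn from the dataset): two binary models with opposite causal directions that agree observationally but disagree under intervention.

```python
from itertools import product

# Two toy SCMs over binary (x, y) with noise n1 ~ Bern(0.5), n2 ~ Bern(0.3).
def scm_a(n1, n2, do_x=None):          # x -> y
    x = n1 if do_x is None else do_x
    return x, x ^ n2

def scm_b(n1, n2, do_x=None):          # y -> x
    y = n1
    x = (y ^ n2) if do_x is None else do_x
    return x, y

def dist(scm, **kw):
    # Exact distribution over (x, y), computed by enumerating the noise.
    d = {}
    for n1, n2 in product([0, 1], repeat=2):
        p = 0.5 * (0.3 if n2 else 0.7)
        xy = scm(n1, n2, **kw)
        d[xy] = d.get(xy, 0.0) + p
    return d

# Identical observational (L1) distributions ...
assert dist(scm_a) == dist(scm_b)
# ... but different interventional (L2) distributions under do(x=1):
# under scm_a, y responds to the intervention; under scm_b, it does not.
assert dist(scm_a, do_x=1) != dist(scm_b, do_x=1)
```

Observational data alone cannot distinguish these two worlds, which is precisely why interventional and counterfactual feedback carries additional learning signal.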

<p align="center">
  <img src='https://jmaasch.github.io/carc/static/images/demo.png' width="100%" class="center">
  <i> The CausalARC testbed. <b>(A)</b> First, SCM <i>M</i> is manually transcribed in Python code. <b>(B)</b>
    Input-output pairs are randomly sampled, providing observational (L1) learning signals about the
    world model. <b>(C)</b> Sampling from interventional submodels <i>M</i>' of <i>M</i> yields interventional (L2)
    samples (x', y'). Given pair (x, y), performing multiple interventions while holding the exogenous
    context constant yields a set of counterfactual (L3) pairs. <b>(D)</b> Using L1 and L3 pairs as in-context
    demonstrations, we can automatically generate natural language prompts for diverse reasoning tasks.
  </i>
</p>
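Steps (C) and (D) can be sketched as follows: holding one exogenous context fixed while applying multiple interventions yields counterfactual (L3) pairs, which are then formatted as natural language demonstrations. This is a hypothetical sketch, not the testbed's actual code; the SCM, variable names, and prompt template are all illustrative:

```python
import random

def scm(u, do=None):
    # Structural equations of a toy SCM over binary (x, y);
    # u is the exogenous context.
    do = do or {}
    x = do.get("x", int(u["u_x"] < 0.5))
    y = do.get("y", (x + int(u["u_y"] < 0.3)) % 2)
    return x, y

rng = random.Random(7)
u = {"u_x": rng.random(), "u_y": rng.random()}   # fix one exogenous context

factual = scm(u)                                 # the factual pair (x, y)

# (C): multiple interventions under the SAME u yield a set of
# counterfactual (L3) pairs.
counterfactuals = {v: scm(u, do={"x": v}) for v in (0, 1)}

# (D): format the pairs as in-context demonstrations for a prompt.
prompt = "\n".join(
    f"Had x been {v}, the outcome would have been {xy}."
    for v, xy in counterfactuals.items()
)
```

Reusing `u` across interventions is what distinguishes a counterfactual (L3) pair from a fresh interventional (L2) sample, where the exogenous context is redrawn.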