---
license: gpl-3.0
---

<p>
<b>Note: This page is currently under construction. See our official project page here: <a href="https://jmaasch.github.io/carc/" target="_blank">https://jmaasch.github.io/carc/</a></b>
<br>
<br>
Reasoning requires adaptation to novel problem settings under limited data and distribution shift. This work introduces CausalARC: an experimental testbed for AI reasoning in low-data and out-of-distribution regimes, modeled after the Abstraction and Reasoning Corpus (ARC). Each CausalARC reasoning task is sampled from a fully specified causal world model, formally expressed as a structural causal model (SCM). Principled data augmentations provide observational, interventional, and counterfactual feedback about the world model in the form of few-shot, in-context learning demonstrations. As a proof of concept, we illustrate the use of CausalARC in four language model evaluation settings: (1) abstract reasoning with test-time training, (2) counterfactual reasoning with in-context learning, (3) program synthesis, and (4) causal discovery with logical reasoning.
</p>
<br><br>
<p style="text-align:center">
<img style="vertical-align:middle" src='https://jmaasch.github.io/carc/static/images/pch.png' width="40%" class="center">
<br>
<small>Pearl Causal Hierarchy: observing factual realities (L1), exerting actions to induce interventional realities (L2), and imagining alternate counterfactual realities (L3) [<a href="https://dl.acm.org/doi/pdf/10.1145/3501714.3501743?casa_token=hJAJZQLNGbEAAAAA:exuQk37fuXGMkpOVJEKACgnupjkP-adDQGhv2YzfBN9MfoERkAcHQRDgT3myWfccfqucQd8h63Q" target="_blank">1</a>]. Lower levels generally underdetermine higher levels.</small>
</p>
<br><br>
<p>
This work extends and reconceptualizes the ARC setup to support causal reasoning evaluation under limited data and distribution shift. Given a fully specified SCM, all three levels of the Pearl Causal Hierarchy (PCH) are well-defined: any observational (L1), interventional (L2), or counterfactual (L3) query about the environment under study can be answered [2]. This formulation makes CausalARC an open-ended playground for testing reasoning hypotheses at all three levels of the PCH, with an emphasis on abstract, logical, and counterfactual reasoning.
</p>
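To make this concrete, the sketch below shows a minimal, hypothetical SCM in Python. This is illustrative only, not the actual CausalARC code: the variable names, mechanisms (X := u_x, Y := X + u_y), and sampling helper are all assumptions. It shows how one fully specified model supports queries at every PCH level.

```python
import random

def sample_scm(noise=None, do=None):
    """One draw from a toy SCM with exogenous noise U = (u_x, u_y)
    and mechanisms X := u_x, Y := X + u_y (illustrative only)."""
    do = do or {}
    if noise is None:
        noise = {"u_x": random.gauss(0, 1), "u_y": random.gauss(0, 1)}
    x = do.get("X", noise["u_x"])      # structural assignment for X
    y = do.get("Y", x + noise["u_y"])  # structural assignment for Y
    return {"X": x, "Y": y}, noise

# L1 (observational): sample a factual realization
factual, u = sample_scm()

# L2 (interventional): sample from the submodel under do(X = 1)
interventional, _ = sample_scm(do={"X": 1.0})

# L3 (counterfactual): replay the mechanisms under do(X = 1) while
# holding the factual exogenous context u fixed
counterfactual, _ = sample_scm(noise=u, do={"X": 1.0})
```

Holding the exogenous context fixed while varying the intervention is what distinguishes the L3 sample from the L2 sample: both share do(X = 1), but only the counterfactual reuses the noise of the factual draw.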
<br><br>
<p style="text-align:center">
<img style="vertical-align:middle" src='https://jmaasch.github.io/carc/static/images/demo.png' width="100%" class="center">
<br>
<small>The CausalARC testbed. <b>(A)</b> First, SCM <i>M</i> is manually transcribed in Python code. <b>(B)</b> Input-output pairs are randomly sampled, providing observational (L1) learning signals about the world model. <b>(C)</b> Sampling from interventional submodels <i>M</i>' of <i>M</i> yields interventional (L2) samples (x', y'). Given pair (x, y), performing multiple interventions while holding the exogenous context constant yields a set of counterfactual (L3) pairs. <b>(D)</b> Using L1 and L3 pairs as in-context demonstrations, we can automatically generate natural language prompts for diverse reasoning tasks.</small>
</p>
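Step (D), assembling demonstration pairs into a natural language prompt, might look like the following sketch. This is a hypothetical helper, not the actual CausalARC implementation; the grid encoding and prompt wording are assumptions.

```python
def grid_to_str(grid):
    """Render a 2D integer grid as text, one row per line."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

def build_prompt(demos, test_input):
    """Assemble a few-shot prompt from in-context (input, output) pairs."""
    parts = []
    for i, (x, y) in enumerate(demos, start=1):
        parts.append(
            f"Example {i}:\nInput:\n{grid_to_str(x)}\nOutput:\n{grid_to_str(y)}"
        )
    # The test input is appended with its output left blank for the model.
    parts.append(f"Test:\nInput:\n{grid_to_str(test_input)}\nOutput:")
    return "\n\n".join(parts)

demos = [([[0, 1], [1, 0]], [[1, 0], [0, 1]])]
prompt = build_prompt(demos, [[1, 1], [0, 0]])
```

The same template works whether the demonstrations are observational (L1) or counterfactual (L3) pairs; only the sampling procedure that produced `demos` changes.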