jmaasch committed (verified) · Commit 94a3bd1 · 1 parent: 5f4c893

Update README.md

Files changed (1): README.md (+32 −31)

# CausalARC: Abstract Reasoning with Causal World Models

Jacqueline R. M. A. Maasch<sup>1</sup>, John Kalantari<sup>2</sup>, Kia Khezeli<sup>2</sup>

<sup>1</sup> Cornell Tech, New York, NY
<sup>2</sup> YRIKKA, New York, NY

**Note: This page is currently under construction. See our official project page: <a href="https://jmaasch.github.io/carc/" target="_blank">https://jmaasch.github.io/carc/</a>**

## Overview

Reasoning requires adaptation to novel problem settings under limited data and distribution shift. This work introduces CausalARC: an experimental testbed for AI reasoning in low-data and out-of-distribution regimes, modeled after the Abstraction and Reasoning Corpus (ARC). Each CausalARC reasoning task is sampled from a fully specified causal world model, formally expressed as a structural causal model (SCM). Principled data augmentations provide observational, interventional, and counterfactual feedback about the world model in the form of few-shot, in-context learning demonstrations. As a proof of concept, we illustrate the use of CausalARC in four language model evaluation settings: (1) abstract reasoning with test-time training, (2) counterfactual reasoning with in-context learning, (3) program synthesis, and (4) causal discovery with logical reasoning.
<p style="text-align:center">
<img style="vertical-align:middle" src='https://jmaasch.github.io/carc/static/images/pch.png' width="30%" class="center">
<br>
<i>Pearl Causal Hierarchy: observing factual realities (L1), exerting actions to induce interventional realities (L2), and imagining alternate counterfactual realities (L3) [<a href="https://dl.acm.org/doi/pdf/10.1145/3501714.3501743?casa_token=hJAJZQLNGbEAAAAA:exuQk37fuXGMkpOVJEKACgnupjkP-adDQGhv2YzfBN9MfoERkAcHQRDgT3myWfccfqucQd8h63Q" target="_blank">1</a>]. Lower levels generally underdetermine higher levels.</i>
</p>

This work extends and reconceptualizes the ARC setup to support causal reasoning evaluation under limited data and distribution shift. Given a fully specified SCM, all three levels of the Pearl Causal Hierarchy (PCH) are well-defined: any observational (L1), interventional (L2), or counterfactual (L3) query can be answered about the environment under study [2]. This formulation makes CausalARC an open-ended playground for testing reasoning hypotheses at all three levels of the PCH, with an emphasis on abstract, logical, and counterfactual reasoning.
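To make the three query levels concrete, here is a minimal sketch of an SCM over two binary variables, with all names and structural equations hypothetical (this is not the CausalARC codebase): observational sampling for L1, the do-operator for L2, and abduction-action-prediction (reusing the factual exogenous context) for L3.

```python
import random

# Hypothetical two-variable SCM: X := U_x, Y := X XOR U_y.
# Structural functions are deterministic given the exogenous context U.

def sample_exogenous(rng):
    return {"U_x": rng.randint(0, 1), "U_y": rng.randint(0, 1)}

def solve(u, do=None):
    """Evaluate the structural equations under exogenous context u,
    optionally overriding variables via interventions (do-operator)."""
    do = do or {}
    x = do.get("X", u["U_x"])
    y = do.get("Y", x ^ u["U_y"])
    return {"X": x, "Y": y}

rng = random.Random(0)

# L1 (observational): sample an exogenous context and read off a factual world.
u = sample_exogenous(rng)
factual = solve(u)

# L2 (interventional): sample from the submodel M' induced by do(X = 1).
interventional = solve(sample_exogenous(rng), do={"X": 1})

# L3 (counterfactual): hold the *same* exogenous context u fixed (abduction),
# apply the action do(X = 1), and predict the counterfactual outcome.
counterfactual = solve(u, do={"X": 1})
```

Holding `u` fixed across interventions is what separates L3 from L2: the counterfactual pair shares its exogenous context with the factual pair, while the interventional sample draws a fresh one.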
  <p style="text-align:center">
46
  <img style="vertical-align:middle" src='https://jmaasch.github.io/carc/static/images/demo.png' width="100%" class="center">
47
  <br>
48
+ <i> The CausalARC testbed. <b>(A)</b> First, SCM <i>M</i> is manually transcribed in Python code. <b>(B)</b>
49
  Input-output pairs are randomly sampled, providing observational (L1) learning signals about the
50
  world model. <b>(C)</b> Sampling from interventional submodels <i>M</i>' of <i>M</i> yields interventional (L2)
51
  samples (x', y'). Given pair (x, y), performing multiple interventions while holding the exogenous
52
  context constant yields a set of counterfactual (L3) pairs. <b>(D)</b> Using L1 and L3 pairs as in-context
53
  demonstrations, we can automatically generate natural language prompts for diverse reasoning tasks.
54
+ </i>
55
  </p>
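As an illustration of step (D), the sketch below serializes demonstration pairs into a few-shot prompt; the grid encoding and prompt template are hypothetical stand-ins, not CausalARC's actual format.

```python
# Hypothetical prompt builder: serialize (input, output) grid pairs into a
# few-shot prompt, then append the query input. Grids are lists of int rows.

def fmt_grid(grid):
    """Render a grid as space-separated values, one row per line."""
    return "\n".join(" ".join(str(v) for v in row) for row in grid)

def build_prompt(demos, query):
    """demos: list of (input_grid, output_grid) pairs (L1 and/or L3 samples)."""
    parts = []
    for i, (x, y) in enumerate(demos, start=1):
        parts.append(f"Example {i}:\nInput:\n{fmt_grid(x)}\nOutput:\n{fmt_grid(y)}")
    # The query input is appended last; the model completes the final Output.
    parts.append(f"Test:\nInput:\n{fmt_grid(query)}\nOutput:")
    return "\n\n".join(parts)

demos = [([[0, 1]], [[1, 0]]), ([[1, 1]], [[0, 0]])]
prompt = build_prompt(demos, [[0, 0]])
```

Because the demonstrations can mix L1 and L3 pairs from the same SCM, the same builder can emit prompts for observational, interventional, or counterfactual reasoning tasks without changing the template.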