jmaasch committed
Commit 92bd8c8 · verified · 1 Parent(s): 19f1b40

Update README.md

Files changed (1): README.md (+17 −6)
README.md CHANGED
@@ -1,5 +1,19 @@
 ---
 license: gpl-3.0
+language:
+- en
+tags:
+- arc
+- reasoning
+- abstract-reasoning
+- causal-reasoning
+- counterfactual-reasoning
+- logical-reasoning
+- evaluation
+- benchmark
+task_categories:
+- question-answering
+pretty_name: CausalARC – Abstract Reasoning with Causal World Models
 ---
 
 # CausalARC: Abstract Reasoning with Causal World Models
@@ -24,10 +38,8 @@ in-context learning demonstrations. As a proof-of-concept, we illustrate the use
 model evaluation settings: (1) abstract reasoning with test-time training, (2) counterfactual reasoning with
 in-context learning, (3) program synthesis, and (4) causal discovery with logical reasoning.
 
-
 <p align="center">
-<img src='https://jmaasch.github.io/carc/static/images/pch.png' width="30%" class="center">
-<br>
+<img src='https://jmaasch.github.io/carc/static/images/pch.png' width="35%" class="center">
 <i>Pearl Causal Hierarchy: observing factual realities (L1), exerting actions to
 induce interventional realities (L2), and imagining alternate counterfactual realities (L3)
 [<a href="https://dl.acm.org/doi/pdf/10.1145/3501714.3501743?casa_token=hJAJZQLNGbEAAAAA:exuQk37fuXGMkpOVJEKACgnupjkP-adDQGhv2YzfBN9MfoERkAcHQRDgT3myWfccfqucQd8h63Q"
@@ -41,9 +53,8 @@ Given a fully specified SCM, all three levels of the Pearl Causal Hierarchy (PCH
 formulation makes CausalARC an open-ended playground for testing reasoning hypotheses at all
 three levels of the PCH, with an emphasis on abstract, logical, and counterfactual reasoning.
 
-<p style="text-align:center">
-<img style="vertical-align:middle" src='https://jmaasch.github.io/carc/static/images/demo.png' width="100%" class="center">
-<br>
+<p align="center">
+<img src='https://jmaasch.github.io/carc/static/images/demo.png' width="100%" class="center">
 <i> The CausalARC testbed. <b>(A)</b> First, SCM <i>M</i> is manually transcribed in Python code. <b>(B)</b>
 Input-output pairs are randomly sampled, providing observational (L1) learning signals about the
 world model. <b>(C)</b> Sampling from interventional submodels <i>M</i>' of <i>M</i> yields interventional (L2)
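The figure caption in the diff above outlines the CausalARC pipeline: an SCM <i>M</i> is written in Python, observational (L1) samples are drawn from <i>M</i>, and interventional (L2) samples come from submodels <i>M</i>' obtained via do-interventions. A minimal sketch of that scheme, assuming a toy two-variable SCM (the variable names and `sample` helper are illustrative only, not the actual CausalARC API):

```python
import random

def sample(intervention=None, seed=None):
    """Draw one sample from a toy SCM M, or from the submodel M'
    obtained by applying a do-intervention to M."""
    rng = random.Random(seed)
    # Exogenous noise terms.
    u_x = rng.randint(0, 1)
    u_y = rng.randint(0, 1)
    # Structural equations: X := U_X, then Y := X XOR U_Y.
    x = u_x
    if intervention is not None and "x" in intervention:
        # do(X := x*) replaces X's mechanism, yielding submodel M'.
        x = intervention["x"]
    y = x ^ u_y
    return {"x": x, "y": y}

# (B) Observational (L1) samples from M.
l1 = [sample(seed=s) for s in range(3)]

# (C) Interventional (L2) samples from M' = M under do(X := 1).
l2 = [sample(intervention={"x": 1}, seed=s) for s in range(3)]
assert all(d["x"] == 1 for d in l2)
```

Reusing the same seed (i.e., the same exogenous noise) for a factual and an intervened call yields paired counterfactual (L3) samples, which is the sense in which a fully specified SCM covers all three PCH levels.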