---
pretty_name: DedeuceBench (Dev)
license: cc-by-4.0
tags:
  - llm-agents
  - active-learning
  - tool-use
  - finite-state-machines
  - benchmark
---

# DedeuceBench Dev Split

This dataset provides the public development split for DedeuceBench, an interactive active-learning benchmark over hidden Mealy machines. Each item in the split is a config entry (a seeded episode) defined in `levels_dev.json` under the `dev` subset.

## Summary
- Task: identification-first. Agents must probe a hidden finite-state transducer (a Mealy machine) under a strict query budget using tool calls (`act`, `submit_table`), then submit an exact transition table to succeed.
- Deterministic: episodes are seeded, and the prompts contain no ground-truth leak.
- Split: the `dev` subset mirrors an easy configuration (e.g., S=2, budget=25) for quick iteration and public sharing.
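
For intuition, a Mealy machine of the kind described above can be modeled as a transition table mapping (state, input) to (next state, output). The sketch below is illustrative only: the state labels, input symbols, and outputs are hypothetical, not drawn from the benchmark, and this local `act` stands in for the benchmark's tool call of the same name.

```python
# Illustrative 2-state Mealy machine (matching the dev config's n_states=2).
# All state/symbol/output names here are hypothetical.
TABLE = {
    # (state, input) -> (next_state, output)
    (0, "a"): (1, "x"),
    (0, "b"): (0, "y"),
    (1, "a"): (0, "y"),
    (1, "b"): (1, "x"),
}

def act(state, symbol):
    """Apply one input symbol; return (next_state, output)."""
    return TABLE[(state, symbol)]

def run(word, state=0):
    """Feed a whole input word to the machine; return the output word."""
    out = []
    for sym in word:
        state, o = act(state, sym)
        out.append(o)
    return "".join(out)
```

Identifying the machine means recovering `TABLE` exactly from a budget-limited number of such probes.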

## Files
- `levels_dev.json`: multi-split JSON with a `dev` subset containing episode items by seed.

Example schema fragment:
```json
{
  "splits": {
    "dev": {
      "mode": "basic",
      "trap": false,
      "feedback": true,
      "budget": 25,
      "n_states": 2,
      "target_len": 8,
      "variety": false,
      "items": [ {"seed": 1001}, ... ]
    }
  }
}
```
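
Under that schema, iterating the dev episodes needs only the standard library. The fragment below parses an in-memory example mirroring the structure above (field names are taken from the schema fragment; the seed values are illustrative):

```python
import json

# Minimal document mirroring the schema fragment above (values illustrative).
doc = json.loads("""
{
  "splits": {
    "dev": {
      "mode": "basic",
      "budget": 25,
      "n_states": 2,
      "items": [{"seed": 1001}, {"seed": 1002}]
    }
  }
}
""")

# Pull out the dev subset's config and its per-seed episode items.
dev = doc["splits"]["dev"]
seeds = [item["seed"] for item in dev["items"]]
print(dev["n_states"], dev["budget"], seeds)  # prints: 2 25 [1001, 1002]
```

To read the real file, replace the inline string with `json.load(open("levels_dev.json"))`.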

## Usage
Programmatic download:
```bash
pip install huggingface_hub
python - << 'PY'
from huggingface_hub import hf_hub_download
p = hf_hub_download(
    repo_id="comfortably-dumb/dedeucebench-dev",
    filename="levels_dev.json",
    repo_type="dataset",
)
print(p)
PY
```

Evaluate with the DedeuceBench CLI (deterministic baseline):
```bash
dedeucebench-eval \
  --split /path/to/levels_dev.json \
  --subset dev \
  --model heuristic:none \
  --out results.dev.jsonl

dedeucebench-aggregate results.dev.jsonl > leaderboard.dev.csv
```

Using OpenRouter (OpenAI-compatible):
```bash
export OPENAI_API_KEY=...           # your OpenRouter key
export OPENAI_BASE_URL=https://openrouter.ai/api/v1

dedeucebench-eval \
  --split /path/to/levels_dev.json \
  --subset dev \
  --provider openrouter \
  --model openai/gpt-5-mini \
  --out results.openrouter_gpt5mini.dev.jsonl
```

## License
CC-BY-4.0

## Citation
- Concept DOI (all versions): 10.5281/zenodo.17166596
- See the main repo README for full citation details.