# Datasets: Curation Workflow
The curation step turns the pipeline-held-out `curation_pool.jsonl` into the two evaluation files shipped in `data/splits/`:

- `test.jsonl` — evaluation set. The paper's headline metrics come from this set.
- `sanity.jsonl` — 20-example smoke-test set for development.
In TRACE v1 the whole curation pool is promoted as the test corpus (no hand curation). The review + compile scripts are still part of the pipeline because (a) regenerating the test and sanity splits is deterministic from the pool, and (b) anyone adapting TRACE to a new clinical domain can reuse the same workflow with their own taxonomy.
## Two scripts
| Script | What it does |
|---|---|
| `src/prepare_curation.py` | Renders `curation_pool.jsonl` as a browseable Markdown document in `docs/curation/review.md` — one candidate per section, grouped by task × category, with gold labels and provenance visible. |
| `src/compile_curation.py` | Splits the curation pool into `data/splits/test.jsonl` (the remainder) and `data/splits/sanity.jsonl` (20 examples, largest-remainder stratified by category). Deterministic under `--seed`. |
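For intuition, here is a minimal sketch of the stratified split that `compile_curation.py` performs. The real logic lives in the script; the `category_of` placeholder and the top-level `example_id` key are assumptions about the envelope, not confirmed API.

```python
import math
import random
from collections import defaultdict

def category_of(example: dict) -> str:
    # Placeholder for the real helper near the top of compile_curation.py.
    return example["category"]

def largest_remainder_allocation(counts: dict[str, int], n: int) -> dict[str, int]:
    """Allocate n sanity slots across categories in proportion to their size,
    handing leftover slots to the largest fractional remainders."""
    total = sum(counts.values())
    quotas = {cat: n * k / total for cat, k in counts.items()}
    alloc = {cat: math.floor(q) for cat, q in quotas.items()}
    leftover = n - sum(alloc.values())
    # Break ties by category name so the allocation itself is deterministic.
    order = sorted(quotas, key=lambda cat: (quotas[cat] - alloc[cat], cat), reverse=True)
    for cat in order[:leftover]:
        alloc[cat] += 1
    return alloc

def split_pool(pool: list[dict], sanity_n: int = 20, seed: int = 42):
    """Split the curation pool into (test, sanity), stratified by category."""
    rng = random.Random(seed)
    by_cat: dict[str, list[dict]] = defaultdict(list)
    for ex in pool:
        by_cat[category_of(ex)].append(ex)
    alloc = largest_remainder_allocation({c: len(v) for c, v in by_cat.items()}, sanity_n)
    test, sanity = [], []
    for cat in sorted(by_cat):  # iterate in a fixed order for determinism
        members = sorted(by_cat[cat], key=lambda e: e["example_id"])
        rng.shuffle(members)
        sanity += members[:alloc[cat]]
        test += members[alloc[cat]:]
    return test, sanity
```

Largest-remainder allocation keeps the 20 sanity examples as close to the pool's category proportions as integer counts allow.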
## Commands

```bash
# Regenerate the browseable review document.
uv run python src/prepare_curation.py

# Compile the test + sanity splits from the pool.
uv run python src/compile_curation.py

# Override the sanity size (default 20) or seed (default 42).
uv run python src/compile_curation.py --sanity-n 20 --seed 42
```
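Because the split is deterministic under `--seed`, two runs with the same seed must produce byte-identical split files. A quick self-check, sketched here rather than shipped in the repo:

```python
import hashlib
import subprocess

SPLITS = ["data/splits/test.jsonl", "data/splits/sanity.jsonl"]
COMPILE = ["uv", "run", "python", "src/compile_curation.py", "--seed", "42"]

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

subprocess.run(COMPILE, check=True)
first = [digest(p) for p in SPLITS]
subprocess.run(COMPILE, check=True)
assert [digest(p) for p in SPLITS] == first, "compile_curation.py is not deterministic"
```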
After `compile_curation.py` runs you'll have:

- `data/splits/test.jsonl` — the evaluation set
- `data/splits/sanity.jsonl` — the smoke-test set
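A quick check that the two splits are the right size and disjoint (this assumes `example_id` is a top-level key in each record):

```python
import json

def load_jsonl(path: str) -> list[dict]:
    with open(path) as f:
        return [json.loads(line) for line in f]

test = load_jsonl("data/splits/test.jsonl")
sanity = load_jsonl("data/splits/sanity.jsonl")

assert len(sanity) == 20  # the default --sanity-n
# No candidate may land in both splits.
assert not {e["example_id"] for e in test} & {e["example_id"] for e in sanity}
```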
## Reading `review.md`
`review.md` renders each candidate with:

- A heading showing the task type, category, and `example_id`.
- Gold labels inline (method, domain, level, learner profile, mastery state; or pattern class, behavior functions, escalation, confidence, crisis-plan flag).
- A short provenance line (for session interpretation: number of sessions, number of behaviors, whether IOA is included, whether ABC is included).
- The full user message (the teaching-program prompt or the session log).
- The full assistant message (the structured response).
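For a feel of the output, here is a toy version of the per-candidate renderer. Apart from `example_id` and `meta.provenance`, every field name below is hypothetical; the real logic lives in `src/prepare_curation.py`.

```python
import json

def render_candidate(ex: dict) -> str:
    """Render one candidate as a Markdown section (field names hypothetical)."""
    user, assistant = ex["messages"]  # assumes a two-turn chat envelope
    return "\n".join([
        f"### {ex['task']} · {ex['category']} · {ex['example_id']}",
        "",
        f"**Gold:** {json.dumps(ex['gold'])}",
        f"**Provenance:** {json.dumps(ex['meta']['provenance'])}",
        "",
        "**User message**",
        "",
        user["content"],
        "",
        "**Assistant message**",
        "",
        assistant["content"],
    ])
```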
Use `LEGEND.md` as a side-pane reference for the session-log notation in Task 2 examples.
## Flagging issues for re-generation
If you spot a clinical inaccuracy while browsing `review.md`:

- Note the candidate's `example_id` (shown in its heading).
- Look up the candidate's `meta.provenance.taxonomy_cells` in the JSONL to identify which taxonomy dimension produced the issue (a lookup sketch follows this list).
- Edit the relevant YAML under `configs/` (or the renderer in `src/generators/` if it's a rendering-logic issue) — never hand-edit individual JSONL entries, because the fix should propagate to every example that sampled the same cells.
- Regenerate the corpus with `src/generate.py --all`, re-split with `src/split_data.py`, and re-compile with `src/compile_curation.py`.
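A minimal lookup sketch for the second step, assuming `example_id` sits at the top level of each record (the `meta.provenance.taxonomy_cells` path is the one named above):

```python
import json

def taxonomy_cells_of(example_id, path="data/splits/curation_pool.jsonl"):
    """Return the taxonomy cells a flagged candidate was sampled from."""
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            if ex["example_id"] == example_id:
                return ex["meta"]["provenance"]["taxonomy_cells"]
    raise KeyError(f"no candidate with example_id={example_id!r}")
```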
Every clinical-review flag during v1 development landed as a single targeted taxonomy edit plus a full regeneration — never a hand-edit of the JSONL. This is the invariant the pipeline relies on: fixes must propagate through the taxonomy to be systematic across the corpus.
## Adapting the workflow to a new clinical domain
The scripts are domain-agnostic. To port to a different dataset:
- `prepare_curation.py` just reads `data/splits/curation_pool.jsonl` and renders it. It works on any JSONL with the same envelope shape.
- `compile_curation.py` stratifies on `category_of(example)` — a small function near the top of the script. Change its logic to use whichever field best represents your category (the `method` field for TRACE's teaching programs; the `pattern_class` field for session interpretation); a sketch follows this list.
- Everything else — largest-remainder allocation, deterministic splitting, provenance preservation — carries over unchanged.
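As a concrete starting point, a sketch of what TRACE's `category_of` might look like. Only the `method` and `pattern_class` field names come from this page; the task names and the gold-label nesting are assumptions:

```python
def category_of(example: dict) -> str:
    """Stratification key for compile_curation.py; adapt per domain."""
    gold = example["gold"]                     # nesting is an assumption
    if example["task"] == "teaching_program":  # task name is an assumption
        return gold["method"]                  # Task 1: teaching programs
    return gold["pattern_class"]               # Task 2: session interpretation
```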