---
license: other
size_categories:
  - 10K<n<100K
task_categories:
  - text-generation
license_name: mixed-license
license_link: LICENSE.md
tags:
  - llm-agents
  - deployment-time-learning
  - continual-learning
configs:
  - config_name: alfworld
    data_files: alfworld/alfworld.jsonl
  - config_name: banking77
    data_files: banking77/banking77.jsonl
  - config_name: bird
    data_files: bird/bird.jsonl
  - config_name: cmdl
    data_files: cmdl/cmdl.jsonl
  - config_name: ddxplus
    data_files: ddxplus/ddxplus.jsonl
  - config_name: lfd
    data_files: lfd/lfd.jsonl
  - config_name: mud
    data_files: mud/mud.jsonl
  - config_name: rca
    data_files: rca/rca.jsonl
  - config_name: scienceworld
    data_files: scienceworld/scienceworld.jsonl
  - config_name: sentifin
    data_files: sentifin/sentifin.jsonl
  - config_name: spider
    data_files: spider/spider.jsonl
  - config_name: 2wiki
    data_files: 2wiki/2wiki.jsonl
  - config_name: ehr
    data_files: ehr/ehr.jsonl
---

# DTLBench

💻 GitHub Repo | 🫀 DTLBench PhysioNet (To be released) | 📄 Paper

DTLBench is a benchmark for deployment-time learning of large language model (LLM) agents. It collects diverse task streams spanning medical diagnosis, legal analysis, AIOps reasoning, financial prediction, text-to-SQL, embodied decision making, tabular reasoning on electronic health records (EHRs), and web-based deep search.

The dataset was introduced in the paper *CASCADE: Case-Based Continual Adaptation for Large Language Models During Deployment*.

## Benchmark Overview

- Total tasks: 16 (3 of which will be released through PhysioNet)
- Data format: one JSON object per line (`.jsonl`)
- Primary use case: benchmark streams for deployment-time learning

DTLBench covers three environment styles used in CASCADE:

- single-turn: one input, one final answer
- multi-turn, simulated: sequential interaction with a simulated environment (ALFWorld, ScienceWorld)
- multi-turn, real-world: sequential interaction with real-world environments (web-based deep search, EHR databases)
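Deployment-time learning treats each config as an ordered stream: the agent answers one task at a time and may update its internal state before the next task arrives. A minimal sketch of that loop, with a hypothetical `toy_agent` and case memory (illustrative only, not the CASCADE implementation):

```python
# Minimal deployment-time learning loop. `toy_agent` and the case memory
# are illustrative placeholders, not the CASCADE implementation.
def toy_agent(task: str, memory: list) -> str:
    """Answer a task; a real agent would consult past cases in `memory`."""
    return f"answer to: {task}"

def run_stream(tasks: list) -> list:
    memory = []                      # accumulated experience, starts empty
    answers = []
    for task in tasks:               # tasks arrive sequentially, in order
        answers.append(toy_agent(task, memory))
        memory.append(task)          # learn from each task after answering it
    return answers

print(run_stream(["task A", "task B"]))
```

The key constraint this sketch captures is that the agent never sees future tasks: any adaptation must come from tasks already processed.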

Summary statistics of DTLBench. Maximum steps is the maximum number of interaction steps the environment allows per task.

| Property | Domain | Task | Dataset | Maximum Steps | Number of Samples |
|---|---|---|---|---|---|
| Single-turn | Medical | Medical Diagnosis | DDXPlus | 1 | 3136 |
| | | Medication Recommendation | MIMIC-IV-MR | 1 | 2881 |
| | | Medical Specialty Referral | MIMIC-IV-MSR | 1 | 2115 |
| | | Triage Level Prediction | MIMIC-IV-TLP | 1 | 2200 |
| | Legal | Multi-Defendant Legal Charge Prediction | MUD | 1 | 1740 |
| | | Penalty Legal Prediction | CMDL | 1 | 2080 |
| | Financial | Financial Customer Intent Routing | Banking77 | 1 | 5000 |
| | | Entity-Aware Financial Sentiment Analysis | SEntFiN | 1 | 2299 |
| | AIOps | AIOps Root Cause Analysis | RCA | 1 | 2925 |
| | | AIOps Log Fault Diagnosis | LFD | 1 | 3000 |
| | Coding | Text-to-SQL | SPIDER | 1 | 2147 |
| | | Knowledge-Augmented Text-to-SQL | BIRD | 1 | 1534 |
| Multi-turn, Simulated | Embodied | Household Embodied Decision Making | ALFWorld | 30 | 2000 |
| | | Scientific Embodied Decision Making | ScienceWorld | 10-30 | 1857 |
| Multi-turn, Real-world | Information Seeking | Web-based Deep Search | 2Wiki | 5 | 2500 |
| | Medical | Complex Tabular Reasoning on Electronic Health Records | MIMIC-III | 5 | 2500 |

## Data Format

Each config can be loaded independently from Hugging Face, and each task keeps the fields required by its original environment. Every record includes a `task` field, which holds the main query or observation presented to the agent.
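As a concrete illustration, each line of a `.jsonl` file parses to a standalone JSON object. Only the `task` field is guaranteed by this card; the `answer` field below is a hypothetical extra used for the example:

```python
import json

# One record = one line of JSON. Only "task" is guaranteed across configs;
# "answer" is a hypothetical illustrative field.
line = '{"task": "Classify the customer intent: I lost my card.", "answer": "lost_card"}'
record = json.loads(line)
print(record["task"])
```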

## Load the Dataset

Using `datasets`:

```python
from datasets import load_dataset

# Load one task
ddxplus = load_dataset("guosy/DTLBench", "ddxplus")
print(ddxplus["train"][0])

# Load another task
spider = load_dataset("guosy/DTLBench", "spider")
print(spider["train"][0]["task"])
```

Using `huggingface-cli`:

```bash
huggingface-cli download --repo-type dataset guosy/DTLBench --local-dir ./DTLBench
```
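After downloading, each config is a plain `.jsonl` file that can be read without the `datasets` library. A small sketch, demonstrated on a synthetic file; with a real download, point `path` at e.g. `DTLBench/banking77/banking77.jsonl` (path layout assumed from the `--local-dir` flag above):

```python
import json
import tempfile
from pathlib import Path

def load_jsonl(path):
    """Parse a .jsonl file into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Demo on a synthetic two-line file; replace `path` with a real config
# file (e.g. DTLBench/banking77/banking77.jsonl) after downloading.
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "sample.jsonl"
    path.write_text('{"task": "q1"}\n{"task": "q2"}\n', encoding="utf-8")
    records = load_jsonl(path)

print(len(records))  # → 2
```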

## License

DTLBench is a mixed-license collection. Each subdataset follows its own original license, and the benchmark authors do not claim additional rights beyond those licenses.

Please see LICENSE.md and the per-task LICENSE files for details. In particular:

- Some tasks are under permissive licenses such as MIT or Apache-2.0
- Some tasks use CC licenses with attribution or share-alike requirements
- Some tasks have unclear or unknown redistribution terms

You are responsible for ensuring your use complies with the license of each individual subdataset.

## Citation

If you use DTLBench, please consider citing our paper:

```bibtex
@misc{guo2026cascadecasebasedcontinualadaptation,
  title={CASCADE: Case-Based Continual Adaptation for Large Language Models During Deployment},
  author={Siyuan Guo and Yali Du and Hechang Chen and Yi Chang and Jun Wang},
  year={2026},
  eprint={2605.06702},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2605.06702},
}
```