---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- multi-label-text-classification
- medical
- medical-device-failure-events
- adverse-events
pretty_name: MADE Benchmark
size_categories:
- 100K<n<1M
---
# MADE: A Living Benchmark for Multi-Label Text Classification with Uncertainty Quantification of Medical Device Adverse Events (ACL 2026)
*Authors: Raunak Agarwal, Markus Wenzel, Simon Baur, Jonas Zimmer, George Harvey, Jackie Ma*
[Project Page](https://hhi.fraunhofer.de/aml-demonstrator/made-benchmark); [GitHub](https://github.com/raunak-agarwal/made-benchmark)
**Abstract**: Machine learning in high-stakes domains such as healthcare requires not only strong predictive performance but also reliable uncertainty quantification (UQ) to support human oversight.
Multi-label text classification (MLTC) is a central task in this domain, yet remains challenging due to label imbalances, dependencies, and combinatorial complexity.
Existing MLTC benchmarks are increasingly saturated and may be affected by training data contamination, making it difficult to distinguish genuine reasoning capabilities from memorization in frontier language models.
We introduce **MADE**, a living MLTC benchmark derived from <ins>m</ins>edical device <ins>ad</ins>verse <ins>e</ins>vent reports -- continuously updated with newly published reports to prevent contamination.
MADE features a long-tailed distribution of hierarchical labels and enables reproducible evaluation with strict temporal splits.
We use MADE to establish baselines across more than 20 encoder- and decoder-only models under fine-tuning and few-shot settings (instruction-tuned/reasoning variants, local/API-accessible).
We systematically assess entropy-/consistency-based and self-verbalized UQ methods.
Our results reveal clear trade-offs:
smaller discriminatively fine-tuned decoders achieve the strongest head-to-tail accuracy while maintaining competitive UQ;
generative fine-tuning delivers the most reliable UQ;
large reasoning models improve performance on rare labels yet exhibit surprisingly weak UQ; and self-verbalized confidence is not a reliable proxy for uncertainty. Our benchmark is publicly available at [this URL](https://hhi.fraunhofer.de/aml-demonstrator/made-benchmark).
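The entropy-based UQ methods mentioned above can be illustrated with a minimal sketch. For multi-label classification, a simple uncertainty score treats each label as an independent Bernoulli variable and averages the per-label entropies of the predicted probabilities. This is only one of the UQ families evaluated in the benchmark; the function names here are illustrative, not the benchmark's API.

```python
import math

def bernoulli_entropy(p: float) -> float:
    """Entropy (in bits) of a single Bernoulli label probability."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def multilabel_uncertainty(probs: list[float]) -> float:
    """Mean per-label entropy: a simple uncertainty score for one sample."""
    return sum(bernoulli_entropy(p) for p in probs) / len(probs)

# A confident prediction scores lower than an ambivalent one.
confident = multilabel_uncertainty([0.99, 0.01, 0.98])
uncertain = multilabel_uncertainty([0.55, 0.45, 0.60])
print(confident < uncertain)  # True
```

Consistency-based and self-verbalized methods instead derive uncertainty from agreement across repeated samples or from the model's own stated confidence, respectively.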
**Figure 1:** Label hierarchy with product and patient problems. The outer ring shows the fifty most frequent product or patient problems in the test set, grouped by their parent classes (middle ring) and grandparent classes (inner ring).
<img src="sunburst_labels.jpg" width="1000" height="800">
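The three rings in Figure 1 correspond to a three-level label hierarchy: each leaf problem rolls up to a parent class and a grandparent class. A sketch of such a rollup, with invented label names (the actual vocabulary and schema come from the dataset itself):

```python
# Hypothetical child -> parent map mirroring Figure 1's three rings.
# The label names below are invented for illustration only.
PARENT = {
    "Battery Depleted": "Battery Problem",
    "Battery Problem": "Energy Problem",
}

def ancestors(label: str) -> list[str]:
    """Walk up the hierarchy from a label to its root-level class."""
    chain = [label]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

print(ancestors("Battery Depleted"))
# ['Battery Depleted', 'Battery Problem', 'Energy Problem']
```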
**Table 1:** Summary statistics
| Metric | Value |
| ----------------------------- | ------: |
| Total number of samples | 488,273 |
| Training set (2015–2023) | 298,825 |
| Validation set (1–6/2024) | 71,271 |
| Test set (7/2024–6/2025) | 118,177 |
| Truncated test set | 10,288 |
| Average tokens (cl100k_base) | ~370 |
| Average labels per sample | 8.79 |
| Unique labels | 1,154 |
| Hierarchy levels of labels | 3 |
| Minimum occurrences per label | 5 |
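The strict temporal splits in Table 1 can be sketched as a simple date-based assignment. The field name and exact cutoff handling are assumptions for illustration; the benchmark's released splits are authoritative.

```python
from datetime import date

# Hypothetical: each report carries a receipt date; the date boundaries
# below follow Table 1 (train 2015-2023, validation 1-6/2024, test 7/2024-6/2025).
def assign_split(d: date) -> str:
    """Assign a report to a split by its date, never mixing time periods."""
    if d <= date(2023, 12, 31):
        return "train"
    if d <= date(2024, 6, 30):
        return "validation"
    return "test"

print(assign_split(date(2024, 3, 15)))  # validation
```

Because the test period (7/2024-6/2025) postdates the training data, newly published reports can keep refreshing the test set, which is what makes the benchmark "living" and contamination-resistant.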
**Figure 2:** *Top:* Overview of the benchmarking setup, encompassing discriminative and generative language models, learning paradigms (discriminative or generative fine-tuning and few-shot prompting), and uncertainty quantification (UQ) approaches. *Bottom, left:* Multi-label text classification of medical device adverse events, each annotated with hierarchical product and patient problem labels. *Bottom, right:* UQ quality is evaluated for each model, learning paradigm, and UQ method.
<img src="made-benchmark-diagram.png" width="800" height="500">