---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: transcript
    dtype: string
  - name: enumerated_transcript
    dtype: string
  - name: orders
    list:
    - name: description
      dtype: string
    - name: order_type
      dtype: string
    - name: provenance
      list: int64
    - name: reason
      dtype: string
  splits:
  - name: train
    num_bytes: 1152941
    num_examples: 81
  - name: test1
    num_bytes: 1327621
    num_examples: 90
  - name: test2
    num_bytes: 1329727
    num_examples: 92
  download_size: 1885629
  dataset_size: 3810289
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test1
    path: data/test1-*
  - split: test2
    path: data/test2-*
license: cdla-permissive-2.0
---

# SIMORD

HuggingFace re-upload of the [SIMORD dataset](https://huggingface.co/datasets/microsoft/SIMORD), **a medical order extraction benchmark based on doctor-patient conversations**, with corrected data splits and all text transcripts included by default. If used, please cite the original authors using the citation below.


## Dataset Details

### Dataset Sources

- **HuggingFace:** https://huggingface.co/datasets/microsoft/SIMORD
- **Paper:** https://arxiv.org/pdf/2507.05517


### Dataset Description

The dataset contains three splits (with their corresponding original SIMORD files):

1) `train` (from `train.json`): examples for in-context learning or fine-tuning.
2) `test1` (from `dev.json`): test set used for the EMNLP 2025 industry track paper.
3) `test2` (from `test.json`): test set for MEDIQA-OE shared task of ClinicalNLP 2025.

The sample counts change as follows:

| Split | Original | New | Change |
| :--- | :---: | :---: | :---: |
| `train` | 63 | 81 | +18 |
| `test1` | 100 | 90 | -10 |
| `test2` | 100 | 92 | -8 |
| **TOTAL** | **263** | **263** | **-** |

Note: Both the original SIMORD dataset and this upload use the split name `test1` instead of dev/validation (even though the file is `dev.json`) and `test2` instead of test (even though the file is `test.json`), since both were used as test sets.

Each sample contains the following fields:
- `id`: unique sample identifier
- `transcript`: the full doctor-patient conversation text
- `enumerated_transcript`: the transcript with 1-based line numbers before each speaker turn (e.g., "1 [doctor] hello\n2 [patient] hi")
- `orders`: list of expected medical orders, each with `order_type`, `description`, `reason`, and `provenance` (line numbers in enumerated_transcript)

Note: The original SIMORD dataset only contains `id` and `expected_orders`, requiring users to separately load transcripts from ACI-Bench or PriMock57. This version includes the transcripts directly and fixes provenance line numbers to match the enumerated_transcript format.
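As an illustration of the `enumerated_transcript` format described above, here is a minimal sketch (not taken from the dataset authors' code) that derives it from a raw transcript, assuming one speaker turn per line:

```python
def enumerate_transcript(transcript: str) -> str:
    """Prefix each speaker turn with a 1-based line number."""
    turns = transcript.splitlines()
    return "\n".join(f"{i} {turn}" for i, turn in enumerate(turns, start=1))

print(enumerate_transcript("[doctor] hello\n[patient] hi"))
# 1 [doctor] hello
# 2 [patient] hi
```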


### Dataset Changes

#### Data Splits

The SIMORD dataset is derived from both [ACI-Bench](https://github.com/wyim/aci-bench) and [PriMock57](https://github.com/babylonhealth/primock57).

While PriMock57 doesn't contain any explicit data splits, ACI-Bench contains five splits: `train`, `valid`, `test1`, `test2`, and `test3`. As discussed in an [open HF issue](https://huggingface.co/datasets/microsoft/SIMORD/discussions/2), these splits were not respected when being merged into SIMORD.

For example, SIMORD's `test.json` contains an ACI-Bench train sample:

```json
"id": "acibench_D2N036_aci_train"
```

The official SIMORD HF upload contains three data files, mapped to the following splits:

| SIMORD File | Mapped Split | Total | ACI-Train | ACI-Valid/Dev | ACI-Test1 | ACI-Test2 | ACI-Test3 | PriMock57 |
|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [train.json](https://huggingface.co/datasets/microsoft/SIMORD/blob/main/data/train.json) | `train` | 63 | 15 | 8 | 8 | 10 | 8 | 14 |
| [dev.json](https://huggingface.co/datasets/microsoft/SIMORD/blob/main/data/dev.json) | `test1` | 100 | 27 | 3 | 20 | 14 | 13 | 23 |
| [test.json](https://huggingface.co/datasets/microsoft/SIMORD/blob/main/data/test.json) | `test2` | 100 | 25 | 9 | 11 | 16 | 19 | 20 |



This updated version of SIMORD reallocates samples using the following logic:

- **New `train`** = old train (train+PriMock57 samples) + old test1 (train samples) + old test2 (train samples)
- **New `test1`** = old test1 (non-train samples) + half of old train (non-train, non-PriMock57 samples)
- **New `test2`** = old test2 (non-train samples) + half of old train (non-train, non-PriMock57 samples)

In other words:
- Samples with `_train` suffix are moved to `train`, regardless of which original file they came from
- PriMock57 samples stay in their original splits, since PriMock57 has no explicit data splits
- Non-train samples in the original `test1` and `test2` splits stay where they are
- Non-train, non-PriMock57 samples that were misplaced in the original `train` split are evenly distributed between `test1` and `test2`
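The reallocation rules above can be sketched roughly as follows. This is a hypothetical illustration, not the actual reallocation script: the `_train` suffix check follows the ID convention shown earlier, but the `primock` prefix check and the even interleaving of misplaced samples are assumptions.

```python
def reallocate(old_train, old_test1, old_test2):
    """Redistribute samples so ACI-Bench train samples land in train."""
    def is_train(s):
        return s["id"].endswith("_train")  # ID convention, e.g. "acibench_D2N036_aci_train"

    def is_primock(s):
        return s["id"].startswith("primock")  # assumed ID prefix for PriMock57

    # train keeps its own train + PriMock57 samples, and absorbs
    # train-suffixed samples from both test files
    new_train = [s for s in old_train if is_train(s) or is_primock(s)]
    new_train += [s for s in old_test1 if is_train(s)]
    new_train += [s for s in old_test2 if is_train(s)]

    # non-train ACI samples misplaced in old train, split evenly (assumed interleaving)
    misplaced = [s for s in old_train if not is_train(s) and not is_primock(s)]
    new_test1 = [s for s in old_test1 if not is_train(s)] + misplaced[0::2]
    new_test2 = [s for s in old_test2 if not is_train(s)] + misplaced[1::2]
    return new_train, new_test1, new_test2
```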

After reallocation, the new splits contain the following counts:

| New Split | Total | ACI-Train | ACI-Valid/Dev | ACI-Test1 | ACI-Test2 | ACI-Test3 | PriMock57 |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| `train` | 81 | 67 | 0 | 0 | 0 | 0 | 14 |
| `test1` | 90 | 0 | 7 | 24 | 19 | 17 | 23 |
| `test2` | 92 | 0 | 13 | 15 | 21 | 23 | 20 |

#### Provenance Corrections

The `orders` column includes a `reason` field within the JSON object, which is typically a short excerpt from the `transcript` justifying the order. The JSON object also includes a `provenance` field, a list of line-number indices indicating where the text spans in the `reason` field were extracted from.

In the original SIMORD dataset, the provenance line numbers usually exceed the total number of speaker turns in the transcript. This likely reflects a preprocessing scheme used prior to publication, or prompts prepended to each transcript. In any case, to align the provenances with the raw transcripts, we:

1. Use the `reason` field (which contains text extracted from the transcript) to find all relevant line numbers
2. Calculate the offset between the original provenance and the actual line numbers
3. Shift all provenances to match the new `enumerated_transcript` line numbers

> Note: Not every `reason` field is extracted verbatim from the transcript, so some incorrect `provenance` values remain.

For example, `acibench_virtassist_train_D2N010` only has 61 speaker turns, but the original dataset had `provenance` values like [91, 97], which are out of range.

```json
{
  "description": "amoxicillin 500 milligrams three times a day 10 day",
  "order_type": "medication",
  "provenance": [91, 97],
  "reason": "positive for strep"
}
```

However, we see that this particular order corresponds to text in line 45 of the transcript.

```
...
45 [doctor] ..., uh , positive for strep . so i think we have some reasons...
...
51 [doctor] yes it is , yeah .
...
```

So, the offset is 45 - 91 = -46, and the `provenance` shifts from [91, 97] to [45, 51]:

```json
{
  "description": "amoxicillin 500 milligrams three times a day 10 day",
  "order_type": "medication",
  "provenance": [45, 51],
  "reason": "positive for strep"
}
```


### Direct Use

```python
import json
from datasets import load_dataset


if __name__ == "__main__":
    # load all data
    dataset = load_dataset("mkieffer/SIMORD")

    # load only train split
    dataset_train = load_dataset("mkieffer/SIMORD", split="train")

    # load only test1 split
    dataset_test1 = load_dataset("mkieffer/SIMORD", split="test1")

    print("\nfull dataset:\n", dataset)
    print("\ntrain split:\n", dataset_train)
    print("\ntest1 split:\n", dataset_test1)

    print("\ntrain sample:\n", json.dumps(dataset_train[0], indent=2))
    print("\ntest1 sample:\n", json.dumps(dataset_test1[0], indent=2))
```


## Citation 

```bibtex
@inproceedings{corbeil-etal-2025-empowering,
    title = "Empowering Healthcare Practitioners with Language Models: Structuring Speech Transcripts in Two Real-World Clinical Applications",
    author = "Corbeil, Jean-Philippe  and
      Ben Abacha, Asma  and
      Michalopoulos, George  and
      Swazinna, Phillip  and
      Del-Agua, Miguel  and
      Tremblay, Jerome  and
      Daniel, Akila Jeeson  and
      Bader, Cari  and
      Cho, Kevin  and
      Krishnan, Pooja  and
      Bodenstab, Nathan  and
      Lin, Thomas  and
      Teng, Wenxuan  and
      Beaulieu, Francois  and
      Vozila, Paul",
    editor = "Potdar, Saloni  and
      Rojas-Barahona, Lina  and
      Montella, Sebastien",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = nov,
    year = "2025",
    address = "Suzhou (China)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-industry.58/",
    doi = "10.18653/v1/2025.emnlp-industry.58",
    pages = "859--870",
    ISBN = "979-8-89176-333-3"
}
```