# LongHealth Benchmark - Preprocessed for Packed Attention
This dataset is a preprocessed version of the LongHealth benchmark, formatted for evaluation with packed document attention models.
## Dataset Description
LongHealth is a medical question-answering benchmark that tests long-context understanding across multiple clinical documents. Each example consists of multiple medical documents (notes, lab reports, discharge summaries) and a multiple-choice question.
## Format
The dataset is a JSON file with the following structure:
```json
{
  "metadata": {
    "source": "benchmark_v5.json",
    "tokenizer": "google/t5gemma-2b-2b-prefixlm-it",
    "max_encoder_len": 55000,
    "add_distractors": true,
    "preprocessing_timestamp": "2024-12-02T...",
    "num_patients": X,
    "num_questions": Y,
    "stats": { ... }
  },
  "examples": {
    "patient_001_q00": {
      "answer_document_tokens": [[101, 2023, ...], [101, 5011, ...]],
      "distractor_document_tokens": [[101, 8392, ...], ...],
      "decoder_input_ids": [1, 2, 3, ...],
      "question_text": "What was the primary diagnosis?",
      "correct_answer": "A",
      "correct_text": "Pneumonia",
      "all_options": {
        "A": "Pneumonia",
        "B": "Bronchitis",
        "C": "...",
        "D": "...",
        "E": "..."
      },
      "patient_id": "patient_001",
      "question_idx": 0,
      "answer_doc_ids": ["text_0", "text_1"],
      "num_answer_docs": 2,
      "num_distractor_docs": 8,
      "total_context_length": 12453,
      "budget_exceeded": false
    }
  }
}
```
## Field Descriptions
### Example Fields
- **answer_document_tokens**: List of tokenized documents containing the answer (always prioritized under the token budget)
- **distractor_document_tokens**: List of tokenized distractor documents (included if budget allows)
- **decoder_input_ids**: Tokenized decoder prompt with chat template applied
- **question_text**: Original question text
- **correct_answer**: Correct answer letter (A, B, C, D, or E)
- **correct_text**: Full text of the correct option
- **all_options**: Dictionary mapping letters to option texts
- **patient_id**: ID of the patient case
- **question_idx**: Index of question within patient case (0-19)
- **answer_doc_ids**: Original document IDs that contain the answer
- **num_answer_docs**: Number of answer documents included
- **num_distractor_docs**: Number of distractor documents included
- **total_context_length**: Total tokens in encoder context
- **budget_exceeded**: Whether the answer documents alone exceeded `max_encoder_len`
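The fields above can be sanity-checked when loading the file. This is a minimal sketch; `validate_example` is an illustrative helper, not part of the preprocessing code:

```python
# Field names taken from the list above; "example" is a plain dict as
# loaded from the JSON file.
REQUIRED_FIELDS = {
    "answer_document_tokens", "distractor_document_tokens",
    "decoder_input_ids", "question_text", "correct_answer",
    "correct_text", "all_options", "patient_id", "question_idx",
    "answer_doc_ids", "num_answer_docs", "num_distractor_docs",
    "total_context_length", "budget_exceeded",
}

def validate_example(example: dict) -> list:
    """Return a list of problems found; an empty list means the example looks valid."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - example.keys()]
    if example.get("correct_answer") not in {"A", "B", "C", "D", "E"}:
        problems.append("correct_answer must be one of A-E")
    if example.get("num_answer_docs") != len(example.get("answer_document_tokens", [])):
        problems.append("num_answer_docs does not match answer_document_tokens")
    return problems
```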
### Metadata Stats
The metadata includes comprehensive statistics:
- Document length distributions (min/max/mean/median/p95)
- Answer document totals per question
- Final context lengths after budget application
- Number and percentage of questions where budget was exceeded
- Average number of answer/distractor documents per question
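The length distributions can be recomputed from the examples with the standard library alone. A sketch, assuming a simple nearest-rank percentile (the preprocessing code may use a different percentile convention):

```python
import statistics

def length_stats(lengths):
    """Compute min/max/mean/median/p95 over a list of document lengths."""
    ordered = sorted(lengths)
    # Nearest-rank p95: index 95% of the way through the sorted list.
    p95_idx = int(round(0.95 * (len(ordered) - 1)))
    return {
        "min": ordered[0],
        "max": ordered[-1],
        "mean": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p95": ordered[p95_idx],
    }
```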
## Token Budget Logic
1. **Answer documents are prioritized** - they are always included first
2. If total answer document length > max_encoder_len:
- Keep first N answer documents that fit within budget
- No distractor documents are added
- `budget_exceeded = True`
3. If answer documents fit:
- All answer documents included
- Distractor documents added greedily until budget is full (if `add_distractors=True`)
- `budget_exceeded = False`
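The rules above can be sketched as follows. Function and variable names are illustrative, not taken from the preprocessing code, and the treatment of the first oversized distractor (stop, rather than skip and continue) is an assumption:

```python
def apply_budget(answer_docs, distractor_docs, max_encoder_len, add_distractors=True):
    """Return (selected_docs, budget_exceeded) following the priority rules.

    Each document is a list of token IDs; its length counts against the budget.
    """
    answer_total = sum(len(d) for d in answer_docs)
    if answer_total > max_encoder_len:
        # Rule 2: keep only the leading answer documents that fit; no distractors.
        selected, used = [], 0
        for doc in answer_docs:
            if used + len(doc) > max_encoder_len:
                break
            selected.append(doc)
            used += len(doc)
        return selected, True
    # Rule 3: all answer documents fit.
    selected, used = list(answer_docs), answer_total
    if add_distractors:
        # Add distractors greedily until the next one would overflow the budget.
        for doc in distractor_docs:
            if used + len(doc) > max_encoder_len:
                break
            selected.append(doc)
            used += len(doc)
    return selected, False
```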
## Decoder Prompt Format
The decoder prompt uses the model's chat template with `add_generation_prompt=True`:
```
Answer this multiple choice question based on the medical documents provided in the context.
Question: {question}
A: {option_a}
B: {option_b}
C: {option_c}
D: {option_d}
E: {option_e}
You must respond in this exact format:
'The correct answer is [LETTER]: [Full option text]'
Example: 'The correct answer is B: Acute bronchitis.'
```
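The prompt body above could be assembled like this before the chat template is applied. A sketch; `build_prompt` and `PROMPT_TEMPLATE` are illustrative names, not part of the preprocessing code:

```python
PROMPT_TEMPLATE = (
    "Answer this multiple choice question based on the medical documents "
    "provided in the context.\n\n"
    "Question: {question}\n\n"
    "{options}\n\n"
    "You must respond in this exact format:\n"
    "'The correct answer is [LETTER]: [Full option text]'\n\n"
    "Example: 'The correct answer is B: Acute bronchitis.'"
)

def build_prompt(question, all_options):
    """Fill the template with the question and the lettered options (A-E)."""
    options = "\n".join(f"{letter}: {text}" for letter, text in sorted(all_options.items()))
    return PROMPT_TEMPLATE.format(question=question, options=options)
```

The resulting string would then be passed through the model tokenizer's chat template with `add_generation_prompt=True` to produce `decoder_input_ids`.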
## Usage
For evaluation with packed attention models:
```python
import json
import torch
# Load dataset
with open("longhealth_preprocessed.json") as f:
data = json.load(f)
# Process example
example = data["examples"]["patient_001_q00"]
# Prepare for model
encoder_inputs = [
torch.tensor(tokens, dtype=torch.long)
for tokens in example["answer_document_tokens"]
] + [
torch.tensor(tokens, dtype=torch.long)
for tokens in example["distractor_document_tokens"]
]
decoder_input = torch.tensor(example["decoder_input_ids"], dtype=torch.long)
# Evaluate response
def is_correct(generated_text, correct_letter, correct_text):
    # Match the required response format rather than a bare letter, since a
    # single letter like "A" would spuriously match almost any sentence.
    return (f"The correct answer is {correct_letter}" in generated_text and
            correct_text.lower() in generated_text.lower())
```
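A full evaluation run would loop this check over every example. A minimal sketch, where `generate` stands in for a model-specific function that returns the decoded answer string:

```python
def evaluate(data, generate):
    """Return accuracy over all examples; generate(example) -> answer string."""
    correct = 0
    for example in data["examples"].values():
        text = generate(example)
        # Same check as is_correct: require the exact response format.
        if (f"The correct answer is {example['correct_answer']}" in text
                and example["correct_text"].lower() in text.lower()):
            correct += 1
    return correct / len(data["examples"])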
## Citation
If you use this dataset, please cite the original LongHealth paper:
```
[LongHealth citation to be added]
```
## License
Same as original LongHealth benchmark.