| # LongHealth Benchmark - Preprocessed for Packed Attention | |
| This dataset is a preprocessed version of the LongHealth benchmark, formatted for evaluation with packed document attention models. | |
| ## Dataset Description | |
| LongHealth is a medical question-answering benchmark that tests long-context understanding across multiple clinical documents. Each example consists of multiple medical documents (notes, lab reports, discharge summaries) and a multiple-choice question. | |
| ## Format | |
| The dataset is a JSON file with the following structure: | |
| ```json | |
| { | |
| "metadata": { | |
| "source": "benchmark_v5.json", | |
| "tokenizer": "google/t5gemma-2b-2b-prefixlm-it", | |
| "max_encoder_len": 55000, | |
| "add_distractors": true, | |
| "preprocessing_timestamp": "2024-12-02T...", | |
| "num_patients": X, | |
| "num_questions": Y, | |
| "stats": { ... } | |
| }, | |
| "examples": { | |
| "patient_001_q00": { | |
| "answer_document_tokens": [[101, 2023, ...], [101, 5011, ...]], | |
| "distractor_document_tokens": [[101, 8392, ...], ...], | |
| "decoder_input_ids": [1, 2, 3, ...], | |
| "question_text": "What was the primary diagnosis?", | |
| "correct_answer": "A", | |
| "correct_text": "Pneumonia", | |
| "all_options": { | |
| "A": "Pneumonia", | |
| "B": "Bronchitis", | |
| "C": "...", | |
| "D": "...", | |
| "E": "..." | |
| }, | |
| "patient_id": "patient_001", | |
| "question_idx": 0, | |
| "answer_doc_ids": ["text_0", "text_1"], | |
| "num_answer_docs": 2, | |
| "num_distractor_docs": 8, | |
| "total_context_length": 12453, | |
| } | |
| } | |
| } | |
| ``` | |
| ## Field Descriptions | |
| ### Example Fields | |
- **answer_document_tokens**: List of tokenized documents that contain the answer (always included)
- **distractor_document_tokens**: List of tokenized distractor documents (included only when the token budget allows)
| - **decoder_input_ids**: Tokenized decoder prompt with chat template applied | |
| - **question_text**: Original question text | |
| - **correct_answer**: Correct answer letter (A, B, C, D, or E) | |
| - **correct_text**: Full text of the correct option | |
| - **all_options**: Dictionary mapping letters to option texts | |
| - **patient_id**: ID of the patient case | |
| - **question_idx**: Index of question within patient case (0-19) | |
| - **answer_doc_ids**: Original document IDs that contain the answer | |
| - **num_answer_docs**: Number of answer documents included | |
| - **num_distractor_docs**: Number of distractor documents included | |
| - **total_context_length**: Total tokens in encoder context | |
- **budget_exceeded**: Whether the answer documents alone exceeded `max_encoder_len` (see Token Budget Logic below)
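These counts make a quick consistency check possible. A minimal sketch, assuming `total_context_length` counts only the packed document tokens (not the decoder prompt) and reusing the file name from the Usage section below:

```python
import json

with open("longhealth_preprocessed.json") as f:
    data = json.load(f)

for key, ex in data["examples"].items():
    # Counts should match the stored token lists
    assert ex["num_answer_docs"] == len(ex["answer_document_tokens"])
    assert ex["num_distractor_docs"] == len(ex["distractor_document_tokens"])
    # Assumption: total_context_length is the sum of all packed document lengths
    packed = ex["answer_document_tokens"] + ex["distractor_document_tokens"]
    assert ex["total_context_length"] == sum(len(doc) for doc in packed)
    assert ex["correct_answer"] in ex["all_options"]
```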
| ### Metadata Stats | |
| The metadata includes comprehensive statistics: | |
| - Document length distributions (min/max/mean/median/p95) | |
| - Answer document totals per question | |
| - Final context lengths after budget application | |
| - Number and percentage of questions where budget was exceeded | |
| - Average number of answer/distractor documents per question | |
| ## Token Budget Logic | |
| 1. **Answer documents are prioritized** - they are always included first | |
2. If the total answer-document length exceeds `max_encoder_len`:
| - Keep first N answer documents that fit within budget | |
| - No distractor documents are added | |
| - `budget_exceeded = True` | |
| 3. If answer documents fit: | |
| - All answer documents included | |
| - Distractor documents added greedily until budget is full (if `add_distractors=True`) | |
| - `budget_exceeded = False` | |
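A minimal sketch of this packing rule, assuming documents are lists of token IDs (the helper name `pack_documents` is illustrative, not taken from the preprocessing script):

```python
def pack_documents(answer_docs, distractor_docs, max_encoder_len, add_distractors=True):
    # answer_docs / distractor_docs: lists of token-ID lists
    packed, used = [], 0

    # Steps 1-2: answer documents are always tried first, in order.
    for doc in answer_docs:
        if used + len(doc) > max_encoder_len:
            # Answer documents alone exceed the budget: keep the prefix
            # that fits and add no distractors.
            return packed, True  # budget_exceeded = True
        packed.append(doc)
        used += len(doc)

    # Step 3: all answer docs fit; fill the remaining budget with distractors.
    if add_distractors:
        for doc in distractor_docs:
            if used + len(doc) > max_encoder_len:
                break  # greedy fill stops at the first doc that does not fit
            packed.append(doc)
            used += len(doc)

    return packed, False  # budget_exceeded = False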
| ## Decoder Prompt Format | |
| The decoder prompt uses the model's chat template with `add_generation_prompt=True`: | |
| ``` | |
| Answer this multiple choice question based on the medical documents provided in the context. | |
| Question: {question} | |
| A: {option_a} | |
| B: {option_b} | |
| C: {option_c} | |
| D: {option_d} | |
| E: {option_e} | |
| You must respond in this exact format: | |
| 'The correct answer is [LETTER]: [Full option text]' | |
| Example: 'The correct answer is B: Acute bronchitis.' | |
| ``` | |
| ## Usage | |
| For evaluation with packed attention models: | |
```python
import json

import torch

# Load dataset
with open("longhealth_preprocessed.json") as f:
    data = json.load(f)

# Pick an example
example = data["examples"]["patient_001_q00"]

# Keep each document as a separate sequence so the packed-attention
# model can treat it as its own segment
encoder_inputs = [
    torch.tensor(tokens, dtype=torch.long)
    for tokens in example["answer_document_tokens"] + example["distractor_document_tokens"]
]
decoder_input = torch.tensor(example["decoder_input_ids"], dtype=torch.long)

# Evaluate a response: require the letter in the mandated response format
# (a bare "A" would match stray capitals) plus the full option text
def is_correct(generated_text, correct_letter, correct_text):
    return (f"answer is {correct_letter}" in generated_text and
            correct_text.lower() in generated_text.lower())
```
| ## Citation | |
| If you use this dataset, please cite the original LongHealth paper: | |
| ``` | |
| [LongHealth citation to be added] | |
| ``` | |
| ## License | |
This dataset inherits the license of the original LongHealth benchmark.