---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: >-
  To access MedGemma on Hugging Face, you're required to review and
  agree to [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms).
  To do this, please ensure you're logged in to Hugging Face and click below.
  Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- medical
- radiology
- clinical-reasoning
- dermatology
- pathology
- ophthalmology
- chest-x-ray
---
# MedGemma 1.5 model card

Note: This card describes MedGemma 1.5, which is only available as a 4B
multimodal instruction-tuned variant. For information on MedGemma 1 variants,
refer to the [MedGemma 1 model
card](https://developers.google.com/health-ai-developer-foundations/medgemma/model-card-v1).

**Model documentation:** [MedGemma](https://developers.google.com/health-ai-developer-foundations/medgemma)

**Resources:**

* Model on Google Cloud Model Garden: [MedGemma](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma)
* Models on Hugging Face: [Collection](https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4)
* Concept applications built using MedGemma: [Collection](https://huggingface.co/collections/google/medgemma-concept-apps-686ea036adb6d51416b0928a)
* [GitHub repository](https://github.com/google-health/medgemma)
* [Tutorial notebooks](https://github.com/google-health/medgemma/blob/main/notebooks)
* License: The use of MedGemma is governed by the [Health AI Developer
  Foundations terms of
  use](https://developers.google.com/health-ai-developer-foundations/terms).
* [Support](https://developers.google.com/health-ai-developer-foundations/medgemma/get-started.md#contact)
  channels

**Author:** Google

## Model information

This section describes the specifications and recommended use of the MedGemma
model.

### Description

MedGemma is a collection of [Gemma 3](https://ai.google.dev/gemma/docs/core)
variants that are trained for performance on medical text and image
comprehension. Developers can use MedGemma to accelerate building
healthcare-based AI applications.

MedGemma 1.5 4B is an updated version of the MedGemma 1 4B model.

MedGemma 1.5 4B expands support for several new medical imaging and data
processing applications, including:

* **High-dimensional medical imaging:** Interpretation of three-dimensional
  volume representations of Computed Tomography (CT) and Magnetic Resonance
  Imaging (MRI).
* **Whole-slide histopathology imaging (WSI):** Simultaneous interpretation of
  multiple patches from a whole-slide histopathology image as input.
* **Longitudinal medical imaging:** Interpretation of chest X-rays in the
  context of prior images (e.g., comparing current versus historical scans).
* **Anatomical localization:** Bounding box–based localization of anatomical
  features and findings in chest X-rays.
* **Medical document understanding:** Extraction of structured data, such as
  values and units, from unstructured medical lab reports.
* **Electronic Health Record (EHR) understanding:** Interpretation of
  text-based EHR data.

In addition to these new features, MedGemma 1.5 4B delivers improved accuracy on
medical text reasoning and a modest improvement on standard 2D image
interpretation compared to MedGemma 1 4B.

MedGemma utilizes a [SigLIP](https://arxiv.org/abs/2303.15343) image encoder
that has been specifically pre-trained on a variety of de-identified medical
data, including chest X-rays, dermatology images, ophthalmology images, and
histopathology slides. The LLM component is trained on a diverse set of medical
data, including medical text, medical question-answer pairs, FHIR-based
electronic health record data, 2D and 3D radiology images, histopathology
images, ophthalmology images, dermatology images, and lab reports for document
understanding.

MedGemma 1.5 4B has been evaluated on a range of clinically relevant benchmarks
to illustrate its baseline performance. These evaluations are based on both open
benchmark datasets and internally curated datasets. Developers are expected to
fine-tune MedGemma for improved performance on their use case. Consult the
[Intended use section](https://developers.google.com/health-ai-developer-foundations/medgemma/model-card.md#intended_use)
for more details.

MedGemma is optimized for medical applications that involve a text generation
component. For medical image-based applications that do not involve text
generation, such as data-efficient classification, zero-shot classification, or
content-based or semantic image retrieval, the [MedSigLIP image
encoder](https://developers.google.com/health-ai-developer-foundations/medsiglip/model-card)
is recommended. MedSigLIP is based on the same image encoder that powers
MedGemma 1 and MedGemma 1.5.

### How to use

The following are some example code snippets to help you quickly get started
running the model locally on GPU.

Note: If you need to use the model at scale, we recommend creating a production
version using [Model
Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma).
Model Garden provides various deployment options and tutorial notebooks,
including specialized server-side image processing options for efficiently
handling large medical images: Whole Slide Digital Pathology (WSI) or volumetric
scans (CT/MRI) stored in [Cloud DICOM
Store](https://docs.cloud.google.com/healthcare-api/docs/concepts/dicom) or
[Google Cloud Storage (GCS)](https://cloud.google.com/storage).

First, install the Transformers library. Gemma 3 is supported starting from
transformers 4.50.0.

```sh
$ pip install -U transformers
```

Next, use either the pipeline wrapper or the Transformers API directly to send a
chest X-ray image and a question to the model.

Note that CT, MRI, and whole-slide histopathology images require some
pre-processing; see the
[CT](https://github.com/google-health/medgemma/blob/main/notebooks/high_dimensional_ct_hugging_face.ipynb)
and
[WSI](https://github.com/google-health/medgemma/blob/main/notebooks/high_dimensional_pathology_hugging_face.ipynb)
notebooks for examples.
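
For orientation, the following is a minimal, self-contained sketch of one plausible slice-preparation step for a CT volume: window the Hounsfield-unit (HU) values to a fixed range and convert each axial slice to an 8-bit image that can be passed to the model alongside a text prompt. The window bounds and the synthetic volume below are illustrative assumptions; consult the CT notebook above for the exact procedure used with MedGemma.

```python
import numpy as np
from PIL import Image

# Illustrative stand-in for a loaded CT volume in Hounsfield units,
# shaped (num_slices, height, width). A real volume would come from DICOM/NIfTI.
volume_hu = np.random.randint(-1000, 1000, size=(4, 512, 512)).astype(np.float32)

# Assumed soft-tissue window; the notebook may use different bounds.
WINDOW_MIN, WINDOW_MAX = -150.0, 250.0

def hu_slice_to_image(slice_hu: np.ndarray) -> Image.Image:
    """Clip a HU slice to the window and rescale to an 8-bit grayscale image."""
    clipped = np.clip(slice_hu, WINDOW_MIN, WINDOW_MAX)
    scaled = (clipped - WINDOW_MIN) / (WINDOW_MAX - WINDOW_MIN) * 255.0
    return Image.fromarray(scaled.astype(np.uint8))

slice_images = [hu_slice_to_image(s) for s in volume_hu]
print(f"Prepared {len(slice_images)} slices of size {slice_images[0].size}")
```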

**Run model with the pipeline API**

```python
from transformers import pipeline
from PIL import Image
import requests
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-1.5-4b-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this X-ray"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=2000)
print(output[0]["generated_text"][-1]["content"])
```

**Run the model directly**

```python
# Make sure to install the accelerate library first via `pip install accelerate`
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests
import torch

model_id = "google/medgemma-1.5-4b-it"

model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this X-ray"}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=2000, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
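
Because MedGemma 1.5 adds longitudinal chest X-ray interpretation, you may want to pass a prior and a current study in a single turn. The following is a minimal sketch built on the pipeline example above; it assumes the chat template accepts multiple interleaved image and text entries per message (as in Gemma 3), and `prior_image` is a placeholder you should replace with a real prior study.

```python
# `pipe` and `image` are reused from the pipeline example above; `prior_image`
# is a placeholder for a previously acquired chest X-ray loaded the same way.
prior_image = image  # substitute a real prior study here

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Prior chest X-ray:"},
            {"type": "image", "image": prior_image},
            {"type": "text", "text": "Current chest X-ray:"},
            {"type": "image", "image": image},
            {
                "type": "text",
                "text": "Compare the current study to the prior and describe any interval change.",
            },
        ],
    }
]

output = pipe(text=messages, max_new_tokens=1000)
print(output[0]["generated_text"][-1]["content"])
```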

### Examples

Refer to the growing collection of [tutorial
notebooks](https://github.com/google-health/medgemma/blob/main/notebooks) to see
how to use or fine-tune MedGemma.

### Model architecture overview

The MedGemma model is built on [Gemma 3](https://ai.google.dev/gemma/) and
uses the same decoder-only transformer architecture as Gemma 3. To read more
about the architecture, consult the Gemma 3 [model
card](https://ai.google.dev/gemma/docs/core/model_card_3).

### Technical specifications

* **Model type**: Decoder-only Transformer architecture; see the [Gemma 3
  Technical
  Report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf)
* **Input modalities**: Text, vision (multimodal)
* **Output modality**: Text only
* **Attention mechanism**: Grouped-query attention (GQA)
* **Context length**: Supports long context, at least 128K tokens
* **Key publication**: [https://arxiv.org/abs/2507.05201](https://arxiv.org/abs/2507.05201)
* **Model created**: **4B multimodal**: Jan 13, 2026
* **Model version**: **4B multimodal**: 1.5.0

### Citation

When using this model, please cite: Sellergren et al. "MedGemma Technical
Report." *arXiv preprint arXiv:2507.05201* (2025).

```none
@article{sellergren2025medgemma,
  title={MedGemma Technical Report},
  author={Sellergren, Andrew and Kazemzadeh, Sahar and Jaroensri, Tiam and Kiraly, Atilla and Traverse, Madeleine and Kohlberger, Timo and Xu, Shawn and Jamil, Fayaz and Hughes, Cían and Lau, Charles and others},
  journal={arXiv preprint arXiv:2507.05201},
  year={2025}
}
```

### Inputs and outputs

**Input**:

* Text string, such as a question or prompt
* Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
* Total input length of 128K tokens

**Output**:

* Generated text in response to the input, such as an answer to a question,
  analysis of image content, or a summary of a document
* Total output length of 8192 tokens
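
As a quick sanity check on context budgeting under these limits: with each image costing 256 tokens, a rough sketch of the remaining text budget for a multi-image request (for example, a CT volume passed as a stack of 2D slices) looks like this. The slice count is an arbitrary example, and the limits are treated as exact for simplicity.

```python
# Rough context budgeting under the input limits above.
CONTEXT_LIMIT = 128_000   # total input tokens
TOKENS_PER_IMAGE = 256    # each image is encoded to 256 tokens

num_slices = 120  # e.g., a CT series passed as 120 two-dimensional slices
image_tokens = num_slices * TOKENS_PER_IMAGE
text_budget = CONTEXT_LIMIT - image_tokens

print(f"{num_slices} slices -> {image_tokens} image tokens, "
      f"{text_budget} tokens left for text")  # 30720 image tokens, 97280 left
```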

### Performance and evaluations

MedGemma was evaluated across a range of multimodal classification, report
generation, visual question answering, and text-based tasks.

### Key performance metrics

#### Imaging evaluations

The multimodal performance of MedGemma 1.5 4B was evaluated across a range of
benchmarks, focusing on radiology (2D, longitudinal 2D, and 3D), dermatology,
histopathology, ophthalmology, document understanding, and multimodal clinical
reasoning. See the Data card for details of individual datasets.

We also list the previous results for MedGemma 1 4B and 27B (multimodal models
only), as well as for Gemma 3 4B, for comparison.

| Task / Dataset | Metric | Gemma 3 4B | MedGemma 1 4B | MedGemma 1.5 4B | MedGemma 1 27B |
| :---- | :---- | :---- | :---- | :---- | :---- |
| **3D radiology image classification** | | | | | |
| CT Dataset 1\* (7 conditions/abnormalities) | Macro accuracy | 54.5 | 58.2 | 61.1 | 57.8 |
| CT-RATE (validation, 18 conditions/abnormalities) | Macro F1 | | 23.5 | 27.0 | |
| | Macro precision | | 34.5 | 34.2 | |
| | Macro recall | | 34.1 | 42.0 | |
| MRI Dataset 1\* (10 conditions/abnormalities) | Macro accuracy | 51.1 | 51.3 | 64.7 | 57.4 |
| **2D image classification** | | | | | |
| MIMIC CXR\*\* | Macro F1 (top 5 conditions) | 81.2 | 88.9 | 89.5 | 90.0 |
| CheXpert CXR | Macro F1 (top 5 conditions) | 32.6 | 48.1 | 48.2 | 49.9 |
| CXR14 | Macro F1 (3 conditions) | 32.0 | 50.1 | 48.4 | 45.3 |
| PathMCQA\* (histopathology) | Accuracy | 37.1 | 69.8 | 70.0 | 71.6 |
| WSI-Path\* (whole-slide histopathology) | ROUGE | 2.3 | 2.2 | 49.4 | 4.1 |
| US-DermMCQA\* | Accuracy | 52.5 | 71.8 | 73.5 | 71.7 |
| EyePACS\* (fundus) | Accuracy | 14.4 | 64.9 | 76.8 | 75.3 |
| **Disease progression classification (longitudinal)** | | | | | |
| MS-CXR-T | Macro accuracy | 59.0 | 61.1 | 65.7 | 50.1 |
| **Visual question answering** | | | | | |
| SLAKE (radiology) | Tokenized F1 | 40.2 | 72.3 | 59.7\*\*\*\* | 70.3 |
| | Accuracy (on closed subset) | 62.0 | 87.6 | 82.8 | 85.9 |
| VQA-RAD\*\*\* (radiology) | Tokenized F1 | 33.6 | 49.9 | 48.1 | 46.7 |
| | Accuracy (on closed subset) | 42.1 | 69.1 | 70.2 | 67.1 |
| **Region of interest detection** | | | | | |
| Chest ImaGenome: Anatomy bounding box detection | Intersection over union | 5.7 | 3.1 | 38.0 | 16.0 |
| **Multimodal medical knowledge and reasoning** | | | | | |
| MedXpertQA (text + multimodal questions) | Accuracy | 16.4 | 18.8 | 20.9 | 26.8 |

\* Internal datasets. CT Dataset 1 and MRI Dataset 1 are described below; for
evaluation, perfectly balanced samples were drawn per condition. US-DermMCQA is
described in [Liu et al. (2020, Nature
Medicine)](https://www.nature.com/articles/s41591-020-0842-3), presented as a
4-way MCQ per example for skin condition classification. PathMCQA is based on
multiple datasets, presented as 3-9 way MCQ per example for identification,
grading, and subtyping of breast, cervical, and prostate cancer. WSI-Path is a
dataset of de-identified H&E WSIs and associated final diagnosis text from
original pathology reports, comprising single-WSI examples and previously
described in [Ahmed et al. (2024, arXiv)](https://arxiv.org/pdf/2406.19578).
EyePACS is a dataset of fundus images with classification labels based on
5-level diabetic retinopathy severity (None, Mild, Moderate, Severe,
Proliferative). A subset of these datasets is described in more detail in the
[MedGemma Technical Report](https://arxiv.org/abs/2507.05201).

\*\* Based on radiologist-adjudicated labels, described in [Yang (2024,
arXiv)](https://arxiv.org/pdf/2405.03162), Section A.1.1.

\*\*\* Based on the "balanced split," described in [Yang (2024,
arXiv)](https://arxiv.org/pdf/2405.03162).

\*\*\*\* While MedGemma 1.5 4B exhibits strong radiology interpretation
capabilities, it was less optimized for the SLAKE Q&A format compared to
MedGemma 1 4B. Fine-tuning on SLAKE may improve results.

#### Chest X-ray report generation

MedGemma chest X-ray (CXR) report generation performance was evaluated on
[MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/) using the [RadGraph
F1 metric](https://arxiv.org/abs/2106.14463). We compare MedGemma 1.5 4B against
a fine-tuned version of MedGemma 1 4B and the MedGemma 1 27B base model.

| Task / Dataset | Metric | MedGemma 1 4B (tuned for CXR) | MedGemma 1.5 4B | MedGemma 1 27B |
| :---- | :---- | :---- | :---- | :---- |
| **Chest X-ray report generation** | | | | |
| MIMIC-CXR | RadGraph F1 | 30.3 | 27.2 | 27.0 |
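
For reference, a report-generation request can be issued with the same pipeline shown earlier; the prompt below is illustrative and not necessarily the exact prompt used in this evaluation.

```python
# `pipe` and `image` are reused from the pipeline example above.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {
                "type": "text",
                "text": "Write a chest X-ray report for this image with "
                        "FINDINGS and IMPRESSION sections.",
            },
        ],
    }
]

output = pipe(text=messages, max_new_tokens=1000)
print(output[0]["generated_text"][-1]["content"])
```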

#### Text evaluations

MedGemma 1.5 4B was evaluated across a range of text-only benchmarks for medical
knowledge and reasoning. Existing results for MedGemma 1 variants and Gemma 3
are shown for comparison.

| Dataset | Gemma 3 4B | MedGemma 1 4B | MedGemma 1.5 4B | MedGemma 1 27B |
| :---- | :---- | :---- | :---- | :---- |
| MedQA (4-op) | 50.7 | 64.4 | 69.1 | 85.3 |
| MedMCQA | 45.4 | 55.7 | 59.8 | 70.2 |
| PubMedQA | 68.4 | 73.4 | 68.2 | 77.2 |
| MMLU Med | 67.2 | 70.0 | 69.6 | 86.2 |
| MedXpertQA (text only) | 11.6 | 14.2 | 16.4 | 23.7 |
| AfriMed-QA (25-question test set) | 48.0 | 52.0 | 56.0 | 72.0 |

#### Medical record evaluations

EHR understanding and interpretation were evaluated with question-answering
benchmark datasets over synthetic longitudinal text-based EHR data and
real-world de-identified discharge summaries, for MedGemma 1.5 4B, MedGemma 1
variants, and Gemma 3 4B.

| Dataset | Metric | Gemma 3 4B | MedGemma 1 4B | MedGemma 1.5 4B | MedGemma 1 27B |
| :---- | :---- | :---- | :---- | :---- | :---- |
| EHRQA\* | Accuracy | 70.9 | 67.6 | 89.6 | 90.5 |
| EHRNoteQA | Accuracy | 78.0 | 79.4 | 80.4 | 90.7 |

\* Internal dataset
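
As a minimal text-only sketch of this kind of task, the snippet below asks a question against a small record excerpt using the model and processor from the "Run the model directly" example; the record content is synthetic and invented for illustration.

```python
# `model` and `processor` are reused from the direct example above; the record
# below is synthetic and invented for illustration.
import torch

ehr_excerpt = (
    "Encounter 2024-03-02: Type 2 diabetes. HbA1c 8.2%. Started metformin 500 mg BID.\n"
    "Encounter 2024-09-14: HbA1c 7.1%. Metformin continued at the same dose.\n"
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Patient record:\n" + ehr_excerpt
                        + "\nHow did the HbA1c change between the two encounters?",
            }
        ],
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=300, do_sample=False)

print(processor.decode(
    generation[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```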

#### Document understanding evaluations

Evaluation of converting unstructured medical lab report documents
(PDFs/images) into structured JSON data.

| Task / Dataset | Metric | Gemma 3 4B | MedGemma 1 4B | MedGemma 1.5 4B | MedGemma 1 27B |
| :---- | :---- | :---- | :---- | :---- | :---- |
| **PDF-to-JSON lab test data conversion** | | | | | |
| EHR Dataset 2\* (raw PDF to JSON) | Macro F1 (average over per-document F1 scores) | 84.0 | 78.0 | 91.0 | 76.0 |
| | Micro F1 (F1 across all extracted data fields) | 81.0 | 75.0 | 88.0 | 70.0 |
| EHR Dataset 3\* (raw PDF to JSON) | Macro F1 | 61.0 | 50.0 | 71.0 | 66.0 |
| | Micro F1 | 61.0 | 51.0 | 70.0 | 69.0 |
| Mendeley Clinical Laboratory Test Reports (PNG image to JSON) | Macro F1 | 83.0 | 85.0 | 85.0 | 69.0 |
| | Micro F1 | 78.0 | 81.0 | 83.0 | 68.0 |
| EHR Dataset 4\* | Macro F1 | 41.0 | 25.0 | 64.0 | |
| | Micro F1 | 41.0 | 33.0 | 67.0 | |

\* Internal datasets.
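
A minimal sketch of this task using the pipeline from the example above: render or scan the lab report to an image, ask for a fixed JSON schema, and parse the reply. The schema and prompt wording here are illustrative assumptions, and a real report image should be loaded in place of the placeholder.

```python
import json

# `pipe` and `image` are reused from the pipeline example above. `report_image`
# should be a PIL image of a lab report page; the chest X-ray from earlier is
# only a stand-in so the snippet runs end to end.
report_image = image

prompt = (
    "Extract every lab test from this report as JSON: a list of objects with "
    'keys "test_name", "value", "unit", and "reference_range". '
    "Return only the JSON."
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": report_image},
            {"type": "text", "text": prompt},
        ],
    }
]

output = pipe(text=messages, max_new_tokens=1000)
reply = output[0]["generated_text"][-1]["content"]

try:
    # Tolerate an optional markdown code fence around the JSON.
    records = json.loads(reply.strip().removeprefix("```json").removesuffix("```"))
    print(records)
except json.JSONDecodeError:
    print("Model reply was not valid JSON:\n", reply)
```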

### Ethics and safety evaluation

#### Evaluation approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

* **Child safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
* **Content safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies, including harassment, violence and gore, and hate
  speech.
* **Representational harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies, including bias, stereotyping, and harmful
  associations or inaccuracies.
* **General medical harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies, including information quality and
  potentially harmful responses or inaccuracies.

In addition to development-level evaluations, we conduct "assurance
evaluations," which are our "arms-length" internal evaluations for
responsibility governance decision making. They are conducted separately from
the model development team and inform decision making about release. High-level
findings are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making. Notable
assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.

#### Evaluation results

For all areas of safety testing, we saw safe levels of performance across the
categories of child safety, content safety, and representational harms compared
to previous Gemma models. All testing was conducted without safety filters to
evaluate the model's capabilities and behaviors. For both text-to-text and
image-to-text, the model produced minimal policy violations. A limitation of our
evaluations was that they included primarily English-language prompts.

## Data card

### Dataset overview

#### Training

The base Gemma models are pre-trained on a large corpus of text and code data.
MedGemma multimodal variants utilize a
[SigLIP](https://arxiv.org/abs/2303.15343) image encoder that has been
specifically pre-trained on a variety of de-identified medical data, including
radiology images, histopathology images, ophthalmology images, and dermatology
images. Their LLM component is trained on a diverse set of medical data,
including medical text, medical question-answer pairs, FHIR-based electronic
health record data (27B multimodal only), radiology images, histopathology
patches, ophthalmology images, and dermatology images.

#### Evaluation

MedGemma models have been evaluated on a comprehensive set of clinically
relevant benchmarks across multiple datasets, tasks, and modalities. These
benchmarks include both open and internal datasets.

#### Source

MedGemma utilizes a combination of public and private datasets.

This model was trained on diverse public datasets including MIMIC-CXR (chest
X-rays and reports), Chest ImaGenome (bounding boxes linking image findings
with anatomical regions in MIMIC-CXR), SLAKE (multimodal medical images and
questions), PAD-UFES-20 (skin lesion images and data), SCIN (dermatology
images), TCGA (cancer genomics data), CAMELYON (lymph node histopathology
images), PMC-OA (biomedical literature with images), and Mendeley Digital Knee
X-Ray (knee X-rays).

Additionally, multiple diverse proprietary datasets were licensed and
incorporated (described next).

### Data ownership and documentation

* [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/): MIT Laboratory
  for Computational Physiology and Beth Israel Deaconess Medical Center
  (BIDMC).
* [MS-CXR-T](https://physionet.org/content/ms-cxr-t/1.0.0/): Microsoft
  Research Health Futures, Microsoft Research.
* [ChestX-ray14](https://pmc.ncbi.nlm.nih.gov/articles/PMC6476887/): National
  Institutes of Health Clinical Center.
* [SLAKE](https://www.med-vqa.com/slake/): The Hong Kong Polytechnic
  University (PolyU), with collaborators including West China Hospital of
  Sichuan University and Sichuan Academy of Medical Sciences / Sichuan
  Provincial People's Hospital.
* [PAD-UFES-20](https://pmc.ncbi.nlm.nih.gov/articles/PMC7479321/): Federal
  University of Espírito Santo (UFES), Brazil, through its Dermatological and
  Surgical Assistance Program (PAD).
* [SCIN](https://github.com/google-research-datasets/scin): A collaboration
  between Google Health and Stanford Medicine.
* [TCGA](https://portal.gdc.cancer.gov/) (The Cancer Genome Atlas): A joint
  effort of the National Cancer Institute and the National Human Genome
  Research Institute. Data from TCGA are available via the Genomic Data
  Commons (GDC).
* [CAMELYON](https://camelyon17.grand-challenge.org/Data/): The data was
  collected from Radboud University Medical Center and University Medical
  Center Utrecht in the Netherlands.
* [PMC-OA (PubMed Central Open Access
  Subset)](https://catalog.data.gov/dataset/pubmed-central-open-access-subset-pmc-oa):
  Maintained by the National Library of Medicine (NLM) and the National Center
  for Biotechnology Information (NCBI), which are part of the NIH.
* [MedQA](https://arxiv.org/pdf/2009.13081): This dataset was created by a
  team of researchers led by Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung
  Weng, Hanyi Fang, and Peter Szolovits.
* [MedMCQA](https://arxiv.org/abs/2203.14371): This dataset was created by
  Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu from Saama AI
  Research, Chennai, India.
* [PubMedQA](https://arxiv.org/abs/1909.06146): This dataset was created by
  Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, and Xinghua Lu
  from the University of Pittsburgh, Carnegie Mellon University, and Google.
* [LiveQA](https://trec.nist.gov/pubs/trec26/papers/Overview-QA.pdf): This
  dataset was created by Asma Ben Abacha, Eugene Agichtein, Yuval Pinter, and
  Dina Demner-Fushman from the U.S. National Library of Medicine, Emory
  University, and Georgia Institute of Technology.
* [Mendeley Digital Knee
  X-Ray](https://data.mendeley.com/datasets/t9ndx37v5h/1): This dataset is
  from Rani Channamma University, and is hosted on Mendeley Data.
* [AfriMed-QA](https://afrimedqa.com/): This dataset was developed and led by
  multiple collaborating organizations and researchers, including key
  contributors Intron Health, SisonkeBiotik, BioRAMP, Georgia Institute of
  Technology, and MasakhaneNLP.
* [VQA-RAD](https://www.nature.com/articles/sdata2018251): This dataset was
  created by a research team led by Jason J. Lau, Soumya Gayen, Asma Ben
  Abacha, and Dina Demner-Fushman and their affiliated institutions (the U.S.
  National Library of Medicine and National Institutes of Health).
* [Chest ImaGenome](https://physionet.org/content/chest-imagenome/1.0.0/): IBM
  Research.
* [MedExpQA](https://www.sciencedirect.com/science/article/pii/S0933365724001805):
  This dataset was created by researchers at the HiTZ Center (Basque Center
  for Language Technology and Artificial Intelligence).
* [MedXpertQA](https://huggingface.co/datasets/TsinghuaC3I/MedXpertQA): This
  dataset was developed by researchers at Tsinghua University (Beijing, China)
  and Shanghai Artificial Intelligence Laboratory (Shanghai, China).
* [HealthSearchQA](https://huggingface.co/datasets/katielink/healthsearchqa):
  This dataset consists of 3,173 commonly searched consumer questions.
* [ISIC](https://www.isic-archive.com/): The International Skin Imaging
  Collaboration is a joint effort involving clinicians, researchers, and
  engineers from various institutions worldwide.
* [Mendeley Clinical Laboratory Test
  Reports](https://data.mendeley.com/datasets/bygfmk4rx9/2): This dataset is
  hosted on Mendeley and includes 260 clinical laboratory test reports issued
  by 24 laboratories in Egypt.
* [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE): Istanbul
  Medipol University Mega Hospital and University of Zurich / ETH Zurich.

In addition to the public datasets listed above, MedGemma was also trained on
de-identified, licensed datasets or datasets collected internally at Google from
consented participants.

* **CT dataset 1:** De-identified dataset of different axial CT studies across
  body parts (head, chest, abdomen) from a US-based radiology outpatient
  diagnostic center network.
* **MRI dataset 1:** De-identified dataset of different axial multi-parametric
  MRI studies across body parts (head, abdomen, knee) from a US-based
  radiology outpatient diagnostic center network.
* **Ophthalmology dataset 1 (EyePACS):** De-identified dataset of fundus
  images from diabetic retinopathy screening.
* **Dermatology dataset 1:** De-identified dataset of teledermatology skin
  condition images (both clinical and dermatoscopic) from Colombia.
* **Dermatology dataset 2:** De-identified dataset of skin cancer images (both
  clinical and dermatoscopic) from Australia.
* **Dermatology dataset 3:** De-identified dataset of non-diseased skin images
  from an internal data collection effort.
* **Dermatology dataset 4:** De-identified dataset featuring multiple images
  and longitudinal visits and records from Japan.
* **Dermatology dataset 5:** Dermatology dataset featuring unlabeled images.
* **Dermatology dataset 6:** De-identified cases from adult patients with data
  representing Fitzpatrick 5 or 6 skin types.
* **Pathology dataset 1:** De-identified dataset of histopathology H&E whole
  slide images created in collaboration with an academic research hospital and
  biobank in Europe. Comprises de-identified colon, prostate, and lymph node
  samples.
* **Pathology dataset 2:** De-identified dataset of lung histopathology H&E
  and IHC whole slide images created by a commercial biobank in the United
  States.
* **Pathology dataset 3:** De-identified dataset of prostate and lymph node
  H&E and IHC histopathology whole slide images created by a contract
  research organization in the United States.
* **Pathology dataset 4:** De-identified dataset of histopathology whole slide
  images created in collaboration with a large, tertiary teaching hospital in
  the United States. Comprises a diverse set of tissue and stain types,
  predominantly H&E.
* **EHR dataset 1:** Question/answer dataset drawn from synthetic FHIR records
  created by [Synthea](https://synthetichealth.github.io/synthea/). The test
  set includes 19 unique patients with 200 questions per patient divided into
  10 different categories.
* **EHR dataset 2:** De-identified lab reports across different departments in
  pathology, such as biochemistry, clinical pathology, hematology,
  microbiology, and serology.
* **EHR dataset 3:** De-identified lab reports across different departments in
  pathology, such as biochemistry, clinical pathology, hematology,
  microbiology, and serology, from at least 25 different labs.
* **EHR dataset 4:** Synthetic dataset of laboratory reports.
* **EHR dataset 5:** Synthetic dataset of approximately 60,000 health-relevant
  user queries.

### Data citation

* **MIMIC-CXR:** Johnson, A., Pollard, T., Mark, R., Berkowitz, S., & Horng,
  S. (2024). MIMIC-CXR Database (version 2.1.0). PhysioNet.
  [https://physionet.org/content/mimic-cxr/2.1.0/](https://physionet.org/content/mimic-cxr/2.1.0/)
  *and* Johnson, Alistair E. W., Tom J. Pollard, Seth J. Berkowitz, Nathaniel
  R. Greenbaum, Matthew P. Lungren, Chih-Ying Deng, Roger G. Mark, and Steven
  Horng. 2019. "MIMIC-CXR, a de-Identified Publicly Available Database of
  Chest Radiographs with Free-Text Reports." *Scientific Data 6* (1): 1–8.
* **MS-CXR-T:** Bannur, S., Hyland, S., Liu, Q., Pérez-García, F., Ilse, M.,
  Coelho de Castro, D., Boecking, B., Sharma, H., Bouzid, K., Schwaighofer,
  A., Wetscherek, M. T., Richardson, H., Naumann, T., Alvarez Valle, J., &
  Oktay, O. (2023). MS-CXR-T: Learning to Exploit Temporal Structure for
  Biomedical Vision-Language Processing (version 1.0.0). PhysioNet.
  [https://doi.org/10.13026/pg10-j984](https://doi.org/10.13026/pg10-j984).
* **ChestX-ray14:** Wang, Xiaosong, Yifan Peng, Le Lu, Zhiyong Lu,
  Mohammadhadi Bagheri, and Ronald M. Summers. "ChestX-ray8: Hospital-scale
  chest X-ray database and benchmarks on weakly-supervised classification and
  localization of common thorax diseases." In *Proceedings of the IEEE
  Conference on Computer Vision and Pattern Recognition*, pp. 2097-2106.
  2017.
* **SLAKE:** Liu, Bo, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu.
  2021. "SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical
  Visual Question Answering."
  [http://arxiv.org/abs/2102.09542](http://arxiv.org/abs/2102.09542).
* **PAD-UFES-20:** Pacheco, Andre GC, et al. "PAD-UFES-20: A skin lesion
  dataset composed of patient data and clinical images collected from
  smartphones." *Data in Brief* 32 (2020): 106221.
* **SCIN:** Ward, Abbi, Jimmy Li, Julie Wang, Sriram Lakshminarasimhan, Ashley
  Carrick, Bilson Campana, Jay Hartford, et al. 2024. "Creating an Empirical
  Dermatology Dataset Through Crowdsourcing With Web Search Advertisements."
  *JAMA Network Open 7* (11): e2446615–e2446615.
* **TCGA:** The results shown here are in whole or part based upon data
  generated by the TCGA Research Network:
  [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga).
* **CAMELYON16:** Ehteshami Bejnordi, Babak, Mitko Veta, Paul Johannes van
  Diest, Bram van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen A. W. M.
  van der Laak, et al. 2017. "Diagnostic Assessment of Deep Learning
  Algorithms for Detection of Lymph Node Metastases in Women With Breast
  Cancer." *JAMA 318* (22): 2199–2210.
* **CAMELYON17:** Bandi, Peter, et al. "From detection of individual
  metastases to classification of lymph node status at the patient level: the
  CAMELYON17 challenge." *IEEE Transactions on Medical Imaging* 38.2 (2018):
  550-560.
* **Mendeley Digital Knee X-Ray:** Gornale, Shivanand; Patravali, Pooja
  (2020), "Digital Knee X-ray Images", Mendeley Data, V1, doi:
  10.17632/t9ndx37v5h.1.
* **VQA-RAD:** Lau, Jason J., Soumya Gayen, Asma Ben Abacha, and Dina
  Demner-Fushman. 2018. "A Dataset of Clinically Generated Visual Questions
  and Answers about Radiology Images." *Scientific Data 5* (1): 1–10.
* **Chest ImaGenome:** Wu, J., Agu, N., Lourentzou, I., Sharma, A., Paguio,
  J., Yao, J. S., Dee, E. C., Mitchell, W., Kashyap, S., Giovannini, A., Celi,
  L. A., Syeda-Mahmood, T., & Moradi, M. (2021). Chest ImaGenome Dataset
  (version 1.0.0). PhysioNet. RRID:SCR\_007345.
  [https://doi.org/10.13026/wv01-y230](https://doi.org/10.13026/wv01-y230)
* **MedQA:** Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang,
  and Peter Szolovits. 2020. "What Disease Does This Patient Have? A
  Large-Scale Open Domain Question Answering Dataset from Medical Exams."
  [http://arxiv.org/abs/2009.13081](http://arxiv.org/abs/2009.13081).
* **MedMCQA:** Pal, Ankit, Logesh Kumar Umapathi, and Malaikannan
  Sankarasubbu. "MedMCQA: A large-scale multi-subject multi-choice dataset for
  medical domain question answering." *Conference on Health, Inference, and
  Learning. PMLR,* 2022.
* **PubMedQA:** Jin, Qiao, et al. "PubMedQA: A dataset for biomedical research
  question answering." *Proceedings of the 2019 Conference on Empirical
  Methods in Natural Language Processing and the 9th International Joint
  Conference on Natural Language Processing (EMNLP-IJCNLP).* 2019.
* **LiveQA:** Abacha, Asma Ben, et al. "Overview of the medical question
  answering task at TREC 2017 LiveQA." *TREC.* 2017.
* **AfriMed-QA:** Olatunji, Tobi, Charles Nimo, Abraham Owodunni, Tassallah
  Abdullahi, Emmanuel Ayodele, Mardhiyah Sanni, Chinemelu Aka, et al. 2024.
  "AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering
  Benchmark Dataset."
  [http://arxiv.org/abs/2411.15640](http://arxiv.org/abs/2411.15640).
* **MedExpQA:** Alonso, I., Oronoz, M., & Agerri, R. (2024). MedExpQA:
  Multilingual Benchmarking of Large Language Models for Medical Question
  Answering. *arXiv preprint arXiv:2404.05590*. Retrieved from
  [https://arxiv.org/abs/2404.05590](https://arxiv.org/abs/2404.05590)
* **MedXpertQA:** Zuo, Yuxin, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu,
  Ermo Hua, Kaiyan Zhang, Ning Ding, and Bowen Zhou. 2025. "MedXpertQA:
  Benchmarking Expert-Level Medical Reasoning and Understanding."
  [http://arxiv.org/abs/2501.18362](http://arxiv.org/abs/2501.18362).
* **HealthSearchQA:** Singhal, Karan, Shekoofeh Azizi, Tao Tu, S. Sara
  Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, et al. "Large language
  models encode clinical knowledge." *Nature* 620, no. 7972 (2023): 172-180.
* **ISIC:** Gutman, David; Codella, Noel C. F.; Celebi, Emre; Helba, Brian;
  Marchetti, Michael; Mishra, Nabin; Halpern, Allan. "Skin Lesion Analysis
  toward Melanoma Detection: A Challenge at the International Symposium on
  Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging
  Collaboration (ISIC)." eprint [arXiv:1605.01397,
  2016](https://arxiv.org/abs/1605.01397).
* **Mendeley Clinical Laboratory Test Reports:** Abdelmaksoud, Esraa;
  Gadallah, Ahmed; Asad, Ahmed (2022), "Clinical Laboratory Test Reports",
  Mendeley Data, V2, doi: 10.17632/bygfmk4rx9.2.
* **CheXpert:** Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S.,
  Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., Seekins, J.,
  Mong, D. A., Halabi, S. S., Sandberg, J. K., Jones, R., Larson, D. B.,
  Langlotz, C. P., Patel, B. N., Lungren, M. P., & Ng, A. Y. (2019). CheXpert:
  A Large Chest Radiograph Dataset with Uncertainty Labels and Expert
  Comparison. arXiv:1901.07031.
* **CT-RATE:** Hamamci, I. E., Er, S., Almas, F., Simsek, A. G., Esirgun, S.
  N., Dogan, I., Dasdelen, M. F., Wittmann, B., Menze, B., et al. (2024).
  CT-RATE Dataset. Hugging Face.
  [https://huggingface.co/datasets/ibrahimhamamci/CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE)
  *and* Hamamci, Ibrahim Ethem, Sezgin Er, Furkan Almas, Ayse Gulnihan Simsek,
  Sevval Nil Esirgun, Irem Dogan, Muhammed Furkan Dasdelen, Bastian Wittmann,
  et al. 2024. "Developing Generalist Foundation Models from a Multimodal
  Dataset for 3D Computed Tomography." *arXiv preprint arXiv:2403.17834*.
  [https://arxiv.org/abs/2403.17834](https://arxiv.org/abs/2403.17834)
* **EHRNoteQA:** Kweon, Sunjun, Jiyoun Kim, Heeyoung Kwak, Dongchul Cha,
  Hangyul Yoon, Kwanghyun Kim, Jeewon Yang, Seunghyun Won, and Edward Choi.
  (2024). "EHRNoteQA: An LLM Benchmark for Real-World Clinical Practice Using
  Discharge Summaries." arXiv:2402.16040.

### De-identification/anonymization

Google and its partners utilize datasets that have been rigorously anonymized or
de-identified to ensure the protection of individual research participants and
patient privacy.

## Implementation information

Details about the model internals.

### Software

Training was done using [JAX](https://github.com/jax-ml/jax).

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

## Use and limitations

### Intended use

MedGemma is an open multimodal generative AI model intended to be used as a
starting point that enables more efficient development of downstream healthcare
applications involving medical text and images. MedGemma is intended for
developers in the life sciences and healthcare space. Developers are responsible
for training, adapting, and making meaningful changes to MedGemma to accomplish
their specific intended use. MedGemma models can be fine-tuned by developers
using their own proprietary data for their specific tasks or solutions.

MedGemma is based on Gemma 3 and has been further trained on medical images and
text. MedGemma enables further development in medical contexts (image and
textual); however, the model has been trained using chest X-ray, histopathology,
dermatology, fundus images, CT, MRI, medical text/documents, and electronic
health records (EHR) data. Examples of tasks within MedGemma's training include
visual question answering on medical images such as radiographs, document
understanding, and answering textual medical questions.

### Benefits

* Provides strong baseline medical image and text comprehension for models of
  its size.
* This strong performance makes it efficient to adapt for downstream
  healthcare-based use cases, compared to models of similar size without
  medical data pre-training.
* This adaptation may involve prompt engineering, grounding, agentic
  orchestration, or fine-tuning depending on the use case, baseline validation
  requirements, and desired performance characteristics.

### Limitations

MedGemma is not intended to be used without appropriate validation, adaptation,
and/or meaningful modification by developers for their specific use case. The
outputs generated by MedGemma are not intended to directly inform clinical
diagnosis, patient management decisions, treatment recommendations, or any other
direct clinical practice applications. All outputs from MedGemma should be
considered preliminary and require independent verification, clinical
correlation, and further investigation through established research and
development methodologies.

MedGemma's multimodal capabilities have been primarily evaluated on single-image
tasks. MedGemma has not been evaluated in use cases that involve comprehension
of multiple images.

MedGemma has not been evaluated or optimized for multi-turn applications.

MedGemma's training may make it more sensitive to the specific prompt used than
Gemma 3.

When adapting MedGemma, developers should consider the following:

* **Bias in validation data:** As with any research, developers should ensure
  that any downstream application is validated to understand performance using
  data that is appropriately representative of the intended use setting for
  the specific application (e.g., age, sex, gender, condition, imaging device,
  etc.).
* **Data contamination concerns:** When evaluating the generalization
  capabilities of a large model like MedGemma in a medical context, there is a
  risk of data contamination, where the model might have inadvertently seen
  related medical information during its pre-training, potentially
  overestimating its true ability to generalize to novel medical concepts.
  Developers should validate MedGemma on datasets that are not publicly
  available or otherwise accessible to non-institutional researchers to
  mitigate this risk.

### Release notes

#### MedGemma 4B IT

* Jan 13, 2026: Release of MedGemma 1.5 with improved medical reasoning,
  medical records interpretation, and medical image interpretation.
* Jan 23, 2026: Updated the generation config to use greedy decoding by
  default. Users can still enable sampling to restore the previous behavior;
  see the [generation strategies
  documentation](https://huggingface.co/docs/transformers/en/generation_strategies)
  for details.
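
If you want sampling back, a minimal sketch is to pass sampling arguments at generation time; the temperature/top-p/top-k values below are illustrative, not necessarily the previously shipped defaults (check the model's `generation_config` for those).

```python
# `model`, `processor`, and `inputs` are reused from the direct example above.
# Sampling values are illustrative; check the model's generation_config for
# the exact previous defaults.
import torch

with torch.inference_mode():
    generation = model.generate(
        **inputs,
        max_new_tokens=2000,
        do_sample=True,     # re-enable sampling instead of greedy decoding
        temperature=1.0,
        top_p=0.95,
        top_k=64,
    )

print(processor.decode(
    generation[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```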