---
title: MedGemma Radiology Report Generator
emoji: 🩻
colorFrom: purple
colorTo: indigo
sdk: gradio
python_version: '3.10'
app_file: app.py
sdk_version: 5.39.0
---
# 🏥 MedGemma Radiology Report Generator
Created by CultriX
This Hugging Face Space demonstrates the capabilities of Google's MedGemma 4B-IT model, a medically tuned multimodal model designed for both image comprehension and medical text generation.
The application is accelerated by ZeroGPU for fast inference and features a dual-tab interface for different clinical workflows.
## ✨ Features
### 🩻 X-Ray Analysis Tab
- **Multimodal Analysis:** Upload an X-ray to receive a structured radiology report (Findings, Impression, Recommendations).
- **Clinical Context:** Optionally provide patient history (e.g., "65M, cough for 3 weeks") to guide the model's interpretation.
- **Interactive Chat:** Ask follow-up questions about specific findings directly in the chat window after the report is generated.
- **Token Management:** Real-time token counting ensures your input stays within the model's context window.
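The token check above can be approximated even without loading the model. A minimal sketch of the idea, where the context size, the headroom reserve, and the characters-per-token heuristic are all illustrative assumptions rather than the Space's actual values:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: roughly 4 characters per token for English prose."""
    return max(1, len(text) // 4)


def within_context(token_count: int, max_tokens: int = 8192, reserve: int = 1024) -> bool:
    """Check that the prompt leaves `reserve` tokens of headroom for the generated report."""
    return token_count + reserve <= max_tokens
```

In the real application, the count would come from the model's own tokenizer (e.g., `AutoTokenizer.from_pretrained("google/medgemma-4b-it")`) rather than a character heuristic.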
### 💬 Medical Assistant Tab
- **Text-Only Mode:** Chat with the MedGemma model about general medical concepts, differential diagnoses, or terminology without uploading an image.
## 🕹️ How to Use
### For X-Ray Analysis:
1. Go to the "🩻 X-Ray Analysis" tab.
2. Upload a chest X-ray image (PNG or JPEG).
3. (Optional) Enter relevant Clinical Information in the text box.
4. Click "🔬 Generate Report".
5. Once the report appears, use the chat box below it to ask follow-up questions (e.g., "Can you explain the pleural effusion finding?").
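Under the hood, the steps above boil down to a single chat-style request that pairs the image with an instruction. A sketch of how such a prompt might be assembled — the helper name and prompt wording are illustrative, not the Space's actual code:

```python
def build_messages(image_path: str, clinical_info: str = "") -> list:
    """Assemble a chat-style multimodal prompt (illustrative helper)."""
    prompt = ("You are a radiologist. Provide a structured report with "
              "Findings, Impression, and Recommendations sections.")
    if clinical_info:
        prompt += f"\nClinical information: {clinical_info}"
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": prompt},
        ],
    }]

# The Space would then hand these messages to the model, for example via
# transformers' "image-text-to-text" pipeline with google/medgemma-4b-it.
```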
### For General Questions:
1. Switch to the "💬 Medical Assistant" tab.
2. Type your medical question and hit Enter.
## 🧠 Model Information
- **Model:** `google/medgemma-4b-it`
- **Architecture:** MedGemma is built on top of Gemma 3, fine-tuned with medical instruction data.
- **Hardware:** Powered by Hugging Face ZeroGPU (dynamic H100/A100 allocation) for efficient inference.
- **Requirements:** This Space uses `sentencepiece` and `protobuf` for tokenizer handling.
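For reference, a plausible `requirements.txt` for a Space like this — the package list is inferred from the README's own mentions (Gradio, transformers-based inference, ZeroGPU, sentencepiece, protobuf), and versions are deliberately omitted:

```
gradio
torch
transformers
accelerate
sentencepiece   # tokenizer backend
protobuf        # tokenizer model parsing
spaces          # ZeroGPU decorator support
```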
## ⚠️ Disclaimer
This application is intended for research and demonstration purposes only.
- It should not be used for clinical decision-making.
- The model may hallucinate findings or miss critical anomalies.
- All generated reports must be reviewed and validated by a qualified medical professional.
## 🔧 Known Limitations
- **Image Formats:** DICOM files are not yet supported — please convert to PNG or JPEG before uploading.
- **Model Size:** This demo uses the 4B-parameter version. While fast, it may be less accurate than the larger 27B variant.
- **ZeroGPU Quotas:** Inference speed and availability depend on the current load of the ZeroGPU cluster.
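Until DICOM is supported directly, conversion can happen on the user's side before upload. A sketch of one way to do it, assuming `pydicom` and `Pillow` are installed; the rescaling helper is a simplified linear window, not full DICOM windowing:

```python
def to_uint8(pixels, lo=None, hi=None):
    """Linearly rescale raw pixel values into the 0-255 grayscale range."""
    flat = list(pixels)
    lo = min(flat) if lo is None else lo
    hi = max(flat) if hi is None else hi
    scale = 255.0 / max(hi - lo, 1)
    return [int((p - lo) * scale) for p in flat]


def dicom_to_png(dicom_path: str, png_path: str) -> None:
    """Convert a DICOM file to an 8-bit grayscale PNG (requires pydicom and Pillow)."""
    import pydicom
    from PIL import Image

    ds = pydicom.dcmread(dicom_path)
    arr = ds.pixel_array.astype("float32")
    arr -= arr.min()
    arr /= max(float(arr.max()), 1.0)
    Image.fromarray((arr * 255).astype("uint8")).save(png_path)
```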
Feel free to fork this Space to customize the system prompts or integrate it into your own clinical AI research workflows!