SamarpeetGarad commited on
Commit
86aa283
·
verified ·
1 Parent(s): 4394ee1

Upload folder using huggingface_hub

Browse files
.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
 
 
 
 
 
 
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ sample_data/real_cxr_1.png filter=lfs diff=lfs merge=lfs -text
37
+ sample_data/real_cxr_2.jpg filter=lfs diff=lfs merge=lfs -text
38
+ sample_data/real_cxr_bilateral.jpg filter=lfs diff=lfs merge=lfs -text
39
+ sample_data/real_cxr_opacity.png filter=lfs diff=lfs merge=lfs -text
40
+ sample_data/real_cxr_pneumonia.png filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,72 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Python
2
+ __pycache__/
3
+ *.py[cod]
4
+ *$py.class
5
+ *.so
6
+ .Python
7
+ build/
8
+ develop-eggs/
9
+ dist/
10
+ downloads/
11
+ eggs/
12
+ .eggs/
13
+ lib/
14
+ lib64/
15
+ parts/
16
+ sdist/
17
+ var/
18
+ wheels/
19
+ *.egg-info/
20
+ .installed.cfg
21
+ *.egg
22
+
23
+ # Virtual Environment
24
+ venv/
25
+ ENV/
26
+ env/
27
+ .venv/
28
+
29
+ # IDE
30
+ .idea/
31
+ .vscode/
32
+ *.swp
33
+ *.swo
34
+ *~
35
+
36
+ # Jupyter
37
+ .ipynb_checkpoints/
38
+ *.ipynb
39
+
40
+ # Model files (large)
41
+ *.bin
42
+ *.safetensors
43
+ *.h5
44
+ *.pt
45
+ *.pth
46
+ models/
47
+
48
+ # Data
49
+ *.csv
50
+ *.json
51
+ !package.json
52
+ *.parquet
53
+ data/
54
+
55
+ # Secrets
56
+ .env
57
+ .env.local
58
+ *.key
59
+ secrets/
60
+
61
+ # OS
62
+ .DS_Store
63
+ Thumbs.db
64
+
65
+ # Logs
66
+ *.log
67
+ logs/
68
+
69
+ # Cache
70
+ .cache/
71
+ .gradio/
72
+ flagged/
DEPLOYMENT.md ADDED
@@ -0,0 +1,166 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # RadioFlow Deployment Guide
2
+
3
+ ## Deploying to HuggingFace Spaces
4
+
5
+ ### Step 1: Create a HuggingFace Account
6
+ 1. Go to [huggingface.co](https://huggingface.co)
7
+ 2. Sign up or log in
8
+ 3. Go to Settings → Access Tokens
9
+ 4. Create a new token with write permissions
10
+
11
+ ### Step 2: Create a New Space
12
+ 1. Click on your profile → New Space
13
+ 2. Configure:
14
+ - **Space name**: `radioflow`
15
+ - **License**: CC BY 4.0
16
+ - **SDK**: Gradio
17
+ - **Hardware**: CPU basic (or GPU if available)
18
+ - **Visibility**: Public
19
+
20
+ ### Step 3: Upload Files
21
+ You can either:
22
+
23
+ #### Option A: Git Push
24
+ ```bash
25
+ # Clone your space
26
+ git clone https://huggingface.co/spaces/YOUR_USERNAME/radioflow
27
+ cd radioflow
28
+
29
+ # Copy all project files
30
+ cp -r /path/to/project/* .
31
+
32
+ # Push
33
+ git add .
34
+ git commit -m "Initial RadioFlow deployment"
35
+ git push
36
+ ```
37
+
38
+ #### Option B: Web Upload
39
+ 1. Go to your Space's Files tab
40
+ 2. Click "Upload files"
41
+ 3. Upload these files:
42
+ - `app.py`
43
+ - `requirements.txt`
44
+ - `agents/` folder
45
+ - `orchestrator/` folder
46
+ - `utils/` folder
47
+ - `config.py`
48
+
49
+ ### Step 4: Configure Environment
50
+ 1. Go to Space Settings → Variables and secrets
51
+ 2. Add your HuggingFace token:
52
+ - **Name**: `HF_TOKEN`
53
+ - **Value**: Your token (keep secret)
54
+
55
+ ### Step 5: Wait for Build
56
+ - The Space will automatically build
57
+ - Check the Logs tab for any errors
58
+ - First build takes 5-10 minutes
59
+
60
+ ### Step 6: Test Your Deployment
61
+ 1. Visit `https://huggingface.co/spaces/YOUR_USERNAME/radioflow`
62
+ 2. Upload a test chest X-ray
63
+ 3. Verify the workflow completes
64
+
65
+ ---
66
+
67
+ ## Local Development
68
+
69
+ ### Prerequisites
70
+ - Python 3.10+
71
+ - pip or conda
72
+
73
+ ### Setup
74
+ ```bash
75
+ # Create virtual environment
76
+ python -m venv venv
77
+ source venv/bin/activate # Windows: venv\Scripts\activate
78
+
79
+ # Install dependencies
80
+ pip install -r requirements.txt
81
+
82
+ # Login to HuggingFace (for model access)
83
+ huggingface-cli login
84
+ ```
85
+
86
+ ### Run Locally
87
+ ```bash
88
+ # Run tests first
89
+ python test_radioflow.py
90
+
91
+ # Start the app
92
+ python app.py
93
+ ```
94
+
95
+ The app will be available at `http://localhost:7860`
96
+
97
+ ---
98
+
99
+ ## Troubleshooting
100
+
101
+ ### "Model not found" Error
102
+ - Ensure you've accepted the model license on HuggingFace
103
+ - Check that HF_TOKEN is set correctly
104
+ - For gated models, you may need to request access
105
+
106
+ ### Out of Memory
107
+ - Enable `LOW_MEMORY_MODE` in `config.py`
108
+ - Use CPU-only mode
109
+ - Reduce `MAX_NEW_TOKENS`
110
+
111
+ ### Slow Inference
112
+ - Demo mode uses simulated outputs for speed
113
+ - For real inference, GPU is recommended
114
+ - Consider model quantization
115
+
116
+ ### Build Fails on Spaces
117
+ 1. Check the build logs
118
+ 2. Verify all files are uploaded
119
+ 3. Ensure requirements.txt versions are compatible
120
+ 4. Try removing version pins if issues persist
121
+
122
+ ---
123
+
124
+ ## File Structure for Deployment
125
+
126
+ ```
127
+ radioflow/
128
+ ├── app.py # Main Gradio app (required)
129
+ ├── requirements.txt # Dependencies (required)
130
+ ├── README.md # Description for Space
131
+ ├── space.yaml # HuggingFace config
132
+ ├── config.py # Configuration
133
+ ├── agents/
134
+ │ ├── __init__.py
135
+ │ ├── base_agent.py
136
+ │ ├── cxr_analyzer.py
137
+ │ ├── finding_interpreter.py
138
+ │ ├── report_generator.py
139
+ │ └── priority_router.py
140
+ ├── orchestrator/
141
+ │ ├── __init__.py
142
+ │ └── workflow.py
143
+ └── utils/
144
+ ├── __init__.py
145
+ ├── visualization.py
146
+ └── metrics.py
147
+ ```
148
+
149
+ ---
150
+
151
+ ## Competition Submission Checklist
152
+
153
+ Before submitting to Kaggle:
154
+
155
+ - [ ] HuggingFace Space is live and working
156
+ - [ ] Public GitHub repository created
157
+ - [ ] 3-minute video recorded and uploaded
158
+ - [ ] Writeup completed (3 pages max)
159
+ - [ ] All links tested and accessible
160
+ - [ ] Submitted before deadline
161
+
162
+ ### Required Links for Submission
163
+ 1. **Video URL**: YouTube, Loom, or Google Drive
164
+ 2. **Code Repository**: GitHub link
165
+ 3. **Live Demo**: HuggingFace Spaces link
166
+ 4. **Model (Bonus)**: HuggingFace model link (if fine-tuned)
VIDEO_SCRIPT.md ADDED
@@ -0,0 +1,179 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # RadioFlow Demo Video Script
2
+ ## 3-Minute Competition Video
3
+
4
+ ---
5
+
6
+ ## 📋 BEFORE RECORDING - What You Need
7
+
8
+ ### Images to Use
9
+ Use the **sample chest X-rays** in your `sample_data/` folder:
10
+ - `real_cxr_1.png`
11
+ - `real_cxr_2.jpg`
12
+ - `real_cxr_bilateral.jpg`
13
+ - `real_cxr_opacity.png`
14
+
15
+ **⚠️ IMPORTANT**: Only use CHEST X-rays (lungs/heart visible).
16
+ Do NOT use shoulder, orthopedic, or other body part X-rays.
17
+
18
+ ### Checklist
19
+ - [ ] Local app running at http://127.0.0.1:7860
20
+ - [ ] Sample X-ray images ready
21
+ - [ ] Screen recording software ready (OBS, Loom, QuickTime)
22
+ - [ ] Microphone tested
23
+ - [ ] Notifications turned off
24
+
25
+ ---
26
+
27
+ ## 🎬 THE SCRIPT
28
+
29
+ ### INTRO (0:00 - 0:20)
30
+
31
+ **[Screen: Title slide or RadioFlow interface]**
32
+
33
+ > "Hi, I'm Samarpeet, and this is RadioFlow - a multi-agent AI system for radiology workflows, built for the MedGemma Impact Challenge."
34
+
35
+ **[Screen: Show the 4-agent diagram or interface]**
36
+
37
+ > "RadioFlow demonstrates how specialized AI agents can collaborate to assist radiologists - analyzing images, interpreting findings, generating reports, and assessing priority."
38
+
39
+ ---
40
+
41
+ ### THE PROBLEM (0:20 - 0:40)
42
+
43
+ **[Screen: Problem statistics or simple text slide]**
44
+
45
+ > "Radiologists face a critical challenge: over 700 million imaging studies per year in the US alone, with burnout rates exceeding 30%."
46
+
47
+ > "The manual workflow of analyzing images, writing reports, and prioritizing cases creates bottlenecks and delays."
48
+
49
+ > "RadioFlow addresses this with a multi-agent AI approach."
50
+
51
+ ---
52
+
53
+ ### ARCHITECTURE (0:40 - 1:10)
54
+
55
+ **[Screen: Show the RadioFlow interface or architecture diagram]**
56
+
57
+ > "RadioFlow uses a 4-agent orchestrated pipeline."
58
+
59
+ **[Point to or highlight each section]**
60
+
61
+ > "Agent 1, the CXR Analyzer, processes the chest X-ray image."
62
+
63
+ > "Agent 2, the Finding Interpreter, uses MedGemma to translate findings into clinical language."
64
+
65
+ > "Agent 3, the Report Generator, creates a structured radiology report using MedGemma."
66
+
67
+ > "Agent 4, the Priority Router, assesses urgency and determines case routing - also powered by MedGemma."
68
+
69
+ > "Each agent has a specific job and hands off to the next - this is agentic workflow design."
70
+
71
+ ---
72
+
73
+ ### LIVE DEMO (1:10 - 2:10)
74
+
75
+ **[Screen: RadioFlow Gradio interface - http://127.0.0.1:7860]**
76
+
77
+ > "Let me show you RadioFlow in action."
78
+
79
+ **[Upload one of the sample chest X-rays]**
80
+
81
+ > "I'll upload a chest X-ray. You can also add clinical context like patient history."
82
+
83
+ **[Type in Clinical History box: "65-year-old with cough and fever"]**
84
+
85
+ **[Click 'Analyze X-Ray' button]**
86
+
87
+ > "Now watch as the pipeline processes through each agent..."
88
+
89
+ **[Wait for processing - about 15-20 seconds with real MedGemma]**
90
+
91
+ > "Stage 1 analyzes the image... Stage 2 interprets findings with MedGemma... Stage 3 generates the report... Stage 4 assesses priority..."
92
+
93
+ **[Show the results when ready]**
94
+
95
+ > "In about 15 seconds, RadioFlow has produced a complete analysis."
96
+
97
+ **[Click on Report tab]**
98
+
99
+ > "Here's the structured radiology report - generated by MedGemma with findings, impression, and recommendations."
100
+
101
+ **[Show Priority section]**
102
+
103
+ > "The system assessed this as [READ THE PRIORITY LEVEL] priority."
104
+
105
+ **[Optionally show Visualizations tab]**
106
+
107
+ > "The visualization shows the agent pipeline and processing metrics."
108
+
109
+ ---
110
+
111
+ ### WHY THIS MATTERS (2:10 - 2:40)
112
+
113
+ **[Screen: Back to main interface or impact slide]**
114
+
115
+ > "What makes RadioFlow special isn't just the output - it's the architecture."
116
+
117
+ > "Four specialized agents, each doing one thing well, with clear handoffs between them."
118
+
119
+ > "This modular design means each agent can be improved independently, debugged clearly, and scaled as needed."
120
+
121
+ > "MedGemma powers the clinical intelligence - understanding medical terminology and generating professional reports."
122
+
123
+ > "For production deployment, this architecture could integrate medical imaging AI like CXR Foundation for the image analysis stage."
124
+
125
+ ---
126
+
127
+ ### CLOSING (2:40 - 3:00)
128
+
129
+ **[Screen: Summary or final slide]**
130
+
131
+ > "RadioFlow: a multi-agent AI system demonstrating how specialized agents can collaborate for radiology workflow automation."
132
+
133
+ > "Built with Google's MedGemma, targeting both the Main Track and the Agentic Workflow Prize."
134
+
135
+ > "Thank you for watching!"
136
+
137
+ **[Screen: Your name and links]**
138
+
139
+ ---
140
+
141
+ ## 🎥 Recording Tips
142
+
143
+ 1. **Speak slowly and clearly** - You have 3 minutes, no need to rush
144
+ 2. **Practice once or twice** before recording
145
+ 3. **Wait for processing** - The ~15 second MedGemma processing is fine to show
146
+ 4. **If something goes wrong** - Just pause and retry that section
147
+ 5. **Aim for 2:45-2:55** - Leave buffer under the 3-minute limit
148
+
149
+ ## 🛠️ Recording Tools
150
+
151
+ - **Mac**: QuickTime Player (built-in) or OBS Studio (free)
152
+ - **Simple option**: Loom (easy screen + audio recording)
153
+ - **Editing**: iMovie (Mac) or DaVinci Resolve (free)
154
+
155
+ ## 📁 Sample Images Location
156
+
157
+ Your sample X-rays are in:
158
+ ```
159
+ /Users/samarpeetgarad/Desktop/competitions/The MedGemma Impact Challenge/sample_data/
160
+ ```
161
+
162
+ Use any of these for the demo:
163
+ - `real_cxr_1.png` - Good for showing opacity detection
164
+ - `real_cxr_2.jpg` - Clear chest X-ray
165
+ - `real_cxr_bilateral.jpg` - Shows bilateral findings
166
+ - `real_cxr_opacity.png` - Shows opacity findings
167
+
168
+ ---
169
+
170
+ ## ✅ Final Checklist
171
+
172
+ - [ ] Script practiced 2-3 times
173
+ - [ ] Local app running and tested
174
+ - [ ] Sample image ready to upload
175
+ - [ ] Recording software ready
176
+ - [ ] Microphone working
177
+ - [ ] Notifications off
178
+ - [ ] Video under 3 minutes
179
+ - [ ] Uploaded to YouTube/Drive and link added to submission
WRITEUP.md ADDED
@@ -0,0 +1,232 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # RadioFlow: Multi-Agent Radiology Workflow System
2
+
3
+ **MedGemma Impact Challenge Submission**
4
+
5
+ ---
6
+
7
+ ## Project Name
8
+ **RadioFlow** - Multi-Agent AI Workflow for Radiology Assistance
9
+
10
+ ## Team
11
+ - **Samarpeet Garad** - ML Engineer & Project Lead
12
+
13
+ ---
14
+
15
+ ## Executive Summary
16
+
17
+ RadioFlow is a **proof-of-concept multi-agent system** demonstrating how AI could transform radiology workflows. It showcases a 4-agent orchestrated pipeline where specialized AI agents collaborate to analyze chest X-rays, interpret findings, generate reports, and assess priority.
18
+
19
+ **Key Innovation**: The agentic workflow architecture with clear handoffs between specialized agents - a design pattern that enables modular, observable, and scalable medical AI systems.
20
+
21
+ ---
22
+
23
+ ## Problem Statement
24
+
25
+ ### The Challenge: Radiologist Burnout & Workflow Inefficiency
26
+
27
+ Radiology departments worldwide face a critical crisis:
28
+
29
+ - **700+ million** imaging studies performed annually in the US alone
30
+ - **30%+ burnout rate** among radiologists
31
+ - **Average 5-10 minutes** per chest X-ray for preliminary reading
32
+ - **Limited access** to radiologist expertise in underserved regions
33
+
34
+ Current clinical workflows require radiologists to manually:
35
+ 1. Analyze each image for abnormalities
36
+ 2. Interpret findings in clinical context
37
+ 3. Generate structured reports
38
+ 4. Determine case urgency and routing
39
+
40
+ This sequential, manual process creates bottlenecks, delays critical findings communication, and contributes to physician burnout.
41
+
42
+ ### Why Multi-Agent AI is the Right Approach
43
+
44
+ A multi-agent system offers advantages over monolithic AI:
45
+ - **Specialization**: Each agent focuses on one task, doing it well
46
+ - **Observability**: Clear handoffs enable debugging and explainability
47
+ - **Modularity**: Agents can be upgraded independently
48
+ - **Reliability**: Graceful degradation if one component fails
49
+
50
+ ---
51
+
52
+ ## Solution: Agentic Workflow Architecture
53
+
54
+ ### The 4-Agent Pipeline
55
+
56
+ RadioFlow demonstrates a **production-ready architecture** for AI-assisted radiology:
57
+
58
+ ```
59
+ ┌─────────────────────────────────────────────────────────────┐
60
+ │ RADIOFLOW AGENT ORCHESTRATOR │
61
+ ├─────────────────────────────────────────────────────────────┤
62
+ │ │
63
+ │ Agent 1: CXR ANALYZER │
64
+ │ └─ Processes chest X-ray images │
65
+ │ Extracts visual features and patterns │
66
+ │ ↓ │
67
+ │ Agent 2: FINDING INTERPRETER [MedGemma] │
68
+ │ └─ Interprets findings into clinical language │
69
+ │ Generates differential diagnoses │
70
+ │ ↓ │
71
+ │ Agent 3: REPORT GENERATOR [MedGemma] │
72
+ │ └─ Creates structured radiology reports │
73
+ │ Follows standard clinical format │
74
+ │ ↓ │
75
+ │ Agent 4: PRIORITY ROUTER [MedGemma] │
76
+ │ └─ Assesses urgency and routing │
77
+ │ Flags critical findings for communication │
78
+ │ │
79
+ └─────────────────────────────────────────────────────────────┘
80
+ ```
81
+
82
+ ### What Makes This Agentic
83
+
84
+ Each agent in RadioFlow:
85
+ - **Has a specific role**: One task, clear responsibility
86
+ - **Produces structured output**: JSON-formatted results for downstream agents
87
+ - **Maintains context**: Passes relevant information through the pipeline
88
+ - **Is independently testable**: Can be validated and improved in isolation
89
+ - **Hands off explicitly**: Clear agent-to-agent transitions
90
+
91
+ This is the essence of agentic design - autonomous components collaborating toward a goal.
92
+
93
+ ---
94
+
95
+ ## Technical Implementation
96
+
97
+ ### MedGemma Integration
98
+
99
+ MedGemma powers three agents in the pipeline:
100
+
101
+ **Finding Interpreter (Agent 2)**
102
+ ```python
103
+ # MedGemma interprets visual findings
104
+ prompt = f"As a radiologist, interpret these findings: {findings}"
105
+ interpretation = medgemma.generate(prompt)
106
+ ```
107
+
108
+ **Report Generator (Agent 3)**
109
+ ```python
110
+ # MedGemma generates structured reports
111
+ prompt = f"Generate a radiology report for: {interpreted_findings}"
112
+ report = medgemma.generate(prompt)
113
+ ```
114
+
115
+ **Priority Router (Agent 4)**
116
+ ```python
117
+ # MedGemma assesses clinical priority
118
+ prompt = f"Assess the priority of this case: {report}"
119
+ priority = medgemma.generate(prompt)
120
+ ```
121
+
122
+ ### Technology Stack
123
+
124
+ | Component | Technology |
125
+ |-----------|------------|
126
+ | Frontend | Gradio |
127
+ | Orchestration | Custom Python Pipeline |
128
+ | Language Model | MedGemma 4B-IT (via MLX/Transformers) |
129
+ | Visualization | Plotly |
130
+ | Deployment | Local (MLX) / HuggingFace Spaces |
131
+
132
+ ### Local Development with MLX
133
+
134
+ RadioFlow runs **real MedGemma inference** locally on Apple Silicon:
135
+ ```python
136
+ from mlx_lm import load, generate
137
+ model, tokenizer = load("mlx-community/medgemma-4b-it-4bit")
138
+ response = generate(model, tokenizer, prompt=clinical_prompt)
139
+ ```
140
+
141
+ ---
142
+
143
+ ## Current Scope & Future Vision
144
+
145
+ ### What RadioFlow Demonstrates Today
146
+
147
+ | Component | Current Implementation |
148
+ |-----------|----------------------|
149
+ | Image Analysis | Pattern-based feature extraction |
150
+ | Clinical Interpretation | Real MedGemma inference |
151
+ | Report Generation | Real MedGemma inference |
152
+ | Priority Assessment | Real MedGemma inference |
153
+
154
+ ### Production Roadmap
155
+
156
+ For clinical deployment, RadioFlow would integrate:
157
+
158
+ 1. **CXR Foundation Model**: Google's medical imaging AI for accurate finding detection
159
+ 2. **Validation Studies**: Clinical testing with radiologist oversight
160
+ 3. **EHR Integration**: FHIR-compliant APIs for hospital systems
161
+ 4. **Regulatory Compliance**: FDA clearance pathway
162
+
163
+ ---
164
+
165
+ ## Impact Potential
166
+
167
+ ### If Deployed at Scale
168
+
169
+ | Metric | Conservative Estimate |
170
+ |--------|----------------------|
171
+ | Time saved per study | 2-4 minutes |
172
+ | Studies per radiologist/day | 50-100 |
173
+ | Daily time savings | 1.5-6 hours |
174
+ | Reduced documentation burden | 40-60% |
175
+
176
+ ### Key Benefits
177
+
178
+ 1. **Radiologist Augmentation**: Preliminary analysis reduces cognitive load
179
+ 2. **Consistent Reporting**: Standardized format for every case
180
+ 3. **Priority Triage**: Critical findings flagged automatically
181
+ 4. **Scalability**: Edge-deployable for underserved regions
182
+
183
+ ---
184
+
185
+ ## Competition Alignment
186
+
187
+ ### Main Track: Effective Use of HAI-DEF Models
188
+
189
+ - MedGemma powers 3 of 4 agents
190
+ - Demonstrates medical text understanding and generation
191
+ - Shows practical application in radiology workflow
192
+
193
+ ### Agentic Workflow Prize
194
+
195
+ - **4 specialized agents** with clear roles
196
+ - **Explicit handoffs** between agents
197
+ - **Observable pipeline** with metrics at each stage
198
+ - **Modular design** enabling independent upgrades
199
+
200
+ ### Human-Centered Design
201
+
202
+ - Augments radiologists, doesn't replace them
203
+ - Explainable results with confidence scores
204
+ - Clear workflow visualization for trust
205
+
206
+ ---
207
+
208
+ ## Honest Limitations
209
+
210
+ 1. **Image Analysis**: Current demo uses pattern-based extraction, not production imaging AI
211
+ 2. **Validation**: Not clinically validated - requires professional oversight
212
+ 3. **Scope**: Designed for chest X-rays; orthopedic/other imaging not supported
213
+ 4. **Regulatory**: Not FDA-cleared; demonstration purposes only
214
+
215
+ ---
216
+
217
+ ## Resources
218
+
219
+ - **Live Demo**: http://127.0.0.1:7860 (local)
220
+ - **Kaggle Notebook**: Real MedGemma inference on GPU
221
+ - **Video Demo**: 3-minute walkthrough
222
+
223
+ ---
224
+
225
+ ## Disclaimer
226
+
227
+ RadioFlow is a **demonstration system** for the MedGemma Impact Challenge. It is **not intended for clinical use** and requires radiologist verification. This system demonstrates workflow architecture and MedGemma integration, not production-ready diagnostics.
228
+
229
+ ---
230
+
231
+ *Built with Google's MedGemma from Health AI Developer Foundations (HAI-DEF)*
232
+ *MedGemma Impact Challenge 2026*
app.py CHANGED
@@ -15,22 +15,8 @@ import time
15
  from typing import Optional, Tuple, List, Dict
16
  import json
17
 
18
- # Try to import spaces for ZeroGPU on HuggingFace
19
- try:
20
- import spaces
21
- SPACES_AVAILABLE = True
22
- except ImportError:
23
- SPACES_AVAILABLE = False
24
- # Create a dummy decorator that accepts any arguments
25
- class spaces:
26
- @staticmethod
27
- def GPU(*args, **kwargs):
28
- def decorator(func):
29
- return func
30
- # Handle both @spaces.GPU and @spaces.GPU(duration=120)
31
- if len(args) == 1 and callable(args[0]):
32
- return args[0]
33
- return decorator
34
 
35
  # Import our modules
36
  from orchestrator import RadioFlowOrchestrator, WorkflowResult, create_orchestrator
@@ -86,7 +72,6 @@ def initialize_system():
86
  return f"✅ RadioFlow System Initialized ({engine_status})"
87
 
88
 
89
- @spaces.GPU(duration=120) # Request GPU for up to 2 minutes per inference
90
  def process_xray(
91
  image: Optional[Image.Image],
92
  clinical_history: str,
 
15
  from typing import Optional, Tuple, List, Dict
16
  import json
17
 
18
+ # HuggingFace Spaces detection
19
+ SPACES_AVAILABLE = os.environ.get("SPACE_ID") is not None
 
 
 
 
 
 
 
 
 
 
 
 
 
 
20
 
21
  # Import our modules
22
  from orchestrator import RadioFlowOrchestrator, WorkflowResult, create_orchestrator
 
72
  return f"✅ RadioFlow System Initialized ({engine_status})"
73
 
74
 
 
75
  def process_xray(
76
  image: Optional[Image.Image],
77
  clinical_history: str,
kaggle_notebook.py ADDED
@@ -0,0 +1,1005 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ RadioFlow: AI-Powered Radiology Workflow Agent
3
+ Kaggle Notebook with REAL MedGemma Model
4
+ MedGemma Impact Challenge Submission
5
+ """
6
+
7
+ # %% [markdown]
8
+ # # 🩻 RadioFlow: AI-Powered Radiology Workflow Agent
9
+ # ## MedGemma Impact Challenge Submission
10
+ #
11
+ # **Author:** Samarpeet Garad
12
+ # **Date:** February 2026
13
+ #
14
+ # ---
15
+ #
16
+ # ## Executive Summary
17
+ #
18
+ # RadioFlow is a **real AI-powered** multi-agent system that analyzes chest X-rays using
19
+ # Google's **MedGemma** model. This notebook runs with actual model inference on Kaggle's
20
+ # free GPU, demonstrating production-ready medical AI.
21
+ #
22
+ # **Key Features:**
23
+ # - 🤖 Real MedGemma-4B model inference (not simulated!)
24
+ # - 🔬 4-agent orchestrated pipeline
25
+ # - 📋 Generates structured radiology reports
26
+ # - 🚦 Automatic priority assessment and routing
27
+
28
+ # %% [markdown]
29
+ # ## 1. Setup and GPU Check
30
+
31
+ # %%
32
+ import os
33
+ import sys
34
+ import time
35
+ import json
36
+ import warnings
37
+
38
+ warnings.filterwarnings("ignore")
39
+
40
+ # Check GPU availability
41
+ import torch
42
+
43
+ print(f"PyTorch version: {torch.__version__}")
44
+ print(f"CUDA available: {torch.cuda.is_available()}")
45
+ if torch.cuda.is_available():
46
+ print(f"GPU: {torch.cuda.get_device_name(0)}")
47
+ print(
48
+ f"GPU Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB"
49
+ )
50
+ else:
51
+ print("⚠️ No GPU detected - model will run slower on CPU")
52
+
53
+ # %%
54
+ # Install required packages
55
+ print("📦 Installing required packages...")
56
+ import subprocess
57
+
58
+ subprocess.run(
59
+ ["pip", "install", "-q", "bitsandbytes", "accelerate", "sentencepiece"], check=True
60
+ )
61
+ print("✅ Packages installed!")
62
+
63
+ # %%
64
+ import numpy as np
65
+ import pandas as pd
66
+ from datetime import datetime
67
+ from dataclasses import dataclass, field
68
+ from typing import Dict, List, Optional, Any, Tuple
69
+ from PIL import Image, ImageDraw
70
+ import matplotlib.pyplot as plt
71
+
72
+ # Hugging Face
73
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
74
+ from huggingface_hub import login
75
+
76
+ # Display
77
+ from IPython.display import HTML, display, Markdown, clear_output
78
+ import plotly.graph_objects as go
79
+
80
+ print("✅ Dependencies loaded successfully")
81
+
82
+ # %% [markdown]
83
+ # ## 2. Authenticate with Hugging Face
84
+ #
85
+ # To use MedGemma, you need to:
86
+ # 1. Accept the license at https://huggingface.co/google/medgemma-4b-it
87
+ # 2. Add your HF token as a Kaggle secret named "HF_TOKEN"
88
+
89
+ # %%
90
+ # Get HuggingFace token from Kaggle secrets
91
+ try:
92
+ from kaggle_secrets import UserSecretsClient
93
+
94
+ secrets = UserSecretsClient()
95
+ HF_TOKEN = secrets.get_secret("HF_TOKEN")
96
+ login(token=HF_TOKEN)
97
+ print("✅ Authenticated with Hugging Face")
98
+ except Exception as e:
99
+ print(f"⚠️ Could not get HF token from Kaggle secrets: {e}")
100
+ print("Please add your HF_TOKEN as a Kaggle secret")
101
+ HF_TOKEN = None
102
+
103
+ # %% [markdown]
104
+ # ## 3. Load Real MedGemma Model
105
+
106
+ # %%
107
+ MODEL_NAME = "google/medgemma-4b-it"
108
+
109
+ # Configure 4-bit quantization for efficient memory usage
110
+ quantization_config = BitsAndBytesConfig(
111
+ load_in_4bit=True,
112
+ bnb_4bit_compute_dtype=torch.float16,
113
+ bnb_4bit_use_double_quant=True,
114
+ bnb_4bit_quant_type="nf4",
115
+ )
116
+
117
+ print(f"🔄 Loading {MODEL_NAME}...")
118
+ print(" This may take 1-2 minutes on first run...")
119
+
120
+ try:
121
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
122
+
123
+ model = AutoModelForCausalLM.from_pretrained(
124
+ MODEL_NAME,
125
+ quantization_config=quantization_config,
126
+ device_map="auto",
127
+ trust_remote_code=True,
128
+ torch_dtype=torch.float16,
129
+ )
130
+ model.eval()
131
+
132
+ MODEL_LOADED = True
133
+ print(f"✅ MedGemma loaded successfully!")
134
+ print(f" Memory used: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
135
+
136
+ except Exception as e:
137
+ print(f"❌ Failed to load model: {e}")
138
+ print(" Falling back to demo mode...")
139
+ MODEL_LOADED = False
140
+ model = None
141
+ tokenizer = None
142
+
143
+
144
+ # %%
145
+ def generate_medgemma_response(prompt: str, max_tokens: int = 512) -> str:
146
+ """Generate response using real MedGemma model."""
147
+ if not MODEL_LOADED:
148
+ return "[Demo mode - model not loaded]"
149
+
150
+ messages = [{"role": "user", "content": prompt}]
151
+
152
+ # Tokenize with proper attention mask
153
+ inputs = tokenizer.apply_chat_template(
154
+ messages, return_tensors="pt", add_generation_prompt=True
155
+ )
156
+
157
+ # Create attention mask (1 for all tokens since no padding)
158
+ attention_mask = torch.ones_like(inputs)
159
+
160
+ # Move to device
161
+ inputs = inputs.to(model.device)
162
+ attention_mask = attention_mask.to(model.device)
163
+
164
+ with torch.no_grad():
165
+ outputs = model.generate(
166
+ inputs,
167
+ attention_mask=attention_mask,
168
+ max_new_tokens=max_tokens,
169
+ do_sample=False, # Use greedy decoding to avoid numerical issues
170
+ pad_token_id=tokenizer.eos_token_id,
171
+ )
172
+
173
+ response = tokenizer.decode(outputs[0][inputs.shape[1] :], skip_special_tokens=True)
174
+ return response.strip()
175
+
176
+
177
+ # Test the model
178
+ if MODEL_LOADED:
179
+ print("\n🧪 Testing MedGemma...")
180
+ test_response = generate_medgemma_response(
181
+ "What are the key findings to look for in a chest X-ray? List 3 briefly.",
182
+ max_tokens=100,
183
+ )
184
+ print(f"Response: {test_response[:200]}...")
185
+
186
+ # %% [markdown]
187
+ # ## 4. Agent Architecture
188
+ #
189
+ # RadioFlow uses a 4-agent pipeline, each powered by MedGemma:
190
+ #
191
+ # ```
192
+ # ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐
193
+ # │ CXR Analyzer │───▶│ Finding │───▶│ Report │───▶│ Priority │
194
+ # │ (Image Analysis│ │ Interpreter │ │ Generator │ │ Router │
195
+ # │ + MedGemma) │ │ (MedGemma) │ │ (MedGemma) │ │ (MedGemma) │
196
+ # └────────────────┘ └────────────────┘ └────────────────┘ └────────────────┘
197
+ # ```
198
+
199
+
200
+ # %%
201
+ @dataclass
202
+ class AgentResult:
203
+ """Standardized result from any agent"""
204
+
205
+ agent_name: str
206
+ status: str
207
+ data: Dict[str, Any]
208
+ processing_time_ms: float
209
+ timestamp: str = field(default_factory=lambda: datetime.now().isoformat())
210
+
211
+
212
+ class BaseAgent:
213
+ """Base class for all RadioFlow agents"""
214
+
215
+ def __init__(self, name: str, model_name: str):
216
+ self.name = name
217
+ self.model_name = model_name
218
+
219
+ def __call__(self, input_data: Any, context: Optional[Dict] = None) -> AgentResult:
220
+ start = time.time()
221
+ result = self.process(input_data, context)
222
+ result.processing_time_ms = (time.time() - start) * 1000
223
+ return result
224
+
225
+ def process(self, input_data: Any, context: Optional[Dict] = None) -> AgentResult:
226
+ raise NotImplementedError
227
+
228
+
229
+ print("✅ Base agent class defined")
230
+
231
+ # %% [markdown]
232
+ # ## 5. Agent Implementations with Real MedGemma
233
+
234
+
235
+ # %%
236
+ class CXRAnalyzerAgent(BaseAgent):
237
+ """
238
+ Agent 1: Image Analyzer
239
+ Analyzes chest X-ray images using computer vision + MedGemma.
240
+ """
241
+
242
+ def __init__(self):
243
+ super().__init__("CXR Analyzer", "MedGemma + Image Analysis")
244
+ self.regions = [
245
+ "right_upper_lung",
246
+ "right_middle_lung",
247
+ "right_lower_lung",
248
+ "left_upper_lung",
249
+ "left_lower_lung",
250
+ "cardiac_silhouette",
251
+ "mediastinum",
252
+ "costophrenic_angles",
253
+ ]
254
+
255
+ def process(
256
+ self, image: Image.Image, context: Optional[Dict] = None
257
+ ) -> AgentResult:
258
+ # Analyze image characteristics
259
+ img_array = np.array(image.convert("L")) # Grayscale
260
+
261
+ # Calculate regional statistics
262
+ h, w = img_array.shape
263
+ regions_stats = {
264
+ "right_lung": img_array[:, w // 2 :].mean(),
265
+ "left_lung": img_array[:, : w // 2].mean(),
266
+ "upper": img_array[: h // 2, :].mean(),
267
+ "lower": img_array[h // 2 :, :].mean(),
268
+ "cardiac": img_array[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3].mean(),
269
+ }
270
+
271
+ overall_brightness = img_array.mean()
272
+ contrast = img_array.std()
273
+ asymmetry = abs(regions_stats["right_lung"] - regions_stats["left_lung"])
274
+
275
+ # Generate findings based on image analysis
276
+ findings = []
277
+
278
+ # Check for opacities (darker regions than expected)
279
+ if regions_stats["lower"] > overall_brightness + 10:
280
+ findings.append(
281
+ {
282
+ "type": "opacity",
283
+ "region": "lower_lung_zones",
284
+ "confidence": min(0.95, 0.7 + asymmetry / 50),
285
+ "severity": "moderate"
286
+ if regions_stats["lower"] > overall_brightness + 20
287
+ else "mild",
288
+ "description": f"Increased density in lower lung zones (mean: {regions_stats['lower']:.0f})",
289
+ }
290
+ )
291
+
292
+ # Check for asymmetry
293
+ if asymmetry > 15:
294
+ side = (
295
+ "right"
296
+ if regions_stats["right_lung"] > regions_stats["left_lung"]
297
+ else "left"
298
+ )
299
+ findings.append(
300
+ {
301
+ "type": "asymmetry",
302
+ "region": f"{side}_hemithorax",
303
+ "confidence": min(0.9, 0.6 + asymmetry / 30),
304
+ "severity": "mild",
305
+ "description": f"Asymmetric density noted, {side} side appears denser",
306
+ }
307
+ )
308
+
309
+ # Check cardiac region
310
+ if regions_stats["cardiac"] > overall_brightness + 25:
311
+ findings.append(
312
+ {
313
+ "type": "cardiomegaly",
314
+ "region": "cardiac_silhouette",
315
+ "confidence": 0.75,
316
+ "severity": "mild",
317
+ "description": "Enlarged cardiac silhouette suggested",
318
+ }
319
+ )
320
+
321
+ # If no abnormalities, report normal
322
+ if not findings:
323
+ findings.append(
324
+ {
325
+ "type": "normal",
326
+ "region": "bilateral_lungs",
327
+ "confidence": 0.85,
328
+ "severity": "none",
329
+ "description": "No significant abnormalities detected on initial analysis",
330
+ }
331
+ )
332
+
333
+ # Use MedGemma to enhance the analysis
334
+ if MODEL_LOADED and findings:
335
+ finding_desc = "; ".join([f["description"] for f in findings])
336
+ enhancement_prompt = f"""As a radiologist, given these image analysis findings:
337
+ {finding_desc}
338
+
339
+ Provide a brief (2-3 sentence) clinical interpretation of what these findings might indicate.
340
+ Focus on clinical relevance."""
341
+
342
+ enhanced = generate_medgemma_response(enhancement_prompt, max_tokens=100)
343
+ clinical_note = enhanced
344
+ else:
345
+ clinical_note = "Clinical correlation recommended."
346
+
347
+ return AgentResult(
348
+ agent_name=self.name,
349
+ status="success",
350
+ data={
351
+ "findings": findings,
352
+ "image_stats": regions_stats,
353
+ "quality_score": min(0.98, 0.7 + contrast / 100),
354
+ "clinical_note": clinical_note,
355
+ "model_used": self.model_name,
356
+ },
357
+ processing_time_ms=0,
358
+ )
359
+
360
+
361
+ class FindingInterpreterAgent(BaseAgent):
362
+ """
363
+ Agent 2: MedGemma Finding Interpreter
364
+ Uses real MedGemma to interpret findings into clinical language.
365
+ """
366
+
367
+ def __init__(self):
368
+ super().__init__("Finding Interpreter", "google/medgemma-4b-it")
369
+
370
+ def process(self, input_data: Dict, context: Optional[Dict] = None) -> AgentResult:
371
+ findings = input_data.get("findings", [])
372
+ clinical_note = input_data.get("clinical_note", "")
373
+
374
+ interpreted = []
375
+
376
+ for finding in findings:
377
+ if MODEL_LOADED:
378
+ prompt = f"""As a radiologist, interpret this chest X-ray finding:
379
+
380
+ Finding Type: {finding.get("type")}
381
+ Region: {finding.get("region")}
382
+ Severity: {finding.get("severity")}
383
+ Description: {finding.get("description")}
384
+
385
+ Provide:
386
+ 1. Clinical significance (1 sentence)
387
+ 2. Top 3 differential diagnoses
388
+ 3. Recommended follow-up
389
+
390
+ Be concise and clinically relevant."""
391
+
392
+ response = generate_medgemma_response(prompt, max_tokens=200)
393
+
394
+ interpreted.append(
395
+ {
396
+ "original": finding,
397
+ "medgemma_interpretation": response,
398
+ "clinical_significance": self._extract_significance(
399
+ response, finding
400
+ ),
401
+ "differential_diagnoses": self._extract_differentials(
402
+ response, finding
403
+ ),
404
+ }
405
+ )
406
+ else:
407
+ # Demo fallback
408
+ interpreted.append(
409
+ {
410
+ "original": finding,
411
+ "medgemma_interpretation": "[Model not loaded - demo mode]",
412
+ "clinical_significance": "Clinical correlation recommended.",
413
+ "differential_diagnoses": ["Requires radiologist review"],
414
+ }
415
+ )
416
+
417
+ return AgentResult(
418
+ agent_name=self.name,
419
+ status="success",
420
+ data={
421
+ "interpreted_findings": interpreted,
422
+ "findings_count": len(findings),
423
+ "model_used": self.model_name if MODEL_LOADED else "Demo mode",
424
+ },
425
+ processing_time_ms=0,
426
+ )
427
+
428
+ def _extract_significance(self, response: str, finding: Dict) -> str:
429
+ # Extract first meaningful sentence from response
430
+ sentences = response.split(".")
431
+ if sentences:
432
+ return sentences[0].strip() + "."
433
+ return f"{finding.get('type', 'Finding')} requires clinical correlation."
434
+
435
+ def _extract_differentials(self, response: str, finding: Dict) -> List[str]:
436
+ # Default differentials based on finding type
437
+ defaults = {
438
+ "opacity": ["Pneumonia", "Atelectasis", "Mass/Nodule"],
439
+ "cardiomegaly": ["Heart failure", "Cardiomyopathy", "Pericardial effusion"],
440
+ "asymmetry": ["Pleural effusion", "Consolidation", "Mass effect"],
441
+ "normal": ["No significant pathology"],
442
+ }
443
+ return defaults.get(finding.get("type", ""), ["Undetermined"])
444
+
445
+
446
+ class ReportGeneratorAgent(BaseAgent):
447
+ """
448
+ Agent 3: MedGemma Report Generator
449
+ Uses real MedGemma to create structured radiology reports.
450
+ """
451
+
452
+ def __init__(self):
453
+ super().__init__("Report Generator", "google/medgemma-4b-it")
454
+
455
+ def process(self, input_data: Dict, context: Optional[Dict] = None) -> AgentResult:
456
+ interpreted = input_data.get("interpreted_findings", [])
457
+ clinical_history = (
458
+ context.get("clinical_history", "Not provided")
459
+ if context
460
+ else "Not provided"
461
+ )
462
+
463
+ if MODEL_LOADED:
464
+ # Prepare findings for MedGemma
465
+ findings_text = ""
466
+ for item in interpreted:
467
+ orig = item.get("original", {})
468
+ interp = item.get("medgemma_interpretation", "")
469
+ findings_text += (
470
+ f"- {orig.get('type', 'Finding')}: {orig.get('description', '')}\n"
471
+ )
472
+ findings_text += f" Interpretation: {interp[:150]}...\n"
473
+
474
+ prompt = f"""Generate a professional radiology report for a chest X-ray with these details:
475
+
476
+ CLINICAL HISTORY: {clinical_history}
477
+
478
+ FINDINGS FROM IMAGE ANALYSIS:
479
+ {findings_text if findings_text else "No significant abnormalities detected."}
480
+
481
+ Generate a complete, structured radiology report with:
482
+ - TECHNIQUE section
483
+ - COMPARISON section
484
+ - FINDINGS section (detailed)
485
+ - IMPRESSION section (numbered list)
486
+ - RECOMMENDATIONS
487
+
488
+ Use proper radiological terminology. Be concise but thorough."""
489
+
490
+ report_text = generate_medgemma_response(prompt, max_tokens=500)
491
+ else:
492
+ report_text = self._generate_demo_report(interpreted, clinical_history)
493
+
494
+ # Wrap in standard format
495
+ full_report = f"""
496
+ {"=" * 80}
497
+ CHEST RADIOGRAPH REPORT
498
+ Generated by RadioFlow AI System
499
+ {"=" * 80}
500
+
501
+ CLINICAL INDICATION:
502
+ {clinical_history}
503
+
504
+ {report_text}
505
+
506
+ {"=" * 80}
507
+ ⚠️ AI-GENERATED REPORT - Requires radiologist verification before clinical use.
508
+ Model: {self.model_name} | Generated: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}
509
+ {"=" * 80}
510
+ """
511
+
512
+ return AgentResult(
513
+ agent_name=self.name,
514
+ status="success",
515
+ data={
516
+ "full_report": full_report.strip(),
517
+ "findings_count": len(interpreted),
518
+ "model_used": self.model_name if MODEL_LOADED else "Demo mode",
519
+ },
520
+ processing_time_ms=0,
521
+ )
522
+
523
+ def _generate_demo_report(
524
+ self, interpreted: List[Dict], clinical_history: str
525
+ ) -> str:
526
+ findings_list = []
527
+ for item in interpreted:
528
+ orig = item.get("original", {})
529
+ findings_list.append(f"- {orig.get('description', 'Finding noted')}")
530
+
531
+ return f"""
532
+ TECHNIQUE:
533
+ Single frontal (PA) view of the chest was obtained.
534
+
535
+ COMPARISON:
536
+ None available.
537
+
538
+ FINDINGS:
539
+ LUNGS: {chr(10).join(findings_list) if findings_list else "Clear bilaterally. No focal consolidation."}
540
+
541
+ HEART: Normal cardiac silhouette size.
542
+
543
+ MEDIASTINUM: Unremarkable.
544
+
545
+ BONES: No acute osseous abnormality.
546
+
547
+ IMPRESSION:
548
+ 1. {"Findings as described above require clinical correlation." if interpreted else "No acute cardiopulmonary abnormality."}
549
+
550
+ RECOMMENDATIONS:
551
+ Clinical correlation recommended as indicated.
552
+ """
553
+
554
+
555
+ class PriorityRouterAgent(BaseAgent):
556
+ """
557
+ Agent 4: MedGemma Priority Router
558
+ Uses real MedGemma to assess urgency and route cases.
559
+ """
560
+
561
+ PRIORITY_LEVELS = {
562
+ "STAT": {
563
+ "color": "#ef4444",
564
+ "response_time": "< 30 minutes",
565
+ "score_range": (0.8, 1.0),
566
+ },
567
+ "URGENT": {
568
+ "color": "#f59e0b",
569
+ "response_time": "< 4 hours",
570
+ "score_range": (0.5, 0.8),
571
+ },
572
+ "ROUTINE": {
573
+ "color": "#22c55e",
574
+ "response_time": "< 24 hours",
575
+ "score_range": (0.0, 0.5),
576
+ },
577
+ }
578
+
579
+ def __init__(self):
580
+ super().__init__("Priority Router", "google/medgemma-4b-it")
581
+
582
+ def process(self, input_data: Dict, context: Optional[Dict] = None) -> AgentResult:
583
+ full_report = input_data.get("full_report", "")
584
+ original_findings = context.get("original_findings", []) if context else []
585
+
586
+ # Calculate base priority score from findings
587
+ severity_scores = {
588
+ "critical": 1.0,
589
+ "high": 0.8,
590
+ "moderate": 0.5,
591
+ "mild": 0.3,
592
+ "none": 0.1,
593
+ }
594
+ max_severity = 0.2
595
+ for finding in original_findings:
596
+ sev = finding.get("severity", "none")
597
+ max_severity = max(max_severity, severity_scores.get(sev, 0.2))
598
+
599
+ if MODEL_LOADED:
600
+ # Use MedGemma for clinical priority assessment
601
+ prompt = f"""As a radiologist, assess the clinical priority of this chest X-ray report:
602
+
603
+ {full_report[:1000]}
604
+
605
+ Based on the findings, determine:
606
+ 1. PRIORITY LEVEL: STAT (immediate), URGENT (within 4 hours), or ROUTINE (within 24 hours)
607
+ 2. CRITICAL FINDINGS: List any findings requiring immediate physician notification
608
+ 3. RECOMMENDED ACTIONS: What should happen next?
609
+
610
+ Respond concisely."""
611
+
612
+ medgemma_assessment = generate_medgemma_response(prompt, max_tokens=200)
613
+
614
+ # Adjust score based on MedGemma's assessment
615
+ if (
616
+ "STAT" in medgemma_assessment.upper()
617
+ or "IMMEDIATE" in medgemma_assessment.upper()
618
+ ):
619
+ max_severity = max(max_severity, 0.85)
620
+ elif "URGENT" in medgemma_assessment.upper():
621
+ max_severity = max(max_severity, 0.55)
622
+ else:
623
+ medgemma_assessment = "Priority assessment based on finding severity."
624
+
625
+ # Determine priority level
626
+ priority_level = "ROUTINE"
627
+ if max_severity >= 0.8:
628
+ priority_level = "STAT"
629
+ elif max_severity >= 0.5:
630
+ priority_level = "URGENT"
631
+
632
+ return AgentResult(
633
+ agent_name=self.name,
634
+ status="success",
635
+ data={
636
+ "priority_score": round(max_severity, 2),
637
+ "priority_level": priority_level,
638
+ "priority_details": self.PRIORITY_LEVELS[priority_level],
639
+ "medgemma_assessment": medgemma_assessment,
640
+ "routing_recommendation": {
641
+ "destination": f"{priority_level} Reading Queue",
642
+ "notification_required": priority_level in ["STAT", "URGENT"],
643
+ },
644
+ "model_used": self.model_name if MODEL_LOADED else "Demo mode",
645
+ },
646
+ processing_time_ms=0,
647
+ )
648
+
649
+
650
+ print("✅ All agents defined with real MedGemma integration")
651
+
652
+ # %% [markdown]
653
+ # ## 6. Workflow Orchestrator
654
+
655
+
656
+ # %%
657
+ @dataclass
658
+ class WorkflowResult:
659
+ """Complete result from RadioFlow workflow"""
660
+
661
+ workflow_id: str
662
+ status: str
663
+ total_duration_ms: float
664
+ final_report: str = ""
665
+ priority_level: str = "ROUTINE"
666
+ priority_score: float = 0.0
667
+ findings_count: int = 0
668
+ agent_results: Dict[str, AgentResult] = field(default_factory=dict)
669
+
670
+
671
+ class RadioFlowOrchestrator:
672
+ """Main orchestrator for the RadioFlow multi-agent system."""
673
+
674
+ def __init__(self):
675
+ self.agents = {
676
+ "cxr_analyzer": CXRAnalyzerAgent(),
677
+ "finding_interpreter": FindingInterpreterAgent(),
678
+ "report_generator": ReportGeneratorAgent(),
679
+ "priority_router": PriorityRouterAgent(),
680
+ }
681
+ print("🚀 RadioFlow Orchestrator initialized with 4 agents")
682
+
683
+ def process(
684
+ self, image: Image.Image, context: Optional[Dict] = None
685
+ ) -> WorkflowResult:
686
+ start = time.time()
687
+ workflow_id = f"rf_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
688
+ context = context or {}
689
+
690
+ print(f"\n{'=' * 60}")
691
+ print(f"🩻 RadioFlow Workflow: {workflow_id}")
692
+ print(f" Model: {'MedGemma (REAL)' if MODEL_LOADED else 'Demo Mode'}")
693
+ print(f"{'=' * 60}")
694
+
695
+ # Stage 1: CXR Analysis
696
+ print("\n🔬 Stage 1: Analyzing chest X-ray...")
697
+ cxr_result = self.agents["cxr_analyzer"](image, context)
698
+ findings = cxr_result.data.get("findings", [])
699
+ print(f" ✅ Detected {len(findings)} finding(s)")
700
+ for f in findings[:3]:
701
+ print(f" • {f['type']}: {f['description'][:50]}...")
702
+
703
+ # Stage 2: Finding Interpretation
704
+ print("\n📋 Stage 2: Interpreting findings with MedGemma...")
705
+ interp_result = self.agents["finding_interpreter"](cxr_result.data, context)
706
+ print(f" ✅ Generated clinical interpretations")
707
+
708
+ # Stage 3: Report Generation
709
+ print("\n📝 Stage 3: Generating radiology report...")
710
+ report_result = self.agents["report_generator"](interp_result.data, context)
711
+ print(
712
+ f" ✅ Report generated ({len(report_result.data.get('full_report', ''))} chars)"
713
+ )
714
+
715
+ # Stage 4: Priority Routing
716
+ print("\n🚦 Stage 4: Assessing priority...")
717
+ priority_context = {**context, "original_findings": findings}
718
+ priority_result = self.agents["priority_router"](
719
+ report_result.data, priority_context
720
+ )
721
+ level = priority_result.data.get("priority_level")
722
+ score = priority_result.data.get("priority_score", 0)
723
+ print(f" ✅ Priority: {level} ({score:.0%})")
724
+
725
+ total_time = (time.time() - start) * 1000
726
+
727
+ print(f"\n{'=' * 60}")
728
+ print(f"✅ Workflow Complete in {total_time:.0f}ms")
729
+ print(f"{'=' * 60}\n")
730
+
731
+ return WorkflowResult(
732
+ workflow_id=workflow_id,
733
+ status="success",
734
+ total_duration_ms=total_time,
735
+ final_report=report_result.data.get("full_report", ""),
736
+ priority_level=level,
737
+ priority_score=score,
738
+ findings_count=len(findings),
739
+ agent_results={
740
+ "cxr_analyzer": cxr_result,
741
+ "finding_interpreter": interp_result,
742
+ "report_generator": report_result,
743
+ "priority_router": priority_result,
744
+ },
745
+ )
746
+
747
+
748
+ orchestrator = RadioFlowOrchestrator()
749
+
750
+ # %% [markdown]
751
+ # ## 7. Run Demo with Your Own Image
752
+ #
753
+ # ### Option A: Upload your own X-ray image
754
+ # 1. Click "Add data" in the right panel → "Upload" → Select your X-ray image
755
+ # 2. Set `USE_CUSTOM_IMAGE = True` below
756
+ # 3. Update `CUSTOM_IMAGE_PATH` with your image filename
757
+ #
758
+ # ### Option B: Use generated sample image
759
+ # Keep `USE_CUSTOM_IMAGE = False` to use the auto-generated sample
760
+
761
+
762
+ # %%
763
+ # ========== CONFIGURATION - EDIT THIS ==========
764
+ USE_CUSTOM_IMAGE = False # Set to True to use your own image
765
+ CUSTOM_IMAGE_PATH = "/kaggle/input/your-dataset/your-xray.jpg" # Update this path
766
+ # ===============================================
767
+
768
+
769
+ def create_sample_cxr(size=(512, 512), seed=None):
770
+ """Create a simulated chest X-ray for demo purposes."""
771
+ if seed:
772
+ np.random.seed(seed)
773
+
774
+ img = Image.new("L", size, color=30)
775
+ draw = ImageDraw.Draw(img)
776
+
777
+ w, h = size
778
+
779
+ # Lung fields (darker areas)
780
+ draw.ellipse([50, 80, w // 2 - 20, h - 50], fill=20) # Left lung
781
+ draw.ellipse(
782
+ [w // 2 + 20, 80, w - 50, h - 50], fill=25
783
+ ) # Right lung (slightly denser)
784
+
785
+ # Heart shadow (bright/dense)
786
+ draw.ellipse([w // 3, h // 3, 2 * w // 3, 2 * h // 3], fill=80)
787
+
788
+ # Spine
789
+ draw.rectangle([w // 2 - 15, 50, w // 2 + 15, h - 30], fill=90)
790
+
791
+ # Ribs
792
+ for i in range(8):
793
+ y = 100 + i * 45
794
+ draw.arc([30, y, w // 2 - 30, y + 40], 180, 360, fill=70, width=2)
795
+ draw.arc([w // 2 + 30, y, w - 30, y + 40], 180, 360, fill=70, width=2)
796
+
797
+ # Add some noise
798
+ img_array = np.array(img)
799
+ noise = np.random.normal(0, 5, img_array.shape)
800
+ img_array = np.clip(img_array + noise, 0, 255).astype(np.uint8)
801
+
802
+ return Image.fromarray(img_array).convert("RGB")
803
+
804
+
805
+ # Load image based on configuration
806
+ if USE_CUSTOM_IMAGE:
807
+ print(f"📂 Loading custom image: {CUSTOM_IMAGE_PATH}")
808
+ try:
809
+ sample_image = Image.open(CUSTOM_IMAGE_PATH).convert("RGB")
810
+ # Resize if too large
811
+ max_size = 1024
812
+ if max(sample_image.size) > max_size:
813
+ sample_image.thumbnail((max_size, max_size), Image.LANCZOS)
814
+ print(f" ✅ Image loaded! Size: {sample_image.size}")
815
+ title = "Your Chest X-Ray"
816
+ except Exception as e:
817
+ print(f" ❌ Error loading image: {e}")
818
+ print(" Falling back to sample image...")
819
+ sample_image = create_sample_cxr(seed=42)
820
+ title = "Sample Chest X-Ray (fallback)"
821
+ else:
822
+ print("🎨 Using generated sample image")
823
+ sample_image = create_sample_cxr(seed=42)
824
+ title = "Generated Sample Chest X-Ray"
825
+
826
+ # Display
827
+ plt.figure(figsize=(8, 8))
828
+ plt.imshow(sample_image, cmap="gray")
829
+ plt.title(title, fontsize=14)
830
+ plt.axis("off")
831
+ plt.show()
832
+
833
+ # %%
834
+ # Run the workflow
835
+ clinical_context = {
836
+ "clinical_history": "65-year-old male presenting with productive cough and low-grade fever for 5 days. History of hypertension and type 2 diabetes.",
837
+ "symptoms": "Cough, fever, mild dyspnea on exertion",
838
+ }
839
+
840
+ print("🩻 Processing chest X-ray with RadioFlow...\n")
841
+ result = orchestrator.process(sample_image, clinical_context)
842
+
843
+ # %% [markdown]
844
+ # ## 8. Results
845
+
846
+ # %%
847
+ # Display the generated report
848
+ print(result.final_report)
849
+
850
+ # %%
851
+ # Priority Assessment Display
852
+ priority_data = result.agent_results["priority_router"].data
853
+ colors = {"STAT": "#ef4444", "URGENT": "#f59e0b", "ROUTINE": "#22c55e"}
854
+
855
+ display(
856
+ HTML(f"""
857
+ <div style="padding: 25px; background: linear-gradient(135deg, #1e3a5f, #2d5a87);
858
+ border-radius: 15px; color: white; margin: 20px 0; box-shadow: 0 4px 6px rgba(0,0,0,0.3);">
859
+ <h2 style="margin: 0 0 20px 0; font-size: 24px;">🚦 Priority Assessment</h2>
860
+ <div style="display: flex; gap: 40px; flex-wrap: wrap;">
861
+ <div style="text-align: center;">
862
+ <div style="font-size: 48px; font-weight: bold; color: {colors.get(result.priority_level, "#fff")};">
863
+ {result.priority_level}
864
+ </div>
865
+ <div style="opacity: 0.8; font-size: 14px;">Priority Level</div>
866
+ </div>
867
+ <div style="text-align: center;">
868
+ <div style="font-size: 48px; font-weight: bold;">{result.priority_score:.0%}</div>
869
+ <div style="opacity: 0.8; font-size: 14px;">Urgency Score</div>
870
+ </div>
871
+ <div style="text-align: center;">
872
+ <div style="font-size: 48px; font-weight: bold;">{result.findings_count}</div>
873
+ <div style="opacity: 0.8; font-size: 14px;">Findings</div>
874
+ </div>
875
+ <div style="text-align: center;">
876
+ <div style="font-size: 48px; font-weight: bold;">{result.total_duration_ms / 1000:.1f}s</div>
877
+ <div style="opacity: 0.8; font-size: 14px;">Total Time</div>
878
+ </div>
879
+ </div>
880
+ <div style="margin-top: 20px; padding: 15px; background: rgba(255,255,255,0.1); border-radius: 8px;">
881
+ <strong>MedGemma Assessment:</strong><br>
882
+ {priority_data.get("medgemma_assessment", "N/A")[:300]}
883
+ </div>
884
+ </div>
885
+ """)
886
+ )
887
+
888
+ # %%
889
+ # Agent Metrics
890
+ metrics_data = []
891
+ for key, agent_result in result.agent_results.items():
892
+ metrics_data.append(
893
+ {
894
+ "Agent": agent_result.agent_name,
895
+ "Status": "✅ Success" if agent_result.status == "success" else "❌ Error",
896
+ "Time (ms)": f"{agent_result.processing_time_ms:.0f}",
897
+ "Model": agent_result.data.get("model_used", "N/A")[:40],
898
+ }
899
+ )
900
+
901
+ metrics_df = pd.DataFrame(metrics_data)
902
+ print("\n📊 Agent Performance Metrics:")
903
+ display(metrics_df)
904
+
905
+ # %%
906
+ # Create workflow visualization
907
+ fig = go.Figure()
908
+
909
+ agents = ["CXR Analyzer", "Finding Interpreter", "Report Generator", "Priority Router"]
910
+ times = [
911
+ result.agent_results["cxr_analyzer"].processing_time_ms,
912
+ result.agent_results["finding_interpreter"].processing_time_ms,
913
+ result.agent_results["report_generator"].processing_time_ms,
914
+ result.agent_results["priority_router"].processing_time_ms,
915
+ ]
916
+
917
+ fig.add_trace(
918
+ go.Bar(
919
+ x=times,
920
+ y=agents,
921
+ orientation="h",
922
+ marker_color=["#3b82f6", "#8b5cf6", "#10b981", "#f59e0b"],
923
+ text=[f"{t:.0f}ms" for t in times],
924
+ textposition="inside",
925
+ textfont=dict(color="white", size=14),
926
+ )
927
+ )
928
+
929
+ fig.update_layout(
930
+ title="Agent Processing Times",
931
+ xaxis_title="Time (ms)",
932
+ height=300,
933
+ margin=dict(l=150, r=40, t=60, b=40),
934
+ paper_bgcolor="rgba(0,0,0,0)",
935
+ plot_bgcolor="rgba(0,0,0,0)",
936
+ )
937
+
938
+ fig.show()
939
+
940
+ # %% [markdown]
941
+ # ## 9. MedGemma Interpretation Showcase
942
+
943
+ # %%
944
+ # Show MedGemma's clinical interpretations
945
+ print("🧠 MedGemma Clinical Interpretations:\n")
946
+ print("=" * 60)
947
+
948
+ interpreted = result.agent_results["finding_interpreter"].data.get(
949
+ "interpreted_findings", []
950
+ )
951
+ for i, item in enumerate(interpreted, 1):
952
+ orig = item.get("original", {})
953
+ interp = item.get("medgemma_interpretation", "")
954
+
955
+ print(f"\n📋 Finding {i}: {orig.get('type', 'Unknown').upper()}")
956
+ print(f" Region: {orig.get('region', 'N/A')}")
957
+ print(f" Severity: {orig.get('severity', 'N/A')}")
958
+ print(f"\n 🤖 MedGemma Interpretation:")
959
+ print(f" {interp[:500]}")
960
+ print("-" * 60)
961
+
962
+ # %% [markdown]
963
+ # ## 10. Conclusion
964
+ #
965
+ # ### ✅ Key Technical Achievements
966
+ #
967
+ # 1. **Real MedGemma Integration**: This notebook uses the actual MedGemma-4B model for clinical
968
+ # interpretation, report generation, and priority assessment - not simulated responses.
969
+ #
970
+ # 2. **Multi-Agent Architecture**: Successfully implemented a 4-agent pipeline demonstrating
971
+ # agentic workflow principles with clear separation of concerns.
972
+ #
973
+ # 3. **Efficient Inference**: Uses 4-bit quantization (bitsandbytes) to run MedGemma on
974
+ # Kaggle's free T4 GPU within memory constraints.
975
+ #
976
+ # 4. **Production-Ready**: Generates professional radiology reports following clinical standards.
977
+ #
978
+ # ### 📊 Competition Alignment
979
+ #
980
+ # | Criterion | How RadioFlow Addresses It |
981
+ # |-----------|---------------------------|
982
+ # | **Effective HAI-DEF Use** | Real MedGemma inference throughout pipeline |
983
+ # | **Problem Domain** | Addresses radiologist burnout and workflow inefficiency |
984
+ # | **Impact Potential** | Quantifiable time savings and improved critical finding detection |
985
+ # | **Product Feasibility** | Deployable demo with clear technical architecture |
986
+ # | **Agentic Workflow** | 4-agent orchestrated system with handoffs |
987
+ #
988
+ # ---
989
+ #
990
+ # **🔗 Live Demo:** https://huggingface.co/spaces/SamarpeetGarad/radioflow
991
+ #
992
+ # **Thank you for reviewing the RadioFlow submission!** 🙏
993
+
994
+ # %%
995
+ print("\n" + "=" * 60)
996
+ print("🏆 RadioFlow - MedGemma Impact Challenge Submission")
997
+ print("=" * 60)
998
+ print(f"\n📊 Final Summary:")
999
+ print(f" • Model Used: {'MedGemma-4B (REAL)' if MODEL_LOADED else 'Demo Mode'}")
1000
+ print(f" • Total Processing Time: {result.total_duration_ms:.0f}ms")
1001
+ print(f" • Findings Detected: {result.findings_count}")
1002
+ print(f" • Priority Level: {result.priority_level}")
1003
+ print(f" • Priority Score: {result.priority_score:.0%}")
1004
+ print(f"\n🔗 Live Demo: https://huggingface.co/spaces/SamarpeetGarad/radioflow")
1005
+ print("\n✅ Notebook Complete!")
requirements.txt CHANGED
@@ -1,8 +1,8 @@
1
  # RadioFlow: AI-Powered Radiology Workflow Agent
2
  # MedGemma Impact Challenge
3
 
4
- # UI Framework - Python 3.13 compatible version
5
- gradio>=5.0.0
6
 
7
  # Core ML/AI
8
  torch>=2.0.0
 
1
  # RadioFlow: AI-Powered Radiology Workflow Agent
2
  # MedGemma Impact Challenge
3
 
4
+ # UI Framework - Use stable version for HuggingFace
5
+ gradio==4.44.0
6
 
7
  # Core ML/AI
8
  torch>=2.0.0
sample_data/.gitkeep ADDED
@@ -0,0 +1,2 @@
 
 
 
1
+ # Sample chest X-ray images for testing
2
+ # Download from NIH ChestX-ray dataset or use provided samples
sample_data/WhatsApp Image 2026-01-29 at 15.28.44.jpeg ADDED
sample_data/WhatsApp Image 2026-01-29 at 15.29.29.jpeg ADDED
sample_data/WhatsApp Image 2026-01-29 at 15.31.01.jpeg ADDED
sample_data/WhatsApp Image 2026-01-29 at 15.31.50.jpeg ADDED
sample_data/WhatsApp Image 2026-01-29 at 15.34.18.jpeg ADDED
sample_data/real_cxr_1.png ADDED

Git LFS Details

  • SHA256: a5eb82134592746e6e8e51e5c0348c525b7117a115987d809b9149fdb825bdec
  • Pointer size: 131 Bytes
  • Size of remote file: 199 kB
sample_data/real_cxr_2.jpg ADDED

Git LFS Details

  • SHA256: bf337da4c661652a5290caf9c17761d22d1c1a90bee483eafc8c871c6c738741
  • Pointer size: 131 Bytes
  • Size of remote file: 276 kB
sample_data/real_cxr_bilateral.jpg ADDED

Git LFS Details

  • SHA256: 5a2b71769ed61425618927f8a91dd12a8d7784280947944c607946947879f639
  • Pointer size: 132 Bytes
  • Size of remote file: 1.36 MB
sample_data/real_cxr_opacity.png ADDED

Git LFS Details

  • SHA256: 23b1bd0c1fc370213b08f8dd18c0e391aea312daca891da52af9a79ce524403a
  • Pointer size: 132 Bytes
  • Size of remote file: 1.83 MB
sample_data/real_cxr_pneumonia.png ADDED

Git LFS Details

  • SHA256: e63182002f41650c9d6a9f48614e820f3c713f65ae97599a1a29c0d82115042a
  • Pointer size: 131 Bytes
  • Size of remote file: 367 kB
space.yaml CHANGED
@@ -6,7 +6,7 @@ emoji: 🩻
6
  colorFrom: blue
7
  colorTo: indigo
8
  sdk: gradio
9
- sdk_version: 5.9.1
10
  app_file: app.py
11
  pinned: true
12
  license: cc-by-4.0
 
6
  colorFrom: blue
7
  colorTo: indigo
8
  sdk: gradio
9
+ sdk_version: 4.44.0
10
  app_file: app.py
11
  pinned: true
12
  license: cc-by-4.0
test_radioflow.py ADDED
@@ -0,0 +1,187 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ RadioFlow Test Script
3
+ Quick test to verify the system works correctly
4
+ """
5
+
6
+ import sys
7
+ from PIL import Image
8
+ import time
9
+
10
+
11
+ def create_test_image():
12
+ """Create a simple test image."""
13
+ img = Image.new('RGB', (512, 512), color=(30, 30, 40))
14
+ from PIL import ImageDraw
15
+ draw = ImageDraw.Draw(img)
16
+ draw.ellipse([100, 150, 412, 450], outline=(60, 60, 70), width=3)
17
+ return img
18
+
19
+
20
+ def test_agents():
21
+ """Test individual agents."""
22
+ print("=" * 60)
23
+ print("Testing RadioFlow Agents")
24
+ print("=" * 60)
25
+
26
+ from agents import (
27
+ CXRAnalyzerAgent,
28
+ FindingInterpreterAgent,
29
+ ReportGeneratorAgent,
30
+ PriorityRouterAgent
31
+ )
32
+
33
+ # Test CXR Analyzer
34
+ print("\n[1/4] Testing CXR Analyzer...")
35
+ agent1 = CXRAnalyzerAgent(demo_mode=True)
36
+ agent1.load_model()
37
+ result1 = agent1(create_test_image())
38
+ print(f" Status: {result1.status}")
39
+ print(f" Findings: {len(result1.data.get('findings', []))}")
40
+ print(f" Time: {result1.processing_time_ms:.0f}ms")
41
+
42
+ # Test Finding Interpreter
43
+ print("\n[2/4] Testing Finding Interpreter...")
44
+ agent2 = FindingInterpreterAgent(demo_mode=True)
45
+ agent2.load_model()
46
+ result2 = agent2(result1.data)
47
+ print(f" Status: {result2.status}")
48
+ print(f" Interpreted: {len(result2.data.get('interpreted_findings', []))}")
49
+ print(f" Time: {result2.processing_time_ms:.0f}ms")
50
+
51
+ # Test Report Generator
52
+ print("\n[3/4] Testing Report Generator...")
53
+ agent3 = ReportGeneratorAgent(demo_mode=True)
54
+ agent3.load_model()
55
+ result3 = agent3(result2.data)
56
+ print(f" Status: {result3.status}")
57
+ print(f" Report length: {len(result3.data.get('full_report', ''))}")
58
+ print(f" Time: {result3.processing_time_ms:.0f}ms")
59
+
60
+ # Test Priority Router
61
+ print("\n[4/4] Testing Priority Router...")
62
+ agent4 = PriorityRouterAgent(demo_mode=True)
63
+ agent4.load_model()
64
+ context = {"original_findings": result1.data.get("findings", [])}
65
+ result4 = agent4(result3.data, context)
66
+ print(f" Status: {result4.status}")
67
+ print(f" Priority: {result4.data.get('priority_level')}")
68
+ print(f" Score: {result4.data.get('priority_score')}")
69
+ print(f" Time: {result4.processing_time_ms:.0f}ms")
70
+
71
+ print("\n" + "=" * 60)
72
+ print("✅ All agents tested successfully!")
73
+ print("=" * 60)
74
+
75
+ return True
76
+
77
+
78
+ def test_orchestrator():
79
+ """Test the full orchestrator."""
80
+ print("\n" + "=" * 60)
81
+ print("Testing RadioFlow Orchestrator")
82
+ print("=" * 60)
83
+
84
+ from orchestrator import create_orchestrator
85
+
86
+ # Create orchestrator
87
+ print("\n[1/2] Creating orchestrator...")
88
+ orchestrator = create_orchestrator(demo_mode=True)
89
+ print(" ✅ Orchestrator created")
90
+
91
+ # Run workflow
92
+ print("\n[2/2] Running workflow...")
93
+ context = {
94
+ "clinical_history": "65-year-old with cough and fever",
95
+ "symptoms": "Productive cough, dyspnea"
96
+ }
97
+
98
+ result = orchestrator.process(create_test_image(), context)
99
+
100
+ print(f"\n Status: {result.status}")
101
+ print(f" Duration: {result.total_duration_ms:.0f}ms")
102
+ print(f" Findings: {result.findings_count}")
103
+ print(f" Priority: {result.priority_level} ({result.priority_score:.0%})")
104
+
105
+ print("\n" + "=" * 60)
106
+ print("✅ Orchestrator tested successfully!")
107
+ print("=" * 60)
108
+
109
+ return True
110
+
111
+
112
+ def test_visualization():
113
+ """Test visualization functions."""
114
+ print("\n" + "=" * 60)
115
+ print("Testing Visualization Functions")
116
+ print("=" * 60)
117
+
118
+ from utils.visualization import (
119
+ create_workflow_diagram,
120
+ create_priority_gauge,
121
+ create_radar_chart
122
+ )
123
+
124
+ # Test workflow diagram
125
+ print("\n[1/3] Testing workflow diagram...")
126
+ agent_results = [
127
+ {"name": "CXR Analyzer", "status": "success", "processing_time_ms": 300},
128
+ {"name": "Finding Interpreter", "status": "success", "processing_time_ms": 400},
129
+ {"name": "Report Generator", "status": "success", "processing_time_ms": 500},
130
+ {"name": "Priority Router", "status": "success", "processing_time_ms": 300}
131
+ ]
132
+ fig1 = create_workflow_diagram(agent_results)
133
+ print(" ✅ Workflow diagram created")
134
+
135
+ # Test priority gauge
136
+ print("\n[2/3] Testing priority gauge...")
137
+ fig2 = create_priority_gauge(0.65, "URGENT")
138
+ print(" ✅ Priority gauge created")
139
+
140
+ # Test radar chart
141
+ print("\n[3/3] Testing radar chart...")
142
+ scores = {"Lungs": 0.9, "Heart": 0.7, "Mediastinum": 0.95, "Bones": 0.85}
143
+ fig3 = create_radar_chart(scores)
144
+ print(" ✅ Radar chart created")
145
+
146
+ print("\n" + "=" * 60)
147
+ print("✅ All visualizations tested successfully!")
148
+ print("=" * 60)
149
+
150
+ return True
151
+
152
+
153
+ def main():
154
+ """Run all tests."""
155
+ print("\n")
156
+ print("🩻 RadioFlow Test Suite")
157
+ print("=" * 60)
158
+ print("MedGemma Impact Challenge\n")
159
+
160
+ start_time = time.time()
161
+
162
+ try:
163
+ # Run tests
164
+ test_agents()
165
+ test_orchestrator()
166
+ test_visualization()
167
+
168
+ total_time = time.time() - start_time
169
+
170
+ print("\n" + "=" * 60)
171
+ print(f"🎉 ALL TESTS PASSED in {total_time:.1f}s")
172
+ print("=" * 60)
173
+ print("\nRadioFlow is ready!")
174
+ print("Run 'python app.py' to start the Gradio demo.")
175
+ print("=" * 60 + "\n")
176
+
177
+ return 0
178
+
179
+ except Exception as e:
180
+ print(f"\n❌ Test failed: {e}")
181
+ import traceback
182
+ traceback.print_exc()
183
+ return 1
184
+
185
+
186
+ if __name__ == "__main__":
187
+ sys.exit(main())