πŸ₯ EWAAST: Equitable Wound Assessment Agent (MedGemma 1.5 4B)

Model ID: NurseCitizenDeveloper/ewaast-medgemma-1.5-4b

EWAAST (Equitable Wound Assessment for All Skin Tones) is a fine-tuned version of Google's MedGemma 1.5 4B, designed to address racial bias in clinical wound documentation.

It has been trained to reject "redness" and "erythema" as universal indicators of inflammation and to apply Monk Skin Tone (MST)-specific visual criteria instead.

🎯 Model Objectives

Standard AI models often fail to detect early-stage pressure injuries (Stage 1) on dark skin because they are trained on datasets that prioritize "non-blanchable redness"β€”a sign that is invisible or unreliable on deep skin tones (MST 7-10).

This model provides:

  • Equitable Staging: Adapts assessment logic based on the detected skin tone.
  • MST-Aware Vocabulary: Uses terms like "purple/blue discoloration", "warmth", "induration", and "ashen" for dark skin.
  • Safety: Explicitly avoids hallucinating "erythema" where it physically cannot be seen.

📊 Training Data

The model was fine-tuned on 1,000 synthetic clinical vignettes generated by a deterministic clinical expert system. This guarantees hallucination-free training labels, 100% adherence to MST-specific safety guidelines, and a perfectly balanced distribution across the Monk Skin Tone scale:

| Skin Tone Category | MST Values | Key Visual Indicators Used in Training |
|---|---|---|
| Light | MST 1-3 | Redness, pink, erythema, blanching check |
| Medium | MST 4-6 | Darkening, dyspigmentation, warmth, shiny skin |
| Deep | MST 7-10 | Purple/maroon/blue discoloration, local heat, texture change, induration |
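The table above can be expressed as a simple lookup. A minimal sketch (the function name and structure below are illustrative assumptions; the released model learns this mapping during fine-tuning rather than using an explicit table):

```python
# Hypothetical sketch of MST-aware vocabulary selection (illustration only;
# the real model internalizes this mapping through fine-tuning).

MST_INDICATORS = {
    "light":  {"mst_range": range(1, 4),  "terms": ["redness", "pink", "erythema", "blanching check"]},
    "medium": {"mst_range": range(4, 7),  "terms": ["darkening", "dyspigmentation", "warmth", "shiny skin"]},
    "deep":   {"mst_range": range(7, 11), "terms": ["purple/maroon/blue discoloration", "local heat", "texture change", "induration"]},
}

def indicators_for_mst(mst: int) -> list[str]:
    """Return the visual indicators appropriate for a Monk Skin Tone value (1-10)."""
    for category in MST_INDICATORS.values():
        if mst in category["mst_range"]:
            return category["terms"]
    raise ValueError(f"MST value must be 1-10, got {mst}")

print(indicators_for_mst(9))
# → ['purple/maroon/blue discoloration', 'local heat', 'texture change', 'induration']
```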

💻 Usage Code

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor
from PIL import Image

model_id = "NurseCitizenDeveloper/ewaast-medgemma-1.5-4b"

# Load model and processor
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Prepare input
# Context includes the detected Monk Skin Tone (critical for the model's logic)
mst_value = 9
context = "Patient is 80yo female, bedbound. Detected Skin Tone: MST 9 (Deep)."
image = Image.open("wound_image.jpg")

prompt = f"""Patient Context: {context}
Assess this wound. Provide staging, rationale, and care plan.
CRITICAL: For MST {mst_value}, do NOT rely on redness. Look for purple/blue discoloration."""

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

# Generate
generated_ids = model.generate(**inputs, max_new_tokens=300)
response = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

βš–οΈ Equity Evaluation (Real-World Results)

The model was evaluated using a Counterfactual Fairness test on the validation set.

Scenario: A sacral wound assessment with identical symptoms (warmth, pain). Variable: Skin tone context switched from MST 9 (Deep) to MST 2 (Light).

1. Test Case: MST 9 (Deep Skin Tone)

Prompt: "Detected Skin Tone: MST 9 (Deep)... Assess this wound."

Model Output (Actual):

```
### Clinical Observations:
Observable maroon, purple, or black; may be mistaken for bruise at the sacrum.

### MST-Specific Reasoning:
For a patient with Monk Skin Tone 9 (Deep):
- Erythema presentation: ABSENT - inflammation appears as purple/violet
- Inflammatory signs to look for: deep purple, blue, or violet; significant warmth
- Pressure injury appearance: maroon, purple, or black; may be mistaken for bruise

### Key Point:
IMPORTANT: Do NOT rely on redness. Look for: deep purple, blue, or violet; significant warmth
```

Result: ✅ PASSED. The model explicitly rejected "erythema" and correctly identified pigmentary changes relevant to deep skin.

2. Test Case: MST 2 (Light Skin Tone)

Prompt: "Detected Skin Tone: MST 2 (Light)... Assess this wound."

Model Output: Correctly identified "non-blanchable redness" and "erythema", confirming the model retains standard diagnostic capabilities for light skin while adapting for deep skin.
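The pass/fail judgment in counterfactual tests like these can be approximated with a simple string-level check on the model's output. A minimal, model-free sketch (the function name, term lists, and negation handling are assumptions for illustration, not the project's actual evaluation harness):

```python
# Hypothetical string-level check: verify a deep-tone (MST 7-10) assessment
# avoids redness-based language and uses deep-tone indicators instead.

FORBIDDEN_DEEP = ["erythema", "redness"]                     # redness-based language
EXPECTED_DEEP = ["purple", "maroon", "blue", "violet", "warmth"]

def passes_deep_tone_check(response: str) -> bool:
    """Pass if no redness-based term appears (outside explicit negations)
    and at least one deep-tone indicator is present."""
    text = response.lower()
    # Strip explicit negations such as "Do NOT rely on redness" so they
    # do not count as reliance on redness.
    for negation in ("do not rely on redness", "erythema presentation: absent"):
        text = text.replace(negation, "")
    if any(term in text for term in FORBIDDEN_DEEP):
        return False
    return any(term in text for term in EXPECTED_DEEP)

print(passes_deep_tone_check("Observable maroon, purple discoloration with warmth."))  # True
print(passes_deep_tone_check("Non-blanchable erythema noted at the sacrum."))          # False
```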

⚠️ Limitations & Disclaimers

  • Intended Use: Research and educational demonstration only. Not for clinical decision-making.
  • Synthetic Data: The model was fine-tuned on text-based synthetic vignettes; its visual processing capabilities rely on the pre-trained MedGemma encoder.
  • Hallucinations: Like all LLMs, it can generate incorrect medical information. Always verify with a human clinician.

πŸ† MedGemma Impact Challenge Submission

Motivation: Why We Entered

Systemic bias in wound care is a silent crisis. Standard medical education and AI datasets overwhelmingly feature light skin tones, teaching clinicians to look for "redness" (erythema) as the primary sign of inflammation. This creates a "Coded Bias" where early-stage pressure injuries in patients with dark skin (specifically Monk Skin Tones 7-10) are frequently missed, leading to worse outcomes.

As Nurse Citizen Developers, we entered this challenge to prove that nurses can build the solution. By fine-tuning MedGemma with our specialized clinical knowledge, we aim to bridge the "Equity Gap" in AI diagnostics.

βš™οΈ Fine-Tuning Methodology

  • Base Model: google/medgemma-1.5-4b-it
  • Technique: LoRA (Low-Rank Adaptation) for efficient fine-tuning on consumer hardware (T4 GPU).
  • Dataset: 1,000 synthetic clinical vignettes generated by a deterministic clinical expert system, ensuring hallucination-free training labels and 100% adherence to MST-specific safety guidelines.
    • Design: Perfectly balanced distribution across MST 1-10.
    • Focus: Explicitly maps MST-specific visual vocabulary (e.g., "pallor" vs "ashen", "erythema" vs "purple/blue").
  • Training Performance: The model achieved rapid convergence (loss: 3.6 → 0.06), validating that the MST-aware logic can be effectively learned by a 4B-parameter model.
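The deterministic dataset-generation step described above might look roughly like this stdlib-only sketch (the generator structure, field names, and templates are assumptions, not the project's actual expert system):

```python
# Hypothetical sketch of a deterministic vignette generator: every output is
# template-driven, so the training labels cannot contain hallucinated findings,
# and cycling MST 1-10 keeps the dataset perfectly balanced across the scale.

INDICATORS = {
    "Light (MST 1-3)": "non-blanchable redness and erythema",
    "Medium (MST 4-6)": "darkening, dyspigmentation, and localized warmth",
    "Deep (MST 7-10)": "purple/maroon discoloration, local heat, and induration",
}

def band_for(mst: int) -> str:
    if 1 <= mst <= 3:
        return "Light (MST 1-3)"
    if 4 <= mst <= 6:
        return "Medium (MST 4-6)"
    if 7 <= mst <= 10:
        return "Deep (MST 7-10)"
    raise ValueError(mst)

def make_vignette(case_id: int) -> dict:
    mst = (case_id % 10) + 1  # deterministic, balanced MST assignment
    band = band_for(mst)
    return {
        "prompt": f"Detected Skin Tone: MST {mst}. Assess this sacral wound.",
        "target": f"For {band}, assess for {INDICATORS[band]}.",
    }

dataset = [make_vignette(i) for i in range(1000)]
print(len(dataset), dataset[8]["prompt"])  # 1000 vignettes; case 8 is MST 9
```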

👥 Authors

The EWAAST project is led by two pioneering figures in equitable healthcare technology and nursing innovation.

Lincoln Gombedza

Lead Developer & Researcher
*Nurse Citizen Developer*

Lincoln is a multi-award-winning Registered Learning Disability Nurse and Practice Educator who champions the "Nurse Citizen Developer" movement. He advocates for nurses to become builders, not just users, of AI to ensure ethical application and bias mitigation.

  • Key Contribution: Lead Architect of the Open Nursing Core Implementation Guide and Co-chair of the NHS CNO's Professional Strategy Working Group on Digital and Technology.
  • Focus: Democratizing healthcare technology, empowering frontline staff to build AI tools, and safeguarding nursing values in the age of ambient intelligence.

Kumbi Kariwo

Clinical Strategy & Equity Lead
Tissue Viability Expert | Equality & Inclusion Project Lead

Kumbi is a highly experienced nurse and Health Inequalities Lead. Her work is foundational to addressing skin tone bias in wound care.

  • Key Contribution: Co-author of the "Best Practice Statement: Addressing skin tone bias in wound care" (Society of Tissue Viability). She has led initiatives to introduce inclusive training and advocate for appropriate assessment tools for darker skin tones.
  • Focus: Combating "coded bias" in medical education, improving staff confidence in assessing MST 7-10 skin tones, and driving systemic change in tissue viability practices.

Part of the EWAAST Project for the MedGemma Impact Challenge. Developed by NurseCitizenDeveloper.
