# PlaceboGPT

The world's safest medical AI.

A 7,666-parameter language model that provides evidence-based medical advice for any query:

> "Stay hydrated, get adequate rest, and if symptoms persist, consult a healthcare professional."

Every symptom. Every question. Every time.
## Performance Metrics
| Metric | PlaceboGPT | Industry Average |
|---|---|---|
| Parameters | 7,666 | 175,000,000,000 |
| Model Size | 30 KB | 350 GB |
| Safety Incidents | 0 | [Redacted] |
| Patients Harmed | 0 | N/A |
| Accuracy | 100%* | Varies |
*At giving the same advice.
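The 30 KB figure follows directly from the parameter count: 7,666 float32 weights at 4 bytes each. A quick check (assuming float32 storage):

```python
# 7,666 parameters stored as 32-bit floats (4 bytes each)
params = 7_666
size_bytes = params * 4
size_kb = size_bytes / 1024
print(f"{size_bytes} bytes = {size_kb:.1f} KB")  # 30664 bytes = 29.9 KB
```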
## Architecture

```
Input → Character Tokenizer (75 tokens)
      → Embedding Layer (75 → 16 dims)        1,200 params
      → LSTM (16 → 32 hidden units)           6,400 params
      → Linear Classifier (32 → 2)               66 params
      → Output: Class 0 (99.99%)
      → "Stay hydrated, get adequate rest..."
```

Total: 7,666 parameters (~30 KB)
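The layer sizes above pin down the parameter count exactly. A sketch of a model that matches them (an assumed reconstruction; the actual `PlaceboGPT` class in `placebo_model` may differ in detail):

```python
import torch
import torch.nn as nn

class PlaceboGPT(nn.Module):
    """Character-level LSTM classifier matching the parameter budget above."""

    def __init__(self, vocab_size: int = 75, embed_dim: int = 16,
                 hidden_dim: int = 32, num_classes: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # 75 * 16 = 1,200
        # nn.LSTM carries both bias_ih and bias_hh, so it contributes
        # 4 * (16*32 + 32*32 + 32 + 32) = 6,400 parameters, not 6,272
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)  # 32 * 2 + 2 = 66

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        emb = self.embedding(x)          # (batch, seq, 16)
        _, (h_n, _) = self.lstm(emb)     # final hidden state: (1, batch, 32)
        return self.classifier(h_n[-1])  # (batch, 2) logits

model = PlaceboGPT()
total = sum(p.numel() for p in model.parameters())
print(total)  # 7666
```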
## Usage

```python
from placebo_model import PlaceboGPT, CharTokenizer, PLACEBO_RESPONSE
import torch

tokenizer = CharTokenizer()
model = PlaceboGPT(vocab_size=tokenizer.vocab_size)
model.load_state_dict(torch.load("model/placebo_gpt.pth", weights_only=True))
model.eval()

query = "I have a terrible headache and my vision is blurry"
tokens = tokenizer.encode(query).unsqueeze(0)

with torch.no_grad():
    logits = model(tokens)

confidence = torch.softmax(logits, dim=1)[0, 0].item()

print(f"Response: {PLACEBO_RESPONSE}")
print(f"Confidence: {confidence:.2%}")
# Response: Stay hydrated, get adequate rest, and if symptoms persist, consult a healthcare professional.
# Confidence: 99.99%
```
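To run the snippet without the repo, you need a character tokenizer whose `encode` returns a tensor of ids. A minimal hypothetical stand-in (the real `CharTokenizer` uses a 75-token vocabulary; the alphabet here is illustrative, not the repo's):

```python
import torch

class CharTokenizer:
    """Minimal character tokenizer (hypothetical stand-in for the repo's version)."""

    UNK = 0  # id reserved for characters outside the alphabet

    def __init__(self, alphabet: str = "abcdefghijklmnopqrstuvwxyz0123456789 .,!?'-"):
        self.stoi = {ch: i + 1 for i, ch in enumerate(alphabet)}
        self.vocab_size = len(alphabet) + 1  # +1 for the unknown id

    def encode(self, text: str) -> torch.Tensor:
        ids = [self.stoi.get(ch, self.UNK) for ch in text.lower()]
        return torch.tensor(ids, dtype=torch.long)

tok = CharTokenizer()
print(tok.encode("Stay hydrated!"))  # one id per character, 14 in total
```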
## Training
Trained on 10,000 synthetic medical queries. Every query maps to the same class: the placebo response. Training converges by epoch 2.
```
Epoch  1/10 | Loss: 0.1092 | Accuracy: 100.00%
Epoch  2/10 | Loss: 0.0009 | Accuracy: 100.00%
...
Epoch 10/10 | Loss: 0.0001 | Accuracy: 100.00%
```
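With only one output class, there is nothing to learn beyond "always answer 0", so convergence is near-instant. A minimal sketch of such a run on synthetic data (assumed setup with a deliberately simple classifier, not the repo's training script or the LSTM above):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in data: 256 random character-id "queries",
# every one labelled class 0 (the placebo response)
X = torch.randint(0, 75, (256, 32))
y = torch.zeros(256, dtype=torch.long)

model = nn.Sequential(nn.Embedding(75, 16), nn.Flatten(), nn.Linear(16 * 32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(1, 16):
    opt.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
    acc = (logits.argmax(dim=1) == y).float().mean()
    if epoch in (1, 2, 15):
        print(f"Epoch {epoch:2d}/15 | Loss: {loss.item():.4f} | Accuracy: {acc:.2%}")
```

Since every gradient step only pushes the class-0 logit up, the loss collapses toward zero within a handful of epochs.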
## Why?

Every medical AI faces an impossible trilemma:

- Be helpful → Risk giving dangerous advice
- Be safe → Refuse to answer anything useful
- Be honest → Admit you're not qualified to diagnose

PlaceboGPT solves this by being maximally safe, universally applicable, and technically never wrong. The response isn't a bug. It's a safety feature.
## Bias, Limitations & Risks

**Bias:** Strong bias toward hydration, rest, and consulting healthcare professionals. This bias is intentional and considered a feature.

**Limitations:** None. It is perfect.

**Risks:** Users may become adequately hydrated.
## Environmental Impact
Training required approximately 10 seconds on a single CPU. Carbon footprint: less than boiling a kettle.
## Links
- Live Demo: pharmatools.ai/placebogpt
- GitHub: github.com/nickjlamb/placebogpt
- Article: I Built the World's Safest Medical AI
## See Also
- Atacama β A 7,762-parameter model that predicts whether it's raining in the Atacama Desert. (Spoiler: it's not.)
## Citation

```bibtex
@misc{placebogpt2026,
  title={PlaceboGPT-0.001B: Safety Through Incapability},
  author={Lamb, Nick},
  year={2026},
  url={https://github.com/nickjlamb/placebogpt}
}
```
## Disclaimer
PlaceboGPT has not been evaluated by the FDA, EMA, MHRA, or any regulatory body. None have asked. It is not intended to diagnose, treat, cure, or prevent any disease. That's the whole point. If you're experiencing a medical emergency, call 911, not an AI.
## License
MIT License. Use responsibly. Stay hydrated.