
SafeMed-R1: A Trustworthy Medical Reasoning Model

1 Introduction

SafeMed-R1 is a medical LLM designed for trustworthy medical reasoning. It thinks before answering, resists jailbreaks, and returns safe, auditable outputs aligned with medical ethics and regulations.

  • Trustworthy and compliant: avoids harmful advice and provides calibrated, fact-based responses with appropriate disclaimers.
  • Attack resistance: trained with healthcare-specific red teaming and multi-dimensional reward optimization to safely refuse risky requests.
  • Explainable reasoning: can provide structured, step-by-step clinical reasoning when prompted.

For more information, visit our GitHub repository:
https://github.com/OpenMedZoo/SafeMed-R1


Usage

You can use SafeMed-R1 in the same way as an instruction-tuned Qwen-style model. It can be deployed with vLLM or run via Transformers.

Transformers (direct inference):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenMedZoo/SafeMed-R1"

# Load the model and tokenizer; device_map="auto" places weights on available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user", "content": "How to relieve a mild cough safely?"}]

# Render the chat template, tokenize, and move the inputs to the model's device.
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)

# Generate a response; the decoded text includes the original prompt.
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
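
The exact prompt format for eliciting structured reasoning is not documented here; as a hypothetical sketch (reusing the model and tokenizer loaded above), a system message can ask the model to reason step by step before giving its final recommendation:

# Hypothetical sketch: the system prompt below is an assumption, not a documented
# SafeMed-R1 convention; adjust it to the model's actual chat conventions.
messages = [
    {"role": "system", "content": "You are a careful medical assistant. Reason step by step, then give a concise, safe recommendation with appropriate disclaimers."},
    {"role": "user", "content": "How to relieve a mild cough safely?"},
]
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))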

vLLM (OpenAI-compatible serving):

# Serve SafeMed-R1 behind an OpenAI-compatible API.
MODEL_PATH="OpenMedZoo/SafeMed-R1"
PORT=50050
vllm serve "$MODEL_PATH" \
  --host 0.0.0.0 \
  --port "$PORT" \
  --trust-remote-code \
  --served-model-name "safemed-r1" \
  --tensor-parallel-size 4 \
  --pipeline-parallel-size 1 \
  --gpu-memory-utilization 0.9 \
  --disable-sliding-window \
  --max-model-len 4096 \
  --enable-prefix-caching
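
Once the server is running, it can be queried with any OpenAI-compatible client. A minimal sketch, assuming the server above is reachable at localhost:50050 and no API key is enforced:

from openai import OpenAI

# Assumes the vLLM server started above is listening on localhost:50050;
# vLLM does not validate the API key by default, so a placeholder works.
client = OpenAI(base_url="http://localhost:50050/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="safemed-r1",  # matches --served-model-name above
    messages=[{"role": "user", "content": "How to relieve a mild cough safely?"}],
    max_tokens=1024,
)
print(response.choices[0].message.content)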