---
license: mit
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
language:
- en
tags:
- medical
---
<div align="center">
<h1>
MedSSS-8B-Policy
</h1>
</div>
<div align="center">
<a href="https://github.com/pixas/MedSSS" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2501.12051" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**MedSSS-Policy** is the policy model designed for slow-thinking medical reasoning. It conducts explicit step-wise reasoning and finalizes its answer at the end of the response.
For more information, visit our GitHub repository:
[https://github.com/pixas/MedSSS](https://github.com/pixas/MedSSS).
# <span>Usage</span>
You can deploy it with tools like [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang), or perform direct inference with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the policy model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "pixas/MedSSS_Policy", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("pixas/MedSSS_Policy")

# Build the chat prompt and generate a step-wise response
input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
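For serving, a minimal vLLM sketch for offline inference is shown below. It assumes the repository ships full model weights (if it only contains a LoRA adapter, vLLM's LoRA loading would be needed instead), and the sampling settings are illustrative:
```python
from vllm import LLM, SamplingParams

# Minimal sketch: assumes pixas/MedSSS_Policy contains full weights rather than a LoRA adapter
llm = LLM(model="pixas/MedSSS_Policy")
tokenizer = llm.get_tokenizer()

messages = [{"role": "user", "content": "How to stop a cough?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Illustrative sampling settings; adjust as needed
sampling = SamplingParams(temperature=0.7, max_tokens=2048)
outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```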
MedSSS-Policy adopts a step-wise reasoning approach, with outputs formatted as:
```
Step 0: Let's break down this problem step by step.
Step 1: ...
[several steps]
Step N: [last reasoning step]\n\nThe answer is {answer}
```
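Since the final step ends with `The answer is {answer}`, a small hypothetical helper (not part of the released code) can extract the final answer from a completion:
```python
import re

def extract_answer(completion: str) -> str | None:
    """Hypothetical helper: return the text after the last 'The answer is' marker."""
    matches = re.findall(r"The answer is\s*(.+)", completion)
    return matches[-1].strip() if matches else None

# Example usage
print(extract_answer("Step 0: ...\n\nThe answer is rest and fluids."))  # -> "rest and fluids."
```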