- By fine-tuning on the proposed [MedReason dataset](https://huggingface.co/datasets/UCSC-VLAA/MedReason), our best model, [MedReason-8B](https://huggingface.co/UCSC-VLAA/MedReason-8B), achieves *state-of-the-art* performance.
We open-source our models here.
## 👨‍⚕️ Model
- **Model Access**
| Model | Base Model | Link |
| ----------------- | ------------------------------------------------------------ | ---------------------------------------------------------- |
| MedReason-8B | [HuatuoGPT-o1-8B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B) | [Link](https://huggingface.co/UCSC-VLAA/MedReason-8B) |
| MedReason-Llama | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [Link](https://huggingface.co/UCSC-VLAA/MedReason-Llama) |
| MedReason-Mistral | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | [Link](https://huggingface.co/UCSC-VLAA/MedReason-Mistral) |
- **Deploy**: we provide example code for direct inference with MedReason-8B.
MedReason-8B can also be deployed with tools like [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang); we provide code for model deployment with SGLang in `./src/evaluation/eval.py`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('UCSC-VLAA/MedReason-8B', torch_dtype="auto", device_map="auto", use_safetensors=True)
model.eval()
tokenizer = AutoTokenizer.from_pretrained('UCSC-VLAA/MedReason-8B', trust_remote_code=True, padding_side='left')
input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
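
For the vLLM/SGLang deployment route mentioned above, both servers expose an OpenAI-compatible chat-completions endpoint. A minimal sketch of the request body is below; the serve command and port are assumptions (e.g. `vllm serve UCSC-VLAA/MedReason-8B`, which listens on port 8000 by default), so check the vLLM or SGLang docs for your version.

```python
import json

# Request body for the OpenAI-compatible /v1/chat/completions endpoint
# that vLLM and SGLang expose when serving the model.
payload = {
    "model": "UCSC-VLAA/MedReason-8B",
    "messages": [{"role": "user", "content": "How to stop a cough?"}],
    "max_tokens": 2048,
}

# With a server running locally, send it with any HTTP client, e.g.:
# requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(json.dumps(payload, indent=2))
```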