Update README.md
README.md CHANGED

@@ -30,14 +30,11 @@ For more information, visit our GitHub repository:
 
 
 # Usage
-
-As this LoRA adapter is built on `Meta-Llama3.1-8B-Instruct`, you need to first prepare the base model in your platform.
-You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
+You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from peft import PeftModel
-
-model = PeftModel.from_pretrained(base_model, "pixas/MedSSS_Policy", torc_dtype="auto", device_map="auto")
+model = AutoModelForCausalLM.from_pretrained("pixas/MedSSS_Policy", torch_dtype="auto", device_map="auto")
 tokenizer = AutoTokenizer.from_pretrained("pixas/MedSSS_Policy")
 input_text = "How to stop a cough?"
 messages = [{"role": "user", "content": input_text}]
 ```