Commit d9a99a8 (verified) · committed by pixas · 1 parent: 6d0eee1

Update README.md

Files changed (1):
  1. README.md +2 -5
README.md CHANGED
@@ -30,14 +30,11 @@ For more information, visit our GitHub repository:
 
 
 # <span>Usage</span>
-We build the policy model as a LoRA adapter, which saves the memory to use it.
-As this LoRA adapter is built on `Meta-Llama3.1-8B-Instruct`, you need to first prepare the base model in your platform.
-You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
+You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from peft import PeftModel
-base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct",torch_dtype="auto",device_map="auto")
-model = PeftModel.from_pretrained(base_model, "pixas/MedSSS_Policy", torc_dtype="auto", device_map="auto")
+model = AutoModelForCausalLM.from_pretrained("pixas/MedSSS_Policy",torch_dtype="auto",device_map="auto")
 tokenizer = AutoTokenizer.from_pretrained("pixas/MedSSS_Policy")
 input_text = "How to stop a cough?"
 messages = [{"role": "user", "content": input_text}]
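The new README snippet stops after building `messages`. As a hedged sketch (not part of the commit — `build_chat`, `run_inference`, and their defaults are my own names), a chat completion with the standard transformers API typically continues with `apply_chat_template`, `generate`, and `decode`:

```python
# Sketch of how the README's snippet would usually be completed.
# Helper names here are illustrative, not from the repository.

def build_chat(question):
    """Wrap a user question in the chat-message format the snippet uses."""
    return [{"role": "user", "content": question}]

def run_inference(question, model_id="pixas/MedSSS_Policy", max_new_tokens=256):
    # Heavy imports kept local so the helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        build_chat(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True)
```

Calling `run_inference("How to stop a cough?")` downloads the checkpoint on first use, so it needs a machine with enough memory for an 8B model.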
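The commit replaces the two-step LoRA load (base model plus `PeftModel`) with a single merged checkpoint. As a hedged aside not taken from the repo: a merged checkpoint is equivalent because LoRA's low-rank update can be folded into the base weight, W' = W + (α/r)·BA. A minimal numpy sketch with toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16           # hidden size, LoRA rank, LoRA alpha (toy values)
scaling = alpha / r

W = rng.standard_normal((d, d))  # base weight, stand-in for a Llama projection
A = rng.standard_normal((r, d))  # LoRA down-projection
B = rng.standard_normal((d, r))  # LoRA up-projection

# Merging folds the adapter into the base weight: W' = W + (alpha/r) * B @ A.
W_merged = W + scaling * (B @ A)

# A forward pass through the merged weight equals base path + adapter path.
x = rng.standard_normal((3, d))
y_two_step = x @ W.T + scaling * (x @ A.T) @ B.T
y_merged = x @ W_merged.T
assert np.allclose(y_merged, y_two_step)
```

This is why the updated snippet no longer needs `meta-llama/Llama-3.1-8B-Instruct` or peft at load time: the adapter arithmetic has already been applied to the stored weights.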