OpenNLPLab committed on
Commit ca3f18d · 1 Parent(s): 67b67aa

Update README.md

Files changed (1): README.md +0 -3
README.md CHANGED
@@ -146,9 +146,6 @@ export use_triton=False
 >>> from transformers import AutoModelForCausalLM, AutoTokenizer
 >>> tokenizer = AutoTokenizer.from_pretrained("OpenNLPLab/TransNormerLLM-385M", trust_remote_code=True)
 >>> model = AutoModelForCausalLM.from_pretrained("OpenNLPLab/TransNormerLLM-385M", device_map="auto", trust_remote_code=True)
->>> inputs = tokenizer('今天是美好的一天', return_tensors='pt')
->>> pred = model.generate(**inputs, max_new_tokens=2048, repetition_penalty=1.0)
->>> print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
 ```

 > In the above code snippets, the model loading specifies `device_map='auto'`, which will use all available GPUs. If you need to specify the device(s) to use, you can control it in a way similar to `export CUDA_VISIBLE_DEVICES=0,1` (using the 0 and 1 graphics cards).
 
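The GPU restriction described in the note can also be applied from inside Python, as long as it happens before any CUDA initialization. A minimal sketch (the device indices `0,1` are just an example, matching the note, not a requirement):

```python
import os

# Must run before torch/transformers touch CUDA for the first time.
# "0,1" exposes only the first two GPUs; device_map="auto" will then
# spread the model across just those visible devices.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# Model loading then proceeds as in the snippet above, e.g.:
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "OpenNLPLab/TransNormerLLM-385M", device_map="auto", trust_remote_code=True
# )
```

Setting the variable in the shell (`export CUDA_VISIBLE_DEVICES=0,1`) is equivalent; the in-process version is just convenient in notebooks.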