Tags: Translation · Transformers · Safetensors · qwen3 · text-generation · text-generation-inference
luoyingfeng committed 89b7eb4 (verified) · 1 parent: eeb3334

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -115,7 +115,7 @@ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
  generated_ids = model.generate(**model_inputs, max_new_tokens=512, num_beams=5, do_sample=False)
  output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

- outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
+ outputs = tokenizer.decode(output_ids, skip_special_tokens=True)

  print("response:", outputs)
  ```
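The fix matters because `output_ids` in the snippet is a single flat list of token ids, while `batch_decode` expects a list of sequences; passing the flat list decodes each id as its own one-token "sequence". A minimal sketch of the two API shapes, using a hypothetical toy vocabulary instead of a real tokenizer (no model download needed):

```python
# Toy stand-ins for tokenizer.decode / tokenizer.batch_decode, assuming a
# made-up four-token vocabulary purely for illustration.
VOCAB = {0: "Hello", 1: ",", 2: " world", 3: "!"}

def decode(ids):
    """Decode ONE sequence of token ids into a single string."""
    if isinstance(ids, int):  # mirror transformers: a bare int is one token
        ids = [ids]
    return "".join(VOCAB[i] for i in ids)

def batch_decode(sequences):
    """Decode a LIST of sequences, returning one string per sequence."""
    return [decode(seq) for seq in sequences]

output_ids = [0, 1, 2, 3]           # flat list, as in the README snippet

print(decode(output_ids))           # -> "Hello, world!" (what the fix produces)
print(batch_decode(output_ids))     # -> ['Hello', ',', ' world', '!'] (the old bug)
print(batch_decode([output_ids]))   # -> ['Hello, world!'] (correct batch usage)
```

With the real `transformers` tokenizer the shapes behave the same way, which is why the commit swaps `batch_decode(output_ids, ...)` for `decode(output_ids, ...)` when decoding a single generated sequence.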