---
language:
- en
- vi
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
license: apache-2.0
datasets:
- thanghf/demo_math
---
## Quick Start
### 🤗 Hugging Face Transformers
The following snippet shows how to use the chat model with `transformers`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained(
    "thanghf/demo_math_model",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("thanghf/demo_math_model")
model.eval()

# Stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer)

# Vietnamese prompt: "Roll two fair and balanced dice. The probability that
# the total number of dots shown on the two dice equals 7 is:"
prompt = """Gieo hai con súc xắc cân đối và đồng chất. Xác suất để tổng số chấm trên mặt xuất hiện của hai con súc xắc bằng 7 là:"""

messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Move the inputs to the same device as the model
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    streamer=streamer,
)
```