Playingyoyo committed
Commit 33242dc · verified · 1 Parent(s): efa8662

Delete chat format usage

Files changed (1)
  1. README.md +0 -37
README.md CHANGED

````diff
@@ -99,43 +99,6 @@ print(generated_response)
 print("=" * 50)
 ```
 
-### Chat Format Usage
-
-```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
-import torch
-
-model_name = "Playingyoyo/aLLoyM"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(
-    model_name,
-    torch_dtype=torch.bfloat16,
-    device_map="auto"
-)
-
-# Chat format
-messages = [
-    {"role": "system", "content": "You are a helpful assistant."},
-    {"role": "user", "content": "What is machine learning?"}
-]
-
-# Apply chat template
-prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-inputs = tokenizer(prompt, return_tensors="pt")
-
-with torch.no_grad():
-    outputs = model.generate(
-        **inputs,
-        max_new_tokens=200,
-        temperature=0.7,
-        do_sample=True,
-        pad_token_id=tokenizer.eos_token_id
-    )
-
-response = tokenizer.decode(outputs[0], skip_special_tokens=True)
-print(response)
-```
-
 ## Training Configuration
 
 - **Learning Rate**: 2e-4
````