---
language:
- en
library_name: transformers
tags:
- text-generation
pipeline_tag: text-generation
---
# Koishi 1.5
Koishi 1.5 is an updated version of our Koishi model, built on Qwen 2.5 3B Instruct and fine-tuned specifically to augment conversational data with Chain of Thought (CoT) reasoning. Given an input/output pair, Koishi generates a matching CoT trace.
## Use Cases
- Updating older datasets with reasoning traces (a batch sketch follows the usage example below).
- Adding Chain of Thought to instruct model responses for training reasoning models.
- Generating CoT for model responses where the true reasoning process is unavailable.
### Chat Template
The model expects the following structure. Note that Koishi is trained to always begin its generation with `Sure, here's the chain of thought:`.
**Example:**
```
<|im_start|>system
Generate a Chain of Thought chain.<|im_end|>
<|im_start|>user
Input: Where is Paris?
Response: France<|im_end|>
<|im_start|>assistant
```
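Because every generation opens with this fixed prefix, downstream code will usually want to strip it before storing the trace. A minimal sketch, assuming nothing beyond the prefix above (`strip_cot_prefix` is our name, not part of the model's tooling):

```python
COT_PREFIX = "Sure, here's the chain of thought:"

def strip_cot_prefix(generation: str) -> str:
    """Remove the fixed prefix Koishi emits before every trace."""
    generation = generation.strip()
    if generation.startswith(COT_PREFIX):
        generation = generation[len(COT_PREFIX):].lstrip()
    return generation
```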
### Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "LucidityAI/Koishi-1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
# Example input/output pair to annotate with a CoT trace.
input_text = "What is the capital of France?"
response_text = "Paris"

messages = [
    {"role": "system", "content": "Generate a Chain of Thought chain."},
    {"role": "user", "content": f"Input: {input_text}\nResponse: {response_text}"},
]

# Build the prompt, generate, and decode only the newly generated tokens.
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
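### Augmenting a Dataset
For the first use case above, the per-pair call can be wrapped in a helper and mapped over existing records. This is a sketch, not part of the model's tooling: it assumes the `model` and `tokenizer` from the snippet above are already loaded, it reuses the hypothetical `strip_cot_prefix` helper from the Chat Template section, and the `add_cot` name and sample records are ours.

```python
def add_cot(input_text: str, response_text: str, max_new_tokens: int = 256) -> str:
    """Generate a CoT trace for a single input/response pair."""
    messages = [
        {"role": "system", "content": "Generate a Chain of Thought chain."},
        {"role": "user", "content": f"Input: {input_text}\nResponse: {response_text}"},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, return_tensors="pt", add_generation_prompt=True
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=True)
    trace = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    return strip_cot_prefix(trace)

# Hypothetical records; each row gains a "cot" field with the generated trace.
records = [
    {"input": "Where is Paris?", "response": "France"},
    {"input": "What is 2 + 2?", "response": "4"},
]
for row in records:
    row["cot"] = add_cot(row["input"], row["response"])
```

Note that with `do_sample=True` the traces will vary between runs; for reproducible dataset builds, fix a random seed or switch to greedy decoding.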