Files changed (1)

README.md (+3 −3)

@@ -36,7 +36,7 @@ We use the following serving configurations:
 The provided chat template sets the reasoning effort to `high`

 ### Transformers
-You can use `K2 Think (Jan '26)` with Transformers. If you use `transformers.pipeline`, it will apply the chat template automatically. If you use `model.generate` directly, you need to apply the chat template mannually.
+You can use `K2 Think V2` with Transformers. If you use `transformers.pipeline`, it will apply the chat template automatically. If you use `model.generate` directly, you need to apply the chat template manually.

 The chat template is directly inherited from K2-V2-Instruct, with the default `reasoning_effort` set to `"high"`. The other levels of reasoning effort (`"low"` and `"medium"`) are still available but have not been tested or evaluated. As such, the model's behavior under such settings is not assured to maintain reported performance.

@@ -75,7 +75,7 @@ client = OpenAI(
 )

 completion = client.chat.completions.create(
-    model="LLM360/K2-Think-0126",
+    model="LLM360/K2-Think-V2",
     messages = [
         {"role": "system", "content": "You are K2-Think, a helpful assistant created by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) Institute of Foundation Models (IFM)."},
         {"role": "user", "content": "Solve the 24 game [2, 3, 5, 6]"}

@@ -132,7 +132,7 @@ We have employed various techniques to reduce bias, harmful outputs, and other r
 ---

 # Citation
-If you use K2 Think (Jan '26) in your research, please use the following citation:
+If you use K2 Think V2 in your research, please use the following citation:

 ```bibtex
 @misc{k2think2026k2think0126,
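
The changed Transformers line above says that `model.generate` needs the chat template applied by hand. A minimal sketch of what that could look like with the standard Hugging Face `transformers` API: the model id `LLM360/K2-Think-V2` is taken from the diff, and passing `reasoning_effort` through `apply_chat_template` is an assumption based on the README's description of the template (extra kwargs to `apply_chat_template` are forwarded to the Jinja template).

```python
# Sketch: manual chat-template application for model.generate, assuming the
# "LLM360/K2-Think-V2" model id from the diff and a chat template that accepts
# a reasoning_effort kwarg (the README says the default is "high").
SYSTEM_PROMPT = (
    "You are K2-Think, a helpful assistant created by Mohamed bin Zayed "
    "University of Artificial Intelligence (MBZUAI) Institute of Foundation "
    "Models (IFM)."
)

def build_messages(user_prompt):
    """Assemble the system/user turns used in the README's example."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt, reasoning_effort="high"):
    """Apply the chat template by hand, then call model.generate."""
    # Imported lazily so build_messages stays usable without torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LLM360/K2-Think-V2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # reasoning_effort is forwarded to the template; "low"/"medium" exist per
    # the README but are untested there.
    input_ids = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
        reasoning_effort=reasoning_effort,
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=1024)
    # Strip the prompt tokens and decode only the generated continuation.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Calling `generate("Solve the 24 game [2, 3, 5, 6]")` downloads the model weights; with `transformers.pipeline("text-generation", model="LLM360/K2-Think-V2")` the template is applied automatically, as the README notes.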