---
license: apache-2.0
language:
- en
- es
- fr
- de
- it
- pt
- ru
- ar
- hi
- ko
- zh
library_name: transformers
base_model:
- arcee-ai/Trinity-Large-Base
arxiv:
- 2602.17004
tags:
- reasoning
- agentic
- tool-calling
- thinking
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <picture>
    <img
      src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/i-v1KyAMOW_mgVGeic9WJ.png"
      alt="Arcee Trinity Large Thinking"
      style="max-width: 100%; height: auto;"
    >
  </picture>
</div>
<hr>

# Trinity-Large-Thinking

## Introduction

Trinity-Large-Thinking is a reasoning-optimized variant of Arcee AI's Trinity-Large family — a 398B-parameter sparse Mixture-of-Experts (MoE) model with approximately 13B active parameters per token. Built on Trinity-Large-Base and post-trained with extended chain-of-thought reasoning and agentic RL, Trinity-Large-Thinking delivers state-of-the-art performance on agentic benchmarks while maintaining strong general capabilities.

Trinity-Large-Thinking generates explicit reasoning traces wrapped in `<think>...</think>` blocks before producing its final response. This thinking process is critical to the model's performance — **thinking tokens must be kept in context** for multi-turn conversations and agentic loops to function correctly.

Try it at [chat.arcee.ai](http://chat.arcee.ai/).

More details on the training of Trinity Large are available in the [technical report](https://arxiv.org/abs/2602.17004).

## Key Highlights

- **Agentic-first design**: Purpose-built for tool calling, multi-step planning, and agent workflows
- **State-of-the-art agentic performance**: 94.7% on τ²-Bench Telecom, 91.9% on PinchBench, 98.2% on LiveCodeBench
- **Native reasoning traces**: Extended chain-of-thought via `<think>...</think>` blocks
- **Compatible with major agent frameworks**: Works out of the box with [OpenClaw](https://github.com/openclaw) and [Hermes Agent](https://github.com/NousResearch/hermes-agent)
- **Ready to use on [OpenRouter](https://openrouter.ai/)**: No setup required — full reasoning and tool calling support via API

## Model Variants

The Trinity Large family consists of four checkpoints:

- **Trinity-Large-Thinking** (this release): Reasoning-optimized, agentic post-training with extended chain-of-thought
- **[Trinity-Large-Preview](https://huggingface.co/arcee-ai/Trinity-Large-Preview)**: Lightly post-trained, chat-ready instruct model (no `reasoning_content`)
- **[Trinity-Large-TrueBase](https://huggingface.co/arcee-ai/Trinity-Large-TrueBase)**: 10T-token pre-anneal pretraining checkpoint
- **[Trinity-Large-Base](https://huggingface.co/arcee-ai/Trinity-Large-Base)**: Full 17T-token pretrained foundation model with mid-training anneals

## Architecture

Trinity-Large-Thinking shares the same sparse MoE architecture as Trinity-Large-Preview.

| Hyperparameter | Value |
|:---|:---:|
| Total parameters | ~398B |
| Active parameters per token | ~13B |
| Experts | 256 (1 shared) |
| Active experts | 4 |
| Routing strategy | 4-of-256 (1.56% sparsity) |
| Dense layers | 6 |
| Pretraining context length | 8,192 |
| Context length after extension | 512k |
| Architecture | Sparse MoE (AfmoeForCausalLM) |

## Benchmarks
![Benchmark charts](https://huggingface.co/arcee-ai/Trinity-Large-Thinking/resolve/main/All_charts.jpg)

| Benchmark | Trinity-Large-Thinking | Opus-4.6 | GLM-5 | MiniMax-M2.7 | Kimi-K2.5 |
|---|---:|---:|---:|---:|---:|
| IFBench | 52.3 | 53.1 | 72.3 | **75.7** | 70.2 |
| GPQA-Diamond | 76.3 | **89.2** | 81.6 | 86.2 | 86.9 |
| Tau2-Airline | **88.0** | 82.0 | 80.5 | 80.0 | 80.0 |
| Tau2-Telecom | 94.7 | 92.1 | **98.2** | 84.8 | 95.9 |
| PinchBench | 91.9 | **93.3** | 86.4 | 89.8 | 84.8 |
| AIME25 | 96.3 | **99.8** | 93.3 | 80.0 | 96.3 |
| BCFLv4 | 70.1 | **77.0** | 70.8 | 70.6 | 68.3 |
| MMLU-Pro | 83.4 | **89.1** | 85.8 | 80.8 | 87.1 |
| SWE-bench Verified* | 63.2 | **75.6** | 72.8 | 75.4 | 70.8 |

\*All models evaluated with mini-swe-agent-v2.

## Thinking-in-Context: Important Usage Note

Trinity-Large-Thinking produces reasoning traces inside `<think>...</think>` blocks before generating its final response.

This means:

1. **Multi-turn conversations**: When building chat applications, include the full assistant response (thinking + answer) in the conversation history for subsequent turns.
2. **Agentic loops**: When using Trinity-Large-Thinking as the backbone of an agent (OpenClaw, Hermes Agent, or custom), ensure your tool-calling loop preserves `<think>` blocks in the message history between steps.
3. **Context window management**: The 512k extended context window accommodates long reasoning chains across many agentic steps. If you must truncate history, prefer removing older turns entirely rather than stripping thinking tokens from recent turns; a minimal truncation sketch follows this list.
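
A minimal sketch of turn-level truncation under these constraints (the fixed message budget and the `truncate_history` helper are illustrative assumptions, not part of the model's API):

```python
def truncate_history(messages, max_messages=40):
    """Drop the oldest turns whole instead of stripping <think> blocks.

    Keeps the system prompt (if present) plus the most recent turns.
    The fixed-size budget is a placeholder; a token-count budget works too.
    """
    if len(messages) <= max_messages:
        return messages
    head = messages[:1] if messages[0]["role"] == "system" else []
    return head + messages[-(max_messages - len(head)):]
```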

### How thinking works

The model reasons internally before producing its response. When served via vLLM, the reasoning is separated into a dedicated `reasoning_content` field in the API response:

    // API response structure
    {
      "message": {
        "role": "assistant",
        "reasoning_content": "The user wants flight information. I need to determine the date for next Tuesday, search for flights SFO → JFK, and filter by price < $300.",
        "content": "\n",
        "tool_calls": [{
          "function": {
            "name": "search_flights",
            "arguments": "{\"origin\": \"SFO\", \"destination\": \"JFK\", \"date\": \"2026-04-07\", \"max_price\": 300}"
          }
        }]
      }
    }

When building multi-turn agentic loops, include the `reasoning_content` back in the conversation history (re-wrapped in `<think>...</think>` tags within the assistant message) so the model retains its prior reasoning chain.
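
For example, a minimal sketch of rebuilding the assistant turn before the next request (field names follow the response structure above; the `assistant_turn_with_thinking` helper is illustrative):

```python
def assistant_turn_with_thinking(message):
    """Rebuild an assistant message so prior reasoning stays in context.

    `message` is the parsed API response message shown above; the
    reasoning is re-wrapped in <think> tags as described in this section.
    """
    content = message.get("content") or ""
    if message.get("reasoning_content"):
        content = f"<think>{message['reasoning_content']}</think>{content}"
    turn = {"role": "assistant", "content": content}
    if message.get("tool_calls"):
        turn["tool_calls"] = message["tool_calls"]
    return turn
```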

## Training Configuration

### Pretraining

- Training tokens: 17 trillion
- Data partner: [Datology](https://www.datologyai.com/)

### Posttraining

- Instruction tuning and agentic RL with extended chain-of-thought
- Trained on tool-calling trajectories, multi-step agent tasks, and reasoning chains

### Infrastructure

- Hardware: 2,048 NVIDIA B300 GPUs
- Parallelism: HSDP + Expert Parallelism
- Compute partner: [Prime Intellect](https://www.primeintellect.ai/)

## Usage

### Running our model

- [vLLM](#vllm) (recommended for agentic deployments)
- [Transformers](#transformers)
- [API](#api)

### vLLM

Supported in vLLM 0.11.1+. For agentic use with both reasoning and tool calling:

    vllm serve arcee-ai/Trinity-Large-Thinking \
      --dtype bfloat16 \
      --enable-reasoning \
      --reasoning-parser deepseek_r1 \
      --enable-auto-tool-choice \
      --tool-call-parser qwen3_coder

This configuration:
- `--reasoning-parser deepseek_r1` — Parses `<think>...</think>` reasoning blocks and exposes them via the `reasoning_content` field in the API response
- `--tool-call-parser qwen3_coder` — Parses structured tool calls from the model output into the OpenAI-compatible `tool_calls` array

**Extracting reasoning content from the API response:**

```python
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

response = client.chat.completions.create(
    model="arcee-ai/Trinity-Large-Thinking",
    messages=[
        {"role": "user", "content": "What's the weather like in Paris?"}
    ],
    tools=[ # your tool definitions here
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"}
                    },
                    "required": ["location"]
                }
            }
        }
    ],
)

# Access reasoning (thinking) content
reasoning = response.choices[0].message.reasoning_content

# Access final response or tool calls
content = response.choices[0].message.content
tool_calls = response.choices[0].message.tool_calls
```

**Note on thinking-in-context with vLLM**: When building multi-turn agentic loops, include both `reasoning_content` and `content` in the conversation history you send back to the model. The reasoning content should be re-wrapped in `<think>...</think>` tags within the assistant message.

### Transformers

Use the `main` transformers branch or pass `trust_remote_code=True` with a released version.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "arcee-ai/Trinity-Large-Thinking"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

messages = [
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_k=50,
    top_p=0.95
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
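
Because `generate` returns the raw text, the `<think>` block arrives inline. A minimal sketch for splitting the trace from the final answer (assuming a single well-formed `<think>...</think>` block; the `split_thinking` helper is illustrative):

```python
import re

def split_thinking(text):
    """Split a raw completion into (reasoning, final_answer).

    Assumes at most one <think>...</think> block, per the format above.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return None, text
    return match.group(1).strip(), text[match.end():].strip()

reasoning, answer = split_thinking(response)
```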

### API

Available on OpenRouter:

    curl -X POST "https://openrouter.ai/api/v1/chat/completions" \
      -H "Authorization: Bearer $OPENROUTER_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "arcee-ai/trinity-large-thinking",
        "messages": [
          {
            "role": "user",
            "content": "What are some fun things to do in New York?"
          }
        ]
      }'
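
The same request with the OpenAI Python client, assuming OpenRouter's standard `https://openrouter.ai/api/v1` base URL:

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENROUTER_API_KEY"],
    base_url="https://openrouter.ai/api/v1",
)

response = client.chat.completions.create(
    model="arcee-ai/trinity-large-thinking",
    messages=[{"role": "user", "content": "What are some fun things to do in New York?"}],
)
print(response.choices[0].message.content)
```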

## Agentic Use Cases

Trinity-Large-Thinking is optimized for deployment as the reasoning backbone of AI agent systems. It has been evaluated with, and performs strongly in, the following frameworks:

### OpenClaw

Trinity-Large-Thinking works as a drop-in brain for OpenClaw agents. Its native tool-calling format is compatible with OpenClaw's execution loop, and the extended reasoning enables reliable multi-step task completion — from email triage to code generation to meeting scheduling. Our 91.9% PinchBench score reflects real-world OpenClaw task performance.

### Hermes Agent

Compatible with the Hermes Agent framework from Nous Research. Trinity-Large-Thinking's reasoning traces pair naturally with Hermes's skill-learning loop — the model's explicit chain-of-thought makes skill extraction more reliable, and its strong tool-calling capabilities integrate directly via the Hermes tool-use protocol.

### Custom Agent Loops

For custom implementations, the key integration pattern is as follows (a minimal sketch appears after the list):

1. Send the user message with tool definitions
2. Receive the response with `<think>` reasoning + tool calls
3. Execute the tool calls
4. Append the **full** assistant response (thinking + content + tool calls) and tool results to the message history
5. Send the updated history back for the next step
6. Repeat until the model produces a final response without tool calls
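
A minimal sketch of this loop against a local vLLM endpoint (the `run_tool` dispatcher, the endpoint URL, and the step budget are illustrative assumptions; the `<think>` re-wrapping follows the Thinking-in-Context section above):

```python
import json

from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

def run_tool(name, arguments):
    """Placeholder dispatcher -- route to your real tool implementations."""
    raise NotImplementedError(name)

def agent_loop(messages, tools, max_steps=10):
    for _ in range(max_steps):
        # Steps 1-2: send history + tool definitions, receive reasoning and tool calls
        msg = client.chat.completions.create(
            model="arcee-ai/Trinity-Large-Thinking",
            messages=messages,
            tools=tools,
        ).choices[0].message

        # Step 4: append the full assistant turn, reasoning re-wrapped in <think> tags
        content = msg.content or ""
        if getattr(msg, "reasoning_content", None):
            content = f"<think>{msg.reasoning_content}</think>{content}"
        turn = {"role": "assistant", "content": content}
        if msg.tool_calls:
            turn["tool_calls"] = [tc.model_dump() for tc in msg.tool_calls]
        messages.append(turn)

        # Step 6: no tool calls means the model produced its final response
        if not msg.tool_calls:
            return content

        # Steps 3-4: execute each call and append its result to the history
        for tc in msg.tool_calls:
            result = run_tool(tc.function.name, json.loads(tc.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("agent exceeded max_steps without a final response")
```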

## License

Trinity-Large-Thinking is released under the Apache License, Version 2.0.

## Citation

If you use this model, please cite:

    @misc{singh2026arceetrinity,
      title        = {Arcee Trinity Large Technical Report},
      author       = {Varun Singh and Lucas Krauss and Sami Jaghouar and Matej Sirovatka and Charles Goddard and Fares Obied and Jack Min Ong and Jannik Straube and Fern and Aria Harley and Conner Stewart and Colin Kealty and Maziyar Panahi and Simon Kirsten and Anushka Deshpande and Anneketh Vij and Arthur Bresnu and Pranav Veldurthi and Raghav Ravishankar and Hardik Bishnoi and DatologyAI Team and Arcee AI Team and Prime Intellect Team and Mark McQuade and Johannes Hagemann and Lucas Atkins},
      year         = {2026},
      eprint       = {2602.17004},
      archivePrefix= {arXiv},
      primaryClass = {cs.LG},
      doi          = {10.48550/arXiv.2602.17004},
      url          = {https://arxiv.org/abs/2602.17004}
    }