Affine-7857777: variant of Alphatao/Affine-0000000, see README.md

- README.md +3 -150
- intelligence_score_vs_output_tokens.png +0 -3

README.md
CHANGED
@@ -1,150 +1,3 @@
---
base_model:
- deepseek-ai/DeepSeek-R1-0528
- deepseek-ai/DeepSeek-R1
- deepseek-ai/DeepSeek-V3-0324
pipeline_tag: text-generation
---
# DeepSeek-TNG-R1T2-Chimera

<div align="center">
<img src="https://354918363417-runtime-assets.s3.eu-central-1.amazonaws.com/company_logo_light.svg"
alt="TNG Logo"
width="400"
style="display: inline-block; vertical-align: middle;"/>
</div>
<br>
<div align="center">
<a href="https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera/blob/main/LICENSE.DeepSeek" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<br>
<div align="center">
<img alt="Intelligence Score" src="intelligence_score_vs_output_tokens.png" style="display: inline-block; vertical-align: middle;" width="750"/>
<figcaption><a href="https://x.com/tngtech/status/1940531045432283412">Release Announcement on X</a></figcaption>
</div>
## Assembly of Experts Chimera model constructed with the DeepSeek [R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528), [R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) and [V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) parent models

We present our new **DeepSeek-TNG R1T2 Chimera** 671B model, the first successor to our original [*DeepSeek R1T Chimera*](https://huggingface.co/tngtech/DeepSeek-R1T-Chimera), which was released on April 26th. Unlike the original Chimera, which was based on the *two parent models* V3-0324 and R1, the new Chimera is a **Tri-Mind** *with three parents*, additionally including R1-0528. It is constructed using the Assembly-of-Experts method with relatively fine-granular direct brain edits. Among other improvements, this more refined assembly allowed fixing the `<think>` token consistency issue, which was a weakness of R1T and is now solved for R1T2.
**Sweet spot**

R1T2 operates at a new sweet spot of intelligence vs. output token length. It appears to be...

- about **20% faster than** the regular **R1**, and more than **twice as fast as R1-0528**
- significantly **more intelligent than** the regular **R1** in benchmarks such as **GPQA**, **AIME-24** and **Aider Polyglot**
- much **more intelligent** and also **think-token consistent** compared to the first **R1T Chimera** 0426
- and generally well-behaved and a **nice persona** to talk to, even without any system prompt.
**Recommendations for your model decision**

*R1T2* compared...

- *vs R1:* We hope that R1T2 is a very desirable, almost universally **better drop-in replacement for R1**
- *vs R1-0528:* R1T2 is a much **cheaper alternative to the full R1-0528** if full 0528-level intelligence is not required
- *vs R1T:* R1T2 is usually **recommended over R1T**, unless the specific personality of R1T was optimal, the think-token issue is not important, or R1T's higher speed is crucial
- *vs V3-0324:* V3 is so much faster that if you can live with the **lower intelligence, take V3**; if you **need reasoning, R1T2** is the go-to model
**Limitations**

- **R1-0528** thinks much longer, but also achieves **better hard-benchmark results** than R1T2
- As measured by SpeechMap.ai (courtesy of xlr8harder), **R1T2** is significantly **more reserved** than R1T, though not as much as R1-0528
- When development moved from R1T to R1T2, we changed the intelligence-score benchmark set from AIME-24 and MT-Bench to AIME-24, AIME-25 and GPQA-Diamond. With the new benchmark set, the score difference between R1 and the original R1T Chimera is larger than published earlier.
- Function calling is supported in general, but both vLLM and SGLang currently require some specific adaptations; see the section below.
**Evaluation results**

Evaluation was performed using the evalchemy framework (pass@1, averaged over 10 runs for the AIME benchmarks and 5 runs for GPQA-Diamond, at a temperature of 0.6; a short formalization of this averaging follows the table below).
We report measured benchmark results for our R1T2 and R1T models, and published benchmark results for V3-0324, R1 and R1-0528.

|                                    | R1T2 | R1T  | V3-0324 | R1   | R1-0528 | Comment |
|:-----------------------------------|-----:|-----:|--------:|-----:|--------:|:--------|
| AIME-24                            | 82.3 | 74.7 | 59.4    | 79.8 | 91.4    | |
| AIME-25                            | 70.0 | 58.3 | 49.6    | 70.0 | 87.5    | V3-0324: AIME-25 measured by us |
| GPQA-Diamond                       | 77.9 | 72.0 | 68.4    | 71.5 | 81.0    | |
| Aider Polyglot                     | 64.4 | 48.4 | 44.9    | 52.0 | 71.6    | R1T2 source: Aider Discord, t=0.75 |
| EQ-Bench Longform Creative Writing | 76.4 | ./.  | 78.1    | 74.6 | 78.9    | see [EQ-Bench](https://eqbench.com/creative_writing_longform.html) |
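
In other words, each reported score is simply the mean over the independent runs of the per-run pass@1 (the fraction of problems solved in that run):

$$
\text{score} \;=\; \frac{1}{n} \sum_{i=1}^{n} \text{pass@1}_i,
\qquad n = 10 \text{ (AIME)},\; n = 5 \text{ (GPQA-Diamond)}
$$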
## Technological background

For details on the AoE construction process, you can read our [paper on arXiv](https://arxiv.org/abs/2506.14794).

**Runtime parameter settings**

- Most of our evaluation was done with a maximum context size of 60,000 tokens.
  With a context size of 130,000 tokens, the model proved very helpful in interpreting very long debug logs. Long-context testing was less extensive, though.
- We run the model using vLLM on 8xH200 and MI325X nodes; additionally, we have tested the model using SGLang, which is also used by [chutes.ai](https://chutes.ai/app/chute/4fa0c7f5-82f7-59d1-8996-661bb778893d).
- For SGLang, we recommend versions >= v0.4.8 in combination with the argument `--reasoning-parser qwen3` to properly handle the rare cases in which the model skips the `<think>` reasoning step. A minimal launch sketch follows this list.
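
For illustration, a minimal SGLang launch sketch with this argument. The tensor-parallel size and port below are assumptions for a single 8-GPU node, not tested recommendations; adjust them to your setup:

```
# Minimal SGLang launch sketch (assumed single 8-GPU node; adjust --tp and --port)
python -m sglang.launch_server \
  --model-path tngtech/DeepSeek-TNG-R1T2-Chimera \
  --tp 8 \
  --port 30000 \
  --reasoning-parser qwen3
```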
### Function calling

R1T2 supports function calling via an updated chat template (since 01 Aug 2025). However, neither vLLM nor SGLang natively provides an R1T2-compatible tool-call parser; both require some adaptation.
_vLLM:_

For function calling with vLLM, a new tool parser is required. While we opened [a PR to vLLM](https://github.com/vllm-project/vllm/pull/22074) to include an R1T2-compatible tool parser off-the-shelf, we also ship the tool parser file `tool_parser_vllm.py` within this repository.
With this file, tool calling can be enabled via

```
--tool-parser-plugin <ABSOLUTE_MODEL_SNAPSHOT_PATH>/tool_parser_vllm.py \
--tool-call-parser tng_r1t2
```

Here, replace `<ABSOLUTE_MODEL_SNAPSHOT_PATH>` with the path to your local snapshot folder, so that the full path reads e.g. `~/.cache/huggingface/hub/models--tngtech--DeepSeek-TNG-R1T2-Chimera/snapshots/SNAPSHOT/tool_parser_vllm.py`.
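
For orientation, a full vLLM launch with these flags might look as follows. This is a minimal sketch, not a tested reference configuration: the tensor-parallel size is an assumption for an 8-GPU node, and `--enable-auto-tool-choice` is vLLM's standard switch for automatic tool choice in the OpenAI-compatible server:

```
# Minimal vLLM launch sketch (assumed 8-GPU node; adjust --tensor-parallel-size)
vllm serve tngtech/DeepSeek-TNG-R1T2-Chimera \
  --tensor-parallel-size 8 \
  --enable-auto-tool-choice \
  --tool-parser-plugin <ABSOLUTE_MODEL_SNAPSHOT_PATH>/tool_parser_vllm.py \
  --tool-call-parser tng_r1t2
```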
_SGLang:_

Tool call support for R1T2 requires a recent SGLang version >= v0.4.10 (alternatively, for older SGLang versions you need to apply [this bugfix for the reasoning parser](https://github.com/sgl-project/sglang/pull/8606)).

An R1T2-compatible tool call parser will be added with [this PR to SGLang](https://github.com/sgl-project/sglang/pull/8672).
Unfortunately, and unlike vLLM, SGLang has no simple plugin system for tool call parsers.
Until our PR is merged and released with a new SGLang version, you can still install it manually by patching your SGLang source code as outlined in the PR:
the new tool call parser must be added and registered (so in total, one file must be added and a second one edited; see [details here](https://github.com/sgl-project/sglang/pull/8672/files)).

Once the SGLang installation has been updated accordingly, tool calling with R1T2 can be activated by starting SGLang with
```
--tool-call-parser tng_r1t2
```
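
With either server running, tool calls then go through the standard OpenAI-compatible chat completions endpoint. A minimal sketch, assuming a server listening on localhost:8000 and a hypothetical `get_weather` function (both placeholders, not part of this repository):

```
# Minimal tool-call request sketch against an OpenAI-compatible endpoint
# (localhost:8000 and get_weather are assumptions for illustration)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tngtech/DeepSeek-TNG-R1T2-Chimera",
    "messages": [{"role": "user", "content": "What is the weather in Munich?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

If the model decides to call the function, the response contains a `tool_calls` entry instead of plain text content.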
## Model Details

- **Architecture**: DeepSeek-MoE transformer-based language model
- **Combination Method**: Assembly of Experts from the three DeepSeek parent models R1-0528, R1 and V3-0324
- **Release Date**: 2025-07-02
- **Design Team**: Robert Dahlke, Henrik Klagges, Benjamin Merkel, Fabian Klemm and David Reiss, Munich, Germany
- **Extra Thanks**: Big thanks to DeepSeek for their great models and open-source generosity, and to the other researchers who have published on model merging methodologies.
## Use, Out-of-scope Use, Other Limitations, Risks, Recommendations et al.

Regarding the R1T/R1T2 Chimeras, we ask you to follow the careful guidelines that Microsoft has created for their "MAI-DS-R1" DeepSeek-based model.
These professional guidelines are available [here on Hugging Face](https://huggingface.co/microsoft/MAI-DS-R1).
## EU AI Act

Due to the strict new guidelines of the EU AI Act that take effect on August 2nd, 2025, we recommend that each R1T/R1T2 user in the EU either familiarizes themselves with these requirements and assesses their compliance, or ceases using the model in the EU after August 1st, 2025.
## Contact, especially for your user feedback

Please give us your feedback, especially if you find deficiencies in the model:
- Email: research@tngtech.com
- X.com: @tngtech
## Citation

```
@misc{tng_technology_consulting_gmbh_2025_07_02,
    author = { TNG Technology Consulting GmbH },
    title = { DeepSeek-TNG-R1T2-Chimera },
    year = 2025,
    month = { July },
    url = { https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera },
    doi = { 10.57967/hf/5950 },
    publisher = { Hugging Face }
}
```
+ This repository hosts a variant of Alphatao/Affine-0000000.
+ License: MIT. The original license is preserved.
+ No further information about the modifications is provided.
intelligence_score_vs_output_tokens.png
DELETED