SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens
Paper: arXiv:2510.24940
SemCoT is a framework that accelerates Chain-of-Thought (CoT) reasoning in Large Language Models (LLMs) by replacing verbose explicit reasoning with compact, semantically-aligned implicit tokens. Instead of generating long textual explanations, SemCoT encodes reasoning steps in hidden representations (implicit reasoning), which significantly speeds up inference while maintaining high task performance.
This checkpoint is a fine-tuned version of optimum/mistral-1.1b-testing, trained with the SemCoT framework on the ChilleD/MultiArith dataset.
Please refer to the official GitHub repository for instructions on environment setup, data generation, and running the evaluation scripts for this model.
@inproceedings{he2025semcot,
  title={SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens},
  author={He, Yinhan and Zheng, Wendy and Zhu, Yaochen and Zheng, Zaiyi and Su, Lin and Vasudevan, Sriram and Guo, Qi and Hong, Liangjie and Li, Jundong},
  booktitle={39th Conference on Neural Information Processing Systems (NeurIPS 2025)},
  year={2025}
}
Base model: optimum/mistral-1.1b-testing