SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens

πŸš€ Overview

SemCoT is a framework that improves the efficiency of Chain-of-Thought (CoT) reasoning by encoding reasoning steps inside hidden representations ("implicit tokens") instead of generating long textual explanations. This approach significantly speeds up inference while maintaining high reasoning performance.

This specific checkpoint is Sheared-LLaMA-1.3B fine-tuned using the SemCoT framework on the MultiArith dataset.

🎯 Key Features

  • πŸ—£οΈ Semantic Alignment: Uses a contrastively trained sentence transformer to ensure that implicit reasoning tokens remain semantically consistent with human-readable CoT explanations.
  • ⚑ Efficiency Optimization: Introduces a lightweight implicit reasoning generator, fine-tuned via knowledge distillation, to reduce token generation time and enhance inference speed.
  • 🧩 Joint Optimization: SemCoT is the first approach to jointly optimize both token-level generation speed and semantic alignment with ground-truth reasoning.
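
As a rough illustration of the semantic-alignment idea (not the official SemCoT training code), the sketch below scores how close a decoded reasoning trace is to a reference human-written CoT using an off-the-shelf sentence transformer. The model name `all-MiniLM-L6-v2` and the example sentences are placeholders chosen for illustration; SemCoT contrastively fine-tunes its own sentence transformer rather than using a stock one.

```python
# Illustrative only: score semantic alignment between a reference CoT and a
# decoded implicit-reasoning trace via sentence-embedding cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Placeholder encoder; SemCoT trains its own contrastive sentence transformer.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

reference_cot = "There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples."
decoded_implicit = "3 boxes times 4 apples gives 12 apples."

# Encode both texts and compare them in embedding space.
embeddings = encoder.encode([reference_cot, decoded_implicit], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Semantic alignment score: {similarity:.3f}")
```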

πŸ› οΈ Usage

To use this model, refer to the official implementation on GitHub: the SemCoT framework is required to generate and handle the implicit reasoning tokens correctly.
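
For a quick smoke test of the raw weights, the checkpoint can likely be loaded with πŸ€— Transformers like any LLaMA-family causal LM. This is only a minimal sketch under that assumption: it does not exercise SemCoT's implicit-token pipeline, and the example prompt is purely illustrative.

```python
# Minimal sketch: load the raw checkpoint with standard Transformers tooling.
# Implicit-token reasoning itself requires the official SemCoT codebase.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jonathanhe123/SemCoT-Sheared-LLaMA-1.3B-multiarith"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative MultiArith-style prompt.
prompt = "Q: There are 5 bags with 6 marbles each. How many marbles in total?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```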

πŸ“– Citation

@inproceedings{he2025semcot,
  title={SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens},
  author={He, Yinhan and Zheng, Wendy and Zhu, Yaochen and Zheng, Zaiyi and Su, Lin and Vasudevan, Sriram and Guo, Qi and Hong, Liangjie and Li, Jundong},
  booktitle={39th Conference on Neural Information Processing Systems (NeurIPS 2025)},
  year={2025}
}