---
title: Intention Collapse 🧠
emoji: 📉
colorFrom: blue
colorTo: red
sdk: static
pinned: false
---
# 🧠 Intention Collapse: Intention-Level Metrics for Reasoning
**Paper:** [arXiv:2601.01011](https://arxiv.org/abs/2601.01011)
**Code:** [GitHub Repository](https://github.com/patriciomvera/intention-collapse-experiments)
## ⚡️ TL;DR
We introduce **Intention Collapse**, a framework to study how LLMs compress high-dimensional internal states into a single token sequence. We propose three model-agnostic metrics: **Intention Entropy**, **Effective Dimensionality**, and **Recoverability**.
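For intuition, here is a minimal numpy sketch of how the first two metrics could be computed from a model's hidden states. The participation-ratio form of effective dimensionality and the next-token-distribution reading of Intention Entropy are illustrative assumptions, not necessarily the paper's exact definitions:

```python
import numpy as np

def intention_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a probability vector, e.g. the
    next-token distribution induced by the pre-collapse state."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

def effective_dimensionality(hidden_states: np.ndarray) -> float:
    """Participation ratio of the covariance spectrum of hidden states
    (shape: [num_tokens, hidden_dim]) -- one common proxy for how many
    directions the internal state actually occupies."""
    centered = hidden_states - hidden_states.mean(axis=0, keepdims=True)
    eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
    return float(eigvals.sum() ** 2 / (eigvals ** 2).sum())
```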
## 🔥 Key Findings
* **CoT is not magic:** it improves accuracy on GSM8K but degrades performance on ARC-Challenge.
* **Entropy Shift:** CoT makes Mistral *more certain* (lower Intention Entropy) but makes LLaMA *less certain* (higher entropy).
## 🛠️ How to use the metrics
To extract the "Intention State" $I$ (pre-collapse) from your own model:
```python
# Simplified example; see the GitHub repo for the full implementation.
# Assumes the repo's intention_metrics module exposes both helpers.
from intention_metrics import extract_intention_state, calculate_entropy

# Get the pre-collapse hidden state from the model's final layer
I = extract_intention_state(model, prompt, layer_idx=-1)
print(f"Intention Entropy: {calculate_entropy(I)}")
```
## Citation
```bibtex
@article{vera2026intention,
  title={Intention Collapse: Intention-Level Metrics for Reasoning in Language},
  author={Vera, Patricio},
  journal={arXiv preprint arXiv:2601.01011},
  year={2026}
}
```