---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- whynlp/gsm8k-aug
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags: []
---
# Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning
This repository hosts a model presented in the paper "[Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning](https://huggingface.co/papers/2511.21581)".
Latent reasoning is a recent development in Transformer language models that compresses reasoning length by passing information-rich final latent states from earlier steps directly back to the model. This model implements an adaptive-length latent reasoning approach optimized with a post-SFT reinforcement-learning stage that minimizes reasoning length while maintaining accuracy, achieving a 52% reduction in total reasoning length with no loss of accuracy for Llama 3.2 1B on the GSM8K-Aug dataset.
- **Paper**: [Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning](https://huggingface.co/papers/2511.21581)
- **Code**: [https://github.com/apning/adaptive-latent-reasoning](https://github.com/apning/adaptive-latent-reasoning)
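As a rough illustration of the post-SFT reinforcement-learning objective described above, the sketch below shows a length-penalized reward in which a correct final answer is rewarded and each latent reasoning step incurs a small cost. The function name, signature, and `length_penalty` coefficient are hypothetical illustrations; the paper defines the actual reward.

```python
# Hypothetical sketch of a length-penalized RL reward, illustrating the idea
# of minimizing reasoning length while maintaining accuracy.
# The reward actually used in the paper may differ.
def latent_reasoning_reward(
    is_correct: bool,
    num_latent_steps: int,
    length_penalty: float = 0.05,  # assumed coefficient, not from the paper
) -> float:
    # Reward a correct final answer; charge a small cost per latent step so
    # the policy learns to stop reasoning as early as accuracy allows.
    accuracy_term = 1.0 if is_correct else 0.0
    return accuracy_term - length_penalty * num_latent_steps
```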
## Sample Usage
You can load this model and other trained weights using the `automodelforcausallm_from_pretrained_latent` function from `src.model_creation`, as demonstrated in the official GitHub repository:
```python
from transformers import AutoTokenizer
# For full functionality, clone the official GitHub repo: https://github.com/apning/adaptive-latent-reasoning
# and ensure 'src.model_creation' is in your Python path or adapt the import.
from src.model_creation import automodelforcausallm_from_pretrained_latent
repo_id = "Lapisbird/Llama-adaLR-model-latent-6" # Example repo_id from the paper's GitHub README
model = automodelforcausallm_from_pretrained_latent(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
print(f"Model '{repo_id}' and tokenizer loaded successfully.")
# You can now use 'model' and 'tokenizer' for inference as described in the paper.
```
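Once loaded, the model can be used for inference. The snippet below is a minimal sketch using the standard `transformers` generation API; the adaptive latent-reasoning inference loop may instead be implemented by dedicated utilities in the GitHub repository, so treat this as an assumption rather than the paper's prescribed procedure.

```python
import torch

# Minimal sketch: standard generation with the loaded model and tokenizer.
# The prompt is an arbitrary example, not taken from the paper or dataset.
prompt = "Q: A pencil costs 3 dollars and a notebook costs 5 dollars. How much do two pencils and one notebook cost?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```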