Improve model card: Add paper link, code link, pipeline tag, and sample usage

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +33 -3
README.md CHANGED
@@ -1,10 +1,40 @@
  ---
- library_name: transformers
- license: llama3.2
  base_model: meta-llama/Llama-3.2-1B-Instruct
  datasets:
  - whynlp/gsm8k-aug
+ library_name: transformers
+ license: llama3.2
  tags: []
+ pipeline_tag: text-generation
  ---

- Built with Llama
+ # Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning
+
+ This repository contains the weights for **Adaptive Latent Reasoning models**, as introduced in the paper [Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning](https://huggingface.co/papers/2511.21581).
+
+ Latent reasoning is a recent development in Transformer language models that has shown potential for compressing reasoning length compared to chain-of-thought reasoning. This work develops adaptive-length latent reasoning models and introduces a post-SFT reinforcement-learning stage that minimizes latent reasoning length while maintaining accuracy. Experiments with the Llama 3.2 1B model on the GSM8K-Aug dataset show a 52% reduction in total reasoning length with no loss in accuracy.
+
+ The official code and pretrained weights are available at the GitHub repository: https://github.com/apning/adaptive-latent-reasoning
+
+ ## Usage
+
+ All weights used for results in the paper are available on Hugging Face. You can load these models using the function `automodelforcausallm_from_pretrained_latent` from `src.model_creation`.
+
+ First, set up your environment by cloning the repository and installing dependencies:
+ ```bash
+ git clone https://github.com/apning/adaptive-latent-reasoning.git
+ cd adaptive-latent-reasoning
+ conda env create -f environment.yml && conda activate adaptive-latent-reasoning
+ ```
+
+ Then, you can load a model like this:
+ ```python
+ from transformers import AutoTokenizer
+ from src.model_creation import automodelforcausallm_from_pretrained_latent
+
+ repo_id = "Lapisbird/Llama-adaLR-model-latent-6" # Example model from the paper
+
+ model = automodelforcausallm_from_pretrained_latent(repo_id)
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ ```
+ For more detailed instructions on replication, training, and evaluation, please refer to the [official GitHub repository](https://github.com/apning/adaptive-latent-reasoning).
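
Beyond loading, a minimal inference sketch is shown below. It assumes the object returned by `automodelforcausallm_from_pretrained_latent` behaves like a standard `transformers` causal LM and supports `generate()`; the repository's evaluation scripts define the authoritative latent-reasoning inference procedure, and the prompt here is illustrative only.

```python
# Minimal inference sketch. Assumption: the loaded model exposes the standard
# Hugging Face generate() API; the repository's evaluation code may instead
# drive the latent reasoning loop explicitly.
import torch
from transformers import AutoTokenizer
from src.model_creation import automodelforcausallm_from_pretrained_latent

repo_id = "Lapisbird/Llama-adaLR-model-latent-6"  # example model from the paper

model = automodelforcausallm_from_pretrained_latent(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model.eval()

# Illustrative GSM8K-style question; the exact prompt template used in
# training is documented in the GitHub repository.
prompt = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May?"
)
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If the model requires its own decoding loop (for example, emitting latent reasoning steps before the final answer), follow the inference utilities in the GitHub repository instead.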