Improve model card: Add paper, code links, pipeline tag, usage, and trained models

#1
by nielsr - opened
Files changed (1)
  1. README.md +53 -3
README.md CHANGED
@@ -1,10 +1,60 @@
  ---
- library_name: transformers
- license: llama3.2
  base_model: meta-llama/Llama-3.2-1B-Instruct
  datasets:
  - whynlp/gsm8k-aug
+ library_name: transformers
+ license: llama3.2
  tags: []
+ pipeline_tag: text-generation
  ---
  
- Built with Llama
+ # Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning
+
+ This repository contains model weights for the paper [Learning When to Stop: Adaptive Latent Reasoning via Reinforcement Learning](https://huggingface.co/papers/2511.21581).
+
+ The paper studies adaptive-length latent reasoning in Transformer language models: after supervised fine-tuning, a reinforcement learning stage optimizes reasoning length while maintaining accuracy. On the Llama 3.2 1B model and the GSM8K-Aug dataset, this post-SFT RL stage reduces total reasoning length by 52% without sacrificing accuracy.
+
+ For more details, including the full codebase and utilities, please refer to the [GitHub repository](https://github.com/apning/adaptive-latent-reasoning).
+
+ ## Sample Usage
+
+ You can load these models using the `automodelforcausallm_from_pretrained_latent` function from `src.model_creation` as shown in the GitHub repository:
+
+ ```python
+ from transformers import AutoTokenizer
+ from src.model_creation import automodelforcausallm_from_pretrained_latent
+
+ repo_id = "Lapisbird/Llama-adaLR-model-latent-6" # Example: Replace with the specific model variant you want to load
+
+ model = automodelforcausallm_from_pretrained_latent(repo_id)
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+
+ # Further inference steps would follow from here, depending on your task.
+ # Note: The `automodelforcausallm_from_pretrained_latent` function is custom to this project and
+ # requires the `src/model_creation.py` file from the GitHub repository to be available in your Python path.
+ ```
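+
+ For inference after loading, here is a minimal sketch, assuming the returned model supports the standard Transformers `generate` API (the GitHub repository may provide latent-reasoning-specific inference utilities; prefer those where they exist, and note the question string below is only a placeholder):
+
+ ```python
+ # Minimal generation sketch (assumption: the custom model class exposes the
+ # standard `generate` API; any latent reasoning is handled inside that class).
+ question = "A farmer picks 12 apples and gives away 5. How many apples are left?"
+ inputs = tokenizer(question, return_tensors="pt")
+
+ outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
+ # Decode only the newly generated tokens.
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```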
+
+ ## Trained Model Weights
+
+ All weights used for the results in the paper are available on Hugging Face; a loading sketch follows the tables below.
+
+ **From the main results:**
+
+ | Model | Hugging Face repo |
+ | --- | --- |
+ | CoT SFT | Lapisbird/Llama-adaLR-model-cot_sft |
+ | No-CoT SFT | Lapisbird/Llama-adaLR-model-no_cot_sft |
+ | Latent-6 | Lapisbird/Llama-adaLR-model-latent-6 |
+ | Latent-6 + RL | Lapisbird/Llama-adaLR-model-latent-6_rl |
+ | Latent-6-by-1 | Lapisbird/Llama-adaLR-model-latent-6-by-1 |
+ | Latent-6-by-1 + RL | Lapisbird/Llama-adaLR-model-latent-6-by-1_rl |
+
+ **From the knowledge distillation for SFT section in the Appendix:**
+
+ | Model (Appendix) | Hugging Face repo |
+ | --- | --- |
+ | codi | Lapisbird/Llama-adaLR-appendix-model-codi |
+ | codi + intermediate | Lapisbird/Llama-adaLR-appendix-model-codi_intermediate |
+ | meaned | Lapisbird/Llama-adaLR-appendix-model-meaned |
+ | meaned + intermediate | Lapisbird/Llama-adaLR-appendix-model-meaned_intermediate |
+ | meaned + codi | Lapisbird/Llama-adaLR-appendix-model-meaned_codi |
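+
+ To experiment across variants, here is a hypothetical helper that maps the main-results table to repo IDs (the dictionary name is illustrative; note that the CoT/No-CoT SFT baselines may be standard causal LMs that load without the custom latent loader):
+
+ ```python
+ from transformers import AutoTokenizer
+ from src.model_creation import automodelforcausallm_from_pretrained_latent
+
+ # Illustrative mapping from the main-results table above to Hugging Face repos.
+ MAIN_RESULT_REPOS = {
+     "CoT SFT": "Lapisbird/Llama-adaLR-model-cot_sft",
+     "No-CoT SFT": "Lapisbird/Llama-adaLR-model-no_cot_sft",
+     "Latent-6": "Lapisbird/Llama-adaLR-model-latent-6",
+     "Latent-6 + RL": "Lapisbird/Llama-adaLR-model-latent-6_rl",
+     "Latent-6-by-1": "Lapisbird/Llama-adaLR-model-latent-6-by-1",
+     "Latent-6-by-1 + RL": "Lapisbird/Llama-adaLR-model-latent-6-by-1_rl",
+ }
+
+ repo_id = MAIN_RESULT_REPOS["Latent-6 + RL"]
+ # Assumption: latent variants load with the project's custom loader; the SFT
+ # baselines may instead load with `AutoModelForCausalLM.from_pretrained`.
+ model = automodelforcausallm_from_pretrained_latent(repo_id)
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ ```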