Transformers
Safetensors
gpt2
text-generation-inference

Add pipeline tag and GitHub link

#1
by nielsr (HF Staff) - opened
Files changed (1)
README.md +10 -10
README.md CHANGED
@@ -1,20 +1,26 @@
 ---
-library_name: transformers
-license: mit
 base_model:
 - openai-community/gpt2
+library_name: transformers
+license: mit
+pipeline_tag: text-generation
 ---
+
 # CODI Model

 <div align="center">

 [![HuggingFace](https://img.shields.io/badge/🤗%20HuggingFace-Model-fcc21b?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/ModalityDance/latent-tts-codi)
+[![Paper](https://img.shields.io/badge/Paper-arXiv-b31b1b?style=for-the-badge&logo=arxiv)](https://huggingface.co/papers/2510.07745)
+[![GitHub](https://img.shields.io/badge/GitHub-Code-black?style=for-the-badge&logo=github)](https://github.com/ModalityDance/LatentTTS)

 </div>

 ## Overview

-**CODI** (Continuous Chain-of-Thought via Self-Distillation) is a latent reasoning model based on GPT-2 that extends the base architecture with an optional projector module for enhanced hidden state representations. This model is part of the [Parallel Test-Time Scaling for Latent Reasoning Models](https://arxiv.org/abs/2510.07745) framework.
+**CODI** (Continuous Chain-of-Thought via Self-Distillation) is a latent reasoning model based on GPT-2 that extends the base architecture with an optional projector module for enhanced hidden state representations. This model is part of the [Parallel Test-Time Scaling for Latent Reasoning Models](https://huggingface.co/papers/2510.07745) framework.
+
+The official implementation is available at [github.com/ModalityDance/LatentTTS](https://github.com/ModalityDance/LatentTTS).

 ## Model Details

@@ -184,7 +190,7 @@ from src.paths import extract_answer_number

 # Extract answer from generated text
 answer = extract_answer_number(result)
-print(f"Answer: {answer}")
+print(f\"Answer: {answer}\")
 ```

 ## Evaluation
@@ -196,12 +202,6 @@ Run evaluation using the provided scripts:
 ./run_tests.sh
 ```

-## Model Card
-
-- **Paper**: [Parallel Test-Time Scaling for Latent Reasoning Models](https://arxiv.org/abs/2510.07745)
-- **HuggingFace**: [ModalityDance/latent-tts-codi](https://huggingface.co/ModalityDance/latent-tts-codi)
-- **Benchmarks**: GSM8K Test, GSM8K Hard, MultiArith
-
 ## Citation

 If you use this model, please cite:
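The `pipeline_tag: text-generation` added in this PR lets the Hub surface the checkpoint in its text-generation widget and inference tooling. Below is a minimal sketch of what that enables, assuming the repository's weights load with the stock GPT-2 classes; the optional projector module mentioned in the Overview would need the LatentTTS code linked above, and the prompt here is purely illustrative.

```python
# Minimal sketch: load the checkpoint through the standard transformers
# text-generation pipeline. Assumes the weights load with stock GPT-2
# classes; the optional projector module requires the LatentTTS codebase.
from transformers import pipeline

generator = pipeline("text-generation", model="ModalityDance/latent-tts-codi")

# Illustrative GSM8K-style prompt (the card lists GSM8K among its benchmarks).
result = generator(
    "Q: A farmer has 12 sheep and buys 7 more. How many sheep are there now? A:",
    max_new_tokens=64,
    do_sample=False,  # greedy decoding for a deterministic answer
)
print(result[0]["generated_text"])
```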