cccczshao committed
Commit 615a8d0 · verified · 1 Parent(s): 764b83e

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED

@@ -3,7 +3,7 @@ datasets:
 - monology/pile-uncopyrighted
 language:
 - en
-library_name: CALM
+library_name: transformers
 license: mit
 metrics:
 - BrierLM
@@ -12,7 +12,6 @@ tags:
 - language modeling
 pipeline_tag: text-generation
 ---
-
 # Continuous Autoregressive Language Models
 
 [![Paper](https://img.shields.io/badge/Paper_📃-green)](https://arxiv.org/abs/2510.27688)
@@ -20,25 +19,26 @@ pipeline_tag: text-generation
 [![HuggingFace](https://img.shields.io/badge/HuggingFace_🤗-orange)](https://huggingface.co/collections/cccczshao/calm)
 [![Blog](https://img.shields.io/badge/Blog_✍️-yellowgreen)](https://shaochenze.github.io/blog/2025/CALM/)
 
+
 ## Model Description
 
 Modern Large Language Models (LLMs) are constrained by a fundamental bottleneck: they generate text one token at a time. **CALM (Continuous Autoregressive Language Models)** confronts this challenge by introducing a paradigm shift in language modeling. Instead of predicting one discrete token at a time, CALM learns to predict a single continuous vector that represents an entire chunk of K tokens.
 
 This is achieved through a two-stage process:
 
-1. **A high-fidelity autoencoder** learns to compress K tokens into a single vector and reconstruct them with near-perfect accuracy.
-2. **A continuous-domain language model** then performs autoregressive prediction in this vector space.
+1. **A high-fidelity autoencoder** learns to compress K tokens into a single vector and reconstruct them with near-perfect accuracy.
+2. **A continuous-domain language model** then performs autoregressive prediction in this vector space.
 
 ### Key Features
 
-* 🚀 **Ultra-Efficient by Design:** Dramatically improves training and inference efficiency by reducing the number of autoregressive steps by a factor of K.
-* 💡 **A New Scaling Axis:** Introduces a new scaling dimension for LLMs—semantic bandwidth (K). Instead of just scaling parameters and data, you can now scale the amount of information processed in a single step.
-* 🛠️ **A Comprehensive Likelihood-Free Toolkit:** Operating in a continuous domain requires new tools. This repository provides the full suite of algorithms that make CALM possible:
+* 🚀 **Ultra-Efficient by Design:** Dramatically improves training and inference efficiency by reducing the number of autoregressive steps by a factor of K.
+* 💡 **A New Scaling Axis:** Introduces a new scaling dimension for LLMs—semantic bandwidth (K). Instead of just scaling parameters and data, you can now scale the amount of information processed in a single step.
+* 🛠️ **A Comprehensive Likelihood-Free Toolkit:** Operating in a continuous domain requires new tools. This repository provides the full suite of algorithms that make CALM possible:
 
-  * **A Robust Autoencoder** to learn high-fidelity continuous representations of token chunks.
-  * **Energy-Based Training**, a principled and likelihood-free method for generative modeling.
-  * **BrierLM**, a new metric for calibrated, likelihood-free evaluation of language models.
-  * **Temperature Sampling** for controlled, high-quality text generation using only a black-box sampler.
+  * **A Robust Autoencoder** to learn high-fidelity continuous representations of token chunks.
+  * **Energy-Based Training**, a principled and likelihood-free method for generative modeling.
+  * **BrierLM**, a new metric for calibrated, likelihood-free evaluation of language models.
+  * **Temperature Sampling** for controlled, high-quality text generation using only a black-box sampler.
 
 ## How to use
 
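A note on the "two-stage process" in the README above: the shapes involved lend themselves to a quick sketch. The following is a minimal illustration assuming PyTorch; the module names (`ChunkAutoencoder`, `ContinuousLM`), layer sizes, and the plain linear prediction head are placeholders rather than the released architecture (the actual CALM head is trained with the energy-based, likelihood-free objective the README lists, not a deterministic regression head).

```python
# Minimal shape-level sketch of the two-stage CALM pipeline (illustrative only).
import torch
import torch.nn as nn

K, VOCAB, DIM = 4, 32000, 512  # chunk size, vocab size, latent width (assumed)

class ChunkAutoencoder(nn.Module):
    """Stage 1: compress K tokens into one vector, then reconstruct them."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encode = nn.Linear(K * DIM, DIM)    # K token embeddings -> 1 vector
        self.decode = nn.Linear(DIM, K * VOCAB)  # 1 vector -> K token logit rows

    def forward(self, tokens):                          # tokens: (batch, K)
        z = self.encode(self.embed(tokens).flatten(1))  # (batch, DIM)
        logits = self.decode(z).view(-1, K, VOCAB)      # (batch, K, VOCAB)
        return z, logits

class ContinuousLM(nn.Module):
    """Stage 2: autoregressively predict the next chunk vector from past ones."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, DIM)  # placeholder; CALM uses a generative head

    def forward(self, chunk_vectors):                   # (batch, num_chunks, DIM)
        mask = nn.Transformer.generate_square_subsequent_mask(chunk_vectors.size(1))
        return self.head(self.backbone(chunk_vectors, mask=mask))

tokens = torch.randint(VOCAB, (2, 8))        # 2 sequences of 8 tokens = 2 chunks each
z, logits = ChunkAutoencoder()(tokens.view(-1, K))
preds = ContinuousLM()(z.view(2, -1, DIM))   # one autoregressive step per K tokens
```

The efficiency claim in the README falls out of the last line: the sequence axis the Transformer attends over is `num_tokens / K`, so autoregressive steps shrink by a factor of K.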
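The README also lists **BrierLM** for calibrated, likelihood-free evaluation. A minimal sketch of the sample-based idea behind Brier scoring follows; the paper's exact BrierLM formulation (for example, its aggregation over n-grams) may differ, so treat the helper below as illustrative.

```python
# Sample-based Brier estimation needs only a black-box sampler, no likelihoods.
# For two independent model samples x1, x2 and a reference y, the statistic
#     1[x1 == y] + 1[x2 == y] - 1[x1 == x2]
# has expectation 2*p(y) - sum_k p(k)^2, i.e. 1 minus the Brier score of the
# model's (unobserved) distribution p. (Illustrative; see the paper for BrierLM.)
def brier_estimate(x1: str, x2: str, y: str) -> float:
    return float(x1 == y) + float(x2 == y) - float(x1 == x2)

# Averaging over many evaluation positions gives a corpus-level estimate:
draws = [("cat", "cat", "cat"), ("dog", "cat", "dog"), ("cat", "dog", "dog")]
print(sum(brier_estimate(*d) for d in draws) / len(draws))
```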
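Finally, since this commit switches `library_name` to `transformers`, loading should follow the standard `transformers` pattern. A hypothetical sketch, assuming the checkpoints ship custom modeling code; the model id below is a placeholder (an assumption, not a published id), so substitute a real checkpoint from the linked Hugging Face collection:

```python
from transformers import AutoModel, AutoTokenizer

model_id = "cccczshao/CALM"  # placeholder id (assumption); see the collection link above
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)  # loads custom CALM code, if provided
```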