# AI-Flow-Information Capacity

<p align="center">
        <!-- <a href="README.md">中文</a> &nbsp | &nbsp <a href="README_en.md">English</a> --> 🏆 <a href="https://huggingface.co/spaces/TeleAI-AI-Flow/InformationCapacityLeaderboard"> Leaderboard</a> &nbsp&nbsp | &nbsp&nbsp 
         🖥️ <a href="https://github.com/TeleAI-AI-Flow/InformationCapacity">GitHub</a> &nbsp&nbsp | &nbsp&nbsp 🤗 <a href="https://huggingface.co/datasets/TeleAI-AI-Flow/InformationCapacity">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp 📑&nbsp <a href="https://www.arxiv.org/abs/2511.08066">Paper</a>
</p>

<p align="center">
    <img src="assets/ic_mixed.png" width="700" />
</p>

**Information Capacity** evaluates an LLM's **efficiency** based on text compression performance relative to computational complexity, harnessing the inherent correlation between **compression** and **intelligence**. 
Larger models predict the next token more accurately, yielding higher compression gains at increased computational cost. 
Consequently, a series of models with varying sizes exhibits **consistent** information capacity, which can be used to compare model capability across model series and predict model performance within a series.
It also facilitates dynamic routing of different-sized models for efficient handling of tasks with varying difficulties, which is especially relevant to the device-edge-cloud infrastructure detailed in the **AI Flow** framework.
With the rapid evolution of edge intelligence, we believe that this hierarchical network will replace the mainstream cloud-centric computing scheme in the near future.
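As a rough illustration of the compression side of this idea (not the paper's exact formula; the per-token log-likelihoods below are hypothetical), a model's compression performance can be measured in bits per byte from its next-token negative log-likelihoods, and the gain over raw 8-bit storage is what a compute-normalized efficiency metric would build on:

```python
import math

def bits_per_byte(token_nlls, text_bytes):
    """Compression cost: total negative log-likelihood (in nats) converted
    to bits, normalized by the raw UTF-8 size of the text."""
    total_bits = sum(token_nlls) / math.log(2)
    return total_bits / text_bytes

# Hypothetical per-token NLLs (nats) for a 20-byte snippet of text.
nlls = [2.1, 1.4, 0.9, 3.0, 0.7]
bpb = bits_per_byte(nlls, 20)

# Compression gain relative to storing raw bytes at 8 bits each;
# dividing such a gain by a model's compute cost per token would give
# an efficiency-style ratio in the spirit of information capacity.
gain = 8.0 - bpb
print(round(bpb, 3), round(gain, 3))  # 0.584 7.416
```

A stronger model assigns higher probability to the observed tokens, shrinking the NLLs and hence the bits-per-byte figure, but typically at a larger compute cost per token.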

Compared to existing metrics on LLM efficiency, a key difference of information capacity is that it considers the influence of **tokenizer efficiency**.
An effective tokenizer can represent a given text with fewer tokens, thus reducing both the input and output token counts.
This reduction not only lowers computational costs and inference delay but also facilitates long-context memory and in-depth reasoning.
Tokenizer efficiency has grown increasingly significant in light of exploding input lengths and the widespread use of test-time scaling, yet it is often **neglected** in LLM evaluations.
We assess the information capacity of 49 models across 5 heterogeneous datasets and find consistent evidence regarding the influences of tokenizer efficiency, pretraining data, and the mixture-of-experts (MoE) architecture.
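Tokenizer efficiency can be quantified as the number of UTF-8 bytes each token represents. The toy comparison below contrasts a character-level split with a whitespace split purely to show the metric; real LLM tokenizers are subword (BPE-style) and land between these extremes:

```python
def bytes_per_token(text, tokens):
    """Tokenizer efficiency: UTF-8 bytes represented per token (higher is better)."""
    return len(text.encode("utf-8")) / len(tokens)

text = "Information capacity evaluates LLM efficiency via compression."
char_tokens = list(text)      # character-level split: 1 byte/token for ASCII
word_tokens = text.split()    # whitespace split: several bytes/token

print(bytes_per_token(text, char_tokens))  # 1.0 for pure-ASCII text
print(bytes_per_token(text, word_tokens))
```

A tokenizer with higher bytes-per-token emits fewer tokens for the same text, reducing both prefill and decode cost for identical content.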

## Data

Previous studies have established that the correlation between compression and intelligence weakens when the evaluation corpus significantly deviates from the domain of downstream tasks.
Thus, we construct five heterogeneous datasets to provide a holistic assessment of LLM capabilities: Mixed text, FinePDFs-en, Ch-FineWeb-Edu, FineWeb-Edu, and NextCoder.
The Mixed text dataset is collected by us, while other datasets are sampled from publicly available open-source datasets.

* **Mixed text**: We compile a multilingual text corpus from diverse sources, including books, webpages, code, and published papers, to facilitate a comprehensive evaluation of LLMs' compression efficiency.
* **FinePDFs-en**: The FinePDFs dataset consists of about 3T tokens sourced exclusively from publicly available PDF files. We only select from the English subset to better examine the influence of the corpus distribution. <a href="https://huggingface.co/datasets/HuggingFaceFW/finepdfs"> [Huggingface] </a>
* **Ch-FineWeb-Edu**: The Chinese Fineweb Edu dataset is a high-quality Chinese pretraining corpus of 90 million samples in the education domain, selected by a strategy similar to that of FineWeb-Edu. <a href="https://huggingface.co/datasets/opencsg/chinese-fineweb-edu"> [Huggingface] </a>
* **FineWeb-Edu**: The FineWeb-Edu dataset contains 1.3T tokens of educational English webpages filtered from the FineWeb dataset, based on the annotations generated by Llama-3-70B-Instruct. <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu"> [Huggingface] </a>
* **NextCoder**: The NextCoder dataset consists of 127K unique code samples generated by GPT-4o and Llama-3.3-70B-Instruct across 8 programming languages: Python, Java, C++, C, Rust, JavaScript, Go, and Kotlin. <a href="https://huggingface.co/datasets/microsoft/NextCoderDataset"> [Huggingface] </a>
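The evaluation corpora are distributed as JSON Lines files (e.g. `datasets/mixed_text.jsonl` in Step 4 below). A minimal reader, assuming each line is a JSON object with a `text` field (the field name is an assumption about the schema):

```python
import json

def iter_texts(path, field="text"):
    """Yield one document per line from a JSON Lines file.
    The 'text' field name is an assumption about the dataset schema."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)[field]

# Demonstration with a temporary file standing in for a dataset shard.
import os, tempfile
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"text": "hello"}\n{"text": "world"}\n')
docs = list(iter_texts(f.name))
os.remove(f.name)
print(docs)  # ['hello', 'world']
```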

## Usage

Step 1. Set up an environment for model inference.
```sh
pip install numpy torch transformers tqdm flash_attn huggingface_hub
```

Step 2. Clone this repo.
```sh
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/TeleAI-AI-Flow/InformationCapacity
cd InformationCapacity
```

Step 3. Download test datasets.
```sh
hf download TeleAI-AI-Flow/InformationCapacity --repo-type=dataset --include "datasets/**" --local-dir .
```

Step 4. Run the evaluation script.
```sh
python calc_ic.py -m path/to/model -d datasets/mixed_text.jsonl -l 1024 -b 1
```

## Citation

```bibtex
@misc{yuan2025informationcapacity,
      title={Information Capacity: Evaluating the Efficiency of Large Language Models via Text Compression}, 
      author={Cheng Yuan and Jiawei Shao and Chi Zhang and Xuelong Li},
      year={2025},
      eprint={2511.08066},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2511.08066}, 
}

@misc{an2025aiflowperspectivesscenarios,
      title={AI Flow: Perspectives, Scenarios, and Approaches}, 
      author={Hongjun An and Wenhan Hu and Sida Huang and Siqi Huang and Ruanjun Li and Yuanzhi Liang and Jiawei Shao and Yiliang Song and Zihan Wang and Cheng Yuan and Chi Zhang and Hongyuan Zhang and Wenhao Zhuang and Xuelong Li},
      year={2025},
      eprint={2506.12479},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2506.12479}, 
}

@misc{shao2025aiflownetworkedge,
      title={AI Flow at the Network Edge}, 
      author={Jiawei Shao and Xuelong Li},
      year={2025},
      eprint={2411.12469},
      archivePrefix={arXiv},
      primaryClass={eess.SP},
      url={https://arxiv.org/abs/2411.12469}, 
}
```