TeleAI-AI-Flow committed (verified)
Commit b647436 · 1 parent: bf7c47a

Upload 6 files

Files changed (6):
  1. README.md +90 -0
  2. assets/ic_mixed.png +3 -0
  3. calc_ic.py +77 -0
  4. flops.py +439 -0
  5. likelihood.py +89 -0
  6. text_size.py +97 -0
README.md ADDED
@@ -0,0 +1,90 @@
# AI-Flow-Information Capacity

<p align="center">
🏆 <a href="https://huggingface.co/spaces/TeleAI-AI-Flow/InformationCapacityLeaderboard">Leaderboard</a> &nbsp;&nbsp;|&nbsp;&nbsp;
🖥️ <a href="https://github.com/TeleAI-AI-Flow/InformationCapacity">GitHub</a> &nbsp;&nbsp;|&nbsp;&nbsp; 🤗 <a href="https://huggingface.co/datasets/TeleAI-AI-Flow/InformationCapacity">Hugging Face</a> &nbsp;&nbsp;|&nbsp;&nbsp; 📑 <a href="https://www.arxiv.org/abs/2511.08066">Paper</a>
</p>

<p align="center">
  <img src="assets/ic_mixed.png" width="700" />
</p>

**Information Capacity** evaluates an LLM's **efficiency** from its text compression performance relative to its computational complexity, harnessing the inherent correlation between **compression** and **intelligence**.
Larger models predict the next token more accurately, yielding higher compression gains at increased computational cost.
Consequently, models of varying sizes within a series exhibit **consistent** information capacity, which can be used to compare capability across model series and to predict model performance within a series.
It also facilitates dynamic routing across different-sized models for efficient handling of tasks of varying difficulty, which is especially relevant to the device-edge-cloud infrastructure detailed in the **AI Flow** framework.
With the rapid evolution of edge intelligence, we believe this hierarchical network will replace the mainstream cloud-centric computing scheme in the near future.

Compared with existing metrics of LLM efficiency, a key distinction of information capacity is that it accounts for **tokenizer efficiency**.
An efficient tokenizer represents a given text with fewer tokens, reducing both input and output token counts.
This reduction not only lowers computational cost and inference latency but also facilitates long-context memory and in-depth reasoning.
Tokenizer efficiency is increasingly significant given exploding input lengths and the widespread use of test-time scaling, yet it is often **neglected** in LLM evaluations.
We assess the information capacity of 49 models on 5 heterogeneous datasets and find consistent evidence for the influence of tokenizer efficiency, pretraining data, and the mixture-of-experts (MoE) architecture.

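To make the tokenizer-efficiency measure concrete, here is a minimal sketch (not part of the released scripts; `gpt2` is only an illustrative model id) of the bits-per-token computation that `text_size.py` applies to each sample:

```python
# Sketch: tokenizer efficiency as UTF-8 bits per token.
# Assumes `transformers` is installed; "gpt2" is just an illustrative model id.
from transformers import AutoTokenizer

def bits_per_token(model_id: str, text: str) -> float:
    tok = AutoTokenizer.from_pretrained(model_id)
    ids = tok.encode(text)
    return len(text.encode("utf-8")) * 8 / len(ids)

text = "Information capacity measures compression gain per unit of log-compute."
print(bits_per_token("gpt2", text))  # higher = more efficient tokenizer
```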
## Method

Model intelligence is measured by the data size savings achieved by the LLM's probability predictions.
The original size of a text sample in the given dataset is denoted $C$; the sample is transformed into a sequence of $L$ tokens by the tokenizer of an LLM $M$.
Under entropy coding, the symbol length of the $i$-th token is approximately $-\log p(x_i \mid x_{<i}; M)$, and the compression gain is the difference between the original data size and the summed symbol lengths of all tokens.
The computational complexity is measured by the inference floating-point operations (FLOPs) $N_M$ on a logarithmic scale, following the scaling law.
We introduce a negative bias $b$ in the numerator so that different-sized models in a series have nearly identical information capacities, enabling convenient comparison across model sizes and architectures.

In summary, information capacity is computed as:

$$ \text{IC} = \frac{\frac{1}{L-1} \left( C - \sum_{i=2}^{L} -\log p(x_i \mid x_{<i}; M) \right) + b}{\log \left( N_M / (L-1) \right)} . $$
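For intuition, the final combination implemented in `calc_ic.py` reduces to a few lines. The numbers below are illustrative placeholders (in practice they come from `text_size.py`, `likelihood.py`, and `flops.py`), and `b = -24` is the repo's default bias for `datasets/mixed_text.jsonl`:

```python
from math import log2

# Illustrative numbers only: measured bits/token, per-token NLL (bits),
# per-token decode FLOPs, and the dataset-specific bias b.
avg_text_size = 42.0      # C / (L - 1), in bits per token
avg_nll = 3.5             # mean of -log2 p(x_i | x_<i; M)
per_token_flops = 1.6e10  # N_M / (L - 1)
b = -24                   # numerator bias for mixed_text.jsonl

ic = (avg_text_size - avg_nll + b) / log2(per_token_flops)
print(f"Information capacity: {ic:.3f}")
```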

## Usage

Step 1. Set up an environment suitable for model inference.
```sh
pip install numpy torch transformers tqdm flash_attn huggingface_hub
```

Step 2. Clone this repo.
```sh
git clone https://github.com/TeleAI-AI-Flow/InformationCapacity.git
cd InformationCapacity
```

Step 3. Download the test datasets.
```sh
hf download TeleAI-AI-Flow/InformationCapacity --repo-type=dataset --include "datasets/**" --local-dir .
```

Step 4. Run the evaluation code.
```sh
python calc_ic.py -m path/to/model -d datasets/mixed_text.jsonl -l 1024 -b 1
```
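Equivalently, the evaluation can be invoked from Python via the function defined in `calc_ic.py` (the model path below is a placeholder):

```python
# Sketch: call the evaluation directly; path/to/model is a placeholder.
from calc_ic import calculate_information_capacity

ic = calculate_information_capacity(
    model_path="path/to/model",
    data_path="datasets/mixed_text.jsonl",
    max_sample_length=1024,
    batch_size=1,
)
```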

## Citation

```bibtex
@misc{yuan2025informationcapacity,
  title={Information Capacity: Evaluating the Efficiency of Large Language Models via Text Compression},
  author={Cheng Yuan and Jiawei Shao and Chi Zhang and Xuelong Li},
  year={2025},
  eprint={2511.08066},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2511.08066},
}

@misc{an2025aiflowperspectivesscenarios,
  title={AI Flow: Perspectives, Scenarios, and Approaches},
  author={Hongjun An and Wenhan Hu and Sida Huang and Siqi Huang and Ruanjun Li and Yuanzhi Liang and Jiawei Shao and Yiliang Song and Zihan Wang and Cheng Yuan and Chi Zhang and Hongyuan Zhang and Wenhao Zhuang and Xuelong Li},
  year={2025},
  eprint={2506.12479},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2506.12479},
}

@misc{shao2025aiflownetworkedge,
  title={AI Flow at the Network Edge},
  author={Jiawei Shao and Xuelong Li},
  year={2025},
  eprint={2411.12469},
  archivePrefix={arXiv},
  primaryClass={eess.SP},
  url={https://arxiv.org/abs/2411.12469},
}
```
assets/ic_mixed.png ADDED

Git LFS Details

  • SHA256: ac56cb53fd0a9ca5276febbd459b5b4e506138c42434dda3583e40a855defed2
  • Pointer size: 131 Bytes
  • Size of remote file: 212 kB
calc_ic.py ADDED
@@ -0,0 +1,77 @@
import argparse

import torch
from math import log2

from text_size import calculate_text_size_per_token
from likelihood import calculate_negative_log_likelihood
from flops import gqa_model_theoretical_flops, mla_model_theoretical_flops


def calculate_information_capacity(
    model_path: str,
    data_path: str,
    max_sample_length: int = 1024,
    batch_size: int = 1,
    numerator_bias: float = None,
    attention_mechanism: str = None,
) -> float:
    # Infer the attention mechanism from the model path unless given explicitly.
    if attention_mechanism is None:
        attention_mechanism = "mla" if "deepseek" in model_path.lower() else "gqa"
    else:
        attention_mechanism = attention_mechanism.lower()
        if attention_mechanism not in ("gqa", "mla"):
            raise NotImplementedError("attention_mechanism argument should be either gqa or mla")

    # Dataset-specific numerator bias b (see the formula in the README).
    if numerator_bias is None:
        if "mixed_text.jsonl" in data_path:
            numerator_bias = -24
        elif "Ch-FineWeb-Edu.jsonl" in data_path:
            numerator_bias = -18.7
        else:
            numerator_bias = -27
        print(f"numerator_bias is not designated, defaulting to {numerator_bias} based on the data_path")

    # Average original text size per token (UTF-8 bits/token) under this model's tokenizer.
    ts_results = calculate_text_size_per_token(data_path, model_path, target_token_length=max_sample_length)
    avg_ts = ts_results["mean_text_size"]
    for k, v in ts_results.items():
        print(f"{k}: {v}")

    # Average per-token negative log-likelihood (bits/token).
    nlls = calculate_negative_log_likelihood(
        model_path, data_path, max_sample_length,
        batch_size=batch_size, num_samples=ts_results["total_valid_lines"],
    )
    avg_nll = torch.nanmean(nlls).item()
    print(f"Average negative log-likelihood: {avg_nll}")

    # Theoretical per-token decode FLOPs from the model's config.json.
    cfg_path = model_path + "/config.json"
    if attention_mechanism == "gqa":
        flops_results = gqa_model_theoretical_flops(cfg_path, gen_len=max_sample_length)
    elif attention_mechanism == "mla":
        flops_results = mla_model_theoretical_flops(cfg_path, gen_len=max_sample_length)
    per_token_flops = flops_results["decode_total_TFLOPs"] * 1e12 / max_sample_length
    for k, v in flops_results.items():
        print(f"{k}: {v}")

    ic = (avg_ts - avg_nll + numerator_bias) / log2(per_token_flops)
    print(f"\nInformation capacity: {ic}")

    return ic


def main():
    parser = argparse.ArgumentParser(
        description="Compute the information capacity of a language model."
    )
    parser.add_argument("-m", "--model_path", type=str, required=True, help="Path to the model directory.")
    parser.add_argument("-d", "--data_path", type=str, required=True, help="Path to the dataset (JSONL format).")
    parser.add_argument("-l", "--max_sample_length", type=int, default=1024, help="Maximum token length for each sample.")
    parser.add_argument("-b", "--batch_size", type=int, default=1, help="Batch size for evaluation.")
    parser.add_argument("-n", "--numerator_bias", type=float, default=None,
                        help="Optional numerator bias. If not set, inferred automatically.")
    parser.add_argument("-a", "--attention_mechanism", type=str, choices=["gqa", "mla"], default=None,
                        help="Specify attention mechanism ('gqa' or 'mla'). If not set, inferred automatically.")

    args = parser.parse_args()

    calculate_information_capacity(
        model_path=args.model_path,
        data_path=args.data_path,
        max_sample_length=args.max_sample_length,
        batch_size=args.batch_size,
        numerator_bias=args.numerator_bias,
        attention_mechanism=args.attention_mechanism,
    )


if __name__ == "__main__":
    main()
flops.py ADDED
@@ -0,0 +1,439 @@
import json
from pathlib import Path
from typing import Union, Dict, Optional


def gqa_model_theoretical_flops(
    config_path: Union[str, Path],
    seq_len: int = 0,
    gen_len: int = 1024,
    batch_size: int = 1,
    prefill_logits: str = "all",  # "all" | "last" | "none"
) -> Dict[str, float]:
    """
    Compute theoretical FLOPs for an LLM with GQA given its Hugging Face config.json.

    Assumptions (dense Transformer, forward only):
      - 2 FLOPs per multiply-add.
      - Attention = dense GQA: Q & O project to d_model; K/V project to
        n_kv_heads * d_k, where d_k = d_model / n_heads.
      - Attention core cost includes QK^T and softmax(QK^T) @ V.
      - MLP = gated (SwiGLU-like): two "up" matmuls + one "down" matmul
        (with special cases for Llama-4 and gpt-oss).
      - LM head (final logits) included; at prefill, logits are counted for:
          * "all": logits for every prompt token (matches HF's default forward outputs),
          * "last": logits only for the last prompt token (as in some generation loops),
          * "none": if logits are never materialized at prefill.
        At decode, logits are computed at every step.

    Returns:
        dict with a detailed TFLOPs breakdown for prefill, decode, and totals.
    """
    # ---- load config ----
    config_path = str(config_path)
    if "Ruyi" in config_path:
        # Ruyi sizes share a config; normalize the path to the 7B config and
        # recover the parameter count from the path for the layer count below.
        import re
        pattern = re.compile(r'(\d+(?:\.\d+)?)\s*(?:B|billion)', re.IGNORECASE)
        match = pattern.search(config_path)
        config_path = config_path.replace(match.group(0), "7B")
    cfg_path = Path(config_path)
    if cfg_path.is_dir():
        cfg_path = cfg_path / "config.json"
    with open(cfg_path, "r") as f:
        cfg = json.load(f)
    if "gemma-3" in config_path:
        # gemma-3 nests the text config and omits some head counts; patch them in.
        import re
        pattern = re.compile(r'(\d+(?:\.\d+)?)\s*(?:B|billion)', re.IGNORECASE)
        match = pattern.search(config_path)
        param_count = float(match.group(1))
        if param_count >= 4:
            cfg = cfg["text_config"]
            cfg["vocab_size"] = 262208
            if param_count == 4:
                cfg["num_attention_heads"] = 8
                cfg["num_key_value_heads"] = 4
            elif param_count == 12:
                cfg["num_attention_heads"] = 16
                cfg["num_key_value_heads"] = 8
            elif param_count == 27:
                cfg["num_attention_heads"] = 32
                cfg["num_key_value_heads"] = 16
    if "Llama-4" in config_path:
        cfg = cfg["text_config"]

    # ---- required hyperparams ----
    d_model = int(cfg["hidden_size"])
    if "Ruyi" in config_path:
        n_layers = int(match.group(1)) * 4
    else:
        n_layers = int(cfg.get("num_hidden_layers", cfg.get("n_layer")))
    n_heads = int(cfg.get("num_attention_heads", cfg.get("n_head")))
    n_kv_heads = int(cfg.get("num_key_value_heads", n_heads))
    if "Llama-4" in config_path:
        # Llama-4 uses intermediate_size_mlp for the main MLP
        d_ff = cfg["intermediate_size_mlp"]
    elif ("Qwen1.5" in config_path or "Qwen2-" in config_path) and "B-A" in config_path:
        d_ff = cfg["intermediate_size"] + cfg["shared_expert_intermediate_size"]
    else:
        d_ff = int(cfg.get("intermediate_size", cfg.get("ffn_hidden_size")))
    vocab_size = int(cfg["vocab_size"])

    # per-head dimension (assume divisible)
    d_k = d_model // n_heads
    kv_dim = n_kv_heads * d_k

    B = batch_size
    L = seq_len
    T = gen_len

    # ---- helpers (FLOPs, not TFLOPs) ----
    # Projections per layer for a sequence of length L_tokens:
    #   Q: 2 * B * L * d_model * d_model
    #   O: same
    #   K, V: 2 * B * L * d_model * kv_dim each
    def proj_flops(L_tokens: int) -> int:
        q = 2 * B * L_tokens * d_model * d_model
        o = 2 * B * L_tokens * d_model * d_model
        k = 2 * B * L_tokens * d_model * kv_dim
        v = 2 * B * L_tokens * d_model * kv_dim
        return q + k + v + o

    # Attention core per layer.
    # Prefill (quadratic): QK^T + (softmax @ V) ≈ 4 * B * n_heads * L^2 * d_k
    # Decode (one step over cache length C): ≈ 4 * B * n_heads * C * d_k
    def attn_core_prefill_flops(L_tokens: int) -> int:
        return 4 * B * n_heads * (L_tokens ** 2) * d_k

    def attn_core_decode_flops(cache_len: int) -> int:
        return 4 * B * n_heads * cache_len * d_k

    # MLP per layer.
    # Two "up" matmuls + one "down" matmul: 6 * B * L * d_model * d_ff
    def mlp_flops(L_tokens: int) -> int:
        if "gpt-oss" in config_path:
            # gpt-oss uses no gate matmul (6 -> 4) and registers a per-expert intermediate size
            return 4 * B * L_tokens * d_model * d_ff * int(cfg["num_experts_per_tok"])
        elif "Llama-4" in config_path:
            # Llama-4 adds a 2-layer MLP without gating before the main MLP
            return B * L_tokens * d_model * (6 * d_ff + 4 * int(cfg["intermediate_size"]))
        else:
            return 6 * B * L_tokens * d_model * d_ff

    # LM head (final linear to vocab) for N tokens: 2 * B * N * d_model * vocab_size
    def lm_head_flops(num_tokens: int) -> int:
        return 2 * B * num_tokens * d_model * vocab_size

    # ---- prefill (length L) ----
    proj_prefill_per_layer = proj_flops(L)
    attn_prefill_per_layer = attn_core_prefill_flops(L)
    mlp_prefill_per_layer = mlp_flops(L)

    stack_prefill = n_layers * (proj_prefill_per_layer + attn_prefill_per_layer + mlp_prefill_per_layer)

    if prefill_logits == "all":
        lm_prefill = lm_head_flops(L)
    elif prefill_logits == "last":
        lm_prefill = lm_head_flops(1)
    elif prefill_logits == "none":
        lm_prefill = 0
    else:
        raise ValueError("prefill_logits must be one of {'all','last','none'}")

    prefill_total = stack_prefill + lm_prefill

    # ---- decode (T steps) ----
    # For each step, projections/MLP are for 1 new token.
    proj_decode_per_layer_per_step = proj_flops(1)
    mlp_decode_per_layer_per_step = mlp_flops(1)

    # Attention core sums over growing cache lengths: L, L+1, ..., L+T-1
    # Sum_{t=0..T-1} 4 * B * n_heads * (L + t) * d_k = 4 * B * n_heads * d_k * (T*L + T*(T-1)/2)
    attn_decode_per_layer_total = 4 * B * n_heads * d_k * (T * L + (T * (T - 1)) // 2)

    stack_decode = n_layers * (
        T * (proj_decode_per_layer_per_step + mlp_decode_per_layer_per_step) + attn_decode_per_layer_total
    )

    # Logits at each decode step
    lm_decode = lm_head_flops(T)

    decode_total = stack_decode + lm_decode

    # ---- packing results (TFLOPs) ----
    toT = lambda x: x / 1e12

    results = {
        # Inputs
        "batch_size": B,
        "seq_len": L,
        "gen_len": T,
        "hidden_size": d_model,
        "num_layers": n_layers,
        "num_heads": n_heads,
        "num_kv_heads": n_kv_heads,
        "intermediate_size": d_ff,
        "vocab_size": vocab_size,
        "prefill_logits_mode": prefill_logits,

        # Prefill breakdown
        "prefill_stack_TFLOPs": toT(stack_prefill),
        "prefill_proj_TFLOPs": toT(n_layers * proj_prefill_per_layer),
        "prefill_attn_core_TFLOPs": toT(n_layers * attn_prefill_per_layer),
        "prefill_mlp_TFLOPs": toT(n_layers * mlp_prefill_per_layer),
        "prefill_lm_head_TFLOPs": toT(lm_prefill),
        "prefill_total_TFLOPs": toT(prefill_total),

        # Decode breakdown
        "decode_stack_TFLOPs": toT(stack_decode),
        "decode_proj_TFLOPs": toT(n_layers * T * proj_decode_per_layer_per_step),
        "decode_attn_core_TFLOPs": toT(n_layers * attn_decode_per_layer_total),
        "decode_mlp_TFLOPs": toT(n_layers * T * mlp_decode_per_layer_per_step),
        "decode_lm_head_TFLOPs": toT(lm_decode),
        "decode_total_TFLOPs": toT(decode_total),

        # Totals
        "request_total_TFLOPs": toT(prefill_total + decode_total),
        "avg_decode_TFLOPs_per_token": toT(decode_total / max(T, 1)),
    }
    return results

def mla_model_theoretical_flops(
    config_path: Union[str, Path],
    seq_len: int = 0,
    gen_len: int = 1024,
    batch_size: int = 1,
    prefill_logits: str = "all",           # "all" | "last" | "none"
    attention_type: Optional[str] = None,  # "mha" | "mla" | None (auto-detect)
    mla_latents: Optional[int] = None,
    mla_mode: str = "reuse",               # "reuse" | "recompute"
) -> Dict[str, float]:
    """
    Compute theoretical FLOPs (TFLOPs) for DeepSeek-R1 (or similar) inference.

    Key points and assumptions:
      - Supports both classic dense multi-head attention (MHA) and DeepSeek's
        multi-head latent attention (MLA). MLA reduces the attention core from
        O(L^2) to O(L * M), where M is the number of latent tokens (per head or
        global, depending on the implementation); see the DeepSeek-V2/V3 papers.
        MLA admits two execution schemes: 'reuse' (compute latent KV once at
        prefill and reuse during decode) and 'recompute' (recompute/update
        latents per step). The cost models follow hardware-centric analyses.
      - MoE MLP: modeled as shared experts (always executed) plus
        `num_experts_per_tok` *activated* experts per token, as reported in the
        config. Separate FLOP entries are exposed for shared vs. activated experts.
      - Projection FLOPs follow the same convention as the GQA function above:
        2 FLOPs per multiply-add, with the same Q/K/V/O projection accounting.
        The attention *core* cost is replaced with MLA formulas when applicable.
      - Because MLA variants differ across implementations, `mla_latents` can be
        passed to set the latent length; if None, a conservative default is used.
      - All counts are forward-only inference; result units are TFLOPs.

    Parameters:
        mla_latents: pass a sensible value where known (e.g., 64, 128, 256).
            If None, defaults to the config's kv_lora_rank when present, else
            min(256, max(1, seq_len // 16)).
        mla_mode: "reuse" (default) counts a one-time cost to build latents at
            prefill, then cheap per-step decode attention against the smaller
            latent set. "recompute" recomputes compressed latents per decode
            step, yielding higher compute but a lower memory footprint (useful
            to model alternate execution strategies).
    """
    cfg_path = Path(config_path)
    if cfg_path.is_dir():
        cfg_path = cfg_path / "config.json"
    with open(cfg_path, "r") as f:
        cfg = json.load(f)

    # ---- required hyperparams ----
    d_model = int(cfg["hidden_size"])
    n_layers = int(cfg["num_hidden_layers"])
    n_heads = int(cfg["num_attention_heads"])
    n_kv_heads = int(cfg.get("num_key_value_heads", n_heads))
    d_ff = int(cfg.get("moe_intermediate_size", cfg.get("intermediate_size")))
    vocab_size = int(cfg["vocab_size"])

    # MoE-specific
    n_experts_total = int(cfg.get("n_routed_experts", cfg.get("num_experts", cfg.get("num_local_experts", 0))))
    n_shared_experts = int(cfg.get("n_shared_experts", 0))
    n_experts_per_tok = int(cfg.get("num_experts_per_tok", 0))

    # Detect/override attention type
    cfg_model_type = cfg.get("model_type", "").lower()
    if attention_type is None:
        # Default to MLA if the model type mentions deepseek or the config has MLA-related fields
        if "deepseek" in cfg_model_type or cfg.get("moa") or cfg.get("n_group") is not None:
            attention_type = "mla"
        else:
            attention_type = "mha"

    # MLA default latent length (tunable); papers/reports show M << L.
    if mla_latents is None:
        mla_latents = int(cfg.get("kv_lora_rank", min(256, max(1, seq_len // 16))))

    # per-head dimension (assume divisible)
    d_k = d_model // n_heads
    kv_dim = n_kv_heads * d_k

    B = batch_size
    L = seq_len
    T = gen_len

    # ---- helpers (FLOPs, NOT TFLOPs) ----
    # Linear projections per layer for a sequence of length L_tokens.
    # K/V keep the same "dense" projection accounting as the GQA function; MLA's
    # separate compression costs are modeled in attn_core_prefill_mla below.
    def proj_flops(L_tokens: int) -> int:
        q = 2 * B * L_tokens * d_model * d_model  # Wq: d_model x d_model
        o = 2 * B * L_tokens * d_model * d_model  # Wo: d_model x d_model (output projection)
        k = 2 * B * L_tokens * d_model * kv_dim
        v = 2 * B * L_tokens * d_model * kv_dim
        return q + k + v + o

    # Dense attention core (classic quadratic)
    def attn_core_prefill_mha(L_tokens: int) -> int:
        # approximate QK^T + softmax@V cost
        return 4 * B * n_heads * (L_tokens ** 2) * d_k

    def attn_core_decode_mha(cache_len: int) -> int:
        return 4 * B * n_heads * cache_len * d_k

    # MLA attention core (approximate): replace L^2 with L * M. Two terms:
    #   1) core: Q @ K_latent^T and softmax@V_latent ~ 4 * B * n_heads * L * M * d_k
    #   2) a one-time compression cost at prefill to build the latent K/V,
    #      approximated as ~ 2 * B * d_model * M (a compact, tunable approximation).
    # See the DeepSeek papers and hardware analyses for details.
    def attn_core_prefill_mla(L_tokens: int) -> int:
        M = mla_latents
        core = 4 * B * n_heads * L_tokens * M * d_k
        compress = int(2 * B * d_model * M)  # one-time compress cost (approximation)
        return core + compress

    def attn_core_decode_mla_reuse(L_tokens: int, T_steps: int) -> int:
        # With reused latents, each decode step attends one query token against M latent keys:
        # cost per step ~ 4 * B * n_heads * M * d_k
        return 4 * B * n_heads * d_k * (T_steps * mla_latents)

    def attn_core_decode_mla_recompute(L_tokens: int, T_steps: int) -> int:
        # Recomputing latents each step approaches the classic cost (worst case);
        # fall back to the MHA-like growing-cache sum as a conservative upper bound.
        return 4 * B * n_heads * d_k * (T_steps * L_tokens + (T_steps * (T_steps - 1)) // 2)

    # MLP costs: a single gated (SwiGLU-like) expert ~ 6 * B * L * d_model * d_ff
    def single_expert_flops(L_tokens: int) -> int:
        return 6 * B * L_tokens * d_model * d_ff

    # MoE MLP breakdown: shared experts run for every token; activated experts
    # (num_experts_per_tok) run per token via sparse routing. The small routing
    # bookkeeping cost is ignored here.
    def moe_mlp_flops_shared(L_tokens: int) -> int:
        return n_shared_experts * single_expert_flops(L_tokens)

    def moe_mlp_flops_activated(L_tokens: int) -> int:
        return n_experts_per_tok * single_expert_flops(L_tokens)

    # LM head
    def lm_head_flops(num_tokens: int) -> int:
        return 2 * B * num_tokens * d_model * vocab_size

    # ---- PREFILL (length L) ----
    proj_prefill_per_layer = proj_flops(L)

    if attention_type == "mha":
        attn_prefill_per_layer = attn_core_prefill_mha(L)
        mla_extra_prefill_per_layer = 0  # no extra MLA compress cost
    elif attention_type == "mla":
        attn_prefill_per_layer = attn_core_prefill_mla(L)
        # the compression cost is included in attn_core_prefill_mla as the 'compress' term
        mla_extra_prefill_per_layer = max(0, attn_prefill_per_layer - (4 * B * n_heads * (L ** 2) * d_k))
    else:
        raise ValueError("attention_type must be one of {'mha','mla'}")

    # MLP (MoE)
    mlp_prefill_shared_per_layer = moe_mlp_flops_shared(L)
    mlp_prefill_activated_per_layer = moe_mlp_flops_activated(L)
    mlp_prefill_per_layer = mlp_prefill_shared_per_layer + mlp_prefill_activated_per_layer

    stack_prefill = n_layers * (proj_prefill_per_layer + attn_prefill_per_layer + mlp_prefill_per_layer)

    if prefill_logits == "all":
        lm_prefill = lm_head_flops(L)
    elif prefill_logits == "last":
        lm_prefill = lm_head_flops(1)
    elif prefill_logits == "none":
        lm_prefill = 0
    else:
        raise ValueError("prefill_logits must be one of {'all','last','none'}")

    prefill_total = stack_prefill + lm_prefill

    # ---- DECODE (T steps) ----
    proj_decode_per_layer_per_step = proj_flops(1)
    mlp_decode_per_layer_per_step_shared = moe_mlp_flops_shared(1)
    mlp_decode_per_layer_per_step_activated = moe_mlp_flops_activated(1)
    mlp_decode_per_layer_per_step = mlp_decode_per_layer_per_step_shared + mlp_decode_per_layer_per_step_activated

    if attention_type == "mha":
        # attention grows with the cache: L, L+1, ..., L+T-1
        attn_decode_per_layer_total = 4 * B * n_heads * d_k * (T * L + (T * (T - 1)) // 2)
        mla_extra_decode_term = 0
    else:  # mla
        if mla_mode == "reuse":
            attn_decode_per_layer_total = attn_core_decode_mla_reuse(L, T)
            mla_extra_decode_term = 0  # compression cost already accounted for in prefill
        elif mla_mode == "recompute":
            attn_decode_per_layer_total = attn_core_decode_mla_recompute(L, T)
            # recompute pays a compress-like cost during decode as well;
            # approximate by adding the same compress cost per layer per step (conservative)
            per_step_compress = int(2 * B * d_model * mla_latents)
            mla_extra_decode_term = n_layers * (per_step_compress * T)
        else:
            raise ValueError("mla_mode must be one of {'reuse','recompute'}")

    stack_decode = n_layers * (
        T * (proj_decode_per_layer_per_step + mlp_decode_per_layer_per_step) + attn_decode_per_layer_total
    ) + mla_extra_decode_term

    lm_decode = lm_head_flops(T)
    decode_total = stack_decode + lm_decode

    # ---- pack results (TFLOPs) ----
    toT = lambda x: x / 1e12

    results = {
        # Inputs / config readout
        "batch_size": B,
        "seq_len": L,
        "gen_len": T,
        "hidden_size": d_model,
        "num_layers": n_layers,
        "num_heads": n_heads,
        "num_kv_heads": n_kv_heads,
        "intermediate_size": d_ff,
        "vocab_size": vocab_size,
        "num_experts_total": n_experts_total,
        "num_shared_experts": n_shared_experts,
        "num_experts_per_tok": n_experts_per_tok,
        "attention_type": attention_type,
        "mla_latents": mla_latents if attention_type == "mla" else None,
        "mla_mode": mla_mode if attention_type == "mla" else None,
        "prefill_logits_mode": prefill_logits,

        # Prefill breakdown
        "prefill_stack_TFLOPs": toT(stack_prefill),
        "prefill_proj_TFLOPs": toT(n_layers * proj_prefill_per_layer),
        "prefill_attn_core_TFLOPs": toT(n_layers * attn_prefill_per_layer),
        "prefill_mlp_shared_TFLOPs": toT(n_layers * mlp_prefill_shared_per_layer),
        "prefill_mlp_activated_TFLOPs": toT(n_layers * mlp_prefill_activated_per_layer),
        "prefill_mlp_TFLOPs": toT(n_layers * mlp_prefill_per_layer),
        "prefill_lm_head_TFLOPs": toT(lm_prefill),
        "prefill_total_TFLOPs": toT(prefill_total),

        # Decode breakdown
        "decode_stack_TFLOPs": toT(stack_decode),
        "decode_proj_TFLOPs": toT(n_layers * T * proj_decode_per_layer_per_step),
        "decode_attn_core_TFLOPs": toT(n_layers * attn_decode_per_layer_total),
        "decode_mlp_shared_TFLOPs": toT(n_layers * T * mlp_decode_per_layer_per_step_shared),
        "decode_mlp_activated_TFLOPs": toT(n_layers * T * mlp_decode_per_layer_per_step_activated),
        "decode_mlp_TFLOPs": toT(n_layers * T * mlp_decode_per_layer_per_step),
        "decode_lm_head_TFLOPs": toT(lm_decode),
        "decode_total_TFLOPs": toT(decode_total),

        # Totals
        "request_total_TFLOPs": toT(prefill_total + decode_total),
        "avg_decode_TFLOPs_per_token": toT(decode_total / max(T, 1)),
    }

    return results
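As a quick sanity check of the GQA accounting above, the following sketch writes a minimal, roughly Llama-2-7B-shaped `config.json` (the hyperparameters are illustrative, not taken from this repo) and reads off the per-token decode cost:

```python
# Sketch: exercise gqa_model_theoretical_flops with a toy config.
import json
from flops import gqa_model_theoretical_flops

cfg = {  # roughly Llama-2-7B-shaped; values are illustrative
    "hidden_size": 4096,
    "num_hidden_layers": 32,
    "num_attention_heads": 32,
    "num_key_value_heads": 32,
    "intermediate_size": 11008,
    "vocab_size": 32000,
}
with open("toy_config.json", "w") as f:
    json.dump(cfg, f)

res = gqa_model_theoretical_flops("toy_config.json", gen_len=1024)
per_token = res["decode_total_TFLOPs"] * 1e12 / 1024
print(per_token, "FLOPs per decoded token")
# For short contexts this is on the order of 2 x (non-embedding parameters),
# matching the usual scaling-law rule of thumb.
```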
likelihood.py ADDED
@@ -0,0 +1,89 @@
import json
from math import ceil

import torch
from torch.utils.data import IterableDataset, DataLoader
from transformers import AutoTokenizer, AutoModelForCausalLM
from tqdm import tqdm


class JsonlIterableDataset(IterableDataset):
    """Sequential streaming dataset for JSONL lines of the form {"text": "..."}."""

    def __init__(self, jsonl_path: str, tokenizer, target_token_length: int):
        super().__init__()
        self.jsonl_path = jsonl_path
        self.tokenizer = tokenizer
        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
        self.target_token_length = target_token_length

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        if worker_info is None:
            # Single-process data loading
            start, stride = 0, 1
        else:
            # Multi-worker: split lines evenly across workers
            start = worker_info.id
            stride = worker_info.num_workers

        with open(self.jsonl_path, "r", encoding="utf-8") as f:
            for idx, line in enumerate(f):
                if idx % stride != start:
                    continue
                data = json.loads(line)
                text = data["text"]

                tokens = self.tokenizer(
                    text,
                    truncation=True,
                    padding="max_length",
                    max_length=self.target_token_length,
                    return_tensors="pt",
                )
                yield {
                    "input_ids": tokens["input_ids"].squeeze(0),
                    "attention_mask": tokens["attention_mask"].squeeze(0),
                }


def calculate_negative_log_likelihood(
    model_path: str,
    jsonl_path: str,
    target_token_length: int,
    batch_size: int = 8,
    device: str = "cuda" if torch.cuda.is_available() else "cpu",
    num_workers: int = 2,
    num_samples: int = None,
) -> torch.Tensor:
    """
    Streaming, batched NLL computation for a large JSONL dataset using deterministic sequential access.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, device_map="auto", torch_dtype="auto",
        attn_implementation="flash_attention_2" if "TinyLlama" not in model_path else "sdpa",
        trust_remote_code=True,
    )
    model.eval()

    dataset = JsonlIterableDataset(jsonl_path, tokenizer, target_token_length)
    dataloader = DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)

    entropies = []

    for i, batch in enumerate(tqdm(
        dataloader,
        total=ceil(num_samples / batch_size) if num_samples is not None else None,
        desc=f"Calculating Entropy for {model_path.split('/')[-1]}",
    )):
        if i % 100 == 0:
            torch.cuda.empty_cache()

        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)

        with torch.no_grad():
            outputs = model(input_ids=input_ids, attention_mask=attention_mask, use_cache=False)
            logits = outputs.logits  # (batch, seq_len, vocab_size)

        # Per-token NLL in bits: position i predicts token i+1, so align
        # probabilities at positions [0, L-1) with tokens at positions [1, L).
        probs = torch.softmax(logits[:, :, :len(tokenizer)].to(dtype=torch.float32), dim=-1)
        effective_probs = torch.gather(probs[:, :-1, :], -1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        entropy = -torch.log2(effective_probs)
        # Mask out padded positions so they are ignored by nanmean downstream.
        entropy[attention_mask[:, 1:] == 0] = torch.nan

        entropies.append(entropy.cpu())

    return torch.cat(entropies, dim=0)
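To see what the gather step computes, here is a tiny self-contained sketch (random logits, no model) of the per-token negative log-likelihood extraction used above:

```python
# Sketch: per-token NLL (bits) from logits, as in calculate_negative_log_likelihood.
import torch

batch, seq_len, vocab = 2, 8, 50
logits = torch.randn(batch, seq_len, vocab)
input_ids = torch.randint(vocab, (batch, seq_len))

probs = torch.softmax(logits.float(), dim=-1)
# Position i predicts token i+1: align probs[:, :-1] with input_ids[:, 1:].
pred = torch.gather(probs[:, :-1, :], -1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
nll_bits = -torch.log2(pred)          # symbol length of each token under entropy coding
print(nll_bits.shape)                 # (batch, seq_len - 1)
print(torch.nanmean(nll_bits).item()) # average bits per token
```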
text_size.py ADDED
@@ -0,0 +1,97 @@
import json
import numpy as np
from transformers import AutoTokenizer
from multiprocessing import Pool, cpu_count
import tqdm

# --- Global Tokenizer Initialization ---
# Each worker process gets its own tokenizer instance; the variable is global
# so that it is visible to the pool's initializer and the worker function.
tokenizer = None


def init_tokenizer(model_path, target_token_length=1024):
    """Initializer for each worker process."""
    global tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    tokenizer.target_token_length = target_token_length


# --- Worker Function ---
def process_line(line):
    """
    Process a single line from the JSONL file and return its text size in
    UTF-8 bits per token, or None if the line is invalid.
    """
    try:
        data = json.loads(line)
        text = data.get("text")

        if text and isinstance(text, str):
            # The global tokenizer, initialized for this process, is used here
            ids = tokenizer.encode(text, truncation=True, max_length=tokenizer.target_token_length)
            ids = ids[1:]  # drop the first token (typically BOS)
            s = tokenizer.decode(ids)
            return len(s.encode('utf-8')) * 8 / len(ids)
        else:
            # Lines without a valid 'text' field are skipped
            return None
    except (json.JSONDecodeError, AttributeError):
        # Malformed JSON or other errors are skipped
        return None


def calculate_text_size_per_token(file_path, model_path, target_token_length=1024):
    """
    Calculate per-token text size statistics in a parallelized manner.

    Args:
        file_path (str): Path to the JSONL file.
        model_path (str): Path or hub id of the model whose tokenizer is used.
        target_token_length (int): Maximum token length per sample.
    """
    init_tokenizer(model_path, target_token_length)

    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            lines = f.readlines()
    except FileNotFoundError:
        print(f"Error: The file '{file_path}' was not found.")
        return
    except Exception as e:
        print(f"An unexpected error occurred while reading the file: {e}")
        return

    if not lines:
        print("File is empty. No statistics to calculate.")
        return

    # Use half of the available cores (at least one)
    num_processes = max(1, cpu_count() // 2)
    print(f"Starting parallel processing with {num_processes} workers...")

    # Create a pool of worker processes.
    # The initializer runs `init_tokenizer` once for each worker process.
    with Pool(processes=num_processes, initializer=init_tokenizer, initargs=(model_path, target_token_length)) as pool:
        # Use imap_unordered for efficiency, as order doesn't matter.
        results = list(tqdm.tqdm(pool.imap_unordered(process_line, lines), total=len(lines), desc="Processing lines"))

    # Filter out the None results from failed lines
    token_counts = [count for count in results if count is not None]

    if not token_counts:
        print("No valid text lines were found to calculate statistics.")
        return

    # Calculate statistics
    counts_array = np.array(token_counts)

    return {
        "file_path": file_path,
        "tokenizer": tokenizer.name_or_path,
        "vocab_size": len(tokenizer),
        "max_sample_length": target_token_length,
        "total_valid_lines": len(counts_array),
        "mean_text_size": round(np.mean(counts_array), 2),
        "min_text_size": np.min(counts_array),
        "max_text_size": np.max(counts_array),
    }
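For a quick check of the statistics this produces, a minimal sketch follows (it writes a throwaway JSONL file; `gpt2` is only an illustrative tokenizer, and any local model path works):

```python
# Sketch: exercise calculate_text_size_per_token on a tiny throwaway dataset.
import json
from text_size import calculate_text_size_per_token

if __name__ == "__main__":  # guard needed since the function spawns worker processes
    with open("tiny.jsonl", "w", encoding="utf-8") as f:
        for t in ["Hello world.", "Information capacity correlates with intelligence."]:
            f.write(json.dumps({"text": t}) + "\n")

    stats = calculate_text_size_per_token("tiny.jsonl", "gpt2", target_token_length=1024)
    print(stats["mean_text_size"], "bits per token on average")
```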