lynae-1219 committed on
Commit
cdfb95b
·
verified ·
1 Parent(s): 359c7d6

Delete folder per_layer_oi_cf with huggingface_hub

per_layer_oi_cf/README.md DELETED
@@ -1,42 +0,0 @@
- # Per-Layer OI/CF Roofline — Llama-3.1-8B on H100 (Section 4.4)
-
- Per-component operational intensity (OI) and capacity footprint (CF) roofline
- decomposition for a Llama-3.1-8B decoder layer on a single H100-SXM5-80GB
- (bf16, TP=1). Backs Section 4.4 "Layer-Level Roofline Decomposition" and
- Figure `per_layer_oi_cf_8b_h100.pdf`.
-
- ## Files
-
- - `analytical/` — Per-component OI computed from model dimensions.
-   B=1 prefill (C=1, memory-limited) and B=80 prefill (C=80, compute-saturated).
- - `ncu/` — NCU v2024.3.2 kernel traces (`--set full`) on 8 decoder layers
-   (indices 1–8) at bs=1 prefill. Clean run, 52 µs total GPU time across
-   8 unique kernel types. Reproduce with:
-   ```
-   CUDA_VISIBLE_DEVICES=0 ncu --set full --csv --profile-from-start no \
-     --target-processes all -o output \
-     python3 scripts/roofline/_ncu_target_simple.py
-   ```
- - `per_layer_oi_cf_8b_h100.pdf` — Two-panel figure: (a) per-component OI on
-   the H100 roofline, (b) capacity footprint sweep C=1..80.
- - `per_layer_table.tex` — LaTeX table classifying each component as
-   compute-bound (OI > 295 FLOP/byte) or bandwidth-bound.
-
- ## Key findings
-
- | Component | Wt (MB) | OI (C=1) | OI (C=80) | Bound |
- |-----------|---------|----------|-----------|-------|
- | GEMM projections (Q, K, V, O, Gate, Up, Down) | 436 | 315–441 | 803–2956 | COMPUTE |
- | RMSNorm (in/post), RoPE, SiLU, Gate×Up | — | 0–1 | 0–1 | BANDWIDTH |
- | Flash attention | — | 64 | 64 | BANDWIDTH |
- | **Layer total** | **436** | **330** | **878** | **COMPUTE** |
-
- NCU validation: at C=1, GEMMs account for only 10.8% of GPU time while memory
- ops take 65%, consistent with a layer that is only marginally compute-bound
- (OI = 330 vs. ridge = 295).
-
- The system becomes capacity-bound at 32K context for C ≥ 16, where KV cache
- plus weights exceed the 80 GB of HBM.
-
- ## Citation
-
- AgentPerfBench: A Benchmarking and Evaluation Suite for Inference Performance
- of Agentic LLMs. NeurIPS 2026.
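
The README's OI figures can be rederived from model dimensions alone. A minimal sketch, assuming H100-SXM5 peaks of roughly 989 TFLOP/s dense bf16 and 3.35 TB/s HBM3 (which yield the 295 FLOP/byte ridge) and counting weight plus activation traffic at 2 bytes/element:

```python
# Assumed H100-SXM5 peaks; not taken from the repo's scripts.
PEAK_FLOPS = 989e12   # dense bf16 TFLOP/s
PEAK_BW = 3.35e12     # HBM3 bytes/s

B, S = 1, 512                      # concurrency C and prefill length
d, kv_h, head_dim = 4096, 8, 128   # Llama-3.1-8B dimensions from analytical/

def gemm_oi(m, k, n, dtype_bytes=2):
    """OI of an (m,k) x (k,n) GEMM: FLOPs over weight + input + output bytes."""
    flops = 2 * m * k * n
    bytes_moved = dtype_bytes * (k * n + m * k + m * n)
    return flops / bytes_moved

ridge = PEAK_FLOPS / PEAK_BW                 # ~295 FLOP/byte
oi_q = gemm_oi(B * S, d, d)                  # q_proj at B=1 -> 409.6
oi_k = gemm_oi(B * S, d, kv_h * head_dim)    # k_proj at B=1 -> 315.1
print(round(ridge), round(oi_q, 1), round(oi_k, 1))
```

Both GEMM values land just above the ridge at C=1, which is why the layer total (OI = 330) is only marginally compute-bound.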
 
per_layer_oi_cf/analytical/llama8b_analytical_prefill_B1.json DELETED
@@ -1,110 +0,0 @@
- {
-   "B": 1,
-   "S": 512,
-   "phase": "prefill",
-   "d": 4096,
-   "h": 32,
-   "kv_h": 8,
-   "intermediate": 14336,
-   "head_dim": 128,
-   "q_len": 512,
-   "kv_len": 512,
-   "total_flops": 227707715584,
-   "total_bytes": 689963008,
-   "oi": 330.0,
-   "ridge_point": 295.0,
-   "overall_bound": "compute",
-   "components": [
-     {
-       "name": "rmsnorm_in",
-       "flops": 10485760,
-       "bytes": 8388608,
-       "oi": 1.2,
-       "bound": "memory"
-     },
-     {
-       "name": "q_proj",
-       "flops": 17179869184,
-       "bytes": 41943040,
-       "oi": 409.6,
-       "bound": "compute"
-     },
-     {
-       "name": "k_proj",
-       "flops": 4294967296,
-       "bytes": 13631488,
-       "oi": 315.1,
-       "bound": "compute"
-     },
-     {
-       "name": "v_proj",
-       "flops": 4294967296,
-       "bytes": 13631488,
-       "oi": 315.1,
-       "bound": "compute"
-     },
-     {
-       "name": "rope",
-       "flops": 16777216,
-       "bytes": 12582912,
-       "oi": 1.3,
-       "bound": "memory"
-     },
-     {
-       "name": "attention",
-       "flops": 4294967296,
-       "bytes": 67108864,
-       "oi": 64.0,
-       "bound": "memory"
-     },
-     {
-       "name": "o_proj",
-       "flops": 17179869184,
-       "bytes": 41943040,
-       "oi": 409.6,
-       "bound": "compute"
-     },
-     {
-       "name": "rmsnorm_post",
-       "flops": 10485760,
-       "bytes": 8388608,
-       "oi": 1.2,
-       "bound": "memory"
-     },
-     {
-       "name": "gate_proj",
-       "flops": 60129542144,
-       "bytes": 136314880,
-       "oi": 441.1,
-       "bound": "compute"
-     },
-     {
-       "name": "up_proj",
-       "flops": 60129542144,
-       "bytes": 136314880,
-       "oi": 441.1,
-       "bound": "compute"
-     },
-     {
-       "name": "silu",
-       "flops": 29360128,
-       "bytes": 29360128,
-       "oi": 1.0,
-       "bound": "memory"
-     },
-     {
-       "name": "gate_mul",
-       "flops": 7340032,
-       "bytes": 44040192,
-       "oi": 0.2,
-       "bound": "memory"
-     },
-     {
-       "name": "down_proj",
-       "flops": 60129542144,
-       "bytes": 136314880,
-       "oi": 441.1,
-       "bound": "compute"
-     }
-   ]
- }
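
The B=1 component entries follow directly from the header fields. A sanity-check sketch for `gate_proj`, assuming bf16 (2 bytes/element) and weight + input + output traffic:

```python
# Recompute the gate_proj entry of the B=1 file from its header fields.
B, S, d, inter = 1, 512, 4096, 14336

flops = 2 * B * S * d * inter                       # (B*S, d) x (d, inter) GEMM
bytes_moved = 2 * (d * inter + B * S * d + B * S * inter)
print(flops, bytes_moved, round(flops / bytes_moved, 1))
# -> 60129542144 136314880 441.1, matching the JSON above
```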
 
per_layer_oi_cf/analytical/llama8b_analytical_prefill_B80.json DELETED
@@ -1,110 +0,0 @@
- {
-   "B": 80,
-   "S": 512,
-   "phase": "prefill",
-   "d": 4096,
-   "h": 32,
-   "kv_h": 8,
-   "intermediate": 14336,
-   "head_dim": 128,
-   "q_len": 512,
-   "kv_len": 512,
-   "total_flops": 18216617246720,
-   "total_bytes": 20736638976,
-   "oi": 878.5,
-   "ridge_point": 295.0,
-   "overall_bound": "compute",
-   "components": [
-     {
-       "name": "rmsnorm_in",
-       "flops": 838860800,
-       "bytes": 671088640,
-       "oi": 1.2,
-       "bound": "memory"
-     },
-     {
-       "name": "q_proj",
-       "flops": 1374389534720,
-       "bytes": 704643072,
-       "oi": 1950.5,
-       "bound": "compute"
-     },
-     {
-       "name": "k_proj",
-       "flops": 343597383680,
-       "bytes": 427819008,
-       "oi": 803.1,
-       "bound": "compute"
-     },
-     {
-       "name": "v_proj",
-       "flops": 343597383680,
-       "bytes": 427819008,
-       "oi": 803.1,
-       "bound": "compute"
-     },
-     {
-       "name": "rope",
-       "flops": 1342177280,
-       "bytes": 1006632960,
-       "oi": 1.3,
-       "bound": "memory"
-     },
-     {
-       "name": "attention",
-       "flops": 343597383680,
-       "bytes": 5368709120,
-       "oi": 64.0,
-       "bound": "memory"
-     },
-     {
-       "name": "o_proj",
-       "flops": 1374389534720,
-       "bytes": 704643072,
-       "oi": 1950.5,
-       "bound": "compute"
-     },
-     {
-       "name": "rmsnorm_post",
-       "flops": 838860800,
-       "bytes": 671088640,
-       "oi": 1.2,
-       "bound": "memory"
-     },
-     {
-       "name": "gate_proj",
-       "flops": 4810363371520,
-       "bytes": 1627389952,
-       "oi": 2955.9,
-       "bound": "compute"
-     },
-     {
-       "name": "up_proj",
-       "flops": 4810363371520,
-       "bytes": 1627389952,
-       "oi": 2955.9,
-       "bound": "compute"
-     },
-     {
-       "name": "silu",
-       "flops": 2348810240,
-       "bytes": 2348810240,
-       "oi": 1.0,
-       "bound": "memory"
-     },
-     {
-       "name": "gate_mul",
-       "flops": 587202560,
-       "bytes": 3523215360,
-       "oi": 0.2,
-       "bound": "memory"
-     },
-     {
-       "name": "down_proj",
-       "flops": 4810363371520,
-       "bytes": 1627389952,
-       "oi": 2955.9,
-       "bound": "compute"
-     }
-   ]
- }
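
At B=80 the per-GEMM weight traffic is amortized across the whole batch, which is why `q_proj` jumps from OI 409.6 to 1950.5. A sketch recomputing that entry from the header fields (bf16, 2 bytes/element):

```python
# Recompute the q_proj entry of the B=80 file: the 32 MB weight is read once
# while activation traffic scales with B*S.
B, S, d = 80, 512, 4096

flops = 2 * B * S * d * d                           # (B*S, d) x (d, d) GEMM
bytes_moved = 2 * (d * d + B * S * d + B * S * d)   # weight + input + output
print(flops, bytes_moved, round(flops / bytes_moved, 1))
# -> 1374389534720 704643072 1950.5, matching the JSON above
```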
 
per_layer_oi_cf/ncu/Llama-3.1-8B_prefill_bs1_8layers.csv DELETED
The diff for this file is too large to render.
 
per_layer_oi_cf/ncu/Llama-3.1-8B_prefill_bs1_8layers.ncu-rep DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6feef6e33821aee9fa5f0a3cc174cba6947106644f3ebe77f08e58fc98e69e1e
- size 72020417
 
per_layer_oi_cf/ncu/Llama-3.1-8B_prefill_bs1_v2.csv DELETED
The diff for this file is too large to render.
 
per_layer_oi_cf/ncu/Llama-3.1-8B_prefill_bs1_v2.ncu-rep DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a248061d85e6ce523e51d258fc50fad7140e72eeb6cd5dce468839a289c3517e
- size 110769455
 
per_layer_oi_cf/per_layer_oi_cf_8b_h100.pdf DELETED
Binary file (39.8 kB)
 
per_layer_oi_cf/per_layer_table.tex DELETED
@@ -1,29 +0,0 @@
- \begin{table}[t]
- \centering
- \caption{Per-component operational intensity for a Llama-3.1-8B decoder layer on H100 (bf16, TP=1). Ridge = 295 FLOP/byte; C denotes concurrency. Comp = compute-bound, BW = memory-bandwidth-bound. The system additionally becomes capacity-bound at 8K+ contexts once the KV cache exceeds HBM.}
- \label{tab:per-component-oi}
- \small
- \begin{tabular}{lrrrrr}
- \toprule
- Component & Wt (MB) & OI (C=1) & OI (C=80) & Bound (C=1) & Bound (C=80) \\
- \midrule
- RMSNorm (in) & \textemdash & 1 & 1 & BW & BW \\
- Q & 34 & 410 & 1950 & Comp & Comp \\
- K & 8 & 315 & 803 & Comp & Comp \\
- V & 8 & 315 & 803 & Comp & Comp \\
- RoPE & \textemdash & 1 & 1 & BW & BW \\
- Attn & \textemdash & 64 & 64 & BW & BW \\
- O & 34 & 410 & 1950 & Comp & Comp \\
- RMSNorm (post) & \textemdash & 1 & 1 & BW & BW \\
- Gate & 117 & 441 & 2956 & Comp & Comp \\
- Up & 117 & 441 & 2956 & Comp & Comp \\
- SiLU & \textemdash & 1 & 1 & BW & BW \\
- Gate$\times$Up & \textemdash & 0 & 0 & BW & BW \\
- Down & 117 & 441 & 2956 & Comp & Comp \\
- \midrule
- \textbf{Layer total} & 436 & 330 & 878 & Comp & Comp \\
- \bottomrule
- \end{tabular}
- \vspace{3pt}
- {\footnotesize KV capacity thresholds: 8K tok $\rightarrow$ C$\ge$57; 32K tok $\rightarrow$ C$\ge$14; 128K tok $\rightarrow$ C$\ge$3.}
- \end{table}
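
The footnote's capacity thresholds follow from the KV-cache footprint per token. A rough sketch, assuming 32 layers, 8 KV heads of head_dim 128 in bf16, 16 GB of bf16 weights, and no allowance for runtime reservations or activation workspace (which is why the exact thresholds in the footnote differ slightly):

```python
# Per-token KV bytes: K and V each store kv_h * head_dim values per layer.
layers, kv_h, head_dim, dtype_bytes = 32, 8, 128, 2
kv_per_token = 2 * layers * kv_h * head_dim * dtype_bytes   # 131072 B = 128 KiB

hbm, weights = 80e9, 16e9   # H100 HBM; 8B params x 2 bytes (bf16)
for ctx in (8192, 32768, 131072):
    c_fit = int((hbm - weights) // (ctx * kv_per_token))
    print(f"{ctx} tok: ~{c_fit} sequences fit before going capacity-bound")
```

With realistic memory reservations subtracted, these upper bounds tighten toward the footnote's C thresholds.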