Dataset Preview

| context_tokens (int64) | cache_type (string) | prompt_tps (float64) | gen_tps (float64) | rss_gb (float64) | notes (string) |
|---:|---|---:|---:|---:|---|
| 3,500 | q8_0 | 30.5 | 14.2 | null | quick initial test |
| 8,192 | f16 | 371.3 | 14.7 | 1.25 | null |
| 8,192 | q8_0 | null | null | 1.34 | speed not measured at this context |
| 8,192 | q4_0 | 363.4 | 14.2 | 1.34 | null |
| 16,384 | f16 | 360.7 | 13.9 | 1.4 | null |
| 16,384 | q8_0 | null | null | 1.49 | speed not measured at this context |
| 16,384 | q4_0 | 346.2 | 12.7 | 1.49 | null |
| 32,768 | f16 | 328.3 | 13.5 | 1.59 | null |
| 32,768 | q8_0 | null | null | 1.69 | speed not measured at this context |
| 32,768 | q4_0 | 316.9 | 11 | 1.69 | null |
| 65,536 | f16 | 282.7 | 13.3 | 1.94 | null |
| 65,536 | q4_0 | 21.3 | 8.6 | 2.06 | CLIFF — 92.5% prompt tps collapse |
| 114,688 | q4_0 | 21.3 | 8.6 | null | f16 not testable at this context size |
| 0 | f16 | null | null | null | KV buffer from llama.cpp verbose, baseline GPU from nvidia-smi |
| 1,493 | f16 | 923.2 | 45.2 | null | null |
| 5,916 | f16 | 1,210.8 | 44.7 | null | null |
| 11,814 | f16 | 1,187.8 | 44.9 | null | null |
| 23,610 | f16 | 1,153.4 | 44.6 | null | null |
| 110,019 | f16 | 815 | 38 | null | 64K+ context |
| 0 | q8_0 | null | null | null | 47% smaller KV buffer vs f16 |
| 1,493 | q8_0 | 925.1 | 45.3 | null | null |
| 5,916 | q8_0 | 1,206.7 | 44.9 | null | null |
| 11,814 | q8_0 | 1,183.9 | 42.9 | null | null |
| 23,610 | q8_0 | 1,149.1 | 39.7 | null | null |
| 110,019 | q8_0 | 810.2 | 25 | null | 64K+ context — gen speed degraded 34% |
| 0 | q4_0 | null | null | null | 72% smaller KV buffer vs f16 |
| 1,493 | q4_0 | 925.7 | 45.6 | null | null |
| 5,916 | q4_0 | 1,205.8 | 45 | null | null |
| 11,814 | q4_0 | 1,191.2 | 42.7 | null | null |
| 23,610 | q4_0 | 1,151.9 | 39.3 | null | null |
| 110,019 | q4_0 | 812.6 | 24 | null | 64K+ context — gen speed degraded 37% |
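
The two CSVs in this repository intentionally have different schemas (the v1 file reports rss_gb; the v3 file reports kv_buffer_mib and gpu_mem_mib instead), so they cannot be merged into one table. A minimal loading sketch, assuming pandas and huggingface_hub are installed:

```python
# Load each results file separately; their columns differ by design
# (v1: rss_gb; v3: kv_buffer_mib, gpu_mem_mib), so they cannot be
# concatenated under a single schema as-is.
import pandas as pd
from huggingface_hub import hf_hub_download

REPO = "memoriant/dgx-spark-kv-cache-benchmark"

v1 = pd.read_csv(hf_hub_download(REPO, "data/benchmark_results.csv", repo_type="dataset"))
v3 = pd.read_csv(hf_hub_download(REPO, "data/benchmark_results_v3_complete.csv", repo_type="dataset"))

print(v3.columns.tolist())  # context_tokens, cache_type, kv_buffer_mib, gpu_mem_mib, ...
```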

KV Cache Quantization on NVIDIA DGX Spark GB10

Corrected benchmarks (v3, April 2026) — KV cache quantization behavior on the NVIDIA DGX Spark's GB10 Grace Blackwell unified memory architecture.

Author: Nathan Maine, Memoriant Inc.
Date: March 2026, corrected April 2026
Hardware: NVIDIA DGX Spark (GB10, compute 12.1, 128 GB unified memory)

Correction Notice: The original v1 benchmarks (March 31) contained methodology errors: memory was measured via process RSS, which does not capture unified-memory allocations, and some throughput data came from failed requests. v3 uses nvidia-smi plus llama.cpp internal reporting. See CORRECTION-NOTICE.md for full details. Credit to u/audioen on r/LocalLLaMA for identifying the RSS measurement flaw.


TL;DR

KV cache quantization on DGX Spark GB10 works as expected:

  • q4_0 saves 72% KV buffer memory (216 MiB vs 768 MiB for f16)
  • q8_0 saves 47% KV buffer memory (408 MiB vs 768 MiB for f16)
  • Prompt throughput is unaffected by cache quantization at all context lengths
  • Generation throughput degrades ~37% at 110K context with q4_0 (24 tps vs 38 tps for f16)

Memory (Corrected — nvidia-smi + llama.cpp internals)

| Cache Type | KV Buffer (llama.cpp) | Total GPU (nvidia-smi) | Savings vs f16 |
|---|---:|---:|---|
| f16 | 768 MiB | 23,092 MiB | baseline |
| q8_0 | 408 MiB | 22,732 MiB | -360 MiB (-47% KV) |
| q4_0 | 216 MiB | 22,540 MiB | -552 MiB (-72% KV) |

At 110K context:

| Cache Type | GPU Memory | vs f16 |
|---|---:|---|
| f16 | 23,116 MiB | baseline |
| q8_0 | 22,856 MiB | -260 MiB |
| q4_0 | 22,664 MiB | -452 MiB |
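
These savings are exactly what the ggml storage formats predict: f16 stores 2 bytes per element, q8_0 packs 32 elements into a 34-byte block (fp16 scale + 32 int8 values), and q4_0 packs 32 elements into an 18-byte block (fp16 scale + 16 bytes of 4-bit pairs). A quick check against the measured buffers:

```python
# Check the measured KV-buffer savings against ggml block-format sizes.
bytes_per_elem = {"f16": 2.0, "q8_0": 34 / 32, "q4_0": 18 / 32}

f16_buffer_mib = 768  # measured f16 KV buffer (table above)
for ctype, bpe in bytes_per_elem.items():
    predicted_mib = f16_buffer_mib * bpe / bytes_per_elem["f16"]
    saving = 1 - bpe / bytes_per_elem["f16"]
    print(f"{ctype}: {predicted_mib:.0f} MiB predicted, {saving:.0%} saving")
# f16: 768 MiB, 0% | q8_0: 408 MiB, 47% | q4_0: 216 MiB, 72%
```

The predicted 768 / 408 / 216 MiB match the measured buffers exactly, which suggests llama.cpp sizes the KV buffer at the raw block-format footprint.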

Throughput

Prompt Processing (tokens/sec) — No degradation

| Context | f16 | q8_0 | q4_0 |
|---|---:|---:|---:|
| ~1.5K | 923 | 925 | 926 |
| ~6K | 1,211 | 1,207 | 1,206 |
| ~12K | 1,188 | 1,184 | 1,191 |
| ~24K | 1,153 | 1,149 | 1,152 |
| ~110K | 815 | 810 | 813 |

Prompt throughput is essentially identical across all cache types at all context lengths.

Generation (tokens/sec) — Degrades at long context

| Context | f16 | q8_0 | q4_0 | q4_0 vs f16 |
|---|---:|---:|---:|---:|
| ~1.5K | 45.2 | 45.3 | 45.6 | +0.9% |
| ~6K | 44.7 | 44.9 | 45.0 | +0.7% |
| ~12K | 44.9 | 42.9 | 42.7 | -4.9% |
| ~24K | 44.6 | 39.7 | 39.3 | -11.9% |
| ~110K | 38.0 | 25.0 | 24.0 | -36.8% |

Generation (decode) throughput degrades with a quantized KV cache at long context. At 110K tokens, q4_0 is 37% slower than f16 for generation; q8_0 behaves similarly, at 34% slower.
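
The relative-change column can be reproduced directly from the table values; a minimal check in Python:

```python
# Reproduce the "q4_0 vs f16" column from the generation table above.
gen_tps = {  # context label -> (f16, q8_0, q4_0)
    "~1.5K": (45.2, 45.3, 45.6),
    "~6K":   (44.7, 44.9, 45.0),
    "~12K":  (44.9, 42.9, 42.7),
    "~24K":  (44.6, 39.7, 39.3),
    "~110K": (38.0, 25.0, 24.0),
}
for ctx, (f16, q8, q4) in gen_tps.items():
    print(f"{ctx}: q4_0 vs f16 = {(q4 - f16) / f16:+.1%}")
# ~110K prints -36.8%, the 37% decode slowdown quoted above
```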


Test Setup

Model:     Nemotron-3-Nano-30B-A3B-UD-Q4_K_XL.gguf
Hardware:  NVIDIA DGX Spark GB10 (compute 12.1, 124,610 MiB VRAM)
OS:        DGX OS / Ubuntu aarch64
llama.cpp: build 8399 (commit 892e3c333), aarch64 + CUDA
CUDA:      13.0 | Driver: 580.126.09
Flags:     --ctx-size 131072
Protocol:  Server restarted between each configuration
Memory:    nvidia-smi --query-compute-apps for GPU memory
KV size:   llama.cpp verbose output (llama_kv_cache line)
Throughput: llama.cpp response timings via /v1/chat/completions
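
For reproduction, here is a sketch of one measurement step under the protocol above. It is a hypothetical harness, not the authors' exact scripts: it assumes llama-server on the default port 8080, and uses the native /completion endpoint because its timings block is easy to read (the card's runs went through /v1/chat/completions).

```python
# One measurement step: GPU memory via nvidia-smi, throughput via
# llama-server's /completion response timings.
import json
import subprocess
import urllib.request

def gpu_mem_mib() -> int:
    """Sum GPU memory used by compute apps, as reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-compute-apps=used_memory",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return sum(int(tok) for tok in out.split())

def completion_timings(prompt: str, n_predict: int = 128) -> dict:
    """POST to llama-server's /completion endpoint and return its timings
    block (contains prompt_per_second and predicted_per_second)."""
    req = urllib.request.Request(
        "http://localhost:8080/completion",
        data=json.dumps({"prompt": prompt, "n_predict": n_predict}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["timings"]

timings = completion_timings("<prompt of the target context length>")
print(f"GPU: {gpu_mem_mib()} MiB | "
      f"prompt: {timings['prompt_per_second']:.1f} tps | "
      f"gen: {timings['predicted_per_second']:.1f} tps")
```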

What Was Wrong in v1

The original paper (March 31) made two incorrect claims:

  1. "92.5% prompt throughput collapse at 64K" — Wrong. Prompt throughput is unaffected by cache quantization. The original data likely came from failed completion requests. The actual effect is a 37% generation speed reduction at 110K context.

  2. "q4_0 uses MORE memory than f16" — Wrong. This was measured via process RSS, which does not capture GPU/unified memory allocations on GB10. Actual measurement via nvidia-smi + llama.cpp shows q4_0 saves 552 MiB as expected.

See CORRECTION-NOTICE.md for full methodology comparison.
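
The flaw in (2) is easy to demonstrate: on unified memory, GPU allocations made through CUDA need not appear in the host process RSS, so the two gauges can diverge by roughly the size of the GPU allocations. A minimal sketch, assuming psutil is installed and substituting the real llama-server PID:

```python
# Compare process RSS with nvidia-smi's per-process GPU memory.
# The PID and psutil dependency are illustrative.
import subprocess
import psutil

PID = 12345  # hypothetical llama-server PID

rss_gib = psutil.Process(PID).memory_info().rss / 2**30

out = subprocess.check_output(
    ["nvidia-smi", "--query-compute-apps=pid,used_memory",
     "--format=csv,noheader,nounits"],
    text=True,
)
gpu_mib = {int(pid): int(mem) for pid, mem in
           (line.split(", ") for line in out.strip().splitlines())}

print(f"process RSS: {rss_gib:.2f} GiB vs GPU used: {gpu_mib.get(PID, 0)} MiB")
```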


Actual Finding: Generation Decode Overhead

The real finding is more nuanced: KV cache quantization saves memory as expected, but imposes a generation speed tax at long context. At 110K tokens, q4_0 generation is 37% slower than f16 (24 vs 38 tps). This is likely due to dequantization overhead during the decode attention step, which processes the full KV cache for each generated token.

Prompt processing is unaffected because it processes all tokens in parallel — the dequantization cost is amortized across the batch.

This tradeoff may be acceptable depending on the use case:

  • Long-context RAG (mostly prompt, few generated tokens): use q4_0, save memory
  • Long-form generation at long context: use f16, preserve decode speed
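
For instance, a toy decision helper capturing that tradeoff (the function and thresholds are illustrative, not part of the benchmark):

```python
# Illustrative heuristic only: choose a KV cache type from the expected
# workload shape, per the tradeoff above.
def pick_kv_cache_type(prompt_tokens: int, gen_tokens: int) -> str:
    long_context = prompt_tokens + gen_tokens > 64_000   # penalty appears past ~64K
    mostly_prompt = gen_tokens < 0.05 * prompt_tokens    # RAG-like shape
    if not long_context:
        return "q4_0"  # decode penalty stays under ~12% below ~64K
    return "q4_0" if mostly_prompt else "f16"

print(pick_kv_cache_type(prompt_tokens=100_000, gen_tokens=500))    # -> q4_0 (long-context RAG)
print(pick_kv_cache_type(prompt_tokens=60_000, gen_tokens=20_000))  # -> f16 (long-form generation)
```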

Raw Data

Full per-run measurements ship as data/benchmark_results.csv (the original v1 run) and data/benchmark_results_v3_complete.csv (the corrected v3 run); a preview appears at the top of this page.

Citation

@techreport{maine2026kvcache,
  title  = {KV Cache Quantization on NVIDIA DGX Spark GB10},
  author = {Maine, Nathan},
  institution = {Memoriant Inc.},
  year   = {2026},
  note   = {Corrected April 2026},
  url    = {https://github.com/Memoriant/dgx-spark-kv-cache-benchmark}
}