The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'kv_buffer_mib', 'gpu_mem_mib'}) and 1 missing columns ({'rss_gb'}).
This happened while the csv dataset builder was generating data using
hf://datasets/memoriant/dgx-spark-kv-cache-benchmark/data/benchmark_results_v3_complete.csv (at revision 3810b6f4985c5e4919b5b3f13271cdbf5469d95b), [/tmp/hf-datasets-cache/medium/datasets/45113382815248-config-parquet-and-info-memoriant-dgx-spark-kv-ca-5a77598e/hub/datasets--memoriant--dgx-spark-kv-cache-benchmark/snapshots/3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results.csv (origin=hf://datasets/memoriant/dgx-spark-kv-cache-benchmark@3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results.csv), /tmp/hf-datasets-cache/medium/datasets/45113382815248-config-parquet-and-info-memoriant-dgx-spark-kv-ca-5a77598e/hub/datasets--memoriant--dgx-spark-kv-cache-benchmark/snapshots/3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results_v3_complete.csv (origin=hf://datasets/memoriant/dgx-spark-kv-cache-benchmark@3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results_v3_complete.csv)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
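The first suggested fix (matching columns) amounts to rewriting both CSVs against the union of their schemas, so rows from the older file carry null `kv_buffer_mib`/`gpu_mem_mib` and rows from the v3 file carry null `rss_gb`. A minimal sketch with pandas, using small in-memory stand-ins (the values here are illustrative, not the real benchmark data):

```python
import io
import pandas as pd

# Stand-ins for the two mismatched files: the old file has `rss_gb`,
# while the v3 file replaced it with `kv_buffer_mib` and `gpu_mem_mib`.
v1 = pd.read_csv(io.StringIO(
    "context_tokens,cache_type,prompt_tps,gen_tps,rss_gb,notes\n"
    "8192,f16,371.3,14.7,1.25,\n"
))
v3 = pd.read_csv(io.StringIO(
    "context_tokens,cache_type,kv_buffer_mib,gpu_mem_mib,prompt_tps,gen_tps,notes\n"
    "1493,f16,256,4096,923.2,45.2,\n"
))

# Union of the two column sets, preserving first-seen order.
all_cols = list(dict.fromkeys([*v1.columns, *v3.columns]))

# reindex() adds any missing columns filled with NaN (null in CSV/Arrow),
# so both frames now share one schema and will cast cleanly.
v1_fixed = v1.reindex(columns=all_cols)
v3_fixed = v3.reindex(columns=all_cols)

merged = pd.concat([v1_fixed, v3_fixed], ignore_index=True)
print(list(merged.columns))
```

Writing `v1_fixed` and `v3_fixed` back out with `DataFrame.to_csv(path, index=False)` would give the viewer two files with identical columns.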
Traceback: Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1890, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 760, in write_table
    pa_table = table_cast(pa_table, self._schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
context_tokens: int64
cache_type: string
kv_buffer_mib: int64
gpu_mem_mib: int64
prompt_tps: double
gen_tps: double
notes: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1093
to
{'context_tokens': Value('int64'), 'cache_type': Value('string'), 'prompt_tps': Value('float64'), 'gen_tps': Value('float64'), 'rss_gb': Value('float64'), 'notes': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1892, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'kv_buffer_mib', 'gpu_mem_mib'}) and 1 missing columns ({'rss_gb'}).
This happened while the csv dataset builder was generating data using
hf://datasets/memoriant/dgx-spark-kv-cache-benchmark/data/benchmark_results_v3_complete.csv (at revision 3810b6f4985c5e4919b5b3f13271cdbf5469d95b), [/tmp/hf-datasets-cache/medium/datasets/45113382815248-config-parquet-and-info-memoriant-dgx-spark-kv-ca-5a77598e/hub/datasets--memoriant--dgx-spark-kv-cache-benchmark/snapshots/3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results.csv (origin=hf://datasets/memoriant/dgx-spark-kv-cache-benchmark@3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results.csv), /tmp/hf-datasets-cache/medium/datasets/45113382815248-config-parquet-and-info-memoriant-dgx-spark-kv-ca-5a77598e/hub/datasets--memoriant--dgx-spark-kv-cache-benchmark/snapshots/3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results_v3_complete.csv (origin=hf://datasets/memoriant/dgx-spark-kv-cache-benchmark@3810b6f4985c5e4919b5b3f13271cdbf5469d95b/data/benchmark_results_v3_complete.csv)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
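The second suggested fix maps to the Hub's manual configuration: a `configs` section in the dataset card's YAML front matter can point each configuration at its own data file, so the two schemas are never mixed. A sketch following the linked docs (the config names `v1` and `v3_complete` are illustrative choices, not names taken from this dataset):

```yaml
configs:
  - config_name: v1
    data_files: "data/benchmark_results.csv"
  - config_name: v3_complete
    data_files: "data/benchmark_results_v3_complete.csv"
```

With this in place, `load_dataset("memoriant/dgx-spark-kv-cache-benchmark", "v3_complete")` would build only from the v3 file, and each configuration gets its own viewer preview.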
| context_tokens (int64) | cache_type (string) | prompt_tps (float64) | gen_tps (float64) | rss_gb (float64) | notes (string) |
|---|---|---|---|---|---|
3,500 | q8_0 | 30.5 | 14.2 | null | quick initial test |
8,192 | f16 | 371.3 | 14.7 | 1.25 | null |
8,192 | q8_0 | null | null | 1.34 | speed not measured at this context |
8,192 | q4_0 | 363.4 | 14.2 | 1.34 | null |
16,384 | f16 | 360.7 | 13.9 | 1.4 | null |
16,384 | q8_0 | null | null | 1.49 | speed not measured at this context |
16,384 | q4_0 | 346.2 | 12.7 | 1.49 | null |
32,768 | f16 | 328.3 | 13.5 | 1.59 | null |
32,768 | q8_0 | null | null | 1.69 | speed not measured at this context |
32,768 | q4_0 | 316.9 | 11 | 1.69 | null |
65,536 | f16 | 282.7 | 13.3 | 1.94 | null |
65,536 | q4_0 | 21.3 | 8.6 | 2.06 | CLIFF — 92.5% prompt tps collapse |
114,688 | q4_0 | 21.3 | 8.6 | null | f16 not testable at this context size |
0 | f16 | null | null | null | KV buffer from llama.cpp verbose, baseline GPU from nvidia-smi |
1,493 | f16 | 923.2 | 45.2 | null | null |
5,916 | f16 | 1,210.8 | 44.7 | null | null |
11,814 | f16 | 1,187.8 | 44.9 | null | null |
23,610 | f16 | 1,153.4 | 44.6 | null | null |
110,019 | f16 | 815 | 38 | null | 64K+ context |
0 | q8_0 | null | null | null | 47% smaller KV buffer vs f16 |
1,493 | q8_0 | 925.1 | 45.3 | null | null |
5,916 | q8_0 | 1,206.7 | 44.9 | null | null |
11,814 | q8_0 | 1,183.9 | 42.9 | null | null |
23,610 | q8_0 | 1,149.1 | 39.7 | null | null |
110,019 | q8_0 | 810.2 | 25 | null | 64K+ context — gen speed degraded 34% |
0 | q4_0 | null | null | null | 72% smaller KV buffer vs f16 |
1,493 | q4_0 | 925.7 | 45.6 | null | null |
5,916 | q4_0 | 1,205.8 | 45 | null | null |
11,814 | q4_0 | 1,191.2 | 42.7 | null | null |
23,610 | q4_0 | 1,151.9 | 39.3 | null | null |
110,019 | q4_0 | 812.6 | 24 | null | 64K+ context — gen speed degraded 37% |