Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 12 new columns ({'kv_cache_config', 'versions', 'workload', 'config_type', 'cluster', 'entry_id', 'environment', 'model', 'hardware', 'metrics', 'sagellm_version', 'metadata'})

This happened while the json dataset builder was generating data using

hf://datasets/wangyao36/sagellm-benchmark-results/leaderboard_single.json (at revision 74a5d75fbc56556cfdf67e2915222f879319e19f), [/tmp/hf-datasets-cache/medium/datasets/13305656036013-config-parquet-and-info-wangyao36-sagellm-benchma-386b2a5f/hub/datasets--wangyao36--sagellm-benchmark-results/snapshots/74a5d75fbc56556cfdf67e2915222f879319e19f/leaderboard_multi.json (origin=hf://datasets/wangyao36/sagellm-benchmark-results@74a5d75fbc56556cfdf67e2915222f879319e19f/leaderboard_multi.json), /tmp/hf-datasets-cache/medium/datasets/13305656036013-config-parquet-and-info-wangyao36-sagellm-benchma-386b2a5f/hub/datasets--wangyao36--sagellm-benchmark-results/snapshots/74a5d75fbc56556cfdf67e2915222f879319e19f/leaderboard_single.json (origin=hf://datasets/wangyao36/sagellm-benchmark-results@74a5d75fbc56556cfdf67e2915222f879319e19f/leaderboard_single.json)]

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
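The second option — separate configurations — is declared in the dataset's README.md YAML header, per the documentation linked above. A minimal sketch (the config names are illustrative; the file names come from the error message):

```yaml
configs:
- config_name: leaderboard_single
  data_files: leaderboard_single.json
- config_name: leaderboard_multi
  data_files: leaderboard_multi.json
```

With this in place the viewer builds one Parquet schema per config instead of trying to cast both files to a single schema.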
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 674, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              entry_id: string
              sagellm_version: string
              config_type: string
              hardware: struct<chip_count: int64, chip_model: string, chips_per_node: int64, interconnect: string, intra_nod (... 92 chars omitted)
                child 0, chip_count: int64
                child 1, chip_model: string
                child 2, chips_per_node: int64
                child 3, interconnect: string
                child 4, intra_node_interconnect: string
                child 5, memory_per_chip_gb: double
                child 6, total_memory_gb: double
                child 7, vendor: string
              model: struct<name: string, parameters: string, precision: string, quantization: string>
                child 0, name: string
                child 1, parameters: string
                child 2, precision: string
                child 3, quantization: string
              workload: struct<batch_size: int64, concurrent_requests: int64, dataset: string, input_length: int64, output_l (... 13 chars omitted)
                child 0, batch_size: int64
                child 1, concurrent_requests: int64
                child 2, dataset: string
                child 3, input_length: int64
                child 4, output_length: int64
              metrics: struct<error_rate: double, evict_count: int64, evict_ms: double, kv_used_bytes: int64, kv_used_token (... 152 chars omitted)
                child 0, error_rate: double
                child 1, evict_count: int64
                child 2, evict_ms: double
                child 3, kv_used_bytes: int64
                child 4, kv_used_tokens: int64
                child 5, peak_mem_mb: int64
                child 6, prefix_hit_rate: double
                child 7, spec_accept_rate: null
                child 8, tbt_ms: double
                child 9, throughput_tps: double
                child 10, tpot_ms: double
                child 11, ttft_ms: double
              cluster: null
              versions: struct<backend: string, benchmark: string, comm: string, compression: string, control_plane: string, (... 67 chars omitted)
                child 0, backend: string
                child 1, benchmark: string
                child 2, comm: string
                child 3, compression: string
                child 4, control_plane: string
                child 5, core: string
                child 6, gateway: string
                child 7, kv_cache: string
                child 8, protocol: string
              environment: struct<cann_version: null, cuda_version: string, driver_version: string, os: string, python_version: (... 33 chars omitted)
                child 0, cann_version: null
                child 1, cuda_version: string
                child 2, driver_version: string
                child 3, os: string
                child 4, python_version: string
                child 5, pytorch_version: string
              kv_cache_config: struct<budget_tokens: int64, enabled: bool, eviction_policy: string, prefix_cache_enabled: bool>
                child 0, budget_tokens: int64
                child 1, enabled: bool
                child 2, eviction_policy: string
                child 3, prefix_cache_enabled: bool
              metadata: struct<changelog_url: string, data_source: string, git_commit: null, notes: string, release_date: st (... 88 chars omitted)
                child 0, changelog_url: string
                child 1, data_source: string
                child 2, git_commit: null
                child 3, notes: string
                child 4, release_date: string
                child 5, reproducible_cmd: string
                child 6, submitted_at: string
                child 7, submitter: string
                child 8, verified: bool
              -- schema metadata --
              pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 1521
              to
              {}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
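Before restructuring anything, it helps to confirm which top-level keys differ between the two files. A minimal stdlib-only sketch (the stand-in files and their keys below are illustrative; the real files live on the Hub at the paths shown in the error):

```python
import json
import os
import tempfile

def column_sets(paths):
    """Map each JSON file (holding a list of records) to the set of
    top-level keys used by its records."""
    cols = {}
    for path in paths:
        with open(path) as f:
            rows = json.load(f)
        keys = set()
        for row in rows:
            keys.update(row)  # union of every record's keys
        cols[path] = keys
    return cols

# Stand-in files that reproduce a column mismatch locally:
with tempfile.TemporaryDirectory() as d:
    single = os.path.join(d, "leaderboard_single.json")
    multi = os.path.join(d, "leaderboard_multi.json")
    with open(single, "w") as f:
        json.dump([{"entry_id": "a", "metrics": {"ttft_ms": 1.0}}], f)
    with open(multi, "w") as f:
        json.dump([{"run_id": "b", "nodes": 2}], f)
    cols = column_sets([single, multi])
    only_single = cols[single] - cols[multi]
    print("columns only in leaderboard_single.json:", sorted(only_single))
    # → columns only in leaderboard_single.json: ['entry_id', 'metrics']
```

Any key that shows up in one file but not the other is one of the "new columns" the cast error complains about.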


Columns and types:
entry_id: string
sagellm_version: string
config_type: string
hardware: dict
model: dict
workload: dict
metrics: dict
cluster: null
versions: dict
environment: dict
kv_cache_config: dict
metadata: dict
entry_id: 58e2e698-e654-4230-bc70-c97dc2a4d085
sagellm_version: 0.3.0.3
config_type: single_gpu
hardware: { "chip_count": 1, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 1, "interconnect": "None", "intra_node_interconnect": "None", "memory_per_chip_gb": 79.25, "total_memory_gb": 79.25, "vendor": "NVIDIA" }
model: { "name": "gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 2048, "output_length": 512 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 153600, "kv_used_tokens": 600, "peak_mem_mb": 1166, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.09345526647086001, "throughput_tps": 43.831197194594424, "tpot_ms": 0.09345526647086001, "ttft_ms": 2279.590447743734 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: long_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:54:58.993498+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: 6e459ca3-8148-4a36-88db-90293faab327
sagellm_version: 0.3.0.3
config_type: single_gpu
hardware: { "chip_count": 1, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 1, "interconnect": "None", "intra_node_interconnect": "None", "memory_per_chip_gb": 79.25, "total_memory_gb": 79.25, "vendor": "NVIDIA" }
model: { "name": "distilgpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 2048, "output_length": 512 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 153600, "kv_used_tokens": 600, "peak_mem_mb": 987, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.11298792932170001, "throughput_tps": 94.67920841374017, "tpot_ms": 0.11298792932170001, "ttft_ms": 1048.784653345744 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: long_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:55:38.855307+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: 87e76194-12fb-44bc-afd4-4a6de4255732
sagellm_version: 0.3.0.7
config_type: single_gpu
hardware: { "chip_count": 1, "chip_model": "CPU", "chips_per_node": 1, "interconnect": "None", "intra_node_interconnect": "None", "memory_per_chip_gb": null, "total_memory_gb": null, "vendor": "Intel" }
model: { "name": "gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 128, "output_length": 128 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 256000, "kv_used_tokens": 1000, "peak_mem_mb": 925, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.004186533918284, "throughput_tps": 83.8454410108036, "tpot_ms": 0.004186533918284, "ttft_ms": 1192.5671577453613 }
cluster: null
versions: { "backend": "0.3.0.11", "benchmark": "0.3.0.7", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.10", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": null, "driver_version": null, "os": "Darwin 25.2.0", "python_version": "3.11.14", "pytorch_version": "2.10.0" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: short_input", "release_date": "2026-01-30", "reproducible_cmd": "sagellm-benchmark run --workload short --backend cpu --model gpt2", "submitted_at": "2026-01-30T01:51:53.871528+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: 476efca6-5d83-4272-b61d-d42a97353657
sagellm_version: 0.3.0.7
config_type: single_gpu
hardware: { "chip_count": 1, "chip_model": "CPU", "chips_per_node": 1, "interconnect": "None", "intra_node_interconnect": "None", "memory_per_chip_gb": null, "total_memory_gb": null, "vendor": "Intel" }
model: { "name": "gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 256, "output_length": 256 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 512000, "kv_used_tokens": 2000, "peak_mem_mb": 954, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.001848586882003, "throughput_tps": 83.5034772735326, "tpot_ms": 0.001848586882003, "ttft_ms": 1197.425651550293 }
cluster: null
versions: { "backend": "0.3.0.11", "benchmark": "0.3.0.7", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.10", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": null, "driver_version": null, "os": "Darwin 25.2.0", "python_version": "3.11.14", "pytorch_version": "2.10.0" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: stress_test", "release_date": "2026-01-30", "reproducible_cmd": "sagellm-benchmark run --workload m1 --backend cpu --model gpt2", "submitted_at": "2026-01-30T01:50:46.571030+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: 40e312d9-7a3b-458f-9d9d-9affaa54cb7d
sagellm_version: 0.3.0.7
config_type: single_gpu
hardware: { "chip_count": 1, "chip_model": "CPU", "chips_per_node": 1, "interconnect": "None", "intra_node_interconnect": "None", "memory_per_chip_gb": null, "total_memory_gb": null, "vendor": "Intel" }
model: { "name": "gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 2048, "output_length": 512 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 153600, "kv_used_tokens": 600, "peak_mem_mb": 953, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.005639362014102, "throughput_tps": 75.37126692983169, "tpot_ms": 0.005639362014102, "ttft_ms": 1326.2285391489665 }
cluster: null
versions: { "backend": "0.3.0.11", "benchmark": "0.3.0.7", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.10", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": null, "driver_version": null, "os": "Darwin 25.2.0", "python_version": "3.11.14", "pytorch_version": "2.10.0" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: long_input", "release_date": "2026-01-30", "reproducible_cmd": "sagellm-benchmark run --workload m1 --backend cpu --model gpt2", "submitted_at": "2026-01-30T01:50:34.582882+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: 30d76796-ef21-4b53-8e6f-96d21fdafd9f
sagellm_version: 0.3.0.3
config_type: single_gpu
hardware: { "chip_count": 1, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 1, "interconnect": "None", "intra_node_interconnect": "None", "memory_per_chip_gb": 79.25, "total_memory_gb": 79.25, "vendor": "NVIDIA" }
model: { "name": "gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 128, "output_length": 128 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 256000, "kv_used_tokens": 1000, "peak_mem_mb": 1169, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.065885408960207, "throughput_tps": 47.6563732468502, "tpot_ms": 0.065885408960207, "ttft_ms": 2094.5648670196533 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: short_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:54:34.938182+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: e177f318-a7ae-4c47-813b-113d727d429d
sagellm_version: 0.3.0.3
config_type: single_gpu
hardware: { "chip_count": 1, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 1, "interconnect": "None", "intra_node_interconnect": "None", "memory_per_chip_gb": 79.25, "total_memory_gb": 79.25, "vendor": "NVIDIA" }
model: { "name": "distilgpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 128, "output_length": 128 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 256000, "kv_used_tokens": 1000, "peak_mem_mb": 1007, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.06690266156437401, "throughput_tps": 97.78223358131568, "tpot_ms": 0.06690266156437401, "ttft_ms": 1017.4273014068604 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: short_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:55:19.543994+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: b17f6614-e6a1-4eec-b57c-9b58ae500afa
sagellm_version: 0.3.0.3
config_type: multi_gpu
hardware: { "chip_count": 2, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 2, "interconnect": "NVLink", "intra_node_interconnect": "NVLink", "memory_per_chip_gb": 79.25, "total_memory_gb": 158.5, "vendor": "NVIDIA" }
model: { "name": "distilgpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 128, "output_length": 128 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 256000, "kv_used_tokens": 1000, "peak_mem_mb": 992, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.033918053212791004, "throughput_tps": 102.38339955109224, "tpot_ms": 0.033918053212791004, "ttft_ms": 975.6916999816895 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: short_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:50:56.179302+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: c4299844-a23e-4c3d-a53f-11fb469d1c90
sagellm_version: 0.3.0.3
config_type: multi_gpu
hardware: { "chip_count": 2, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 2, "interconnect": "NVLink", "intra_node_interconnect": "NVLink", "memory_per_chip_gb": 79.25, "total_memory_gb": 158.5, "vendor": "NVIDIA" }
model: { "name": "gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 2048, "output_length": 512 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 153600, "kv_used_tokens": 600, "peak_mem_mb": 1164, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.1068115234375, "throughput_tps": 53.363877373263605, "tpot_ms": 0.1068115234375, "ttft_ms": 1881.1846574147542 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: long_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:50:28.798723+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: 638cac48-5290-4dbb-acec-f64578d74577
sagellm_version: 0.3.0.3
config_type: multi_gpu
hardware: { "chip_count": 2, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 2, "interconnect": "NVLink", "intra_node_interconnect": "NVLink", "memory_per_chip_gb": 79.25, "total_memory_gb": 158.5, "vendor": "NVIDIA" }
model: { "name": "gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 128, "output_length": 128 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 256000, "kv_used_tokens": 1000, "peak_mem_mb": 1163, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.07014563589385, "throughput_tps": 59.276537729645085, "tpot_ms": 0.07014563589385, "ttft_ms": 1689.3131256103516 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: short_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:50:06.987540+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: 2d0c53d8-32f0-4ed9-9d29-52b9df56136c
sagellm_version: 0.4.0.0
config_type: multi_gpu
hardware: { "chip_count": 2, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 2, "interconnect": "NVLink", "intra_node_interconnect": "NVLink", "memory_per_chip_gb": 79.25, "total_memory_gb": 158.5, "vendor": "NVIDIA" }
model: { "name": "tiny-gpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 128, "output_length": 128 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 327680, "kv_used_tokens": 1280, "peak_mem_mb": 760, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.11, "throughput_tps": 159.17000000000002, "tpot_ms": 0.11, "ttft_ms": 811.884 }
cluster: null
versions: { "backend": "0.4.0.1", "benchmark": "0.4.0.0", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.4.0.1", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: short_input", "release_date": "2026-01-30", "reproducible_cmd": "sagellm-benchmark run --workload short --backend cpu --model tiny-gpt2", "submitted_at": "2026-01-30T06:46:36.902918+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }

entry_id: a6a63f1e-f8e9-4d55-86cb-eabe26c3f11f
sagellm_version: 0.3.0.3
config_type: multi_gpu
hardware: { "chip_count": 2, "chip_model": "NVIDIA A100 80GB PCIe", "chips_per_node": 2, "interconnect": "NVLink", "intra_node_interconnect": "NVLink", "memory_per_chip_gb": 79.25, "total_memory_gb": 158.5, "vendor": "NVIDIA" }
model: { "name": "distilgpt2", "parameters": "unknown", "precision": "FP32", "quantization": "None" }
workload: { "batch_size": 1, "concurrent_requests": 1, "dataset": "default", "input_length": 2048, "output_length": 512 }
metrics: { "error_rate": 0, "evict_count": 0, "evict_ms": 0, "kv_used_bytes": 153600, "kv_used_tokens": 600, "peak_mem_mb": 998, "prefix_hit_rate": 0, "spec_accept_rate": null, "tbt_ms": 0.10575108255200101, "throughput_tps": 99.00923825499314, "tpot_ms": 0.10575108255200101, "ttft_ms": 1003.1216144561768 }
cluster: null
versions: { "backend": "0.3.0.6", "benchmark": "0.3.0.3", "comm": "0.1.1.7", "compression": "0.1.1.7", "control_plane": "0.1.1.5", "core": "0.3.0.5", "gateway": "0.1.1.5", "kv_cache": "0.1.1.6", "protocol": "0.1.1.0" }
environment: { "cann_version": null, "cuda_version": "12.8", "driver_version": "570.124.06", "os": "Linux 6.2.0-26-generic", "python_version": "3.11.14", "pytorch_version": "2.10.0+cu128" }
kv_cache_config: { "budget_tokens": 8192, "enabled": true, "eviction_policy": "LRU", "prefix_cache_enabled": true }
metadata: { "changelog_url": "https://github.com/intellistream/sagellm/blob/main/CHANGELOG.md", "data_source": "automated-benchmark", "git_commit": null, "notes": "Benchmark run: long_input", "release_date": "2026-01-28", "reproducible_cmd": "sagellm-benchmark run --workload None --backend None --model None", "submitted_at": "2026-01-28T13:51:15.188389+00:00", "submitter": "sagellm-benchmark automated run", "verified": false }
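The preview rows lend themselves to quick ad-hoc comparisons once loaded as plain dicts. A minimal sketch using metric values copied from two of the rows above (fields trimmed to the ones compared; entry IDs abbreviated):

```python
# Values copied from the preview: A100 run 58e2e698… vs CPU run 87e76194….
rows = [
    {"entry_id": "58e2e698", "chip_model": "NVIDIA A100 80GB PCIe",
     "throughput_tps": 43.831197194594424, "ttft_ms": 2279.590447743734},
    {"entry_id": "87e76194", "chip_model": "CPU",
     "throughput_tps": 83.8454410108036, "ttft_ms": 1192.5671577453613},
]

# Rank by generation throughput (tokens/s), best first:
ranked = sorted(rows, key=lambda r: r["throughput_tps"], reverse=True)
for r in ranked:
    print(f"{r['entry_id']} ({r['chip_model']}): "
          f"{r['throughput_tps']:.1f} tok/s, TTFT {r['ttft_ms']:.0f} ms")
```

The same pattern extends to any of the `metrics` fields (`ttft_ms`, `tpot_ms`, `peak_mem_mb`, …) once the JSON files load cleanly.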

No dataset card yet

Downloads last month: 7