# ShareGPT Workload Cases
This dataset packages workload-generation artifacts derived from Aeala/ShareGPT_Vicuna_unfiltered.
It includes the manifest describing how cases were generated and per-category case files intended for scheduler and systems benchmarking.
## Contents

- `manifest.json`: dataset generation metadata and category definitions.
- `cases/`: 176 generated case files across 22 workload categories.
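Given the layout above, the per-category case files can be collected from a local clone with a few lines of standard-library Python. This is a minimal sketch: `load_cases` is a hypothetical helper name, and it assumes each category is a subdirectory of `cases/` containing one JSON file per case alongside the `category.json` metadata file mentioned in the notes below.

```python
import json
from pathlib import Path


def load_cases(root: str) -> dict[str, list[dict]]:
    """Group the generated case files by their category directory under cases/.

    Assumes a local clone/download of the dataset repository at `root`.
    """
    cases_by_category: dict[str, list[dict]] = {}
    for path in sorted(Path(root, "cases").glob("*/*.json")):
        if path.name == "category.json":
            # category-level metadata, not a workload case
            continue
        cases_by_category.setdefault(path.parent.name, []).append(
            json.loads(path.read_text())
        )
    return cases_by_category
```

With the full upload in place this should yield 22 categories with 8 cases each.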
## Generation Metadata

- Split: `train`
- Cases per category: 8
- Tokenizer: `gpt-4` via `tiktoken`
- Generation seed: 7
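The same settings can be read programmatically from `manifest.json`. A short sketch, assuming the top-level field names `split`, `cases_per_category`, `tokenizer_name`, `tokenizer_backend`, and `generation_seed` (the helper name `read_generation_metadata` is made up for illustration):

```python
import json
from pathlib import Path


def read_generation_metadata(manifest_path: str) -> dict:
    """Extract the generation settings listed above from manifest.json."""
    manifest = json.loads(Path(manifest_path).read_text())
    return {
        "split": manifest["split"],
        "cases_per_category": manifest["cases_per_category"],
        "tokenizer": f'{manifest["tokenizer_name"]} via {manifest["tokenizer_backend"]}',
        "generation_seed": manifest["generation_seed"],
    }
```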
## Category Preview

- `steady_short`: Steady Poisson arrivals dominated by short prompts. (32 requests per case)
- `steady_mixed`: Steady mixed load with balanced request lengths. (64 requests per case)
- `bursty_chat`: Bursty conversational traffic with mostly short and medium prompts. (96 requests per case)
- `bursty_long`: Burst-heavy load with a larger share of long prompts. (72 requests per case)
- `flash_crowd`: Large clustered traffic spikes for stress testing scheduler fairness and queueing. (192 requests per case)
- `uniform_workload`: Uniform request sizes and steady arrivals for baseline throughput and scheduler overhead. (64 requests per case)
- `low_load`: Low-arrival workload with little contention to approximate ideal latency. (32 requests per case)
- `short_long_mix`: Majority short requests with a minority of long requests to expose head-of-line blocking. (80 requests per case)
- ...and 14 more categories.
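Each case carries a list of requests with `arrival_time`, `input_tokens`, `output_tokens`, and `priority_class` fields, so the offered load of a case can be summarized before feeding it to a scheduler. A minimal sketch (the helper name `summarize_case` is hypothetical, and it assumes `arrival_time` is expressed in seconds):

```python
def summarize_case(case: dict) -> dict:
    """Basic load statistics for one generated workload case."""
    requests = sorted(case["requests"], key=lambda r: r["arrival_time"])
    # Span between first and last arrival; 0.0 if the case has < 2 requests.
    duration = (
        requests[-1]["arrival_time"] - requests[0]["arrival_time"]
        if requests else 0.0
    )
    return {
        "num_requests": len(requests),
        "total_input_tokens": sum(r["input_tokens"] for r in requests),
        "total_output_tokens": sum(r["output_tokens"] for r in requests),
        "duration_s": duration,
        # All requests arriving at once is reported as an infinite rate.
        "arrival_rate_rps": len(requests) / duration if duration else float("inf"),
    }
```

Comparing these statistics across categories (e.g. `flash_crowd` versus `low_load`) is a quick sanity check that the arrival modes behave as described.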
## Notes

- Each category folder also contains a `category.json` file with category-level metadata.
- Source records are intentionally excluded from this Hub upload.
- The uploaded layout mirrors the local directory structure for reproducibility.