---
license: mit
task_categories:
- text-generation
tags:
- llm-serving
- streaming-inference
- vllm
- systems
pretty_name: Stream2LLM Data
viewer: false
---

# Stream2LLM Dataset

Dataset for the paper *Stream2LLM: Overlap Context Streaming and Prefill for Reduced Time-to-First-Token* (MLSys 2026 artifact evaluation). Contains the workload traces, experiment run logs, and performance-model measurements used to produce all figures, tables, and inline numbers in the paper.

This repository is a git submodule of the main [Stream2LLM artifact](https://github.com/rajveerb/stream2llm/tree/mlsys_artifact) (branch: `mlsys_artifact`). Data files are stored with Git LFS on Hugging Face. To fetch everything:

```bash
# Option 1: Clone parent repo with submodules, then pull LFS files
git clone --recurse-submodules -b mlsys_artifact https://github.com/rajveerb/stream2llm.git
cd stream2llm/data && git lfs install && git lfs pull && cd ../..

# Option 2: If you already cloned without submodules
git submodule update --init
cd data && git lfs install && git lfs pull && cd ..
```

## Directory Structure

```
data/
├── anns/                               # ANNS workload data
│   ├── res/                            # 4,997 pipeline trace CSVs
│   ├── retrieved_corpus_content.*.json # Corpus content shards
│   ├── query_trace_map_5k.json         # Query-to-trace mapping
│   └── compute_workload_stats.py       # Workload statistics script
├── crawl/                              # Crawler workload data
│   ├── traces/simpleQA_ALL/            # 4,322 query trace CSVs
│   └── compute_workload_stats.py       # Workload statistics script
├── perf_model/                         # Performance model measurements
│   ├── recomputation/                  # 7 recomputation latency JSONs
│   └── swap/                           # 11 swap latency JSONs
└── run_log/                            # Experiment run logs
    ├── crawler/                        # 5 crawler experiment configurations
    └── anns/                           # 5 ANNS experiment configurations
```

## Workload Traces

### ANNS (`anns/res/`)

4,997 pipeline trace files from approximate nearest neighbor search (ANNS) workloads. Each file is named `_L10000_W8_query<ID>_pipeline_trace.csv` and contains:

| Column | Description |
|--------|-------------|
| `StartTime_us` | Chunk start time in microseconds |
| `EndTime_us` | Chunk end time in microseconds |
| `StartIteration` | Starting iteration index |
| `EndIteration` | Ending iteration index |
| `PipelinePool` | Tuple of candidate IDs in the pipeline pool |

Each row represents a chunk arrival: a batch of ANNS iterations that produces new candidate results. Queries have 1–26 chunks (median 4), with inter-chunk arrival times ranging from sub-millisecond to ~9 seconds (median 37 ms).

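For quick inspection, per-query inter-chunk gaps can be recomputed from a single trace with a few lines of Python. This is a minimal sketch (the helper name is illustrative); the bundled `anns/compute_workload_stats.py` is the authoritative script.

```python
import csv
import io

def chunk_arrival_gaps_ms(trace_csv_text):
    """Inter-chunk arrival gaps (ms) from one ANNS pipeline trace.

    Each CSV row is one chunk; gaps are differences between
    successive StartTime_us values, converted to milliseconds.
    """
    rows = list(csv.DictReader(io.StringIO(trace_csv_text)))
    starts = [float(r["StartTime_us"]) for r in rows]
    return [(b - a) / 1000.0 for a, b in zip(starts, starts[1:])]
```

Pass it the contents of any file in `anns/res/`, e.g. `chunk_arrival_gaps_ms(open(path).read())`.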
### Crawler (`crawl/traces/simpleQA_ALL/`)

4,322 query trace files from a web-crawling workload (SimpleQA question answering). Each file is named `query_<ID>.csv` and contains:

| Column | Description |
|--------|-------------|
| `type` | Event type: `tavily_search` or `page_scrape` |
| `startTime` | Event start time in seconds |
| `endTime` | Event end time in seconds |
| `query` | The original query string (on search rows) |
| `links_found` | Number of links returned by search |
| `url` | URL scraped (on `page_scrape` rows) |
| `content_length` | Length of scraped content in characters |
| `content` | Scraped page text |
| `link_idx` | Index of the link being scraped |

Each row is a chunk event: the first row is typically a `tavily_search` event, followed by `page_scrape` events. Queries have 1–17 chunks (median 8), with inter-chunk arrival times of 3 ms to ~35 seconds (median 701 ms).

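The same kind of summary can be computed for a crawler trace. A sketch (helper name illustrative; `crawl/compute_workload_stats.py` is the authoritative script), noting that `startTime`/`endTime` here are in seconds rather than microseconds:

```python
import csv
import io

def crawl_chunk_stats(trace_csv_text):
    """Summarize one crawler query trace.

    Returns (event_counts, gaps_ms): the number of events per type,
    and inter-chunk arrival gaps converted from seconds to ms.
    """
    rows = list(csv.DictReader(io.StringIO(trace_csv_text)))
    counts = {}
    for r in rows:
        counts[r["type"]] = counts.get(r["type"], 0) + 1
    starts = [float(r["startTime"]) for r in rows]
    gaps_ms = [(b - a) * 1000.0 for a, b in zip(starts, starts[1:])]
    return counts, gaps_ms
```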

## Performance Model (`perf_model/`)

Microbenchmark measurements for modeling KV-cache eviction costs, collected across multiple GPU types (A40, A100, H100, H200) and model sizes (8B and 70B, with varying tensor-parallel degrees).

- **`recomputation/`**: Recomputation latency in ms, keyed by the number of tokens recomputed (e.g., 16, 32, ..., 8192). Each key maps to an array of repeated measurements.
- **`swap/`**: Swap (CPU↔GPU transfer) latency in ms, with the same key structure. Files ending in `_kernel_latency.json` contain kernel-only timings; the others include end-to-end transfer overhead.

Hardware configurations: `A40`, `A100`, `H100_tp2`, `H100_tp4_70B`, `H200_tp2`, `H200_tp4_70B`.

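Assuming the key structure described above (token counts as JSON keys, arrays of repeated ms measurements as values), a per-key summary of one of these files can be computed as:

```python
import json
import statistics

def summarize_latency(json_text):
    """Mean latency (ms) per token count from one perf_model JSON.

    Assumes keys are token counts (as strings) and values are
    arrays of repeated latency measurements in ms.
    """
    data = json.loads(json_text)
    items = sorted(data.items(), key=lambda kv: int(kv[0]))
    return {int(k): statistics.mean(v) for k, v in items}
```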

## Run Logs (`run_log/`)

Experiment outputs from the Stream2LLM serving system, organized by workload and configuration.

### Structure

```
run_log/<workload>/<config>/<scheduler>/<timestamp>/
├── config_<timestamp>.yaml   # Experiment configuration
└── run_metrics.csv           # Per-event metrics log
```
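Run directories under this four-level layout can be enumerated with `pathlib`. A sketch (the helper is illustrative, not part of the artifact):

```python
from pathlib import Path

def list_runs(run_log_root):
    """List run directories matching
    run_log/<workload>/<config>/<scheduler>/<timestamp>/,
    keeping only those that actually contain a run_metrics.csv."""
    root = Path(run_log_root)
    return sorted(
        p for p in root.glob("*/*/*/*")
        if (p / "run_metrics.csv").is_file()
    )
```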

### Configurations

| Workload | Config Directory | Description |
|----------|------------------|-------------|
| Crawler | `H200_enhanced_schedulers_v1_full` | Standard H200 runs |
| Crawler | `H200_enhanced_schedulers_v1_full_delay_10` | With 10 ms artificial delay (memory pressure) |
| Crawler | `H200_..._delay_10_recomp_only` | Delay + recomputation-only eviction |
| Crawler | `H200_..._delay_10_swap_only` | Delay + swap-only eviction |
| Crawler | `H100_enhanced_schedulers_v1_full` | Standard H100 runs |
| ANNS | `H200_enhanced_schedulers_v1_full` | Standard H200 runs |
| ANNS | `H200_enhanced_schedulers_v1_500q_delay_30` | With 30 ms artificial delay (memory pressure) |
| ANNS | `H200_..._delay_30_recomp_only` | Delay + recomputation-only eviction |
| ANNS | `H200_..._delay_30_swap_only` | Delay + swap-only eviction |
| ANNS | `H100_enhanced_schedulers_v1_full` | Standard H100 runs |

### Schedulers

Each configuration contains results for four scheduling policies:

- **`default_vllm`**: Default vLLM scheduler (baseline)
- **`fcfs`**: First-come-first-served
- **`lcas`**: Last-chunk-arrival-stamp scheduler
- **`mcps`**: Most-chunks-processed scheduler

### Run Metrics CSV

The `run_metrics.csv` file logs timestamped events for each experiment run:

| Column | Description |
|--------|-------------|
| `event_timestamp` | Unix timestamp of the event |
| `event_type` | Event type (e.g., `replay_start`, `query_delay`, `chunk_sent`, `response_received`) |
| `query_id` | Query identifier |
| `request_id` | vLLM request identifier |
| `stream` | Whether streaming input was enabled |
| `concurrency` | Whether concurrent requests were enabled |
| `duration_secs` | Duration of the event in seconds |
| `details` | Additional event-specific information |
| `request_size` | Size of the request in tokens |
| `concurrent_requests` | Number of concurrent requests at event time |
| `replay_rate` | Poisson arrival rate used |
| `prev_event_type` | Previous event type for this query |
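
For example, per-event-type durations can be pulled out of a `run_metrics.csv` with a short helper. A sketch using the column names listed above (the helper name is illustrative); rows with an empty `duration_secs` field are skipped:

```python
import csv
import io
from collections import defaultdict

def durations_by_event(metrics_csv_text):
    """Group duration_secs values by event_type for one run.

    Skips rows whose duration_secs field is empty.
    """
    grouped = defaultdict(list)
    for row in csv.DictReader(io.StringIO(metrics_csv_text)):
        if row.get("duration_secs"):
            grouped[row["event_type"]].append(float(row["duration_secs"]))
    return dict(grouped)
```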

## Corpus Content (`anns/`)

The ANNS corpus is stored as sharded JSON files (`retrieved_corpus_content.*.json` and `retrieved_corpus_content.part.*.json`). Each file maps document IDs to their text content, used to construct input sequences from ANNS retrieval results.

The `query_trace_map_5k.json` file maps query IDs to their corresponding pipeline trace filenames and query text.
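
Assuming each shard is a flat JSON object as described (document ID to text), the shards can be merged into one in-memory mapping with a small helper (illustrative, not part of the artifact):

```python
import json

def load_corpus(shard_texts):
    """Merge sharded retrieved_corpus_content JSON blobs into a single
    document-id -> text mapping. Later shards win on duplicate IDs."""
    corpus = {}
    for text in shard_texts:
        corpus.update(json.loads(text))
    return corpus
```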