---
license: apache-2.0
language:
- en
tags:
- llm-serving
- latency-prediction
- heterogeneous-serving
- scheduling
size_categories:
- 100K<n<1M
---
# CARA Latency Prediction Dataset
A large-scale dataset for training latency predictors in heterogeneous LLM serving systems. Contains 404,258 request-level records from 18 model instances across 4 GPU types, collected under varying QPS levels (8-24). Data from sweep 2 (March 18-19, 2026) with no concurrency cap.
## Dataset Description
Each record captures the instance state at scheduling time and the actual end-to-end latency after request completion. This enables training learned latency predictors that map observable system state to request latency — analogous to learned index structures in databases.
### Key Statistics
| Split | Records | Prompts Source | QPS Range | Notes |
|---|---|---|---|---|
| Train | 359,258 | cara_model_estimator train split | 8-24 | 671 records dropped (unresolvable queue entries) |
| Test | 45,000 | cara_model_estimator test split | 8-24 | 0 dropped |
### Cluster Configuration
18 instances across 4 GPU types serving 4 Qwen2.5 model sizes:
| Model | GPU | Instances | Tensor Parallel | vLLM Version |
|---|---|---|---|---|
| Qwen2.5-72B | A100 80GB x2 | 2 | 2 | cara_v_11 |
| Qwen2.5-14B | V100 32GB x4 | 3 | 4 | cara_v_11 |
| Qwen2.5-7B | A30 24GB | 5 | 1 | cara_v_11 |
| Qwen2.5-3B | A30 24GB | 3 | 1 | cara_v_11 |
| Qwen2.5-3B | P100 16GB | 5 | 1 | cara_v_11_p100 |
## Files
**Updated April 2026:** Data replaced with a clean sweep 2 collection (no concurrency cap). Queue detail files are enriched with per-request `actual_output_tokens` via cross-referencing (99.96% match rate).
| File | Records | Description |
|---|---|---|
| `train.jsonl` | 359,258 | Training data, flat schema (`schedule_state` fields at top level) |
| `test.jsonl` | 45,000 | Test data, flat schema |
| `train_queue_details.jsonl` | 359,258 | Training data with per-request `running_requests[]` and `waiting_requests[]` + enriched `actual_output_tokens` |
| `test_queue_details.jsonl` | 45,000 | Test data with per-request lists + enriched `actual_output_tokens` |
### Flat vs Queue Details
- **Flat files** (`train.jsonl`, `test.jsonl`): Schedule state fields flattened to the top level. Suitable for XGBoost and other tabular models.
- **Queue detail files**: Full nested `schedule_state` with per-request running/waiting lists. Each queue entry includes `actual_output_tokens` (enriched post-collection). Suitable for LSTM sequence models. A loading sketch for both formats follows below.
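A minimal loading sketch for both formats, assuming the files sit in the working directory (field names follow the schema below):

```python
import json

import pandas as pd

# Flat files carry one JSON object per line with every feature at the
# top level, so they load directly into a tabular frame.
flat = pd.read_json("train.jsonl", lines=True)
print(flat[["num_prompt_tokens", "num_running", "actual_e2e_latency"]].head())

# Queue detail files keep the nested schedule_state; read record by
# record, since the per-request lists are variable-length.
with open("train_queue_details.jsonl") as f:
    record = json.loads(next(f))
running = record["schedule_state"]["running_requests"]
print(len(running), record["actual_e2e_latency"])
```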
## Schema
### Flat records (`train.jsonl`, `test.jsonl`)
**Identifiers:**
- `request_id` (str): Unique request identifier (UUID)
- `instance_id` (str): CloudLab hostname + port (e.g., `c240g5-110103.wisc.cloudlab.us_port8300`)
- `instance_type` (str): Model + GPU type (e.g., `qwen2.5-3b_p100`)
**Request features:**
- `num_prompt_tokens` (int): Input prompt length in tokens
- `num_predicted_output_tokens` (int): Max-tokens parameter (1024 for all requests in this dataset)
- `actual_output_tokens` (int): Actual generated output tokens, recovered via `round((e2e - ttft) / tpot) + 1` (see the sketch below)
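A worked instance of that recovery formula (the function name is illustrative):

```python
def recover_output_tokens(e2e: float, ttft: float, tpot: float) -> int:
    # TTFT covers the first token; the remaining (e2e - ttft) seconds are
    # decode time at one token per TPOT, plus one for the first token.
    return round((e2e - ttft) / tpot) + 1

# Example: 12.05 s end-to-end, 0.8 s TTFT, 0.045 s/token TPOT
# -> round(11.25 / 0.045) + 1 = 251 tokens
print(recover_output_tokens(12.05, 0.8, 0.045))  # 251
```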
**Schedule state** (from vLLM `/instance_stats` at scheduling time):
- `num_running` (int): Currently running requests on this instance
- `num_waiting` (int): Queued requests (typically 0; vLLM continuous batching absorbs load)
- `num_active_decode_seqs` (int): Actively decoding sequences
- `decode_ctx_p50`/`p95`/`max` (float): Decode context length percentiles (0 for some instances)
- `pending_prefill_tokens` (int): Total pending prefill tokens (often 0)
- `pending_decode_tokens` (int): Total pending decode tokens (0 for some instances)
- `kv_cache_utilization` (float): KV cache usage fraction [0, 1]
- `kv_free_blocks` (int): Available KV cache blocks
- `token_budget_per_iter` (int): Scheduler token budget per iteration
- `prefill_chunk_size` (int): Chunked prefill size
- `max_num_seqs` (int): Max concurrent sequences allowed
- `num_preempted` (int): Cumulative preemption count
- `ema_decode_tok_per_s` (float): EMA decode throughput (tokens/sec)
- `ema_prefill_tok_per_s` (float): EMA prefill throughput (tokens/sec)
- `ema_decode_iter_ms` (float): EMA per-iteration decode latency (ms)
- `kv_evictions_per_s` (float): KV cache eviction rate
**Derived (flat files only):**
- `running_requests_count` (int): Count from the per-request list
- `waiting_requests_count` (int): Count from the per-request list
**Overhead:**
- `probe_latency_ms` (float): Time to fetch instance state from vLLM
- `prediction_latency_ms` (float): Latency predictor inference time
**Timestamps:**
- `prediction_timestamp` (float): Unix time when the scheduling decision was made
- `completion_timestamp` (float): Unix time when the request completed
**Target labels:**
- `actual_e2e_latency` (float): End-to-end latency in seconds (client-measured)
- `actual_ttft` (float): Time to first token in seconds
- `actual_tpot` (float): Mean time per output token in seconds (mean inter-token latency)
### Queue details (`train_queue_details.jsonl`, `test_queue_details.jsonl`)
Full nested structure with `schedule_state` containing all aggregate fields plus per-request lists (a feature-extraction sketch follows below):

`schedule_state.running_requests[]`, one entry per running request:
- `request_id` (str): vLLM internal ID (`chatcmpl`-prefixed UUID)
- `num_prompt_tokens` (int): Request's prompt length
- `num_computed_tokens` (int): Tokens already processed (prefill progress)
- `total_num_tokens` (int): Total allocated tokens at snapshot
- `num_output_tokens` (int): Output tokens generated so far (0 during prefill)
- `actual_output_tokens` (int): Final output length (enriched post-collection by cross-referencing request IDs, 99.96% match rate)

`schedule_state.waiting_requests[]`: same schema as `running_requests[]`
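For LSTM-style sequence models, each snapshot's running list can be turned into a fixed-shape feature matrix. A minimal sketch, assuming the nested field names above; the pad length and feature subset are illustrative choices, not dataset properties:

```python
import json

import numpy as np

MAX_QUEUE = 32  # illustrative pad length, not a dataset property

def queue_features(record: dict) -> np.ndarray:
    """Per-request features from one snapshot, padded to MAX_QUEUE rows."""
    rows = []
    for req in record["schedule_state"]["running_requests"][:MAX_QUEUE]:
        rows.append([
            req["num_prompt_tokens"],
            req["num_computed_tokens"],
            req["num_output_tokens"],
            req["actual_output_tokens"],  # oracle label, enriched post-collection
        ])
    feats = np.zeros((MAX_QUEUE, 4), dtype=np.float32)
    if rows:
        feats[: len(rows)] = rows
    return feats

with open("train_queue_details.jsonl") as f:
    record = json.loads(next(f))
print(queue_features(record).shape)  # (32, 4)
```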
## Collection Method
- Real prompts sampled from the cara_model_estimator dataset
- Sent through a coordinator with random scheduling to 18 heterogeneous instances
- A sidecar on each instance captures (state, latency) pairs (see the sketch after this list)
- Collected at multiple QPS levels (8-24) covering idle to saturated conditions
- Train/test split aligned with the cara_model_estimator splits (no prompt overlap)
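A minimal sketch of what the sidecar capture amounts to, assuming a vLLM-style `/instance_stats` HTTP endpoint as described in the schema; the pairing logic and the `send_request` callable are illustrative, not the actual collector:

```python
import time

import requests

INSTANCE_STATS_URL = "http://localhost:8300/instance_stats"  # assumed endpoint

def capture_pair(send_request) -> dict:
    """Snapshot instance state at scheduling time, then attach the
    client-measured latency once the request completes."""
    t0 = time.time()
    state = requests.get(INSTANCE_STATS_URL, timeout=1.0).json()
    probe_ms = (time.time() - t0) * 1000.0

    start = time.time()
    result = send_request()  # placeholder: blocks until the response finishes
    done = time.time()

    return {
        **state,  # schedule_state fields at snapshot time
        "probe_latency_ms": probe_ms,
        "prediction_timestamp": start,
        "completion_timestamp": done,
        "actual_e2e_latency": done - start,
        "actual_ttft": result["ttft"],
        "actual_tpot": result["tpot"],
    }
```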
## Intended Use
- Training latency predictors for LLM serving schedulers (XGBoost, LSTM, neural networks); see the training sketch after this list
- Benchmarking simulation-based vs learned latency prediction
- Studying heterogeneous LLM serving behavior across GPU types
- Evaluating scheduling policies under different load conditions
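A minimal training sketch for a tabular predictor on the flat files; the feature subset and hyperparameters are illustrative, not the configuration behind the results reported below:

```python
import pandas as pd
import xgboost as xgb

# Illustrative feature subset; the flat schema has many more fields.
FEATURES = [
    "num_prompt_tokens", "actual_output_tokens",  # oracle output length
    "num_running", "num_active_decode_seqs",
    "kv_cache_utilization", "ema_decode_tok_per_s",
]
TARGET = "actual_e2e_latency"

train = pd.read_json("train.jsonl", lines=True)
test = pd.read_json("test.jsonl", lines=True)

model = xgb.XGBRegressor(n_estimators=500, max_depth=8, learning_rate=0.05)
model.fit(train[FEATURES], train[TARGET])

pred = model.predict(test[FEATURES])
print(pred[:5])  # predicted end-to-end latencies in seconds
```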
## Enriched Queue Entry Fields (`queue_details` files)
Each entry in `running_requests[]` and `waiting_requests[]` includes:

| Field | Description |
|---|---|
| `num_prompt_tokens` | Request's prompt length |
| `num_computed_tokens` | Tokens already processed (prefill progress) |
| `total_num_tokens` | Total sequence length at snapshot |
| `num_output_tokens` | Output tokens generated so far |
| `actual_output_tokens` | Final output length (enriched by cross-referencing request IDs) |
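A sketch of how the post-collection enrichment could work, assuming the flat records' recovered `actual_output_tokens` serve as the lookup and that the vLLM internal ID embeds the same UUID as the client-side `request_id`; the exact matching rule used is not documented here, and unmatched entries (about 0.04%) stay unset:

```python
import json

# Build a lookup from request UUID -> final output length using the flat
# training records, which carry the recovered actual_output_tokens.
lookup = {}
with open("train.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        lookup[rec["request_id"]] = rec["actual_output_tokens"]

def enrich(record: dict) -> None:
    """Attach actual_output_tokens to each queue entry in place."""
    state = record["schedule_state"]
    for req in state["running_requests"] + state["waiting_requests"]:
        # Assumed matching rule: strip the vLLM "chatcmpl-" prefix to
        # recover the client-side UUID; unmatched entries stay unset.
        uuid = req["request_id"].removeprefix("chatcmpl-")
        if uuid in lookup:
            req["actual_output_tokens"] = lookup[uuid]
```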
## Offline Evaluation Results (verified April 2026)
| Model | E2E MAE | E2E MAPE | Spearman rho |
|---|---|---|---|
| XGBoost (oracle output) | 0.289s | 3.7% | 0.999 |
| Roofline (analytical) | 0.531s | 10.7% | 0.987 |
| XGBoost TTFT | 0.015s | 16.2% | 0.946 |
| XGBoost TPOT | 0.002s | 42.0% | 0.978 |
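Given arrays of true and predicted latencies, the three metrics above can be computed as follows (a minimal sketch):

```python
import numpy as np
from scipy.stats import spearmanr

def report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE (seconds), MAPE (percent), and Spearman rank correlation."""
    err = np.abs(y_pred - y_true)
    return {
        "mae_s": float(err.mean()),
        "mape_pct": float(100.0 * (err / y_true).mean()),
        "spearman_rho": float(spearmanr(y_true, y_pred).correlation),
    }
```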
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{cara_latency_2026,
  title={CARA Latency Prediction Dataset},
  author={Wei Da},
  year={2026},
  url={https://huggingface.co/datasets/asdwb/cara_latency_prediction}
}
```
## Related
- cara_model_estimator — Multi-model quality and length estimation dataset
- CARA: Context-Aware Resource Allocation for heterogeneous LLM serving