PAVO-Bench: 50K-Turn Benchmark for ASR-LLM-TTS Pipeline Routing
Author: NarasingaMoorthy VeiluKanthaPerumal, University of Pennsylvania
Description
PAVO-Bench is a comprehensive benchmark suite for evaluating ASR-LLM-TTS voice pipeline routing decisions. It provides 50,000 turns of benchmark data designed to measure how well different pipeline configurations balance latency, quality, cost, and energy consumption when routing spoken-language queries through cascaded Automatic Speech Recognition (ASR), Large Language Model (LLM), and Text-to-Speech (TTS) components.
The benchmark is organized into three tiers of increasing scale and complexity, plus component-level ablation studies. All results were produced on GPU hardware.
Dataset Files
Tier 1 -- Unit-Level Validation
| File | Description |
|---|---|
| `tier1_statistical_results.json` | Statistical reproducibility results across 5 trials of 1,000 turns each (seeds 42, 123, 456, 789, 1024). Reports mean, std, and 95% confidence intervals for PAVO latency, quality, cost, and energy metrics. |
| `tier1_coupling_results.json` | Coupling-constraint validation measuring LLM quality degradation as a function of ASR word error rate (WER 0--20%) using llama3.1:8b. |
| `tier1_llm_latency_results.json` | LLM latency profiling for llama3.1:8b across short (50-token), medium (200-token), and long (500-token) generation contexts. Reports total latency, time to first token, and tokens/second. |
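The coupling file reports, for each WER level, a `degradation_from_clean` and `degradation_pct` relative to the WER 0% baseline. As a hedged sketch (the helper name is illustrative, not part of the dataset's own code), such fields could be derived from per-level mean quality like this:

```python
# Sketch: derive absolute and relative quality degradation vs. the clean
# (WER 0%) baseline, mirroring the degradation_from_clean / degradation_pct
# fields in tier1_coupling_results.json. Function name is hypothetical.
def degradation(clean_quality, noisy_quality):
    """Return (absolute drop, percentage drop) relative to clean quality."""
    drop = clean_quality - noisy_quality
    return drop, 100.0 * drop / clean_quality
```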
Tier 2 -- Integration-Level Evaluation
| File | Description |
|---|---|
| `tier2_e2e_results.json` | End-to-end pipeline measurements for cloud_premium (whisper-large-v3 + llama3.1:8b) and edge_fast (whisper-tiny + gemma2:2b) configurations on 200 LibriSpeech samples. Includes per-stage latency breakdowns, sample ASR outputs, and sample LLM responses. |
| `tier2_cross_dataset_results.json` | Cross-dataset ASR evaluation on LibriSpeech and FLEURS for whisper-large-v3 and whisper-tiny (200 samples each). Reports WER and latency statistics. |
| `tier2_noise_robustness_results.json` | ASR robustness under white noise at SNR levels of 5--30 dB, plus a clean baseline. Reports WER degradation across noise conditions. |
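For reference, mixing white noise into a waveform at a target SNR is commonly done by scaling Gaussian noise against the signal's power. A minimal, self-contained sketch (the helper name and seeding are illustrative; the benchmark's exact noise-injection code may differ):

```python
import math
import random

# Sketch: add white Gaussian noise at a target SNR (dB), mirroring the
# 5--30 dB conditions in tier2_noise_robustness_results.json.
# Helper name and default seed are hypothetical.
def add_white_noise(samples, snr_db, seed=0):
    rng = random.Random(seed)
    sig_power = sum(s * s for s in samples) / len(samples)
    noise_power = sig_power / (10 ** (snr_db / 10))  # SNR = 10*log10(Ps/Pn)
    scale = math.sqrt(noise_power)
    return [s + rng.gauss(0.0, scale) for s in samples]
```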
Tier 3 -- Scale Evaluation
| File | Description |
|---|---|
| `tier3_50k_summary.json` | Summary statistics for the full 50,000-turn PAVO-Bench dataset: 40K train / 10K test split, complexity distribution (levels 1--5), generation time, and error rate. |
| `tier3_scaling_results.json` | LLM scaling benchmarks across multiple models (gemma2:2b, llama3.1:8b, etc.) with simple, medium, and complex query types. Reports latency, throughput, and real-time suitability. |
Component Analysis
| File | Description |
|---|---|
| `component_ablation_results.json` | Ablation study comparing PAVO-Full, PAVO-NoCoupling, and other ablated configurations. Reports latency, quality, cost, energy, coupling violations, and infeasible percentages. |
Usage
Load individual JSON files directly
```python
import json
from huggingface_hub import hf_hub_download

# Download a specific results file
path = hf_hub_download(
    repo_id="<your-username>/pavo-bench",
    filename="tier3_50k_summary.json",
    repo_type="dataset",
)

with open(path) as f:
    data = json.load(f)

print(f"Total samples: {data['total_samples']}")
print(f"Train/Test split: {data['train_samples']}/{data['test_samples']}")
```
Download all files
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<your-username>/pavo-bench",
    repo_type="dataset",
    local_dir="./pavo-bench-data",
)
```
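Each per-metric block in the result files stores the raw per-seed `values` alongside `mean`, `std`, `ci_95_lower`, and `ci_95_upper`. A sketch of recomputing the interval from the raw values, assuming a normal approximation with z = 1.96 (the helper name is illustrative; the benchmark's exact CI method may differ):

```python
import statistics

# Sketch: recompute a 95% confidence interval from a metric block's
# "values" list, assuming a normal approximation (z = 1.96).
# Helper name is hypothetical.
def ci_95(values, z=1.96):
    n = len(values)
    mean = statistics.fmean(values)
    std = statistics.stdev(values)  # sample std (ddof = 1)
    half = z * std / n ** 0.5
    return mean - half, mean + half
```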
Benchmark Metrics
- Latency (ms): End-to-end and per-component response time
- Quality (0--1): Composite score incorporating ASR accuracy and LLM response quality
- Cost (USD): Per-turn inference cost
- Energy (mJ): Per-turn energy consumption
- Coupling violations: Cases where ASR errors propagate and degrade LLM quality
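Result files also include pairwise baseline comparisons over the per-seed means (e.g. PAVO vs. cloud latency) with paired-test statistics. A stdlib-only sketch of the paired t-statistic over such per-seed values (the helper name is hypothetical, and the benchmark may instead use library routines such as `scipy.stats.ttest_rel`):

```python
import math
import statistics

# Sketch: paired t-statistic over per-seed means, as reported in the
# comparison blocks (mean difference and t_statistic fields).
# Helper name is hypothetical.
def paired_t(pavo, baseline):
    diffs = [p - b for p, b in zip(pavo, baseline)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    t = mean_d / (statistics.stdev(diffs) / math.sqrt(n))
    return mean_d, t
```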
Citation
If you use PAVO-Bench in your research, please cite:
```bibtex
@misc{pavo-bench-2026,
  author      = {VeiluKanthaPerumal, NarasingaMoorthy},
  title       = {PAVO-Bench: A 50K-Turn Benchmark for ASR-LLM-TTS Pipeline Routing},
  year        = {2026},
  institution = {University of Pennsylvania},
  url         = {https://huggingface.co/datasets/<your-username>/pavo-bench}
}
```
License
This dataset is released under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license.