---
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- speech-to-text
- word-error-rate
- benchmark
- cleaned-transcripts
- earnings22
pretty_name: Earnings22-Cleaned-AA
size_categories:
- n<1K
dataset_info:
  features:
  - name: id
    dtype: string
  - name: duration
    dtype: float64
  - name: transcript
    dtype: string
  - name: language
    dtype: string
  - name: url
    dtype: string
  - name: dataset
    dtype: string
  - name: file_name
    dtype: string
  splits:
  - name: test
    num_examples: 6
configs:
- config_name: default
  data_files:
  - split: test
    path: earnings22_cleaned_aa_v1.jsonl
source_datasets:
- esb/datasets
---
# Earnings22-Cleaned-AA
Quick links: AA Speech-to-Text Leaderboard | AA-WER v2.0 article
Earnings22-Cleaned-AA is a cleaned subset of the English Earnings-22 test data from `esb/datasets`, a corpus of corporate earnings calls from global companies featuring speakers of many nationalities and accents. This cleaned subset is the Earnings-22 portion included in AA-WER v2.0. We manually reviewed and corrected errors in the original ground-truth transcriptions to ensure fairer evaluation of Speech-to-Text (STT) models.

This dataset is part of AA-WER v2.0, the Speech-to-Text accuracy benchmark by Artificial Analysis, where it carries a 25% weighting alongside AA-AgentTalk (50%) and VoxPopuli-Cleaned-AA (25%).
## Dataset Summary
| Property | Value |
|---|---|
| Source | Subset of Earnings-22 (ESB) English test split |
| Domain | Corporate earnings calls |
| Number of samples | 6 |
| Sample duration range | ~14–22 minutes |
| Total duration | ~115 minutes |
| Language | English |
## Motivation for Correction
Reference transcripts in the original Earnings-22 test set contained inaccuracies: instances where the ground truth did not match what was actually spoken. Inaccurate ground truth penalizes models that correctly transcribe the audio, unfairly inflating WER scores. After cleaning, model WER on Earnings-22 dropped by 5.6 percentage points (p.p.) on average, and no model's WER increased (article).
## Dataset Correction
We corrected transcripts to reflect verbatim what speakers said. Key corrections included:
- **Incorrect words**: Fixed misspellings, misheard words, and incorrect contractions in the original references
- **Missed words**: Added words the original reference omitted, including genuine repetitions (e.g., "the the" where the speaker actually repeated a word)
- **Partial stuttering**: Removed incomplete word fragments (e.g., "evac-" in "evac- evacuate"), as these are inherently ambiguous in transcription
- **Grammar and tense**: When speakers used incorrect grammar (particularly speakers with accents) but the word choice was clear, we kept the words verbatim as spoken rather than correcting them
Elements already normalized by the Whisper normalizer package (e.g., capitalization, punctuation, and filler words) were not modified, since these differences are already handled during WER calculation.
## Sample

> Thank you, Darcy, and welcome everyone to our December quarterly analyst call. December quarterly production showed a considerable improvement on the September quarter with record production throughput and improving grades, improving recoveries and improving cash flow. Unfortunately, delays accessing higher grade parts of the open pit resulted in lower grades than projected in our guidance. On the exploration front, today we announced a 70% increase in our 100% owned Yamarna resources. So they now sit at 0.5 million ounces...
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("ArtificialAnalysis/Earnings22-Cleaned-AA", split="test")
```

`url` fields in the dataset point to repo-local audio files under `audio/`.
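To fetch the audio itself, here is a minimal sketch using `huggingface_hub`. It assumes each `url` value is a repo-relative path such as `audio/<file_name>`; adjust accordingly if the field stores a full URL.

```python
from huggingface_hub import hf_hub_download

# Assumption: `url` holds a repo-relative path like "audio/<file_name>".
sample = dataset[0]
audio_path = hf_hub_download(
    repo_id="ArtificialAnalysis/Earnings22-Cleaned-AA",
    repo_type="dataset",
    filename=sample["url"],
)
print(audio_path)  # local cache path of the downloaded audio file
```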
## WER Evaluation

For WER evaluation, we use the jiwer library with a custom text normalizer that builds on OpenAI's Whisper normalizer. Our normalizer adds the following (a sketch follows the list):
- Digit splitting to prevent number grouping mismatches (e.g., "1405 553 272" vs. "1405553272")
- Preservation of leading zeros in codes and identifiers
- Normalization of spoken symbols (e.g., "+", "_")
- Stripping redundant ":00" in times (e.g., "7:00pm" vs. "7pm")
- Additional US/UK English spelling equivalences (e.g., "totalled" vs. "totaled")
- Accepted equivalent spellings for ambiguous proper nouns (e.g., "Mateo" vs. "Matteo")
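As an illustrative sketch only (not the exact normalizer used for the benchmark), two of these rules can be layered on top of Whisper's `EnglishTextNormalizer`; the regexes here are simplified stand-ins:

```python
import re

from whisper.normalizers import EnglishTextNormalizer  # pip install openai-whisper

base_normalizer = EnglishTextNormalizer()

def normalize(text: str) -> str:
    text = base_normalizer(text)
    # Strip redundant ":00" in times so "7:00pm" and "7pm" normalize alike.
    text = re.sub(r"(\d+):00\s*(am|pm)\b", r"\1\2", text)
    # Split digit runs into single digits so different groupings of the
    # same number (e.g., "1405 553 272" vs. "1405553272") compare equal.
    text = " ".join(re.sub(r"(\d)", r" \1 ", text).split())
    return text

print(normalize("1405 553 272") == normalize("1405553272"))  # True
```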
Results within the dataset are aggregated as an audio-duration-weighted average WER, so that numerous short clips do not outweigh longer files (a sketch of this aggregation follows).
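A minimal sketch of that aggregation, assuming a hypothetical `hypotheses` list holding one model transcript per dataset row and reusing the `normalize` helper above:

```python
import jiwer

def duration_weighted_wer(rows, hypotheses):
    """Average per-file WER, weighting each file by its audio duration."""
    total_duration = sum(row["duration"] for row in rows)
    weighted_sum = 0.0
    for row, hypothesis in zip(rows, hypotheses):
        file_wer = jiwer.wer(normalize(row["transcript"]), normalize(hypothesis))
        weighted_sum += file_wer * row["duration"]
    return weighted_sum / total_duration

# hypotheses: list[str] of model transcripts, aligned with dataset rows
# score = duration_weighted_wer(dataset, hypotheses)
```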
## Citation

If you use this dataset, please cite:

```bibtex
@misc{artificialanalysis2026earnings22cleaned,
  title={Earnings22-Cleaned-AA: Cleaned Ground Truth Transcripts for Earnings22 English Test Set},
  author={Artificial Analysis},
  year={2026},
  url={https://artificialanalysis.ai/articles/aa-wer-v2}
}
```
## Resources
- Full results and leaderboard
- Benchmarking methodology
- AA-WER v2.0 article
- VoxPopuli-Cleaned-AA on Hugging Face
## Versioning

- Current version: 1.0
- Used in: AA-WER v2.0 benchmark release

Specific dataset versions used for each AA-WER release are documented in the Artificial Analysis methodology.
## License

This dataset is released under Apache-2.0. For upstream terms, see `esb/datasets`.
## Feedback
These cleaned transcripts reflect our best effort at verbatim ground truth, informed by manual review and cross-validation. Future refinements will be released as subsequent versions (v2+). If you spot issues, we welcome feedback via our contact page or Discord.
