---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: call_id
    dtype: string
  - name: turn_index
    dtype: int32
  - name: text
    dtype: string
  - name: normalized_text
    dtype: string
  - name: needs_manual_review
    dtype: bool
  - name: duration_ms
    dtype: int32
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  splits:
  - name: train
    num_bytes: 72820
    num_examples: 530
  download_size: 45729
  dataset_size: 72820
---
# Westlake Marshall Customer ASR Benchmark
This dataset packages 98 customer-only Westlake Marshall calls that were manually annotated by human reviewers. It is designed for benchmarking automatic speech recognition (ASR) systems with word error rate (WER) metrics on real customer service traffic.
## Source Data
- Audio: Dual-channel MP3 recordings retrieved from the Westlake Marshall servicing platform. The customer channel (left) was isolated and converted to mono 16 kHz WAV for each customer speaking turn.
- Transcripts: Human-written transcriptions focused on the customer turns. Transcripts may include `*` markers where words were unintelligible.

Two calls present in the original annotation sheet were missing audio assets and are excluded from this release.
## Dataset Structure
```
westlake_asr/
  audio/
    <call_id>/
      turn_<index>.wav
  data/
    turns.jsonl
    stats.json
  README.md
```

(Prep scripts live under `src/stt_eval/` in the repo.)
## Turn Record Schema (`turns.jsonl`)
Each line is a minimal JSON object with:
| Field | Type | Description |
|---|---|---|
| `call_id` | string | Call UUID |
| `turn_index` | int | 1-based index within the call |
| `text` | string | Human transcript for the turn (raw casing/punctuation) |
| `normalized_text` | string | Lowercased alphanumeric text used as the WER reference |
| `needs_manual_review` | bool | `true` when the transcript contained `*` (flag for follow-up) |
| `audio_filepath` | string | Relative path to the cropped 16 kHz mono WAV clip |
| `duration_ms` | int | Length of the exported clip in milliseconds |
Audio clips live under `audio/`; the Hugging Face dataset viewer renders them directly.
`stats.json` summarizes aggregate totals (530 turns; 53 flagged for manual review, approx. 10%).
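The exact normalization used to produce `normalized_text` lives in the repo's prep scripts; the sketch below is only a plausible reading of the schema description (lowercased, alphanumeric-only) and may differ from the canonical implementation in edge cases:

```python
import re

def normalize_text(text: str) -> str:
    """Lowercase and keep only alphanumeric runs, per the schema description.

    Hypothetical reimplementation for illustration; the canonical version
    lives in the repo's prep scripts.
    """
    # Collapse every run of non-alphanumeric characters into a single space.
    cleaned = re.sub(r"[^a-z0-9]+", " ", text.lower())
    return cleaned.strip()
```

For example, `normalize_text("Hello, World! 123")` yields `"hello world 123"`, which is the form the WER reference expects.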
## Quality Considerations
- Timestamps originated from human annotations and may drift by ±1 second. Each clip has a 100 ms leading pad and 200 ms trailing pad to compensate.
- Clips shorter than 400 ms are automatically extended (when possible) to ensure compatibility with most ASR models.
- Use the `needs_manual_review` flag to filter or separately analyze segments containing unintelligible speech.
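For instance, a loaded `turns.jsonl` can be split into clean and flagged subsets using that field (field names taken from the schema above; the inline records here are placeholders for the real file):

```python
import json

def split_by_flag(jsonl_lines):
    """Partition turn records into (clean, flagged) by needs_manual_review."""
    clean, flagged = [], []
    for line in jsonl_lines:
        rec = json.loads(line)
        (flagged if rec["needs_manual_review"] else clean).append(rec)
    return clean, flagged

# Inline sample records standing in for the real turns.jsonl lines:
sample = [
    '{"call_id": "abc", "turn_index": 1, "needs_manual_review": false}',
    '{"call_id": "abc", "turn_index": 2, "needs_manual_review": true}',
]
clean, flagged = split_by_flag(sample)
```

Scoring the two subsets separately makes it easy to see how much of the overall WER comes from segments a human already found hard to hear.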
## Suggested Evaluation Workflow
- Push any edits to Hugging Face (`scripts/push_hf_dataset.py …`).
- Run the evaluation command, which always pulls the dataset from Hugging Face (default repo `TrySalient/marshall-asr`) to guarantee you are measuring against the published 12-second audio version.
- For each record, the script feeds the WAV clip to the ASR engine and compares its output to `normalized_text` using WER (jiwer).
- Reports include overall WER plus clean vs. flagged breakdowns, and the failure artifacts are written under `datasets/westlake_asr/data/evaluations/` for review.
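WER, as jiwer computes it, is the word-level edit distance between reference and hypothesis divided by the reference length. A minimal pure-Python sketch of the metric (use jiwer itself in the actual pipeline):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)
```

Because the reference is `normalized_text`, the hypothesis should be run through the same normalization before scoring, or WER will be inflated by casing and punctuation mismatches.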
## Review / Re-annotation Workflow
Launch the local review tool:

```bash
source .venv_tts_eval/bin/activate
streamlit run tools/review_app.py
```

Use the interface to listen to clips, update transcripts, and toggle the `needs_manual_review` flag. Saved changes appear in the “Pending edits” table and can be exported to `outputs/review_exports/review_edits_TIMESTAMP.json`.

Apply edits to `turns.jsonl`:

```bash
python scripts/apply_review_edits.py --edits outputs/review_exports/review_edits_TIMESTAMP.json
```

After applying, regenerate evaluation metrics if needed and push the dataset back to Hugging Face with `scripts/push_hf_dataset.py`.
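The exact edits-file schema is defined by the review app; the sketch below only illustrates the overlay step, assuming (hypothetically) that each edit carries `call_id`, `turn_index`, and the updated fields:

```python
def apply_edits(records, edits):
    """Overlay review edits onto turn records, keyed by (call_id, turn_index).

    Assumed (hypothetical) edit shape: a dict with call_id, turn_index, and
    any of the editable fields; only the keys present are applied.
    """
    editable = ("text", "normalized_text", "needs_manual_review")
    by_key = {(e["call_id"], e["turn_index"]): e for e in edits}
    updated = []
    for rec in records:
        edit = by_key.get((rec["call_id"], rec["turn_index"]))
        if edit:
            # Merge only the editable fields the reviewer actually changed.
            rec = {**rec, **{k: v for k, v in edit.items() if k in editable}}
        updated.append(rec)
    return updated

# Inline sample data standing in for turns.jsonl records and an edits export:
records = [{"call_id": "c1", "turn_index": 1, "text": "old", "needs_manual_review": True}]
edits = [{"call_id": "c1", "turn_index": 1, "text": "new", "needs_manual_review": False}]
result = apply_edits(records, edits)
```

Keying on `(call_id, turn_index)` matches the schema's natural record identity, so untouched turns pass through unchanged.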
## Licensing and Usage
This dataset contains proprietary Westlake Marshall customer content and is provided for internal benchmarking only. Do not redistribute outside authorized Salient/Skwid environments.