Annotation protocol
This document describes how the 124,876 listener judgments in VIPBench were collected.
Recruitment
Listeners were recruited via the Centaur AI crowdsourcing platform (https://centaur.ai) under an Institutional Review Board (IRB) approved research protocol. The pool was restricted to English-speaking adults; 1,290 participants completed at least one trial. Compensation followed the platform's standard rate, which meets minimum-wage requirements in the country of data collection.
Consent
Participants reviewed and accepted the platform's standard consent text before beginning the study. The IRB-approved protocol covered (a) the use of publicly available celebrity recordings as source audio, (b) the generation of AI voice clones from those recordings, and (c) the collection of binary same/different identity judgments.
Stimulus presentation
Each trial presented a single audio clip in three parts:
- A reference recording of one celebrity speaker (~6 seconds).
- One second of silence followed by a short beep (separator).
- A comparison clip (real, AI cloned, or morphed; ~6 seconds).
Stimuli are 16 kHz mono WAV files. Each combined stimulus pairs a comparison clip from the comparison directory under data/audio/ with its matching reference. The pre-concatenated trial-format clips used in the human study are not redistributed; users can recreate them by concatenating reference + 1 s silence + beep + comparison.
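The concatenation above can be sketched as follows. The beep's frequency and duration here are assumptions, since the separator tone is not specified in this document; samples are plain float lists and can be written out as 16 kHz mono WAV with, e.g., the standard-library wave module.

```python
import math

SR = 16_000  # stimuli are 16 kHz mono WAV


def make_beep(duration=0.2, freq=1000.0, sr=SR):
    # duration and frequency are assumptions; the release does not
    # specify the separator tone's parameters.
    return [0.5 * math.sin(2 * math.pi * freq * n / sr)
            for n in range(int(duration * sr))]


def build_trial(reference, comparison, sr=SR):
    """Concatenate reference + 1 s silence + beep + comparison."""
    silence = [0.0] * sr  # exactly one second of silence
    return list(reference) + silence + make_beep(sr=sr) + list(comparison)
```
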
Response interface
For each trial, listeners answered:
- Identity judgment (binary, required): "Are these two clips from the same speaker?" with response options "Same" or "Different".
- Speaker recognition (categorical, optional probe, recorded as know_speaker): which of four within-group celebrities (or "I don't know") the listener recognized in the reference clip.
The identity judgment defines the primary annotation; the recognition probe is used to filter trials where the listener recognized the reference (78.4% of all judgments come from unfamiliar-listener trials).
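A minimal sketch of the unfamiliar-listener filter, using illustrative in-memory rows; the column and value names other than know_speaker are assumptions, so check them against the actual schema of data/participant_responses.csv.

```python
# Illustrative rows; real judgments live in data/participant_responses.csv.
responses = [
    {"pair_id": "p1", "identity": "Same",      "know_speaker": "I don't know"},
    {"pair_id": "p1", "identity": "Different", "know_speaker": "celeb_3"},
    {"pair_id": "p2", "identity": "Same",      "know_speaker": "I don't know"},
]

# Keep only unfamiliar-listener trials: the optional probe shows the
# listener did not recognize the reference speaker.
unfamiliar = [r for r in responses if r["know_speaker"] == "I don't know"]
```
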
Coverage
- Each pair received at least 8 judgments (median 10, range 8 to 92).
- Real-speech pairs received more coverage on average than synthetic pairs, giving tighter consensus estimates on the real-speech reference distribution.
- Stimulus presentation order is randomized within participant.
Attention checks
Embedded probes flagged inattentive responses; per-listener qualification flags are computed from these probes. The full release includes all responses; downstream users can apply the qualification filter via the columns in data/participant_responses.csv.
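A sketch of applying the qualification filter, assuming a hypothetical passed_attention_checks flag column; the actual column names in data/participant_responses.csv may differ.

```python
import csv


def qualified_rows(path="data/participant_responses.csv"):
    """Yield only responses from listeners who passed the attention checks.

    The flag column name is an assumption for illustration.
    """
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("passed_attention_checks") == "1":
                yield row
```
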
Demographics collected
Per listener: pseudonymized integer identifier, age band, gender, and a binary first-language flag. No personally identifying information is included. Listener IDs are not tied to any external account or platform profile.
P(same) computation
For each pair, P(same) = same_vote / num_response, where same_vote counts listeners who answered "Same" and num_response is the total number of judgments on that pair. The Spearman-Brown corrected split-half reliability of P(same) over the 1,290-participant pool is rho_SB = 0.705, which upper-bounds the correlation any model can achieve against the observed target.
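A sketch of the per-pair P(same) computation, plus the standard Spearman-Brown correction applied to a split-half correlation; function names and the input format are illustrative.

```python
from collections import defaultdict


def p_same(judgments):
    """judgments: iterable of (pair_id, is_same_vote) -> {pair_id: P(same)}."""
    same = defaultdict(int)
    total = defaultdict(int)
    for pair, vote in judgments:
        total[pair] += 1
        same[pair] += int(vote)
    return {pair: same[pair] / total[pair] for pair in total}


def spearman_brown(r_half):
    """Standard Spearman-Brown correction for a split-half correlation."""
    return 2 * r_half / (1 + r_half)
```

With r_half the correlation between P(same) estimates computed from two random halves of the listener pool, spearman_brown(r_half) gives the corrected reliability reported above.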
Reproducing the listening study
Researchers wishing to extend the study (e.g. with non-English listeners) can reuse this protocol. The trial-format audio files (reference + silence + beep + comparison) are deterministically reconstructable from data/audio/. Per-pair stimulus IDs in data/stimuli.csv allow exact replication of trial assignment.
For the IRB-approval scope and permissible extensions, contact the dataset maintainers (camera-ready will list contact information).