# CANDOR - Turn-Taking Annotations

Speech transcription and turn-taking annotation dataset built from the CANDOR corpus using NVIDIA Canary-Qwen-2.5B ASR.

## Dataset Description

This dataset contains 172,591 transcribed speech segments from the CANDOR conversational speech corpus (1,656 conversations). Each segment is a per-speaker utterance with a Canary ASR transcript, designed for turn-taking prediction research.
## Source

- **Audio corpus**: CANDOR (English conversational speech, 1,656 conversations)
- **ASR model**: NVIDIA Canary-Qwen-2.5B (`canary-qwen-2.5b`)
- **Audio format**: Per-speaker mono WAV (16 kHz), extracted from stereo MP3
## Dataset Structure

| Column | Type | Description |
|---|---|---|
| `audio_filename` | string | Per-speaker WAV filename (e.g., `{uuid}_L.wav`) |
| `offset` | float | Start time within the audio file (seconds) |
| `duration` | float | Duration of the segment (seconds) |
| `segment_id` | string | Unique segment identifier |
| `conversation_id` | string | CANDOR conversation UUID |
| `channel` | string | Speaker channel (`L` or `R`) |
| `text` | string | Canary ASR transcript |
| `model` | string | ASR model (`canary-qwen-2.5b`) |
| `alignment_mean_prob` | float | Parakeet CTC forced-alignment score (to be added) |
| `tt_label` | string | TEN turn-taking label: finished/unfinished/wait (to be added) |
| `tt_confidence` | float | TEN confidence (3-way softmax) (to be added) |
| `llm_label` | string | LLM label: COMPLETE/INCOMPLETE/BACKCHANNEL (to be added) |
| `llm_confidence` | float | LLM confidence (to be added) |
| `llm_rationale` | string | LLM reasoning (to be added) |
| `final_tag` | string | Cross-annotation consensus label (to be added) |
| `accepted` | bool | Whether the segment passed all quality gates (to be added) |
| `accept_reason` | string | Reason for acceptance (to be added) |
| `reject_reason` | string | Reason for rejection (to be added) |
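Each row references a time window inside its per-speaker WAV rather than embedding audio, so a consumer must convert `offset` and `duration` into sample indices before slicing. A minimal sketch, assuming the 16 kHz mono format stated above (the slicing comment refers to a hypothetical `audio` array loaded with any WAV reader):

```python
def segment_sample_range(offset_s: float, duration_s: float,
                         sample_rate: int = 16_000) -> tuple[int, int]:
    """Convert a row's offset/duration (seconds) into [start, end) sample indices."""
    start = round(offset_s * sample_rate)
    end = start + round(duration_s * sample_rate)
    return start, end

# Example: a 7.6 s segment starting at 12.34 s into a 16 kHz per-speaker WAV.
start, end = segment_sample_range(12.34, 7.6)
# audio[start:end] would then hold the mono samples for this segment.
```

Rounding to the nearest sample keeps neighbouring segments from drifting when offsets are stored as floats.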
## Annotation Pipeline (in progress)

Annotations are being added incrementally:

1. Parakeet CTC forced alignment → `alignment_mean_prob`
2. TEN turn-taking model → `tt_label`, `tt_confidence`
3. LLM annotation (Qwen2.5-32B) → `llm_label`, `llm_confidence`, `llm_rationale`
4. Cross-annotation → `final_tag`, `accepted` (requires `tt_confidence >= 0.9` and `alignment_mean_prob >= 0.9`)
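The acceptance rule in the last step can be sketched as a simple threshold gate. This is an assumption-laden sketch: the card only states the two confidence thresholds, so any further consensus checks the real pipeline may apply (e.g. agreement between the TEN and LLM labels) are omitted:

```python
def passes_quality_gates(row: dict) -> bool:
    """Sketch of the acceptance rule: a segment is accepted only if both
    annotation confidences are present and meet the 0.9 thresholds.
    (Any additional checks in the actual pipeline are not modelled here.)"""
    tt = row.get("tt_confidence")
    align = row.get("alignment_mean_prob")
    return tt is not None and align is not None and tt >= 0.9 and align >= 0.9

passes_quality_gates({"tt_confidence": 0.95, "alignment_mean_prob": 0.92})  # → True
passes_quality_gates({"tt_confidence": 0.80, "alignment_mean_prob": 0.95})  # → False
```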
## Statistics

- Total segments: 172,591
- Conversations: 1,656
- Segments with text: 165,925 (96.1%)
- Mean duration: 7.6 s
## Intended Use

This dataset is intended for research on:

- Turn-taking prediction and modeling
- Conversational speech recognition
- Backchannel detection
## Related Datasets

- `hiraki/seamless-interact-canary-transcripts`: the same pipeline applied to the Seamless Interact corpus (2.78M segments)

## License

CC-BY-4.0 (following the CANDOR corpus license).