---
dataset_info:
  features:
  - name: source
    dtype: string
  - name: audio
    dtype: audio
  - name: is_complete
    dtype: bool
  - name: transcript
    dtype: string
  splits:
  - name: train
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*
license: bsd-2-clause
task_categories:
- automatic-speech-recognition
- voice-activity-detection
language:
- en
size_categories:
- 1K<n<10K
---

# Utterly

## Dataset Summary

**Utterly** is a speech dataset derived from *pipecat-ai/human_5_all* and *pipecat-ai/smart-turn-data-v3.1-train*. It contains over **7.1k recordings of complete and partial English utterances**, each augmented with **turn-level annotations**, including:

* Verbatim Whisper-generated transcripts
* End-of-turn (EoT) markers
* Speaker identifiers (Coming soon)

The dataset is designed to support research and development of speech and dialogue systems that require joint modeling of **speech recognition** and **conversational turn-taking**, such as streaming ASR, semantic end-of-turn detection, and real-time conversational agents.

---

## Source Data

* **Base datasets**: 
  - *pipecat-ai/human_5_all*
  - *pipecat-ai/smart-turn-data-v3.1-train*
* **Language(s)**: English
* **Modality**: Audio (speech; mono-channel; sampled at 16kHz), Text
* **Interaction type**: Human conversational speech
* **Utterances**: 7,111
* **Speakers**: 500+

Dataset splits (e.g., train/validation/test) are not predefined and may be created by downstream users as needed. Deduplication was applied to the underlying audio sources to ensure dataset splits can be made without contamination.
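Because the underlying audio was deduplicated, a simple random split is a reasonable starting point. A minimal sketch in plain Python (the 90/10 ratio and the seed are arbitrary choices, not part of the dataset):

```python
import random

# Hypothetical example: split 7,111 utterance indices into train/validation.
# The 90/10 ratio and the seed are arbitrary; adjust to your needs.
NUM_UTTERANCES = 7111
indices = list(range(NUM_UTTERANCES))

rng = random.Random(42)  # fixed seed for reproducibility
rng.shuffle(indices)

split_point = int(0.9 * NUM_UTTERANCES)
train_indices = indices[:split_point]
valid_indices = indices[split_point:]
```

The resulting index lists can then be passed to `datasets.Dataset.select` to materialize the splits.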

---

## Annotation Details

* **Transcripts**

  * Generated automatically using **Whisper Large V3 Turbo**.
  * A subset of samples (\~200) was manually reviewed and corrected. The transcripts are estimated to have a word error rate (WER) of approximately **2.8%**.

* **End-of-Turn markers**

  * Human annotations inherited from the base datasets.

* **Speaker IDs**

  * Coming soon

---

## Dataset Structure

A typical data entry includes:

* `audio`: Path or reference to the audio utterance
* `transcript`: Text transcription of the utterance
* `speaker_id`: Identifier for the speaker (Coming soon)
* `is_complete`: Boolean flag indicating end-of-turn, i.e. turn completion

---

## Usage
To load the dataset from the Hub, use the `datasets` library:

```python
import datasets

ds = datasets.load_dataset(
    "ThBel/Utterly",
    split="train",
    streaming=True,  # optional
)

for row in ds:
    # Do something with the data
    print(row["audio"])  # or row["is_complete"], row["transcript"], ...
```

Alternatively, you may clone the `ThBel/Utterly` repository and load the underlying Parquet files with `pandas.read_parquet`.
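As an illustration of working with the columns listed above, the sketch below filters a frame down to complete utterances. The rows here are invented placeholders matching the card's schema; the real frame would come from `pandas.read_parquet` on the cloned files:

```python
import pandas as pd

# In practice, after cloning the repository:
#   df = pd.read_parquet("data/")
# Placeholder rows with the card's columns (values are invented):
df = pd.DataFrame(
    {
        "source": ["human_5_all", "smart-turn-data-v3.1-train"],
        "audio": ["utt_0001.wav", "utt_0002.wav"],
        "is_complete": [True, False],
        "transcript": ["so what do you think", "and then I was"],
    }
)

# Keep only utterances annotated as complete turns.
complete = df[df["is_complete"]]
print(len(complete))
```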

---

## Intended Use Cases

The Utterly dataset is designed to support a range of speech and dialogue research tasks, including but not limited to:

* **Automatic Speech Recognition (ASR)** with embedded end-of-turn detection
* **Semantic end-of-turn modeling** using lexical and acoustic cues
* **Turn-taking and floor-control research** in conversational AI
* **Voice assistants and dialogue systems** requiring low-latency response timing

---

## Quality Considerations

* End-of-turn annotations involve human judgment and may reflect subjective interpretations of conversational completion.
* Transcription quality may vary depending on audio clarity and source conditions.
* Overlapping speech, interruptions, or disfluencies may introduce ambiguity in turn boundaries.

Users are encouraged to validate performance across multiple evaluation settings.

---

## Ethical Considerations

* The dataset consists of recorded human speech and should be used in accordance with the original dataset's licensing and consent terms.
* No additional personally identifying information beyond speaker IDs is introduced.
* Models trained on this dataset should avoid misuse related to surveillance or speaker profiling.

---

## Disclaimer and Licensing

Note that Utterly is a *derived dataset*. I am not the original creator of the source datasets and hold no rights over their content. This dataset is provided as-is for research purposes, and all credit goes to the original authors.

Annotations are released under the **BSD-2-Clause** license and are intended to be compatible with the licensing terms of the source datasets. 

---

## Citation

If you use the Utterly dataset in academic or commercial work, please reference the original datasets.