Update README.md
**Utterly** is a speech dataset derived from *pipecat-ai/human_5_all*. It contains **~3.86k English utterances** from a broad range of speakers and augments conversational audio with turn-level annotations, including:

* Verbatim Whisper-generated transcripts
* End-of-turn (EoT) markers
* Speaker identifiers (coming soon)

The dataset is designed to support research and development of speech and dialogue systems that require joint modeling of **speech recognition** and **conversational turn-taking**, such as streaming ASR systems and real-time conversational agents.

---
* **Language(s)**: English
* **Modality**: Audio (speech; mono-channel; sampled at 16 kHz), Text
* **Interaction type**: Human conversational speech
* **Utterances**: 3,860
* **Speakers**: 100+

Dataset splits (e.g., train/validation/test) are not predefined and may be created by downstream users as needed.
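Since no splits ship with the dataset, one reproducible way to carve them out is to shuffle indices with a fixed seed. A minimal stdlib sketch (the 3,860 utterance count comes from this card; the 90/10 ratio and seed are illustrative):

```python
import random

def make_splits(n, test_frac=0.1, seed=42):
    """Reproducibly shuffle indices and carve out a held-out test split."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed -> same split every run
    n_test = int(n * test_frac)
    return idx[n_test:], idx[:n_test]  # (train indices, test indices)

train_idx, test_idx = make_splits(3860)
print(len(train_idx), len(test_idx))  # 3474 386
```

If you load the data with the Hugging Face `datasets` library, `Dataset.train_test_split(test_size=0.1, seed=42)` achieves the same thing directly.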
Note that Utterly is a *derived dataset*. All audio originates from the base dataset, with additional annotations created by the dataset author.

---
* **Transcripts**

  * Generated automatically using **Whisper Large V3 Turbo**.
  * A subset of samples (~200) was manually reviewed and corrected. Based on this subset, the transcripts are estimated to have a word error rate (WER) of approximately **2.8%**.

* **End-of-Turn markers**

* **Speaker IDs**

  * Coming soon

---
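The WER estimate above is the standard word error rate: word-level edit distance (substitutions, insertions, deletions) divided by reference length. A minimal sketch of how such an estimate can be computed from reference/hypothesis pairs (not the author's evaluation script):

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = ref.split(), hyp.split()
    # Standard edit-distance DP table over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1 sub / 6 words ≈ 0.167
```

Averaging this over the ~200 manually corrected reference transcripts yields an estimate like the ~2.8% reported here.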
* `audio`: Path or reference to the audio utterance
* `transcript`: Text transcription of the utterance
* `speaker_id`: Identifier for the speaker (coming soon)
* `is_completed`: Boolean or categorical flag indicating end-of-turn, i.e. turn completion

Depending on downstream usage, additional metadata from the source dataset may also be present.
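The fields above can be pictured as a plain record. A hypothetical entry (field names follow this card; the values are made up for illustration) and a helper that turns it into a binary end-of-turn training label:

```python
# Hypothetical Utterly entry -- field names follow the dataset card,
# values are invented for illustration only.
entry = {
    "audio": "audio/utt_000123.wav",          # path/reference to the 16 kHz mono clip
    "transcript": "yeah that sounds good to me",
    "speaker_id": None,                       # coming soon
    "is_completed": True,                     # the speaker finished their turn
}

def eot_label(e: dict) -> int:
    """Binary end-of-turn target for training a turn-taking classifier."""
    return int(bool(e["is_completed"]))

print(eot_label(entry))  # 1
```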
---

## Quality Considerations

* End-of-turn annotations involve human judgment and may reflect subjective interpretations of conversational completion.