---
language:
- en
license: cc-by-4.0
tags:
- audio
- text-to-speech
- mimi
- librispeech
- multi-speaker
- speech-synthesis
- codec
task_categories:
- text-to-speech
pretty_name: LibriSpeech ASR — Kyutai Mimi Encoded
size_categories:
- 10K<n<100K
---
# LibriSpeech ASR — Kyutai Mimi Encoded
[LibriSpeech ASR](https://www.openslr.org/12) (train.clean.100) pre-encoded with the [Kyutai Mimi](https://huggingface.co/kyutai/mimi) neural audio codec.

Instead of raw waveforms, each utterance is stored as a compact matrix of discrete codec tokens (8 codebooks × L frames). This makes the data directly usable in any language-model-style audio generation pipeline, with no GPU encoder needed at training time.
## What's inside
```
manifest.jsonl # metadata — one JSON record per utterance
spk_index.json # { "speaker_id": [idx, idx, ...] } — speaker-to-utterance index
shards/
├── shard_0000.pt # packed dict of { idx -> (8, L) int16 code tensor }
├── shard_0001.pt
└── ...
```
Each `manifest.jsonl` record:
```json
{
"idx": 0,
"text": "He was in a confused state of mind.",
"codes_file": "shards/shard_0000.pt:0",
"speaker_id": "1234",
"n_frames": 198
}
```
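
As a sketch of how a record's `codes_file` pointer can be resolved: the `path:key` convention is taken from the example record above, and since each shard is a packed `.pt` dict, the codes themselves would be loaded with `torch.load` (shown only as a comment here):

```python
def parse_codes_ref(codes_file: str) -> tuple[str, int]:
    """Split a 'codes_file' pointer like 'shards/shard_0000.pt:0'
    into the shard path and the integer key inside that shard's dict."""
    path, _, key = codes_file.rpartition(":")
    return path, int(key)

shard_path, key = parse_codes_ref("shards/shard_0000.pt:0")
# The codes for this utterance would then be:
#   codes = torch.load(shard_path)[key]   # (8, L) int16 tensor
```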
`spk_index.json` maps each speaker ID to the list of utterance indices for that speaker, useful for sampling reference audio in speaker-conditioned tasks.
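
A reference utterance for speaker conditioning could then be drawn like this (a minimal sketch with a toy in-memory index standing in for `spk_index.json`; the helper name is illustrative):

```python
import random

# Toy stand-in for spk_index.json: speaker ID -> utterance indices.
spk_index = {"1234": [0, 7, 42], "5678": [1, 3]}

def sample_reference(spk_index, speaker_id, exclude=None, rng=random):
    """Pick a reference utterance index for `speaker_id`,
    optionally excluding the target utterance itself."""
    pool = [i for i in spk_index[speaker_id] if i != exclude]
    return rng.choice(pool)

ref = sample_reference(spk_index, "1234", exclude=0)  # 7 or 42, never 0
```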
## Dataset details
| | |
|---|---|
| Source | [LibriSpeech ASR train.clean.100](https://www.openslr.org/12) |
| Speakers | 251 |
| Utterances | ~28,000 |
| Total duration | ~100 hours |
| Codec | [Kyutai Mimi](https://huggingface.co/kyutai/mimi) |
| Codec sample rate | 24,000 Hz |
| Codec frame rate | 12.5 fps |
| Codebooks | 8 |
| Token dtype | int16 |
| License | CC BY 4.0 |
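
These numbers pin down a simple size calculation: at 12.5 frames per second with 8 codebooks, an utterance's duration and token count follow directly from `n_frames` (the helper names below are illustrative, not part of the dataset):

```python
FRAME_RATE_HZ = 12.5   # Mimi frames per second
N_CODEBOOKS = 8        # code rows per frame

def frames_to_seconds(n_frames: int) -> float:
    """Audio duration represented by n_frames of Mimi codes."""
    return n_frames / FRAME_RATE_HZ

def frames_to_tokens(n_frames: int) -> int:
    """Total discrete tokens across all codebooks."""
    return n_frames * N_CODEBOOKS

# The example record above (n_frames = 198) is a 15.84 s utterance
# represented by 1,584 discrete tokens.
```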
## What you can use this for
- Multi-speaker / voice-cloning TTS research
- Speaker-conditioned codec language models
- Speaker representation learning
- Audio tokenization benchmarks
- Any task that benefits from a diverse, multi-speaker English speech corpus in discrete token form
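
To get waveforms back out of the tokens, the codec itself can decode them. The sketch below assumes the `transformers` port of Mimi (`MimiModel`); imports are deferred inside the function so the snippet stays importable without `torch`/`transformers` installed, and nothing is downloaded until you actually call it:

```python
def decode_codes(codes):
    """Decode an (8, L) tensor of Mimi codes back to 24 kHz audio.

    Assumes the Hugging Face `transformers` implementation of the
    codec (`MimiModel`); calling this downloads the model weights.
    """
    import torch
    from transformers import MimiModel

    model = MimiModel.from_pretrained("kyutai/mimi")
    with torch.no_grad():
        # decode() expects a (batch, num_quantizers, frames) code tensor.
        out = model.decode(codes.unsqueeze(0).long())
    return out.audio_values  # waveform tensor at 24 kHz
```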