---
license: apache-2.0
task_categories:
  - text-to-speech
  - automatic-speech-recognition
configs:
  - config_name: small
    default: true
    data_files:
      - split: train
        path:
          - default/small/small-*.parquet

  - config_name: medium
    data_files:
      - split: train
        path:
          - default/medium/medium-*.parquet

  - config_name: large
    data_files:
      - split: train
        path:
          - default/large/shard-*/large-*.parquet

  - config_name: dev
    data_files:
      - split: train
        path:
          - default/dev/dev-*.parquet

  - config_name: test_clean
    data_files:
      - split: train
        path:
          - default/test_clean/test_clean-*.parquet

  - config_name: test_clean_large
    data_files:
      - split: train
        path:
          - default/test_clean_large/test_clean_large-*.parquet

  - config_name: test_other
    data_files:
      - split: train
        path:
          - default/test_other/test_other-*.parquet

  - config_name: test_other_large
    data_files:
      - split: train
        path:
          - default/test_other_large/test_other_large-*.parquet
language:
  - en
pretty_name: Libriheavy
size_categories:
  - 10M<n<100M
---

# Libriheavy

Libriheavy is a 50,000-hour ASR corpus with punctuation, casing, and context, built as a labeled version of Libri-Light.

This uploaded version replaces the default Libri-Light audio files with the highest-quality versions available
from LibriVox. In most cases, this upgrades the source audio from a 64 kbps MP3 to a 128 kbps MP3.

The audio files are then re-encoded with the Opus codec at 68 kbps to retain quality while reducing size.

- Homepage: https://github.com/k2-fsa/libriheavy
- License: apache-2.0

## Configs

Each dataset config exposes a single split named `train`.

- `small` (`train`): 509 hours of speech. 417 speakers averaging 1.22 hours per speaker.
- `medium` (`train`): 5,042 hours of speech. 1531 speakers averaging 3.29 hours per speaker.
- `large` (`train`): 50,794 hours of speech. 6736 speakers averaging 7.54 hours per speaker.
- `dev` (`train`): 22.3 hours of speech. 141 speakers averaging 0.16 hours per speaker.
- `test_clean` (`train`): 10.5 hours of speech. 70 speakers averaging 0.15 hours per speaker.
- `test_other` (`train`): 11.5 hours of speech. 72 speakers averaging 0.16 hours per speaker.
- `test_clean_large` (`train`): 107.5 hours of speech. 72 speakers averaging 1.49 hours per speaker.
- `test_other_large` (`train`): 100.3 hours of speech. 73 speakers averaging 1.37 hours per speaker.
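
The per-speaker averages above follow directly from total hours divided by speaker count. A minimal sketch, with the totals copied from the list above (the dictionary and function names here are illustrative, not part of the dataset's API):

```python
# (hours, speakers) totals copied from the config list above.
CONFIG_STATS = {
    "small": (509, 417),
    "medium": (5042, 1531),
    "large": (50794, 6736),
    "dev": (22.3, 141),
    "test_clean": (10.5, 70),
    "test_other": (11.5, 72),
    "test_clean_large": (107.5, 72),
    "test_other_large": (100.3, 73),
}


def avg_hours_per_speaker(hours: float, speakers: int) -> float:
    """Average hours of speech per speaker, rounded to two decimals."""
    return round(hours / speakers, 2)


# e.g. avg_hours_per_speaker(509, 417) -> 1.22
```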

## Usage

### Load a Single Config

```python
from datasets import load_dataset

small = load_dataset("mythicinfinity/libriheavy", "small", split="train")
```

Targeting a specific config only downloads the files declared for that config, which is a good way to control disk usage.

### Load the Full Dataset (All Configs)

```python
from datasets import concatenate_datasets, load_dataset

ALL_CONFIGS = [
    "small",
    "medium",
    "large",
    "dev",
    "test_clean",
    "test_clean_large",
    "test_other",
    "test_other_large",
]


def load_libriheavy_all_train(configs: list[str] | None = None):
    cfgs = configs or ALL_CONFIGS
    parts = [load_dataset("mythicinfinity/libriheavy", cfg, split="train") for cfg in cfgs]
    return concatenate_datasets(parts)


full = load_libriheavy_all_train()
```
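
If you only need the held-out evaluation sets, the same helper can be fed a filtered config list. A small sketch under the assumption that every evaluation config name starts with `test_` (true of the config list above; `ALL_CONFIGS` is redefined here so the snippet stands alone):

```python
ALL_CONFIGS = [
    "small", "medium", "large", "dev",
    "test_clean", "test_clean_large",
    "test_other", "test_other_large",
]


def eval_configs(names=None):
    """Return only the held-out test configs, preserving order."""
    return [n for n in (names or ALL_CONFIGS) if n.startswith("test_")]


# Pass the result to load_libriheavy_all_train, e.g.:
#   tests = load_libriheavy_all_train(configs=eval_configs())
```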

## Citation

```bibtex
@misc{kang2023libriheavy,
      title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context},
      author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel Povey},
      year={2023},
      eprint={2309.08105},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```