---
license: odc-by
---
**OLMoASR-Pool** is a web-scale audio-text dataset collected from the public internet, consisting of approximately **3M hours of audio** and **17M transcripts**. 

With OLMoASR-Pool, we trained **OLMoASR** πŸ’¬πŸŽ™οΈ, a series of English speech recognition models that show strong generalization and robustness!

# Content
- The dataset contains 18,761,823 unique IDs spanning approximately 3.4M hours of audio.
- It also spans a variety of speaking styles, accents, and audio setups such as news segments πŸ“°, podcasts πŸŽ™οΈ, outdoors πŸŒ³πŸ™οΈ, crowds πŸ§‘β€πŸ€β€πŸ§‘, speeches 🎀, commentary πŸ—£οΈ, interviews 🀳 and more!
- **OLMoASR-Pool** is multilingual and can contain non-English audio/transcripts. To retrieve an English-only dataset, it is critical to perform audio-text language alignment.
- After downloading the collection for training, only about 3M hours of audio and 17M transcripts remain.
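To make the language-alignment step above concrete, here is a toy sketch of the transcript-side filter. The stopword set, `english_stopword_ratio` helper, and 0.15 threshold are illustrative assumptions, not part of OLMoASR; a real pipeline would use a proper language-ID model (e.g. fastText) on the transcript and a spoken-language-ID model on the audio, and keep a pair only when both agree on English.

```python
# Toy English filter for transcripts: score the fraction of tokens that
# are common English function words. Transcripts in another language
# score near zero. This heuristic is an assumption for illustration only.
EN_STOPWORDS = {
    "the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
    "was", "it", "that", "this", "for", "on", "with", "you", "we",
}

def english_stopword_ratio(text: str) -> float:
    """Fraction of tokens that are common English function words."""
    tokens = [t.strip(".,!?\"'").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in EN_STOPWORDS)
    return hits / len(tokens)

def looks_english(text: str, threshold: float = 0.15) -> bool:
    """Crude gate: keep a transcript only if enough tokens look English."""
    return english_stopword_ratio(text) >= threshold

if __name__ == "__main__":
    print(looks_english("the quick brown fox jumps over the lazy dog"))  # True
    print(looks_english("el rΓ‘pido zorro marrΓ³n salta sobre el perro"))  # False
```

The same gate would then be cross-checked against audio language ID, since a transcript can be an English translation of non-English speech.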

# Usage
1. Download from HuggingFace
    - Retrieve HF access token from [here](https://huggingface.co/settings/tokens) to gain access to the dataset.
    - Run `pip install "huggingface_hub[cli]"` (the quotes keep your shell from expanding the brackets)
    - Run `huggingface-cli login` in your CLI and paste the HF access token to login
    - Use the code below to access the IDs
      ```python
      from datasets import load_dataset

      # Stream the ID list to inspect it without a full download
      dataset = load_dataset("allenai/OLMoASR-Pool", streaming=True)
      print(dataset)  # features: ['id']
      print(next(iter(dataset["train"])))
      ```
    - If you're downloading all the IDs, you can run the code below
      ```python
      from datasets import load_dataset

      # Download the full ID list to a local cache directory
      dataset = load_dataset("allenai/OLMoASR-Pool", streaming=False, cache_dir="<where you want to download the IDs to>")
      ```
2. Download the audio and transcript files using the ID information.
3. Preprocess the audio and transcript files, following the instructions at the [OLMoASR repo](https://github.com/allenai/OLMoASR_newest).
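The driver for step 2 can be sketched as below. `fetch_audio_and_transcript` is a hypothetical placeholder for whatever tooling you use to resolve an ID to its audio and transcript files; it is not part of this dataset card or the repo.

```python
def iter_ids(rows):
    """Yield the `id` field from an iterable of dataset rows."""
    for row in rows:
        yield row["id"]

def fetch_audio_and_transcript(sample_id):
    """Placeholder: resolve `sample_id` to its audio + transcript files."""
    raise NotImplementedError

def download_all(rows, limit=None):
    """Drive the downloader over (possibly streamed) dataset rows."""
    done = []
    for i, sample_id in enumerate(iter_ids(rows)):
        if limit is not None and i >= limit:
            break
        try:
            fetch_audio_and_transcript(sample_id)
        except NotImplementedError:
            pass  # plug in your real downloader here
        done.append(sample_id)
    return done

# With the real dataset you would pass the streaming split, e.g.:
#   from datasets import load_dataset
#   rows = load_dataset("allenai/OLMoASR-Pool", streaming=True)["train"]
#   download_all(rows, limit=10)
```

Using the streaming split here avoids materializing all 18M+ IDs before the per-ID downloads start.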


# Uses
The collection was used to train a speech recognition model, but it can also support research in areas such as conversational data, audio understanding, speaker diarization, voice detection, and more.

# License
This dataset is licensed under ODC-BY. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
