# Dataset Card for AMI

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
- [Terms of Usage](#terms-of-usage)

## Dataset Description

- **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/
- **Repository:** https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [jonathan@ed.ac.uk](mailto:jonathan@ed.ac.uk)

### Dataset Summary

The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
synchronized to a common timeline. These include close-talking and far-field microphones, individual and
room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
the participants also have unsynchronized pens available to them that record what is written. The meetings
were recorded in English using three different rooms with different acoustic properties, and include mostly
non-native speakers.

**Note**: This dataset corresponds to the data processing of [Kaldi's AMI S5 recipe](https://github.com/kaldi-asr/kaldi/tree/master/egs/ami/s5).
This means the text is normalized and the audio is chunked according to the scripts above.
To make the user experience as simple as possible, we provide the already chunked data here, so that it can be used as follows:

### Example Usage

```python
from datasets import load_dataset
ds = load_dataset("edinburghcstr/ami", "ihm")
```

gives:

```
DatasetDict({
    train: Dataset({
        features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
        num_rows: 108502
    })
    validation: Dataset({
        features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
        num_rows: 13098
    })
    test: Dataset({
        features: ['meeting_id', 'audio_id', 'text', 'audio', 'begin_time', 'end_time', 'microphone_id', 'speaker_id'],
        num_rows: 12643
    })
})
```
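
Each example carries segment timestamps, so the amount of transcribed speech per split can be checked directly. Below is a minimal sketch, continuing from the `ds` loaded above and assuming `begin_time` and `end_time` are given in seconds (as in Kaldi's segments files):

```python
# Rough per-split duration check. Assumes `begin_time`/`end_time`
# are in seconds, matching Kaldi's segments files.
for split in ("train", "validation", "test"):
    begins, ends = ds[split]["begin_time"], ds[split]["end_time"]
    hours = sum(e - b for b, e in zip(begins, ends)) / 3600
    print(f"{split}: {hours:.1f} h across {ds[split].num_rows} segments")
```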

Calling `ds["train"][0]` automatically loads the audio into memory:

```
{'meeting_id': 'EN2001a',
 'audio_id': 'AMI_EN2001a_H00_MEE068_0000557_0000594',
 'text': 'OKAY',
 'audio': {'path': '/cache/dir/path/downloads/extracted/2d75d5b3e8a91f44692e2973f08b4cac53698f92c2567bd43b41d19c313a5280/EN2001a/train_ami_en2001a_h00_mee068_0000557_0000594.wav',
  'array': array([0. , 0. , 0. , ..., 0.00033569, 0.00030518,
         0.00030518], dtype=float32),
  'sampling_rate': 16000},
 ...}
```
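
The audio is decoded only when a sample is accessed. If your model expects a different sampling rate, the column can be resampled on the fly with the `Audio` feature; a minimal sketch, continuing from the `ds` above (the 8 kHz target is only an illustration; AMI itself is stored at 16 kHz):

```python
from datasets import Audio

# Resampling happens lazily, each time a sample is accessed.
# The 8 kHz target is only an illustration; the dataset is stored at 16 kHz.
ds = ds.cast_column("audio", Audio(sampling_rate=8000))
print(ds["train"][0]["audio"]["sampling_rate"])  # 8000
```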

The results are in line with those of published papers:

- [*Hybrid acoustic models for distant and multichannel large vocabulary speech recognition*](https://www.researchgate.net/publication/258075865_Hybrid_acoustic_models_for_distant_and_multichannel_large_vocabulary_speech_recognition)
- [Multi-Span Acoustic Modelling using Raw Waveform Signals](https://arxiv.org/abs/1906.11047)

You can run [run.sh](https://huggingface.co/patrickvonplaten/ami-wav2vec2-large-lv60/blob/main/run.sh) to reproduce these results.
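
To score your own transcriptions against the normalized references without running the full recipe, the word error rate can be computed directly. Below is a minimal sketch using the third-party `jiwer` package; `transcribe` is a hypothetical stand-in for whatever ASR model you evaluate:

```python
import jiwer  # third-party WER package; not part of this dataset

def transcribe(array, sampling_rate):
    """Hypothetical stand-in for your ASR model's inference call."""
    raise NotImplementedError

references, hypotheses = [], []
for sample in ds["test"]:
    references.append(sample["text"])
    # AMI references are upper-cased; normalize hypotheses the same
    # way before scoring.
    hypotheses.append(transcribe(sample["audio"]["array"], sample["audio"]["sampling_rate"]).upper())

print(f"test WER: {jiwer.wer(references, hypotheses):.3f}")
```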

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

#### Transcribed Subsets Size

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Citation Information

### Contributions

Thanks to [@sanchit-gandhi](https://github.com/sanchit-gandhi), [@patrickvonplaten](https://github.com/patrickvonplaten),
and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.

## Terms of Usage