---
pretty_name: CantoMap
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- yue
license:
- gpl-3.0
multilinguality:
- monolingual
---

# Dataset Card for CantoMap

## Dataset Description

- **Homepage:** https://github.com/gwinterstein/CantoMap/
- **Repository:** https://github.com/gwinterstein/CantoMap/
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.355.pdf

### Dataset Summary

CantoMap is a corpus of spontaneous Hong Kong Cantonese speech collected with the Map Task methodology: pairs of speakers collaborate to reproduce a route on a map, producing natural, goal-directed dialogue. The recordings come with orthographic transcriptions aligned with the audio. See the [LREC 2020 paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.355.pdf) for a full description of the collection and annotation process.

### Languages

```
Cantonese
```

## How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in one call with the `load_dataset` function.

For example, to download the Cantonese config, specify the corresponding language config name (i.e., "yue" for Cantonese):

```python
from datasets import load_dataset

cantomap = load_dataset("safecantonese/cantomap", "yue", split="train")
```

Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call. Loading a dataset in streaming mode yields individual samples one at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

cantomap = load_dataset("safecantonese/cantomap", "yue", split="train", streaming=True)

print(next(iter(cantomap)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

cantomap = load_dataset("safecantonese/cantomap", "yue", split="train")

batch_sampler = BatchSampler(RandomSampler(cantomap), batch_size=32, drop_last=False)
dataloader = DataLoader(cantomap, batch_sampler=batch_sampler)
```

### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

cantomap = load_dataset("safecantonese/cantomap", "yue", split="train", streaming=True)
dataloader = DataLoader(cantomap, batch_size=32)
```
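
A streamed split is a plain Python iterable, so you can peek at a few samples without materializing the whole dataset. A minimal sketch, using a stand-in generator in place of the real `load_dataset(..., streaming=True)` call (field names mirror the dataset's rows; the values are made up):

```python
from itertools import islice

def fake_stream():
    # Stand-in for a streamed dataset: lazily yields dicts shaped like rows.
    for i in range(1000):
        yield {"path": f"clip_{i}.wav", "sentence": f"utterance {i}"}

# Take only the first three samples; the remaining 997 are never produced.
first_three = list(islice(fake_stream(), 3))
print([row["sentence"] for row in first_three])  # ['utterance 0', 'utterance 1', 'utterance 2']
```

The same `islice` pattern works directly on a streamed `datasets` split, since it is just an iterable of dicts.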

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on CantoMap with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

## Dataset Structure

### Data Instances

A typical data point comprises the `path` to the audio file and its `sentence`. The values below are illustrative placeholders, not an actual sample from the corpus:

```python
{
    'path': 'clips/cantomap_0001.wav',
    'audio': {
        'path': 'clips/cantomap_0001.wav',
        'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
        'sampling_rate': 48000
    },
    'sentence': '一直行,然後轉左',
}
```

### Data Fields

`path` (`string`): The path to the audio file.

`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

`sentence` (`string`): The transcription of the utterance.
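
As a quick sketch of how `array` and `sampling_rate` relate (synthetic audio, not real CantoMap data): the clip duration in seconds is the number of samples divided by the sampling rate.

```python
import numpy as np

sampling_rate = 48000  # samples per second
array = np.zeros(sampling_rate * 2, dtype=np.float32)  # 2 seconds of silence

duration_seconds = len(array) / sampling_rate
print(duration_seconds)  # 2.0
```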
### Data Splits

The speech material has been subdivided into train and test portions.

## Additional Information

### Licensing Information

GPL-3.0

### Citation Information

```
@inproceedings{lrec:2020,
  author    = {Winterstein, Grégoire and Tang, Carmen and Lai, Regine},
  title     = {CantoMap: a Hong Kong Cantonese MapTask Corpus},
  booktitle = {Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020)},
  year      = {2020}
}
```