---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: label
    dtype:
      class_label:
        names:
          '0': I+have+one+now
          '1': I+only+have+one
  splits:
  - name: train
    num_bytes: 10168367.5
    num_examples: 535
  - name: test
    num_bytes: 1499291.5
    num_examples: 95
  - name: validation
    num_bytes: 1720511.5
    num_examples: 97
  download_size: 13330229
  dataset_size: 13388170.5
---
|
|
# Dataset Card for "have_one" |
|
|
The dataset consists of utterances of *have one* cut either from an utterance of *I have one now* or from an utterance of *I only have one*. The first tends to have prominence on *have*, while the second tends to have prominence on *one*. See `github.com/MatsRooth/fiyou` for the methodology used to find the utterances on YouTube and to align and cut them with Kaldi.
|
|
|
|
|
To put such a dataset on the Hugging Face Hub, start with this directory structure, where the bottom-level directories contain wav files.
|
|
```
have_one
└── data
    ├── I+have+one+now
    └── I+only+have+one
```
|
|
Run `have_one_hub.py` to create the dataset, using the generic Hugging Face methodology for audio datasets.
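As a rough illustration of the convention this layout relies on (not the actual contents of `have_one_hub.py`): in the Hugging Face AudioFolder approach, each subdirectory under `data/` holds the wav files for one class, and the sorted subdirectory names become the `class_label` names in the metadata above. A minimal stdlib sketch of that mapping:

```python
# Illustrative sketch of the directory-to-label convention used by
# Hugging Face AudioFolder-style datasets: each subdirectory of the
# data directory is one class, and sorted directory names become the
# ClassLabel names ('0', '1', ...). This is NOT the real have_one_hub.py.
from pathlib import Path


def class_names(data_dir: str) -> dict[str, str]:
    """Map label ids ('0', '1', ...) to class names taken from subdirectories."""
    dirs = sorted(p.name for p in Path(data_dir).iterdir() if p.is_dir())
    return {str(i): name for i, name in enumerate(dirs)}
```

With the layout shown above, `class_names("have_one/data")` yields `{'0': 'I+have+one+now', '1': 'I+only+have+one'}`, matching the `class_label` names in the card metadata.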
|
|
|
|
|
The dataset is used in the wav2vec2 binary classification model `MatsRooth/wav2vec2-base_have_one`. |
|
|
|
|
|
Cutting with a Kaldi phone alignment often yields a snippet that includes part of the preceding vowel, or whose /h/ onset carries formant structure that gives information about the preceding vowel. Since these vowels differ between the two classes, classification can be based on them as well as on the intended prosodic difference. This needs to be corrected.
|
|
|