---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': down
'1': left
'2': 'no'
'3': object01
'4': object10
'5': 'off'
'6': 'on'
'7': present01
'8': present10
'9': right
'10': stop
'11': up
'12': 'yes'
splits:
- name: train
num_bytes: 339891481.276
num_examples: 12316
- name: validation
num_bytes: 59778691.086
num_examples: 2174
- name: train_object10
num_bytes: 26539323.751217928
num_examples: 962
- name: train_object01
num_bytes: 26704900.712244235
num_examples: 968
- name: validation_object10
num_bytes: 5112635.276908924
num_examples: 186
- name: validation_object01
num_bytes: 4947727.622815087
num_examples: 180
- name: train_present10
num_bytes: 27422165.543358233
num_examples: 994
- name: train_present01
num_bytes: 17131956.4662228
num_examples: 621
- name: validation_present10
num_bytes: 4233048.788408464
num_examples: 154
- name: validation_present01
num_bytes: 2556321.138454462
num_examples: 93
download_size: 447818847
dataset_size: 514318251.6616301
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: train_object10
path: data/train_object10-*
- split: train_object01
path: data/train_object01-*
- split: validation_object10
path: data/validation_object10-*
- split: validation_object01
path: data/validation_object01-*
- split: train_present10
path: data/train_present10-*
- split: train_present01
path: data/train_present01-*
- split: validation_present10
path: data/validation_present10-*
- split: validation_present01
path: data/validation_present01-*
---
This is a draft dataset of prosodic minimal pairs. It covers two stress doublets: the noun and verb uses of *object* and of *present*.
The remaining words are taken from the SUPERB keyword spotting (KS) task.
*object10* and *present10* tokens have lexical stress on the first syllable (left stress); *object01* and *present01* tokens have lexical stress on the second syllable (right stress).
There are separate train and validation splits for each of *object10*, *present10*, *object01*, and *present01*. Within each of these splits, all tokens
share the same segmental content and lexical prosody. Test splits are not included yet.
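As a minimal usage sketch, the label set declared in the YAML header above can be turned into the usual `label2id` / `id2label` mappings; the commented-out `load_dataset` call is hypothetical and assumes the Hub repo id matches this card's location:

```python
# Label names exactly as declared in the dataset_info header above.
LABELS = ["down", "left", "no", "object01", "object10", "off", "on",
          "present01", "present10", "right", "stop", "up", "yes"]

label2id = {name: i for i, name in enumerate(LABELS)}
id2label = {i: name for name, i in label2id.items()}

print(label2id["object10"])   # integer class id of the left-stressed noun tokens
print(id2label[12])           # "yes"

# Hypothetical load (requires network and the `datasets` library);
# the repo id is assumed, adjust to the actual Hub path.
# from datasets import load_dataset
# ds = load_dataset("MatsRooth/object_v", split="validation_object01")
```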