---
license: apache-2.0
dataset_info:
  features:
  - name: hypothesis
    sequence: string
  - name: transcription
    dtype: string
  - name: input1
    dtype: string
  - name: hypothesis_concatenated
    dtype: string
  - name: source
    dtype: string
  - name: id
    dtype: string
  - name: am_score
    sequence: float64
  splits:
  - name: train
    num_bytes: 86387796
    num_examples: 97396
  - name: test
    num_bytes: 5386309
    num_examples: 6129
  download_size: 26832650
  dataset_size: 91774105
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Dataset Name: Pilot dataset for Multi-domain ASR corrections
## Description
A pilot dataset for multi-domain ASR error correction, consolidated from [PeacefulData/HyPoradise-v0](https://huggingface.co/datasets/PeacefulData/HyPoradise-v0).
## Structure
### Data Split
- **Training Data**: 97,396 entries
- **Test Data**: 6,129 entries
### Columns
- `hypothesis`: N-best hypotheses from beam search.
- `transcription`: Ground-truth (corrected) ASR transcription.
- `hypothesis_concatenated`: The N-best hypotheses concatenated into a single string.
- `source`: The origin dataset of the entry.
- `am_score`: Acoustic model scores for the hypotheses (not present in all entries).
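A single record following the schema above can be sketched as a plain Python dict; the values here are made up for illustration and are not real entries:

```python
# Illustrative record matching the dataset schema (values are made up)
example = {
    "hypothesis": ["the cat sat", "a cat sat"],        # N-best hypotheses (list of strings)
    "transcription": "the cat sat",                    # ground-truth transcription
    "input1": "the cat sat",
    "hypothesis_concatenated": "the cat sat. a cat sat.",
    "source": "train_cv",                              # origin split/corpus tag
    "id": "cv-000001",                                 # hypothetical identifier format
    "am_score": [-12.3, -14.1],                        # one acoustic score per hypothesis
}

# The acoustic scores align one-to-one with the N-best hypotheses
assert len(example["am_score"]) == len(example["hypothesis"])
```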
### Source Distribution
- **Training Sources**:
- `train_cv`: 47,293 entries
- `train_wsj`: 37,514 entries
- `train_swbd`: 36,539 entries
- `train_chime4`: 9,600 entries
- **Test Sources**:
- `test_swbd`: 2,000 entries
- `test_cv`: 2,000 entries
- `test_chime4`: 1,320 entries
- `test_wsj`: 836 entries
## Access
The dataset can be loaded with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("PeacefulData/HyPoradise-pilot")
```
## Acknowledgments
Thanks to [PeacefulData/HyPoradise-v0](https://huggingface.co/datasets/PeacefulData/HyPoradise-v0) for sharing this dataset.