- split: train
  path: data/train-*
---

# Corpus

This dataset is built from the Magicdata corpus [ASR-CNANDIACSC: A CHINESE NANCHANG DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/nanchang-dialect-conversational-speech-corpus/).

This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.

Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences spanning less than 1 second are discarded. Topics of conversation are removed.
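
The filtering rule above can be sketched with hypothetical timestamps; the segment layout (`start`/`end` in seconds) is illustrative, not the corpus's actual transcription format:

```python
# Hypothetical sketch of the sentence-level filtering described above.
# The start/end fields are illustrative, not the corpus's real format.
segments = [
    {"start": 0.0, "end": 0.8, "text": "too short"},   # < 1 s, discarded
    {"start": 1.0, "end": 3.5, "text": "kept"},
    {"start": 4.0, "end": 4.9, "text": "also short"},  # < 1 s, discarded
]
kept = [s for s in segments if s["end"] - s["start"] >= 1.0]
print([s["text"] for s in kept])  # ['kept']
```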

# Usage

To load this dataset, use:

```python
from datasets import load_dataset

dialect_corpus = load_dataset("TingChen-ppmc/Nanchang_Dialect_Conversational_Speech_Corpus")
```

This dataset only has a train split. To carve out a test split, use:

```python
from datasets import load_dataset

train_split = load_dataset("TingChen-ppmc/Nanchang_Dialect_Conversational_Speech_Corpus", split="train")
# test_size=0.5 sends half of the examples to the test split
corpus = train_split.train_test_split(test_size=0.5)
```
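
Conceptually, `train_test_split` shuffles the example indices and cuts them at the requested fraction. A stdlib-only sketch of that behaviour (not the actual `datasets` implementation):

```python
import random

# Stdlib sketch of what test_size=0.5 requests: shuffle the indices,
# then cut the list in half (not the real `datasets` implementation).
indices = list(range(10))
random.Random(0).shuffle(indices)
cut = int(len(indices) * 0.5)
test_idx, train_idx = indices[:cut], indices[cut:]
print(len(train_idx), len(test_idx))  # 5 5
```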

A sample record looks like:

```python
{'audio':
    {'path': 'A0001_S001_0_G0001_0.WAV',
     'array': array([-0.00030518, -0.00039673,
        -0.00036621, ..., -0.00064087,
        -0.00015259, -0.00042725]),
     'sampling_rate': 16000},
 'gender': '女',
 'speaker_id': 'G0001',
 'transcription': '北京爱数智慧语音采集'
}
```
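
Each record stores the raw waveform alongside its sampling rate, so per-utterance duration follows directly from the array length. A minimal sketch with a stand-in record mirroring the layout above (the zero-filled array is synthetic, not real corpus audio):

```python
# Stand-in record mirroring the layout above; the zero-filled array
# is synthetic data, not real corpus audio.
sample = {
    "audio": {
        "path": "A0001_S001_0_G0001_0.WAV",
        "array": [0.0] * (16000 * 2),  # 2 seconds at 16 kHz
        "sampling_rate": 16000,
    },
    "gender": "女",
    "speaker_id": "G0001",
    "transcription": "北京爱数智慧语音采集",
}

duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(f"{duration_s:.1f} s")  # 2.0 s
```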

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)