Update README.md
README.md
CHANGED
  - split: train
    path: "data/train.parquet"
---

### About dataset

It is a dataset of Ukrainian audiobooks.
Each sample contains approximately 8 seconds of Ukrainian speech.

### Loading script

```python
>>> from datasets import load_dataset
>>> load_dataset("Zarakun/audiobooks_ua_test")
```

### Dataset structure

Every example has the following fields:

- **audio** - the waveform
- **rate** - the sampling rate of the waveform
- **file_id** - the id of the speaker
- **duration** - the duration of the audio in seconds
- **sentence** - the transcript of the audio