|
|
**Point of Contact:** [jinny960812@kaist.ac.kr](mailto:jinny960812@kaist.ac.kr), [chaewonkim@kaist.ac.kr](mailto:chaewonkim@kaist.ac.kr)

## Dataset Description

This dataset includes manually annotated metadata linking audio files to transcriptions, emotions, and other attributes.

### Example Usage

There are `train`, `test_freq`, `test_rare`, `valid_freq`, and `valid_rare` splits. Below is an example of usage.

```python
from datasets import load_dataset

MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)

# see structure
print(MultiD)

# load audio sample on the fly
audio_input = MultiD["valid_freq"][0]["audio"]    # first decoded audio sample
transcription = MultiD["valid_freq"][0]["value"]  # first transcription
```
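Following the `datasets` library's Audio-feature convention, each decoded sample in the `audio` column is a dictionary with `path`, `array` (the waveform), and `sampling_rate` keys. A minimal sketch, using a hypothetical in-memory sample standing in for `MultiD["valid_freq"][0]["audio"]`, shows how the clip duration follows from those fields:

```python
# Hypothetical decoded sample standing in for MultiD["valid_freq"][0]["audio"];
# the datasets Audio feature yields a dict with 'path', 'array', 'sampling_rate'.
audio_input = {
    "path": None,
    "array": [0.0] * 32000,   # waveform samples (floats)
    "sampling_rate": 16000,   # Hz
}

# duration in seconds = number of samples / sampling rate
duration_s = len(audio_input["array"]) / audio_input["sampling_rate"]
print(duration_s)  # 2.0
```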

### Supported Tasks

- `multimodal dialogue generation`
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
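For the ASR task, each utterance already pairs a waveform with its transcription, so training pairs can be read straight off the rows. A sketch over hypothetical rows shaped like the dataset's instances (a real pipeline would iterate a loaded split such as `MultiD["valid_freq"]` instead):

```python
# Hypothetical rows shaped like MultiDialog instances (stand-ins for a
# loaded split, e.g. MultiD["valid_freq"]).
rows = [
    {"audio": {"array": [0.0, 0.1], "sampling_rate": 16000}, "value": "hello there"},
    {"audio": {"array": [0.2, 0.3], "sampling_rate": 16000}, "value": "good morning"},
]

# Build (waveform, transcription) pairs for an ASR trainer.
asr_pairs = [(r["audio"]["array"], r["value"]) for r in rows]
print(len(asr_pairs))    # 2
print(asr_pairs[0][1])   # hello there
```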

Multidialog contains audio and transcription data in English.

## Dataset Structure

### Data Instances

```python
{
  'conv_id': 't_ffa55df6-114d-4b36-87a1-7af6b8b63d9b',
  ...
  'emotion': 'Neutral',
  'original_full_path': 'audio/youtube/P0004/YOU0000000315.opus'
}
```

### Data Fields

* conv_id (string) - unique identifier for each conversation.
* utterance_id (float) - utterance index.
* from (string) - who the message is from (human, gpt).
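Because every utterance carries a `conv_id` and an `utterance_id`, full dialogues can be reassembled by grouping rows on `conv_id` and sorting on `utterance_id`. A sketch over hypothetical records using the fields listed above:

```python
from collections import defaultdict

# Hypothetical utterance records using the fields described above.
rows = [
    {"conv_id": "t_001", "utterance_id": 2.0, "from": "gpt", "value": "Hello!"},
    {"conv_id": "t_001", "utterance_id": 1.0, "from": "human", "value": "Hi!"},
    {"conv_id": "t_002", "utterance_id": 1.0, "from": "human", "value": "Hey."},
]

# Group utterances into conversations keyed by conv_id,
# then order each conversation by utterance_id.
conversations = defaultdict(list)
for r in rows:
    conversations[r["conv_id"]].append(r)
for turns in conversations.values():
    turns.sort(key=lambda r: r["utterance_id"])

print(len(conversations))                          # 2
print([t["from"] for t in conversations["t_001"]]) # ['human', 'gpt']
```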