Improve dataset card: update task category and add usage instructions
This PR updates the `task_categories` metadata field from `question-answering` to `audio-text-to-text`. This change more accurately reflects that the C3 benchmark is designed for evaluating Spoken Dialogue Models (SDMs) in complex conversational settings, involving both audio input and text/speech output.
Additionally, a "Sample Usage" section has been added with clear instructions on how to use the dataset for evaluation, directly referencing the detailed guides in the accompanying GitHub repository.
README.md

```diff
@@ -1,9 +1,12 @@
 ---
-task_categories:
-- question-answering
 language:
 - zh
 - en
+size_categories:
+- 1K<n<10K
+task_categories:
+- audio-text-to-text
+pretty_name: C3 Benchmark
 tags:
 - dialogue
 - spoken-dialogue-model
@@ -12,10 +15,8 @@ tags:
 - omission
 - multi-turn
 - complex
-pretty_name: C3 Benchmark
-size_categories:
-- 1K<n<10K
 ---
+
 📣 **C3 Benchmark: The Challenging Benchmark for Bilingual Speech Dialogue Models!**
 
 🎙️ **C3** is the first-ever benchmark dataset that tests complex phenomena in speech dialogues, covering **pauses, homophones, stress, intonation, syntactic ambiguity, coreference, omission, and multi-turn conversations**.
@@ -34,10 +35,16 @@ size_categories:
 
 🚀 **Experience C3 Now**:
 
-*
-*
-*
-*
+* **Paper**: [Read the Paper](https://huggingface.co/papers/2507.22968)
+* **Dataset**: [Explore the Dataset on Hugging Face](https://huggingface.co/datasets/ChengqianMa/C3)
+* **Online Demo**: [Try the C3 Demo](https://step-out.github.io/C3-web)
+* **Code**: [Submit your SDM Evaluation Result](https://github.com/step-out/C3)
 
 > [!Important]
 > 🔥 **Limited Time Offer!** We can help you run the evaluation script for your SDM's result on our benchmark, free of charge until Sept. 1, 2025. After that, you can run the evaluation independently. To participate, email `chengqianma@yeah.net` with subject: `[C3Bench Evaluation] - [Model_Name]`
+
+### Sample Usage
+
+To use this dataset for evaluation, first download the dataset from Hugging Face. Then, prepare your Spoken Dialogue Model (SDM) responses in the specified format and use the provided evaluation scripts from the [official GitHub repository](https://github.com/step-out/C3).
+
+For detailed instructions on preparing data, running evaluation, and calculating accuracy, please refer to the [Usage section on the GitHub README](https://github.com/step-out/C3#usage).
```