---
language:
- zh
- en
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
tags:
- code-switching
dataset_info:
  config_names:
  - SECoMiCSC
  - DevCECoMiCSC
---

# Robust Code-Switching ASR Benchmark

## Dataset Summary

This dataset is a **processed and cleaned derivative** of the open-source MagicData corpus, optimized for our **code-switched ASR robustness** project (e.g., Whisper fine-tuning).

We addressed the "context fragmentation" issue in the original long-form audio by applying a **Smart-Merge Strategy** (merging short segments into 5-15 s chunks using ground-truth timestamps) and by filtering out conversational fillers.

## Original Data Sources

This dataset is derived from the following open-source datasets released by **MagicData Technology**:

* **Training Subset:** Derived from **ASR-SECoMiCSC**
  * *Source:* [MagicData Open Source Community](https://magichub.com/datasets/chinese-english-code-mixing-conversational-speech-corpus/)
* **Benchmark/Test Subset:** Derived from **ASR-DevCECoMiCSC**
  * *Source:* [MagicData Open Source Community](https://magichub.com/datasets/dev-set-of-chinese-english-code-mixing-conversational-speech-corpus/)

> **Note:** This repository contains processed audio chunks and metadata only. Please refer to the original links for the full datasets and license details.

## Processing Pipeline (Why this version?)

1. **Smart Segmentation:** Instead of random VAD cutting, we merged short utterances into **5-15 s segments** based on speaker identity and time gaps. This provides better context for Transformer-based models.
2. **Noise Filtering:** We removed pure filler segments (e.g., "嗯", "啊", "[ENS]") to reduce hallucination during training.
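
The merge-and-filter steps above can be sketched roughly as follows. This is a minimal illustration, not the exact script we used: the segment schema (`start`, `end`, `speaker`, `text`) and the gap threshold are assumptions.

```python
# Hypothetical sketch of the Smart-Merge Strategy: greedily merge
# consecutive utterances from the same speaker into 5-15 s chunks,
# dropping pure-filler segments along the way.
FILLERS = {"嗯", "啊", "[ENS]"}   # pure-filler transcripts to discard
MAX_GAP = 1.0                     # assumed max silence (s) inside a chunk
MIN_LEN, MAX_LEN = 5.0, 15.0      # target chunk duration bounds (s)

def smart_merge(segments):
    """segments: list of dicts with 'start', 'end', 'speaker', 'text' (assumed schema)."""
    chunks, current = [], None
    for seg in segments:
        if seg["text"].strip() in FILLERS:  # noise filtering
            continue
        if (current is not None
                and seg["speaker"] == current["speaker"]
                and seg["start"] - current["end"] <= MAX_GAP
                and seg["end"] - current["start"] <= MAX_LEN):
            # Extend the current chunk with this utterance.
            current["end"] = seg["end"]
            current["text"] += " " + seg["text"]
        else:
            if current is not None:
                chunks.append(current)
            current = dict(seg)
    if current is not None:
        chunks.append(current)
    # Keep only chunks within the target duration window.
    return [c for c in chunks if MIN_LEN <= c["end"] - c["start"] <= MAX_LEN]
```

In practice the merged timestamps are then used to slice the long-form audio into the corresponding chunks.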

## Usage

```python
from datasets import load_dataset

# 1. Load the training data (SECoMiCSC)
dataset_train = load_dataset("1uckyan/code-switch_chunks", data_dir="SECoMiCSC", split="train")

# 2. Load the benchmark test set (DevCECoMiCSC)
dataset_test = load_dataset("1uckyan/code-switch_chunks", data_dir="DevCECoMiCSC", split="train")
```