---
language:
- zh
- en
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
tags:
- code-switching
dataset_info:
  config_names:
  - SECoMiCSC
  - DevCECoMiCSC
---
# Robust Code-Switching ASR Benchmark

## Dataset Summary
This dataset is a **processed and cleaned derivative** of the open-source MagicData corpus, optimized for studying **code-switched ASR robustness** (e.g., Whisper fine-tuning).

We addressed the "context fragmentation" issue in the original long-form audio by applying a **Smart-Merge Strategy** (merging short segments into 5-15 s chunks using ground-truth timestamps) and filtering out conversational fillers.

## Original Data Sources
This dataset is derived from the following open-source datasets released by **MagicData Technology**:

* **Training Subset:** Derived from **ASR-SECoMiCSC**
    * *Source:* [MagicData Open Source Community](https://magichub.com/datasets/chinese-english-code-mixing-conversational-speech-corpus/)
* **Benchmark/Test Subset:** Derived from **ASR-DevCECoMiCSC**
    * *Source:* [MagicData Open Source Community](https://magichub.com/datasets/dev-set-of-chinese-english-code-mixing-conversational-speech-corpus/)

> **Note:** This repository contains processed audio chunks and metadata only. Please refer to the original links for the full datasets and license details.

## Processing Pipeline (Why this version?)
1.  **Smart Segmentation:** Instead of random VAD cutting, we merged short utterances into **5s - 15s segments** based on speaker identity and time gaps. This provides better context for Transformer-based models.
2.  **Noise Filtering:** Removed pure filler segments (e.g., "嗯", "啊", "[ENS]") to reduce hallucination during training.
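
The two steps above can be sketched roughly as follows. This is a minimal illustration only, not the exact pipeline code: the segment field names (`speaker`, `start`, `end`, `text`), the gap threshold, and the filler list are assumptions.

```python
# Sketch of the Smart-Merge strategy: merge consecutive short utterances
# from the same speaker into 5-15 s chunks, dropping pure-filler segments.
# Field names and thresholds are illustrative assumptions.

FILLERS = {"嗯", "啊", "[ENS]"}  # filler tokens filtered out (hypothetical list)

def merge_segments(segments, min_dur=5.0, max_dur=15.0, max_gap=1.0):
    chunks, current = [], None
    for seg in segments:
        if seg["text"].strip() in FILLERS:  # noise filtering step
            continue
        if (current is not None
                and seg["speaker"] == current["speaker"]
                and seg["start"] - current["end"] <= max_gap
                and seg["end"] - current["start"] <= max_dur):
            # Same speaker, small time gap, still under max_dur: extend chunk.
            current["end"] = seg["end"]
            current["text"] += " " + seg["text"]
        else:
            if current is not None:
                chunks.append(current)
            current = dict(seg)
    if current is not None:
        chunks.append(current)
    # Drop chunks that remain shorter than the minimum target duration.
    return [c for c in chunks if c["end"] - c["start"] >= min_dur]
```

The key design point is that merging follows ground-truth speaker identity and timestamps rather than random VAD cuts, so each chunk keeps a coherent conversational context for the Transformer encoder.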

## Usage

```python
from datasets import load_dataset

# 1. Load Training Data (SECoMiCSC)
dataset_train = load_dataset("1uckyan/code-switch_chunks", data_dir="SECoMiCSC", split="train")

# 2. Load Benchmark Test Set (DevCECoMiCSC)
dataset_test = load_dataset("1uckyan/code-switch_chunks", data_dir="DevCECoMiCSC", split="train")
```