1uckyan committed on
Commit
891314b
·
verified ·
1 Parent(s): 1ba8d1f

Update README.md

Files changed (1)
  1. README.md +76 -42
README.md CHANGED
@@ -2,61 +2,95 @@
  language:
  - zh
  - en
- license: cc-by-nc-4.0
- task_categories:
- - automatic-speech-recognition
  tags:
  - code-switching
  dataset_info:
-   config_names:
-   - SECoMiCSC
-   - DevCECoMiCSC
    features:
-   - name: file_name
-     dtype: string
-   - name: sentence
-     dtype: string
-   - name: duration
-     dtype: float32
-   - name: source
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 0
-     num_rows: 0
-   - name: test
-     num_bytes: 0
-     num_rows: 0
  ---

- # Robust Code-Switching ASR Benchmark

- ## Dataset Summary
- This dataset is a **processed and cleaned derivative** of the open-source MagicData corpus, specifically optimized for our **code-switched ASR robustness** project (e.g., Whisper fine-tuning).

- We addressed the "context fragmentation" issue in the original long-form audio by applying a **Smart-Merge Strategy** (merging short segments into 5-15 s chunks using ground-truth timestamps) and by filtering out conversational fillers.

- ## Original Data Sources
- This dataset is derived from the following open-source datasets released by **MagicData Technology**:

- * **Training Subset:** Derived from **ASR-SECoMiCSC**
-   * *Source:* [MagicData Open Source Community](https://magichub.com/datasets/chinese-english-code-mixing-conversational-speech-corpus/)
- * **Benchmark/Test Subset:** Derived from **ASR-DevCECoMiCSC**
-   * *Source:* [MagicData Open Source Community](https://magichub.com/datasets/dev-set-of-chinese-english-code-mixing-conversational-speech-corpus/)

- > *Note: This repository contains processed audio chunks and metadata only. Please refer to the original links for the full datasets and license details.*

- ## Processing Pipeline (Why this version?)
- 1. **Smart Segmentation:** Instead of random VAD cutting, we merged short utterances into **5-15 s segments** based on speaker identity and time gaps. This provides better context for Transformer-based models.
- 2. **Noise Filtering:** Removed pure filler segments (e.g., "嗯", "啊", "[ENS]") to reduce hallucination during training.

- ## Usage

- ```python
- from datasets import load_dataset
-
- # 1. Load training data (SECoMiCSC)
- dataset_train = load_dataset("1uckyan/code-switch_chunks", data_dir="SECoMiCSC", split="train")
-
- # 2. Load benchmark test set (DevCECoMiCSC)
- dataset_test = load_dataset("1uckyan/code-switch_chunks", data_dir="DevCECoMiCSC", split="train")
- ```
 
  language:
  - zh
  - en
  tags:
+ - automatic-speech-recognition
  - code-switching
+ - audio
+ - speech-processing
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - automatic-speech-recognition
  dataset_info:
    features:
+   - name: audio
+     dtype: audio
+   - name: sentence
+     dtype: string
+   - name: duration
+     dtype: float32
+   - name: source
+     dtype: string
+   - name: original_tag
+     dtype: string
  ---

+ # Unified Mandarin-English Code-Switching Dataset (Processed)
+
+ <div align="center">
+   <img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg" width="50" height="50"/>
+ </div>

+ This dataset is a curated compilation of **[SECoMiCSC](https://magichub.com/datasets/chinese-english-code-mixing-conversational-speech-corpus/)**, **[DevCECoMiCSC](https://magichub.com/datasets/dev-set-of-chinese-english-code-mixing-conversational-speech-corpus/)**, and **[BAAI/CS-Dialogue](https://huggingface.co/datasets/BAAI/CS-Dialogue)**, specifically processed for code-switching ASR research.

+ ```text
+ root/
+ ├── audio/
+ │   ├── SECoMiCSC/          # Chunked segments from SECoMiCSC
+ │   ├── DevCECoMiCSC/       # Chunked segments from DevCECoMiCSC
+ │   └── CS_Dialogue/        # Extracted <MIX> segments from BAAI/CS-Dialogue
+ ├── metadata.jsonl          # Universal index containing paths, transcripts, and metadata
+ └── data_preparation.py     # Script to reproduce this dataset from raw sources
+ ```
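The `metadata.jsonl` index can also be inspected without the `datasets` library. A minimal sketch, assuming each line is one JSON object whose fields follow the feature schema above (`sentence`, `duration`, `source`); the function name is illustrative:

```python
import json

def load_metadata(path):
    """Read metadata.jsonl: one JSON object per non-empty line.

    Field names (sentence, duration, source) follow the dataset
    card's feature schema; any audio-path key is an assumption.
    """
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                records.append(json.loads(line))
    return records
```

This is handy for quick sanity checks, e.g. counting how many records come from each `source` before committing to a full download.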

+ ## Usage

+ ```python
+ from datasets import load_dataset, Audio
+
+ # Load with streaming (recommended)
+ data = load_dataset("1uckyan/code-switch_chunks", split="train", streaming=True)
+
+ # Important: cast audio to 16 kHz
+ data = data.cast_column("audio", Audio(sampling_rate=16000))
+
+ for sample in data:
+     print(f"Source: {sample['source']} | Text: {sample['sentence']}")
+     break
+ ```

+ ## Data Sources & Creation
+
+ | Source Dataset | Original Content | Processing / Cleaning Logic |
+ | --- | --- | --- |
+ | **SECoMiCSC** | Conversational speech | **VAD-based chunking**: split at silence gaps > 1.8 s, merged into 5-15 s segments. |
+ | **DevCECoMiCSC** | Conversational speech | **VAD-based chunking**: same as above. |
+ | **BAAI/CS-Dialogue** | Dialogue | **Tag filtering**: only retained utterances tagged `<MIX>`. |
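The chunking logic in the table can be sketched as a greedy merge over time-ordered utterances. The 1.8 s gap threshold and 15 s cap come from the processing description above; the function name and tuple layout are illustrative, not the actual `data_preparation.py` API:

```python
def merge_segments(utterances, gap_threshold=1.8, max_len=15.0):
    """Greedily merge (speaker, start, end, text) tuples into chunks.

    A new chunk starts whenever the speaker changes, the silence gap
    exceeds gap_threshold seconds, or merging would push the chunk past
    max_len seconds. Chunks still shorter than ~5 s could be dropped or
    re-merged in a later pass (not shown here).
    """
    chunks, cur = [], None
    for spk, start, end, text in utterances:
        if cur is not None and (
            spk == cur[0]
            and start - cur[2] <= gap_threshold
            and end - cur[1] <= max_len
        ):
            # Extend the current chunk with this utterance
            cur[2], cur[3] = end, cur[3] + " " + text
        else:
            # Close the current chunk and open a new one
            if cur is not None:
                chunks.append(tuple(cur))
            cur = [spk, start, end, text]
    if cur is not None:
        chunks.append(tuple(cur))
    return chunks
```

Compared with blind VAD cutting, keying the merge on ground-truth speaker identity and timestamps keeps each chunk a single-speaker, contiguous stretch of speech.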
+
+ ## Reproducibility
+
+ We provide the `data_preparation.py` script in this repository to ensure the transparency and reproducibility of our data-processing pipeline.
+
+ If you have access to the raw source datasets, you can recreate this processed version by running:
+
+ ```bash
+ python data_preparation.py \
+     --secomicsc_root /path/to/local/ASR-SECoMiCSC \
+     --dev_root /path/to/local/ASR-DevCECoMiCSC \
+     --cs_dialogue_root /path/to/local/CS_Dialogue/data/short_wav \
+     --output_dir ./output_Dataset
+ ```
+
+ ## License & Citations
+
+ This dataset is a derivative work. We adhere to the licenses of the original source datasets:
+
+ * **BAAI/CS-Dialogue**: Licensed under **CC BY-NC-SA 4.0**.
+ * **SECoMiCSC / DevCECoMiCSC**: Please refer to their original publications for usage rights.
+
+ If you use this dataset, please cite the original authors of the source datasets and our work.
96