---
task_categories:
- translation
language:
- ja
- zh
tags:
- translation
- ja
- zh_cn
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: duration
    dtype: float64
  - name: sentence
    dtype: string
  - name: uid
    dtype: string
  - name: group_id
    dtype: string
  splits:
  - name: train
    num_bytes: 2072186696.0
    num_examples: 8000
  - name: valid
    num_bytes: 259808873.0
    num_examples: 1000
  - name: test
    num_bytes: 252154427.0
    num_examples: 1000
  download_size: 2596980172
  dataset_size: 2584149996.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---


# ScreenTalk_JA2ZH-XS

**ScreenTalk_JA2ZH-XS** is a paired dataset of **Japanese speech and translated Simplified Chinese text** released by DataLabX. It is designed for training and evaluating speech translation (ST) and multilingual speech understanding models, and consists of spoken dialogue drawn from real-world Japanese movies and TV shows.

## 📦 Dataset Overview

- **Source Language**: Japanese (Audio)
- **Target Language**: Simplified Chinese (Text)
- **Number of Samples**: 10,000 (8,000 train / 1,000 valid / 1,000 test)
- **Total Duration**: ~30 hours
- **Format**: Parquet
- **License**: CC BY 4.0
- **Tasks**:
  - Speech-to-Text Translation (ST)
  - Multilingual ASR+MT joint modeling
  - Japanese ASR with Chinese aligned text training
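
As a quick sanity check on the figures above (a back-of-the-envelope calculation, not part of the dataset itself), ~30 hours over 10,000 samples works out to roughly 10.8 seconds per clip, consistent with short dialogue lines:

```python
# Figures quoted on this card: ~30 hours of audio across 10,000 clips.
TOTAL_HOURS = 30
NUM_CLIPS = 10_000

avg_clip_seconds = TOTAL_HOURS * 3600 / NUM_CLIPS
print(avg_clip_seconds)  # 10.8 seconds per clip on average
```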

## 📁 Data Fields

| Field Name  | Type     | Description                                |
|-------------|----------|--------------------------------------------|
| `audio`     | `Audio`  | Raw Japanese speech audio clip             |
| `sentence`  | `string` | Corresponding **Simplified Chinese text**  |
| `duration`  | `float`  | Duration of the audio in seconds          |
| `uid`       | `string` | Unique sample identifier                   |
| `group_id`  | `string` | Grouping ID (e.g., speaker or scene tag)   |
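
When loaded with 🤗 Datasets, the `audio` field decodes to a dict with `array`, `sampling_rate`, and `path` keys, while the other fields arrive as plain values. Below is a minimal, self-contained sketch of one row's shape (the values are invented placeholders, not real entries), plus the kind of duration filter you might apply before training:

```python
# Placeholder row with the schema documented above; real rows come from
# load_dataset("DataLabX/ScreenTalk_JA2ZH-XS"). All values here are dummies.
sample = {
    "audio": {"array": [0.0] * 16000, "sampling_rate": 16000, "path": "clip.wav"},
    "sentence": "他不会来了。",
    "duration": 4.21,
    "uid": "JA_00012",
    "group_id": "scene_001",  # hypothetical group tag
}

def keep_for_training(row, max_seconds=30.0):
    """Drop clips longer than a model's audio window (e.g. Whisper's 30 s)."""
    return row["duration"] <= max_seconds

print(keep_for_training(sample))  # True: the 4.21 s clip fits
```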

## 🔍 Example Samples

| uid       | duration (s) | sentence                         |
|-----------|--------------|----------------------------------|
| JA_00012  | 4.21         | 他不会来了。                     |
| JA_00038  | 6.78         | 为什么你会这样说?告诉我真相。   |
| JA_00104  | 3.33         | 安静,有人来了。                 |

## 💡 Use Cases

This dataset is ideal for:

- 🎯 Training **speech translation models**, such as [Whisper ST](https://huggingface.co/docs/transformers/main/en/model_doc/whisper#speech-translation)
- 🧪 Research on **multilingual speech understanding**
- 🧠 Developing multimodal AI systems (audio → Chinese text)
- 🏫 Educational tools for Japanese learners

## 📥 Loading Example (Hugging Face Datasets)

```python
from datasets import load_dataset

# Loads the training split; "valid" and "test" splits are also available.
ds = load_dataset("DataLabX/ScreenTalk_JA2ZH-XS", split="train")
print(ds[0]["sentence"])  # Simplified Chinese translation of the first clip
```
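
For speech translation training, each row reduces to an (audio waveform, target text) pair. A model-agnostic sketch of that mapping, demonstrated on a hand-built placeholder row rather than the real dataset:

```python
def to_st_pair(row):
    """Map one ScreenTalk row to a (waveform, target_text) training pair."""
    return row["audio"]["array"], row["sentence"]

# Placeholder row mirroring the card's schema; real rows come from the
# load_dataset call above and carry the full decoded waveforms.
row = {
    "audio": {"array": [0.0, 0.1, -0.1], "sampling_rate": 16000},
    "sentence": "安静,有人来了。",
}
waveform, target = to_st_pair(row)
print(len(waveform), target)  # 3 安静,有人来了。
```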

## 📃 Citation

```bibtex
@misc{datalabx2025screentalkja,
  title = {DataLabX/ScreenTalk_JA2ZH-XS: A Speech Translation Dataset of Japanese Audio and Chinese Text},
  author = {DataLabX},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/DataLabX/ScreenTalk_JA2ZH-XS}},
}
```

---

We welcome feedback, suggestions, and contributions! 🙌