---
task_categories:
- translation
language:
- ja
- zh
tags:
- translation
- ja
- zh_cn
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: duration
    dtype: float64
  - name: zh-CN
    dtype: string
  - name: uid
    dtype: string
  - name: group_id
    dtype: string
  splits:
  - name: train
    num_bytes: 2969732171.08
    num_examples: 11288
  - name: valid
    num_bytes: 369775889.576
    num_examples: 1411
  - name: test
    num_bytes: 370092000.712
    num_examples: 1411
  download_size: 3609504336
  dataset_size: 3709600061.368
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---

# ScreenTalk_JA

**ScreenTalk_JA** is a paired dataset of **Japanese speech and Chinese translated text** released by DataLabX. It is designed for training and evaluating speech translation (ST) and multilingual speech understanding models. The data consists of spoken dialogue extracted from real-world Japanese movies and TV shows.

## 📦 Dataset Overview

- **Source Language**: Japanese (audio)
- **Target Language**: Simplified Chinese (text)
- **Number of Samples**: ~14,000 (11,288 train / 1,411 valid / 1,411 test)
- **Total Duration**: ~30 hours
- **Format**: Parquet
- **License**: CC BY 4.0
- **Tasks**:
  - Speech-to-text translation (ST)
  - Joint multilingual ASR + MT modeling
  - Japanese ASR with aligned Chinese text

## 📁 Data Fields

| Field Name  | Type      | Description                                |
|-------------|-----------|--------------------------------------------|
| `audio`     | `Audio`   | Raw Japanese speech audio clip             |
| `zh-CN`     | `string`  | Corresponding **Simplified Chinese text**  |
| `duration`  | `float64` | Duration of the audio in seconds           |
| `uid`       | `string`  | Unique sample identifier                   |
| `group_id`  | `string`  | Grouping ID (e.g., speaker or scene tag)   |
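
To make the schema concrete, here is a minimal sketch of filtering records by `duration`, a common ST preprocessing step. Field names follow the `dataset_info` schema in the card header; the record values and the 30-second cutoff are invented for illustration:

```python
# Mock records mirroring the dataset schema; values are invented for illustration.
records = [
    {"uid": "JA_00001", "duration": 4.2, "zh-CN": "他不会来了。", "group_id": "scene_01"},
    {"uid": "JA_00002", "duration": 31.0, "zh-CN": "为什么?", "group_id": "scene_01"},
    {"uid": "JA_00003", "duration": 2.8, "zh-CN": "安静。", "group_id": "scene_02"},
]

def filter_by_duration(rows, max_seconds=30.0):
    """Keep only clips no longer than max_seconds."""
    return [r for r in rows if r["duration"] <= max_seconds]

kept = filter_by_duration(records)
print([r["uid"] for r in kept])  # over-long clips are dropped
```

The same predicate works directly with `datasets.Dataset.filter` once the real data is loaded.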

## 🔍 Example Samples

| UID      | Duration (s) | Chinese Translation              |
|----------|--------------|----------------------------------|
| JA_00012 | 4.21         | 他不会来了。                     |
| JA_00038 | 6.78         | 为什么你会这样说?告诉我真相。   |
| JA_00104 | 3.33         | 安静,有人来了。                 |

## 💡 Use Cases

This dataset is ideal for:

- 🎯 Training **speech translation models**, such as [Whisper ST](https://huggingface.co/docs/transformers/main/en/model_doc/whisper#speech-translation)
- 🧪 Research on **multilingual speech understanding**
- 🧠 Developing multimodal AI systems (audio → Chinese text)
- 🏫 Educational tools for Japanese learners
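
For the training use cases above, a common preprocessing step is bucketing clips by duration so that each batch contains similarly sized audio and padding is minimized. A minimal sketch; the bucket edges are arbitrary choices for illustration, not part of the dataset:

```python
from collections import defaultdict

def bucket_by_duration(durations, edges=(5.0, 10.0, 20.0)):
    """Assign each clip duration (seconds) to a bucket index based on edges.

    Index 0 holds clips up to edges[0], index 1 clips up to edges[1], and so on;
    the final index collects anything longer than the last edge.
    """
    buckets = defaultdict(list)
    for i, d in enumerate(durations):
        idx = sum(d > e for e in edges)  # 0 .. len(edges)
        buckets[idx].append(i)
    return dict(buckets)

# Durations taken from the example samples above, plus one longer clip.
buckets = bucket_by_duration([4.21, 6.78, 3.33, 12.5])
print(buckets)
```

Batches can then be drawn from within a bucket, which keeps padded waveform lengths close to the real audio lengths.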

## 📥 Loading Example (Hugging Face Datasets)

```python
from datasets import load_dataset

ds = load_dataset("DataLabX/ScreenTalk_JA", split="train")
```
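
Once loaded, each record is a plain dict with the fields listed above. As a sketch, here is how the per-clip `duration` field can be summed to estimate total hours; mock rows stand in for real records so the snippet runs without downloading, but with the real dataset you would pass `ds` itself:

```python
def total_hours(rows):
    """Sum the per-clip `duration` field (seconds) and convert to hours."""
    return sum(r["duration"] for r in rows) / 3600.0

# Mock rows standing in for dataset records; the real `ds` yields the same shape.
mock_rows = [{"duration": 3600.0}, {"duration": 1800.0}, {"duration": 1800.0}]
print(round(total_hours(mock_rows), 2))  # → 2.0
```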

## 📃 Citation

```bibtex
@misc{datalabx2025screentalkja,
  title        = {ScreenTalk-JA: A Speech Translation Dataset of Japanese Audio and Chinese Text},
  author       = {DataLabX},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/DataLabX/ScreenTalk-JA}},
}
```

---

We welcome feedback, suggestions, and contributions! 🙌