---
task_categories:
  - translation
language:
  - ja
  - zh
tags:
  - translation
  - ja
  - zh_cn
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: duration
      dtype: float64
    - name: zh-CN
      dtype: string
    - name: uid
      dtype: string
    - name: group_id
      dtype: string
  splits:
    - name: train
      num_bytes: 2969732171.08
      num_examples: 11288
    - name: valid
      num_bytes: 369775889.576
      num_examples: 1411
    - name: test
      num_bytes: 370092000.712
      num_examples: 1411
  download_size: 3609504336
  dataset_size: 3709600061.368
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: valid
        path: data/valid-*
      - split: test
        path: data/test-*
---

# ScreenTalk_JA

ScreenTalk_JA is a dataset of Japanese speech paired with Simplified Chinese translations, released by DataLabX. It is designed for training and evaluating speech translation (ST) and multilingual speech-understanding models. The data consists of spoken dialogue drawn from real-world Japanese movies and TV shows.

## 📦 Dataset Overview

- Source Language: Japanese (Audio)
- Target Language: Simplified Chinese (Text)
- Number of Samples: 14,110 (train 11,288 / valid 1,411 / test 1,411)
- Total Duration: ~30 hours
- Format: Parquet
- License: CC BY 4.0
- Tasks:
  - Speech-to-text translation (ST)
  - Joint multilingual ASR + MT modeling
  - Japanese ASR training with aligned Chinese text
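For reference, the per-split example counts in the metadata work out to an exact 80/10/10 partition:

```python
# Example counts taken from the dataset metadata above.
splits = {"train": 11288, "valid": 1411, "test": 1411}

total = sum(splits.values())  # 14110 examples overall
for name, n in splits.items():
    print(f"{name}: {n / total:.0%}")
# train: 80%
# valid: 10%
# test: 10%
```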

๐Ÿ“ Data Fields

| Field Name | Type | Description |
| --- | --- | --- |
| `audio` | Audio | Raw Japanese speech audio clip |
| `duration` | float64 | Duration of the audio in seconds |
| `zh-CN` | string | Corresponding Simplified Chinese text |
| `uid` | string | Unique sample identifier |
| `group_id` | string | Grouping ID (e.g., speaker or scene tag) |
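When loaded with 🤗 Datasets, a decoded example is a dict shaped roughly as follows. The field names come from the dataset metadata; the audio values, sampling rate, and `group_id` value below are fabricated for illustration:

```python
# Illustrative shape of one decoded example (audio array, sampling rate,
# and group_id value are made up; field names match the metadata).
example = {
    "audio": {"array": [0.01, -0.02, 0.00], "sampling_rate": 16000},
    "duration": 4.21,
    "zh-CN": "他不会来了。",
    "uid": "JA_00012",
    "group_id": "scene_007",  # hypothetical grouping tag
}

# Basic sanity checks one might run over each row.
assert set(example) == {"audio", "duration", "zh-CN", "uid", "group_id"}
assert isinstance(example["duration"], float) and example["duration"] > 0
```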

๐Ÿ” Example Samples

| UID | Duration (s) | Chinese Translation |
| --- | --- | --- |
| JA_00012 | 4.21 | 他不会来了。 |
| JA_00038 | 6.78 | 为什么你会这样说？告诉我真相。 |
| JA_00104 | 3.33 | 安静，有人来了。 |

## 💡 Use Cases

This dataset is ideal for:

- 🎯 Training speech translation models, such as Whisper ST
- 🧪 Research on multilingual speech understanding
- 🧠 Developing multimodal AI systems (audio → Chinese text)
- 🏫 Educational tools for Japanese learners

## 📥 Loading Example (Hugging Face Datasets)

```python
from datasets import load_dataset

# Downloads the train split; audio is decoded lazily on access.
ds = load_dataset("DataLabX/ScreenTalk_JA", split="train")
```
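Because clips from the same scene or speaker share a `group_id`, any custom re-splitting should keep whole groups on one side of the boundary to avoid leakage. A toy sketch with fabricated uids and group ids:

```python
from collections import defaultdict

# Fabricated (uid, group_id) pairs standing in for real metadata rows.
rows = [
    ("JA_00012", "scene_A"),
    ("JA_00038", "scene_A"),
    ("JA_00104", "scene_B"),
    ("JA_00105", "scene_C"),
]

groups = defaultdict(list)
for uid, gid in rows:
    groups[gid].append(uid)

# Assign entire groups to splits so no scene straddles the boundary.
train_groups = ["scene_A", "scene_B"]
train_uids = [u for g in train_groups for u in groups[g]]
test_uids = [u for g in groups if g not in train_groups for u in groups[g]]

print(train_uids)  # ['JA_00012', 'JA_00038', 'JA_00104']
print(test_uids)   # ['JA_00105']
```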

## 📃 Citation

```bibtex
@misc{datalabx2025screentalkja,
  title = {ScreenTalk-JA: A Speech Translation Dataset of Japanese Audio and Chinese Text},
  author = {DataLabX},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/DataLabX/ScreenTalk-JA}},
}
```

We welcome feedback, suggestions, and contributions! 🙌