---
license: cc-by-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  - name: summary
    dtype: string
  - name: summary1
    dtype: string
  - name: summary2
    dtype: string
  - name: summary3
    dtype: string
  splits:
  - name: core
    num_bytes: 17683719490.0
    num_examples: 50000
  - name: duc2003
    num_bytes: 244384744.0
    num_examples: 624
  - name: validation
    num_bytes: 342668783.0
    num_examples: 1000
  - name: test
    num_bytes: 1411039659.0
    num_examples: 4000
  download_size: 19837902893
  dataset_size: 19681812676.0
configs:
- config_name: default
  data_files:
  - split: core
    path: data/core-*
  - split: duc2003
    path: data/duc2003-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Mega-SSum
- A large-scale English *sentence-wise speech summarization* (Sen-SSum) dataset
  - Consists of 3.8M+ (synthesized speech, transcription, summary) triplets
  - Derived from the Gigaword dataset [Rush+2015](https://aclanthology.org/D15-1044/)

# Overview
- The dataset is divided into five splits: train, core, validation, test, and duc2003 (see the table below).
  - The *test* split is a newly added evaluation split for in-domain evaluation.
  - The train split is hosted separately at [MegaSSum(train)](https://huggingface.co/datasets/komats/mega-ssum-train).

| orig. data | split      | #samples  | #speakers | total dur. (hrs) | avg. dur. (sec) | CR* (%) |
|:----------:|:----------:|:---------:|:---------:|:----------------:|:---------------:|--------:|
| Gigaword   | train      | 3,800,000 | 2,559     | 11,678.2         | 11.1            | 26.2    |
| Gigaword   | core       | 50,000    | 2,559     | 154.6            | 11.1            | 25.8    |
| Gigaword   | validation | 1,000     | 96        | 3.0              | 10.7            | 25.1    |
| Gigaword   | test       | 4,000     | 80        | 12.5             | 11.2            | 24.1    |
| DUC2003    | duc2003    | 624       | 80        | 2.1              | 12.2            | 27.5    |

*CR (compression rate, %) = (#words in summary / #words in transcription) × 100; lower values mean shorter summaries.
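As a quick sanity check, the compression rate defined above can be computed in a few lines of Python. This is a minimal sketch using whitespace tokenization; the argument names mirror the dataset's `transcription` and `summary` features, and the example strings are made up for illustration:

```python
def compression_rate(transcription: str, summary: str) -> float:
    """CR (%) = #words in summary / #words in transcription * 100."""
    return 100.0 * len(summary.split()) / len(transcription.split())

# Toy example (made-up strings, not taken from the dataset):
transcription = "the central bank raised interest rates by a quarter point on tuesday"
summary = "central bank raises rates"
print(f"{compression_rate(transcription, summary):.1f}")  # 4 / 12 words -> 33.3
```

Note that reported CR values will vary slightly with the tokenizer used to count words.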

# Notes
- The core set is identical to the first 50,000 samples of the train split.
  - Because the train split is very large, you may train your model on the core set alone and report those results.
  - Using the entire train split is generally not recommended unless there is a specific reason (e.g., investigating the upper bound).
- The duc2003 split provides four reference summaries for each speech sample; you may report the best of the four scores.
- Spoken sentences were generated using VITS [Kim+2021](https://proceedings.mlr.press/v139/kim21f.html) trained with LibriTTS-R [Koizumi+2023](https://www.isca-archive.org/interspeech_2023/koizumi23_interspeech.html).
- More details and experiments on this dataset can be found [here](https://www.isca-archive.org/interspeech_2024/matsuura24_interspeech.html).
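Best-of-four scoring on the duc2003 split can be sketched as below. The unigram-overlap F1 here is a simplified stand-in for a proper ROUGE scorer (an assumption for illustration; in practice use a ROUGE package), but the feature names `summary` through `summary3` match the dataset schema:

```python
from collections import Counter

def unigram_f1(hypothesis: str, reference: str) -> float:
    """Simplified unigram-overlap F1 (stand-in for a real ROUGE-1 scorer)."""
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def best_of_four(hypothesis: str, example: dict) -> float:
    """Score against all four duc2003 references and keep the best."""
    refs = (example["summary"], example["summary1"],
            example["summary2"], example["summary3"])
    return max(unigram_f1(hypothesis, r) for r in refs)

# Toy example with made-up references:
example = {"summary": "stocks fall sharply", "summary1": "markets tumble",
           "summary2": "stocks drop", "summary3": "shares decline"}
print(round(best_of_four("stocks fall", example), 3))  # 0.8
```

The same max-over-references pattern applies regardless of which metric is plugged in.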

# Citation
- This dataset [Matsuura+2024](https://www.isca-archive.org/interspeech_2024/matsuura24_interspeech.html):
  ```
  @inproceedings{matsuura24_interspeech,
    title     = {{Sentence-wise Speech Summarization}: Task, Datasets, and End-to-End Modeling with LM Knowledge Distillation},
    author    = {Kohei Matsuura and Takanori Ashihara and Takafumi Moriya and Masato Mimura and Takatomo Kano and Atsunori Ogawa and Marc Delcroix},
    year      = {2024},
    booktitle = {Interspeech 2024},
    pages     = {1945--1949},
  }
  ```

- The Gigaword dataset [Rush+2015](https://aclanthology.org/D15-1044/):
  ```
  @inproceedings{Rush_2015,
    title     = {A Neural Attention Model for Abstractive Sentence Summarization},
    author    = {Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
    booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
    year      = {2015}
  }
  ```

- VITS TTS [Kim+2021](https://proceedings.mlr.press/v139/kim21f.html):
  ```
  @inproceedings{pmlr-v139-kim21f,
    title     = {Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech},
    author    = {Kim, Jaehyeon and Kong, Jungil and Son, Juhee},
    booktitle = {Proceedings of the 38th International Conference on Machine Learning},
    pages     = {5530--5540},
    year      = {2021},
  }
  ```

- LibriTTS-R [Koizumi+2023](https://www.isca-archive.org/interspeech_2023/koizumi23_interspeech.html):
  ```
  @inproceedings{koizumi23_interspeech,
    title     = {{LibriTTS-R}: A Restored Multi-Speaker Text-to-Speech Corpus},
    author    = {Yuma Koizumi and Heiga Zen and Shigeki Karita and Yifan Ding and Kohei Yatabe and Nobuyuki Morioka and Michiel Bacchiani and Yu Zhang and Wei Han and Ankur Bapna},
    year      = {2023},
    booktitle = {Proc. INTERSPEECH 2023},
    pages     = {5496--5500},
  }
  ```