---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
- pnc
pretty_name: 'Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
configs:
- config_name: all
data_files:
- split: test
path:
- metadata.jsonl
- subsets/*/audio/*.mp3
default: true
- config_name: de
data_files:
- split: test
path:
- subsets/de/metadata.jsonl
- subsets/de/audio/*.mp3
- config_name: en
data_files:
- split: test
path:
- subsets/en/metadata.jsonl
- subsets/en/audio/*.mp3
- config_name: es
data_files:
- split: test
path:
- subsets/es/metadata.jsonl
- subsets/es/audio/*.mp3
- config_name: fr
data_files:
- split: test
path:
- subsets/fr/metadata.jsonl
- subsets/fr/audio/*.mp3
---
# Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark
## Dataset description
* **Project page:** https://audioshake.github.io/jam-alt/
* **Source code:** https://github.com/audioshake/alt-eval
* **Paper (ISMIR 2024):** https://doi.org/10.5281/zenodo.14877443
* **Paper (arXiv):** https://arxiv.org/abs/2408.06370
* **Extended abstract (ISMIR 2023 LBD):** https://arxiv.org/abs/2311.13987
* **Related datasets:** https://huggingface.co/jamendolyrics
Jam-ALT is a revision of the [**JamendoLyrics**](https://huggingface.co/datasets/jamendolyrics/jamendolyrics) dataset (79 songs in 4 languages), intended for use as an **automatic lyrics transcription** (**ALT**) benchmark.
It has been published in the ISMIR 2024 paper (full citation [below](#citation)): \
📄 [**Lyrics Transcription for Humans: A Readability-Aware Benchmark**](https://doi.org/10.5281/zenodo.14877443) \
👥 O. Cífka, H. Schreiber, L. Miner, F.-R. Stöter \
🏢 [AudioShake](https://www.audioshake.ai/)
The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling and formatting, as well as punctuation and capitalization (PnC).
The audio is identical to the JamendoLyrics dataset.
💥 **New:** The dataset now has **line-level timings**.
They were added in the paper 📄 **Exploiting Music Source Separation for Automatic Lyrics Transcription with Whisper** by
J. Syed, I. Meresman-Higgs, O. Cífka, and M. Sandler, presented at the [2025 ICME Workshop AI for Music](https://ai4musicians.org/2025icme.html).
> [!note]
> The dataset is not time-aligned at the word level. To evaluate **automatic lyrics alignment** (**ALA**), please use [JamendoLyrics](https://huggingface.co/datasets/jamendolyrics/jamendolyrics), which is the standard benchmark for that task.
> [!tip]
> See the [project website](https://audioshake.github.io/jam-alt/) for details and the [JamendoLyrics community](https://huggingface.co/jamendolyrics) for related datasets.
## Loading the data
```python
from datasets import load_dataset
dataset = load_dataset("jamendolyrics/jam-alt", split="test")
```
A subset is defined for each language (`en`, `fr`, `de`, `es`);
for example, use `load_dataset("jamendolyrics/jam-alt", "es")` to load only the Spanish songs.
To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`.
Useful arguments to `datasets.Audio()` are:
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths and contents.
## Running the benchmark
The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):
```python
from datasets import load_dataset
from alt_eval import compute_metrics
dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.3.0", split="test")
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
For example, the following code can be used to evaluate Whisper:
```python
import datasets
import whisper  # openai-whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.3.0", split="test")
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # get the raw audio files, let Whisper decode them

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
Alternatively, if you already have transcriptions, you might prefer to skip loading the `audio` column:
```python
dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.3.0", split="test").remove_columns("audio")
```
## Using non-lexical tags
In addition to the `text` field, each song (and each timed line) has a `text_tagged` field, in which
non-lexical vocables are enclosed in inline opening and closing tags `<nl>` and `</nl>`,
as shown in this verse from `Les_files_d'attente_-_Law'`:
```
<nl> (Ouh) </nl> chacun est pour l'autre le suivant
Personne ne bouge, tout l'monde attend <nl> (ouh-ouh-ouh-ouh) </nl>
Marchez-vous dessus, montrez les dents
Crachez-vous dessus, l'est encore temps
```
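To get plain text back out of `text_tagged`, the tags can be handled with a small regex helper (a sketch, not part of the dataset tooling; the function names are hypothetical): removing only the markers keeps the vocables in the text, while dropping the whole spans removes the non-lexical vocables entirely.

```python
import re

_NL_MARKER = re.compile(r"\s*</?nl>\s*")
_NL_SPAN = re.compile(r"\s*<nl>.*?</nl>\s*")

def strip_nl_markers(text_tagged: str) -> str:
    """Remove only the <nl>/</nl> markers, keeping the vocables themselves."""
    return "\n".join(
        _NL_MARKER.sub(" ", line).strip() for line in text_tagged.splitlines()
    )

def drop_nl_spans(text_tagged: str) -> str:
    """Remove <nl>...</nl> spans with their contents; drop lines left empty."""
    lines = (_NL_SPAN.sub(" ", line).strip() for line in text_tagged.splitlines())
    return "\n".join(line for line in lines if line)

line = "Personne ne bouge, tout l'monde attend <nl> (ouh-ouh-ouh-ouh) </nl>"
print(strip_nl_markers(line))  # Personne ne bouge, tout l'monde attend (ouh-ouh-ouh-ouh)
print(drop_nl_spans(line))     # Personne ne bouge, tout l'monde attend
```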
## Citation
When using the benchmark, please cite our [ISMIR paper](https://doi.org/10.5281/zenodo.14877443) as well as the original [JamendoLyrics paper](https://arxiv.org/abs/2306.07744).
For the line-level timings or non-lexical tags, please cite the ICME workshop paper.
```bibtex
@inproceedings{cifka-2024-jam-alt,
author = {Ond{\v{r}}ej C{\'{\i}}fka and
Hendrik Schreiber and
Luke Miner and
Fabian{-}Robert St{\"{o}}ter},
title = {Lyrics Transcription for Humans: {A} Readability-Aware Benchmark},
booktitle = {Proceedings of the 25th International Society for
Music Information Retrieval Conference},
pages = {737--744},
year = 2024,
publisher = {ISMIR},
doi = {10.5281/ZENODO.14877443},
url = {https://doi.org/10.5281/zenodo.14877443}
}
@inproceedings{durand-2023-contrastive,
author={Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
booktitle={2023 {IEEE} International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages},
year={2023},
pages={1--5},
address={Rhodes Island, Greece},
doi={10.1109/ICASSP49357.2023.10096725}
}
@inproceedings{syed-2025-mss-alt,
author = {Jaza Syed and
Ivan Meresman-Higgs and
Ond{\v{r}}ej C{\'{\i}}fka and
Mark Sandler},
title = {Exploiting Music Source Separation for Automatic Lyrics Transcription with {Whisper}},
booktitle = {2025 {IEEE} International Conference on Multimedia and Expo Workshops (ICMEW)},
publisher = {IEEE},
year = {2025},
note = {to appear}
}
```
## Contributions
The transcripts, originally from the [JamendoLyrics](https://huggingface.co/datasets/jamendolyrics/jamendolyrics) dataset, were revised by Ondřej Cífka, Hendrik Schreiber, Fabian-Robert Stöter, Luke Miner, Laura Ibáñez, Pamela Ode, Mathieu Fontaine, Claudia Faller, April Anderson, Constantinos Dimitriou, and Kateřina Apolínová.
Line-level timings were automatically transferred from JamendoLyrics and manually corrected by Ondřej Cífka and Jaza Syed to fit the revised transcripts.