|
|
--- |
|
|
task_categories: |
|
|
- automatic-speech-recognition |
|
|
multilinguality: |
|
|
- multilingual |
|
|
language: |
|
|
- en |
|
|
- fr |
|
|
- de |
|
|
- es |
|
|
tags: |
|
|
- music |
|
|
- lyrics |
|
|
- evaluation |
|
|
- benchmark |
|
|
- transcription |
|
|
- pnc |
|
|
pretty_name: 'Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark' |
|
|
paperswithcode_id: jam-alt |
|
|
configs: |
|
|
- config_name: all |
|
|
data_files: |
|
|
- split: test |
|
|
path: |
|
|
- metadata.jsonl |
|
|
- subsets/*/audio/*.mp3 |
|
|
default: true |
|
|
- config_name: de |
|
|
data_files: |
|
|
- split: test |
|
|
path: |
|
|
- subsets/de/metadata.jsonl |
|
|
- subsets/de/audio/*.mp3 |
|
|
- config_name: en |
|
|
data_files: |
|
|
- split: test |
|
|
path: |
|
|
- subsets/en/metadata.jsonl |
|
|
- subsets/en/audio/*.mp3 |
|
|
- config_name: es |
|
|
data_files: |
|
|
- split: test |
|
|
path: |
|
|
- subsets/es/metadata.jsonl |
|
|
- subsets/es/audio/*.mp3 |
|
|
- config_name: fr |
|
|
data_files: |
|
|
- split: test |
|
|
path: |
|
|
- subsets/fr/metadata.jsonl |
|
|
- subsets/fr/audio/*.mp3 |
|
|
--- |
|
|
|
|
|
# Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark |
|
|
|
|
|
|
|
|
## Dataset description |
|
|
|
|
|
* **Project page:** https://audioshake.github.io/jam-alt/ |
|
|
* **Source code:** https://github.com/audioshake/alt-eval |
|
|
* **Paper (ISMIR 2024):** https://doi.org/10.5281/zenodo.14877443 |
|
|
* **Paper (arXiv):** https://arxiv.org/abs/2408.06370 |
|
|
* **Follow-up paper (ICMEW 2025):** https://arxiv.org/abs/2506.15514 |
|
|
* **Extended abstract (ISMIR 2023 LBD):** https://arxiv.org/abs/2311.13987 |
|
|
* **Related datasets:** https://huggingface.co/jamendolyrics |
|
|
|
|
|
Jam-ALT is a revision of the [**JamendoLyrics**](https://huggingface.co/datasets/jamendolyrics/jamendolyrics) dataset (79 songs in 4 languages), intended for use as an **automatic lyrics transcription** (**ALT**) benchmark. |
|
|
It was introduced in the ISMIR 2024 paper (full citation [below](#citation)): \
|
|
📄 [**Lyrics Transcription for Humans: A Readability-Aware Benchmark**](https://doi.org/10.5281/zenodo.14877443) \ |
|
|
👥 O. Cífka, H. Schreiber, L. Miner, F.-R. Stöter \ |
|
|
🏢 [AudioShake](https://www.audioshake.ai/) |
|
|
|
|
|
The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling and formatting, as well as punctuation and capitalization (PnC). |
|
|
The audio is identical to that of the JamendoLyrics dataset.
|
|
|
|
|
💥 **New:** The dataset now has **line-level timings**. |
|
|
They were added in the paper 📄 [**Exploiting Music Source Separation for Automatic Lyrics Transcription with Whisper**](https://arxiv.org/abs/2506.15514) by |
|
|
J. Syed, I. Meresman-Higgs, O. Cífka, and M. Sandler, presented at the [2025 ICME Workshop AI for Music](https://ai4musicians.org/2025icme.html). |
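The exact feature layout can differ between dataset revisions, so a quick way to locate the line-level timing fields is to inspect the schema. A minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", split="test")

# Print the schema to locate the line-level timing fields
# (column names may differ between revisions)
print(dataset.features)

# Peek at one record without decoding the audio
print(dataset.remove_columns("audio")[0])
```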
|
|
|
|
|
> [!NOTE]
|
|
> **Note:** The dataset is not time-aligned at the word level. To evaluate **automatic lyrics alignment** (**ALA**), please use [JamendoLyrics](https://huggingface.co/datasets/jamendolyrics/jamendolyrics), which is the standard benchmark for that task. |
|
|
|
|
|
> [!TIP]
|
|
> See the [project website](https://audioshake.github.io/jam-alt/) for details and the [JamendoLyrics community](https://huggingface.co/jamendolyrics) for related datasets. |
|
|
|
|
|
## Loading the data |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
dataset = load_dataset("jamendolyrics/jam-alt", split="test") |
|
|
``` |
|
|
|
|
|
A subset is defined for each language (`en`, `fr`, `de`, `es`); |
|
|
for example, use `load_dataset("jamendolyrics/jam-alt", "es")` to load only the Spanish songs. |
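For instance, to load just the Spanish subset:

```python
from datasets import load_dataset

# Load only the Spanish songs
dataset_es = load_dataset("jamendolyrics/jam-alt", "es", split="test")
```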
|
|
|
|
|
To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))` (see the sketch after the list below).
|
|
Useful arguments to `datasets.Audio()` are: |
|
|
- `sampling_rate` and `mono` to control the sampling rate and the number of channels.
|
|
- `decode=False` to skip decoding the audio and just get the MP3 file paths and contents. |
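
For example (a minimal sketch; the 16 kHz rate is just an illustration):

```python
import datasets
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", split="test")

# Decode to 16 kHz mono (a common input format for speech models)
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000, mono=True))

# Alternatively, skip decoding and work with the MP3 paths/bytes directly
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))
```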
|
|
|
|
|
## Running the benchmark |
|
|
|
|
|
The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval): |
|
|
```python |
|
|
from datasets import load_dataset |
|
|
from alt_eval import compute_metrics |
|
|
|
|
|
dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.4.0", split="test") |
|
|
# transcriptions: list[str] -- your system's outputs, one per song, in dataset order
|
|
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"]) |
|
|
``` |
|
|
|
|
|
For example, the following code can be used to evaluate Whisper: |
|
|
```python |
|
|
import datasets
import whisper
from alt_eval import compute_metrics
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.4.0", split="test")
|
|
dataset = dataset.cast_column("audio", datasets.Audio(decode=False)) # Get the raw audio file, let Whisper decode it |
|
|
|
|
|
model = whisper.load_model("tiny") |
|
|
transcriptions = [ |
|
|
"\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"]) |
|
|
for a in dataset["audio"] |
|
|
] |
|
|
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"]) |
|
|
``` |
|
|
Alternatively, if you already have transcriptions, you might prefer to skip loading the `audio` column: |
|
|
```python |
|
|
dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.4.0", split="test").remove_columns("audio") |
|
|
``` |
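
If you want a per-language breakdown, one option is to group the results by the `language` column. A minimal sketch, assuming `transcriptions` is a `list[str]` aligned with the dataset order:

```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.4.0", split="test").remove_columns("audio")

# Evaluate each language subset separately
for lang in sorted(set(dataset["language"])):
    indices = [i for i, l in enumerate(dataset["language"]) if l == lang]
    references = [dataset["text"][i] for i in indices]
    hypotheses = [transcriptions[i] for i in indices]
    print(lang, compute_metrics(references, hypotheses, languages=[lang] * len(indices)))
```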
|
|
|
|
|
## Citation |
|
|
|
|
|
When using the benchmark, please cite our [ISMIR paper](https://doi.org/10.5281/zenodo.14877443) as well as the original [JamendoLyrics paper](https://arxiv.org/abs/2306.07744). |
|
|
For the line-level timings, please cite the [ICME workshop paper](https://arxiv.org/abs/2506.15514). |
|
|
```bibtex |
|
|
@inproceedings{cifka-2024-jam-alt, |
|
|
author = {Ond{\v{r}}ej C{\'{\i}}fka and |
|
|
Hendrik Schreiber and |
|
|
Luke Miner and |
|
|
Fabian{-}Robert St{\"{o}}ter}, |
|
|
title = {Lyrics Transcription for Humans: {A} Readability-Aware Benchmark}, |
|
|
booktitle = {Proceedings of the 25th International Society for |
|
|
Music Information Retrieval Conference}, |
|
|
pages = {737--744}, |
|
|
year = 2024, |
|
|
publisher = {ISMIR}, |
|
|
doi = {10.5281/ZENODO.14877443}, |
|
|
url = {https://doi.org/10.5281/zenodo.14877443} |
|
|
} |
|
|
@inproceedings{durand-2023-contrastive, |
|
|
author={Durand, Simon and Stoller, Daniel and Ewert, Sebastian}, |
|
|
booktitle={2023 {IEEE} International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, |
|
|
title={Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages}, |
|
|
year={2023}, |
|
|
pages={1--5},
|
|
address={Rhodes Island, Greece}, |
|
|
doi={10.1109/ICASSP49357.2023.10096725} |
|
|
} |
|
|
@inproceedings{syed-2025-mss-alt, |
|
|
author = {Jaza Syed and |
|
|
Ivan Meresman-Higgs and |
|
|
Ond{\v{r}}ej C{\'{\i}}fka and |
|
|
Mark Sandler}, |
|
|
title = {Exploiting Music Source Separation for Automatic Lyrics Transcription with {Whisper}}, |
|
|
booktitle = {2025 {IEEE} International Conference on Multimedia and Expo Workshops (ICMEW)}, |
|
|
publisher = {IEEE}, |
|
|
year = {2025} |
|
|
} |
|
|
``` |
|
|
|
|
|
## Contributions |
|
|
|
|
|
The transcripts, originally from the [JamendoLyrics](https://huggingface.co/datasets/jamendolyrics/jamendolyrics) dataset, were revised by Ondřej Cífka, Hendrik Schreiber, Fabian-Robert Stöter, Luke Miner, Laura Ibáñez, Pamela Ode, Mathieu Fontaine, Claudia Faller, April Anderson, Constantinos Dimitriou, and Kateřina Apolínová. |
|
|
Line-level timings were automatically transferred from JamendoLyrics and manually corrected by Ondřej Cífka and Jaza Syed to fit the revised transcripts. |
|
|
|