---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
- pnc
configs:
- config_name: pure
  data_files:
  - split: test
    path:
    - pure/*/metadata.jsonl
    - pure/*/audio/*.flac
  default: true
- config_name: pure-de
  data_files:
  - split: test
    path:
    - pure/de/metadata.jsonl
    - pure/de/audio/*.flac
- config_name: pure-en
  data_files:
  - split: test
    path:
    - pure/en/metadata.jsonl
    - pure/en/audio/*.flac
- config_name: pure-es
  data_files:
  - split: test
    path:
    - pure/es/metadata.jsonl
    - pure/es/audio/*.flac
- config_name: pure-fr
  data_files:
  - split: test
    path:
    - pure/fr/metadata.jsonl
    - pure/fr/audio/*.flac
- config_name: merged
  data_files:
  - split: test
    path:
    - pure/*/metadata.jsonl
    - pure/*/audio/*.flac
    - groups/*/metadata.jsonl
    - groups/*/audio/*.flac
- config_name: merged-de
  data_files:
  - split: test
    path:
    - pure/de/metadata.jsonl
    - pure/de/audio/*.flac
    - groups/de/metadata.jsonl
    - groups/de/audio/*.flac
- config_name: merged-en
  data_files:
  - split: test
    path:
    - pure/en/metadata.jsonl
    - pure/en/audio/*.flac
    - groups/en/metadata.jsonl
    - groups/en/audio/*.flac
- config_name: merged-es
  data_files:
  - split: test
    path:
    - pure/es/metadata.jsonl
    - pure/es/audio/*.flac
    - groups/es/metadata.jsonl
    - groups/es/audio/*.flac
- config_name: merged-fr
  data_files:
  - split: test
    path:
    - pure/fr/metadata.jsonl
    - pure/fr/audio/*.flac
    - groups/fr/metadata.jsonl
    - groups/fr/audio/*.flac
---
# Jam-ALT Lines
Jam-ALT Lines is a line-level version of the [Jam-ALT](https://huggingface.co/datasets/jamendolyrics/jam-alt/) **lyrics transcription** dataset.
Unlike Jam-ALT, this dataset contains one audio segment for each lyrics line, facilitating research that considers each line as a separate unit.
> [!TIP]
> See the [Jam-ALT project website](https://audioshake.github.io/jam-alt/) for details and the [JamendoLyrics community](https://huggingface.co/jamendolyrics) for related datasets.
## Dataset flavors
Lyrics lines may overlap in time, which makes it impossible to have a one-to-one correspondence between lines and audio segments
while including a complete and accurate transcription for each. For this reason, the dataset comes in two **flavors**:
- `pure`, containing only lines without significant temporal overlap with other lines;
- `merged`, where (transitively) overlapping lines are grouped.

Both of these strategies ensure that no two segments overlap, and hence all singing in each segment is completely transcribed.
The default is `pure`, i.e. `load_dataset("jamendolyrics/jam-alt-lines", split="test")` will only contain lines without overlap.
Use `load_dataset("jamendolyrics/jam-alt-lines", "merged", split="test")` to include segments corresponding to overlapping line groups.
Each flavor is further divided into subsets by language: `pure-en`, `pure-es`, `pure-de`, `pure-fr`, `merged-en`, `merged-es`, `merged-de`, `merged-fr`.
(This is done according to the [`language` column](#columns), which reflects the language of each _line_.)
## Columns
- `song_name`: the name of the song (`name` in Jam-ALT)
- `audio`: the audio of the segment
- `text`: the transcript of the segment
- `language`: the language of the segment; this can be different from the language of the song, but only if the line is _entirely and clearly_ in a different language
- `song_language`: the language of the song (`language` in Jam-ALT)
- `line_indices`: the indices of the lines included in the segment
- `start_time`: the start time of the segment in the song
- `end_time`: the end time of the segment in the song
- `merged`: true if the segment is composed of a group of overlapping lines
- `artist`, `title`, `genre`, `license_type`: song metadata
## Dataset creation process
The dataset was generated quasi-automatically from Jam-ALT using the [`create_dataset.py`](create_dataset.py) and [`fix_languages.py`](fix_languages.py) scripts.
1. The `start`, `end`, and `text` of each lyrics line was extracted.
2. Overlaps were handled as follows:
   - Groups of lines with large overlaps (> 0.2 s) were removed (`pure` flavor) or merged to form a single segment (`merged` flavor).
   - Small overlaps (≤ 0.2 s) were reduced (to at most 0.1 s) by shortening the first of each pair of overlapping segments.
3. A padding of up to 0.5 s was added on each side of a segment where possible without creating or increasing overlaps. This was done to counteract inaccuracies in the original timestamps.
4. The audio between the adjusted start and end time was extracted.
5. The language of the segment was estimated from the text; if it did not match the language of the song with sufficient confidence, it was checked and corrected manually. The language was changed from the language of the song only if the segment was _entirely and clearly_ in one of the other four languages of the dataset (`en`, `es`, `de`, `fr`); otherwise, it was kept as the language of the song.
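The overlap-handling rules in step 2 can be sketched as follows. This is an illustrative reimplementation, not the actual `create_dataset.py` code; the thresholds follow the text above:

```python
# Illustrative sketch of the overlap-handling rules, not the actual
# create_dataset.py logic.

def group_overlapping(lines, large_overlap=0.2):
    """Partition (start, end) lines into transitively overlapping groups:
    a line joins the current group if it overlaps it by more than
    `large_overlap` seconds."""
    groups = []
    group_end = float("-inf")
    for start, end in sorted(lines):
        if groups and group_end - start > large_overlap:
            groups[-1].append((start, end))
            group_end = max(group_end, end)
        else:
            groups.append([(start, end)])
            group_end = end
    return groups


def reduce_small_overlaps(segments, max_overlap=0.1):
    """Shorten the earlier of each pair of overlapping segments so that
    the residual overlap is at most `max_overlap` seconds."""
    segments = sorted(segments)
    out = []
    for i, (start, end) in enumerate(segments):
        if i + 1 < len(segments):
            next_start = segments[i + 1][0]
            if end - next_start > max_overlap:
                end = next_start + max_overlap
        out.append((start, end))
    return out
```

In the `pure` flavor, groups with more than one line are discarded; in the `merged` flavor, each group becomes a single segment spanning all of its lines.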
## Citation
When using the dataset, please cite the Jam-ALT [ISMIR paper](https://doi.org/10.5281/zenodo.14877443) as well as the [ICME workshop paper](https://arxiv.org/abs/2506.15514), which introduced the line-level timings.
Consider also citing the [JamendoLyrics paper](https://arxiv.org/abs/2306.07744).
```bibtex
@inproceedings{cifka-2024-jam-alt,
author = {Ond{\v{r}}ej C{\'{\i}}fka and
Hendrik Schreiber and
Luke Miner and
Fabian{-}Robert St{\"{o}}ter},
title = {Lyrics Transcription for Humans: {A} Readability-Aware Benchmark},
booktitle = {Proceedings of the 25th International Society for
Music Information Retrieval Conference},
pages = {737--744},
year = 2024,
publisher = {ISMIR},
doi = {10.5281/ZENODO.14877443},
url = {https://doi.org/10.5281/zenodo.14877443}
}
@inproceedings{syed-2025-mss-alt,
author = {Jaza Syed and
Ivan Meresman-Higgs and
Ond{\v{r}}ej C{\'{\i}}fka and
Mark Sandler},
title = {Exploiting Music Source Separation for Automatic Lyrics Transcription with {Whisper}},
booktitle = {2025 {IEEE} International Conference on Multimedia and Expo Workshops (ICMEW)},
publisher = {IEEE},
year = {2025}
}
@inproceedings{durand-2023-contrastive,
author={Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
booktitle={2023 {IEEE} International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages},
year={2023},
pages={1--5},
address={Rhodes Island, Greece},
doi={10.1109/ICASSP49357.2023.10096725}
}
```