---
language:
- en
license: cc0-1.0
task_categories:
- image-feature-extraction
tags:
- omr
- sheet-music
- music-notation
- public-domain
- benchmark
pretty_name: Muse OMR Benchmark
size_categories:
- 1K<n<10K
---

# Muse OMR Benchmark

## What this is

A small, clean benchmark dataset for **OMR (Optical Music Recognition)**: recognizing music notation from images and PDFs.

It contains **1077 pairs**:

- a symbolic music score (the “ground truth”; see dataset fields below)
- a corresponding **PDF** rendering with **data augmentation** applied

All underlying works are in the **Public Domain**.

## Why it exists

OMR is often evaluated on private or inconsistent datasets. This dataset aims to provide the community with a practical, reproducible, public benchmark.
## What’s inside |
|
|
|
|
|
Each PDF is generated from our own catalog of PD scores and then augmented to simulate real-world scans: |
|
|
- ink blobs / stains |
|
|
- scratches / wear |
|
|
- crumpled or textured paper |
|
|
- rotation / skew |
|
|
- other visual noise |
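
To give a concrete picture, here is a minimal Python sketch of the *kind* of distortions listed above, using Pillow and NumPy. This is not the actual generation pipeline used for the dataset; the file name `page.png` and all parameter ranges are illustrative assumptions.

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

def augment_page(img: Image.Image) -> Image.Image:
    """Apply scan-like distortions: slight rotation, ink blobs, and pixel noise."""
    # Small random rotation/skew; fill the revealed corners with white.
    angle = rng.uniform(-2.0, 2.0)
    img = img.convert("L").rotate(angle, expand=True, fillcolor=255)

    arr = np.asarray(img).astype(np.float32)

    # A few random dark discs to mimic ink blobs / stains.
    h, w = arr.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(3):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(5, 20)
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2
        arr[mask] *= rng.uniform(0.2, 0.6)  # darken inside the disc

    # Mild Gaussian noise to mimic sensor grain / paper texture.
    arr += rng.normal(0.0, 6.0, size=arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

page = Image.open("page.png")  # hypothetical rasterized PDF page
augment_page(page).save("page_augmented.png")
```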

## Benchmark Code

Evaluation code is available in the official repository: https://github.com/musescore/omr_benchmark
## Dataset structure |
|
|
The dataset is distributed as **pairs**. Typical fields: |
|
|
|
|
|
- `id`: unique sample id |
|
|
- `pdf_image`: augmented PDF file |
|
|
- `score`: symbolic reference in MuseScore Studio file format for evaluation |
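
A minimal sketch of pairing the files locally, assuming a flat on-disk layout with matching file stems; the actual structure may differ, so treat the paths below as hypothetical:

```python
from pathlib import Path

# Hypothetical local layout after downloading the dataset:
#   data/<id>.pdf   - augmented PDF rendering
#   data/<id>.mscz  - symbolic MuseScore Studio reference
DATA_DIR = Path("data")

pairs = []
for pdf_path in sorted(DATA_DIR.glob("*.pdf")):
    score_path = pdf_path.with_suffix(".mscz")
    if score_path.exists():
        pairs.append({"id": pdf_path.stem, "pdf_image": pdf_path, "score": score_path})

print(f"found {len(pairs)} pairs")  # expected: 1077 for the full benchmark
```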

## License

Dataset content is released under **CC0-1.0** (no restrictions; attribution appreciated).
## Citation |
|
|
If you use this dataset in a paper or a public benchmark, please cite: |
|
|
|
|
|
```bibtex |
|
|
@dataset{pd_omr_benchmark, |
|
|
title = {Muse OMR Benchmark}, |
|
|
author = {Vasily Pereverzev and Kristina Abdullina}, |
|
|
year = {2025}, |
|
|
} |