---
language:
- en
license: cc0-1.0
task_categories:
- image-feature-extraction
tags:
- omr
- sheet-music
- music-notation
- public-domain
- benchmark
pretty_name: Muse OMR Benchmark
size_categories:
- 1K<n<10K
---
# Muse OMR Benchmark
## What this is
A small, clean benchmark dataset for **OMR (Optical Music Recognition — recognizing music notation from images/PDFs)**.
It contains **1077 pairs**:
- a symbolic music score (the “ground truth”, see dataset fields below)
- a corresponding **PDF** rendering with **data augmentation** applied
All underlying works are **Public Domain**.
## Why it exists
OMR is often evaluated on private or inconsistent datasets. This dataset aims to provide the community with a practical, reproducible, public benchmark.
## What’s inside
Each PDF is generated from our own catalog of PD scores and then augmented to simulate real-world scans:
- ink blobs / stains
- scratches / wear
- crumpled or textured paper
- rotation / skew
- other visual noise
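The dataset ships with the augmentations already applied, but for intuition, degradations like these can be sketched in a few lines of NumPy. This is an illustrative approximation, not the pipeline used to build the dataset; the blob count, noise level, and skew amount below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_page(page: np.ndarray) -> np.ndarray:
    """Degrade a grayscale page (0 = black, 255 = white) to mimic a scan."""
    out = page.astype(np.float64)
    h, w = out.shape
    # ink blobs / stains: darken a few circular patches
    for _ in range(3):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = int(rng.integers(3, 8))
        yy, xx = np.ogrid[:h, :w]
        out[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] *= 0.3
    # paper texture / sensor noise: additive Gaussian noise
    out += rng.normal(0, 8, size=out.shape)
    # slight skew: shift each row horizontally, increasing down the page
    max_shift = 3
    for y in range(h):
        out[y] = np.roll(out[y], int(max_shift * y / h))
    return np.clip(out, 0, 255).astype(np.uint8)

blank = np.full((64, 64), 255, dtype=np.uint8)  # blank white "page"
noisy = augment_page(blank)
```

Real pipelines typically add more transforms (crumpled-paper warps, texture overlays, rotation by resampling), but the structure is the same: compose randomized degradations on top of a clean rendering.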
## Benchmark Code
See the official repository with evaluation code: https://github.com/musescore/omr_benchmark
## Dataset structure
The dataset is distributed as **pairs**. Typical fields:
- `id`: unique sample id
- `pdf_image`: augmented PDF file
- `score`: the symbolic reference in MuseScore Studio file format, used for evaluation
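A minimal sketch of validating that a sample exposes the fields listed above. The record values here are placeholders, and the dataset's actual on-disk layout may differ; this only illustrates the expected pair structure:

```python
REQUIRED_FIELDS = {"id", "pdf_image", "score"}

def validate_record(record: dict) -> bool:
    """Raise if a benchmark sample is missing any expected field."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

# Placeholder sample mirroring the documented fields.
sample = {
    "id": "0001",             # unique sample id
    "pdf_image": "0001.pdf",  # augmented PDF rendering
    "score": "0001.mscz",     # MuseScore Studio ground-truth file
}
ok = validate_record(sample)
```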
## License
Dataset content is released under **CC0-1.0** (no restrictions; attribution appreciated).
## Citation
If you use this dataset in a paper or a public benchmark, please cite:
```bibtex
@dataset{pd_omr_benchmark,
  title  = {Muse OMR Benchmark},
  author = {Vasily Pereverzev and Kristina Abdullina},
  year   = {2025},
}
```