---
license: cc-by-nc-sa-4.0
datasets:
- SyMuPe/PERiScoPe
tags:
- music
- piano
- midi
- expressive-performance
- transformer
- flow-matching
---

# SyMuPe: PianoFlow

**PianoFlow-base** is the flagship generative model of the SyMuPe framework. It utilizes **Conditional Flow Matching (CFM)** to render high-fidelity symbolic expressive piano performances from musical scores.

Introduced in the paper: [**SyMuPe: Affective and Controllable Symbolic Music Performance**](https://arxiv.org/abs/2511.03425).

- **GitHub:** https://github.com/ilya16/SyMuPe
- **Website:** https://ilya16.github.io/SyMuPe
- **Dataset:** https://huggingface.co/datasets/SyMuPe/PERiScoPe

## Architecture

- **Type:** Transformer Encoder
- **Objective:** Conditional Flow Matching (CFM)
- **Inputs:** 
  - **Score features (y):** `Pitch`, `Position`, `PositionShift`, `Duration`
  - **Performance features (x):** `Velocity`, `TimeShift`, `TimeDuration`, `TimeDurationSustain`
  - **Conditioning (c_s):** `Velocity` and `Tempo` score tokens for controlling dynamics and tempo.
- **Outputs:** Probability flow for performance feature values.
- **Training:** Trained for 300,000 iterations on the [PERiScoPe v1.0](https://huggingface.co/datasets/SyMuPe/PERiScoPe) dataset as described in the paper.
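
The CFM objective regresses a velocity field along a probability path from noise to data, conditioned on the score. Below is a minimal NumPy sketch of the commonly used linear (optimal-transport) path and its regression target; the arrays and names are illustrative toy stand-ins, not the SyMuPe API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: x0 ~ noise, x1 ~ ground-truth performance features
# (a batch of 4 notes with 4 performance features each).
x0 = rng.standard_normal((4, 4))
x1 = rng.standard_normal((4, 4))

# Sample a flow time t in [0, 1] per example.
t = rng.uniform(size=(4, 1))

# Linear probability path: interpolate between noise and data.
x_t = (1.0 - t) * x0 + t * x1

# CFM regression target: the constant velocity field of the linear path.
v_target = x1 - x0

# A network v_theta(x_t, t, c) conditioned on score features c would be
# trained with MSE against v_target; here we just verify the path endpoints.
assert np.allclose((1.0 - 0.0) * x0 + 0.0 * x1, x0)  # t = 0 -> noise
assert np.allclose((1.0 - 1.0) * x0 + 1.0 * x1, x1)  # t = 1 -> data
```

At inference time, integrating the learned velocity field from t = 0 to t = 1 transports noise samples to performance-feature values.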

## Quick Start

To use this model, ensure you have the `symupe` library installed (refer to the [GitHub repo](https://github.com/ilya16/SyMuPe) for installation instructions).

```python
import torch
from symusic import Score

from symupe.data.tokenizers import SyMuPe
from symupe.inference import AutoGenerator, perform_score, save_performances
from symupe.models import AutoModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model and tokenizer directly from the Hub
model = AutoModel.from_pretrained("SyMuPe/PianoFlow-base").to(device)
tokenizer = SyMuPe.from_pretrained("SyMuPe/PianoFlow-base")

# Prepare generator for the model
generator = AutoGenerator.from_model(model, tokenizer, device=device)

# Load score MIDI
score_midi = Score("score.mid")

# Perform score MIDI (tokenization is handled inside)
gen_results = perform_score(
    generator=generator,
    score=score_midi,
    use_score_context=True,
    num_samples=8,
    seed=23
)
# gen_results[i] is PerformanceRenderingResult(...) containing:
# - score_midi, score_seq, gen_seq, perf_seq, perf_midi, perf_midi_sus

# Save performed MIDI files in a single directory
save_performances(gen_results, out_dir="samples/pianoflow", save_midi=True)
```

## License

The model weights are distributed under the [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.


## Citation

If you use this model, please cite the paper:

```bibtex
@inproceedings{borovik2025symupe,
  title = {{SyMuPe: Affective and Controllable Symbolic Music Performance}},
  author = {Borovik, Ilya and Gavrilev, Dmitrii and Viro, Vladimir},
  year = {2025},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  pages = {10699--10708},
  doi = {10.1145/3746027.3755871}
}
```