---
license: apache-2.0
tags:
- code
- music
---
|
|
# From Generality to Mastery: Composer-Style Conditioned Music Generation
|
|
Trained model weights and training datasets for the paper:
|
|
* Mingyang Yao and Ke Chen

"[From Generality to Mastery: Composer-Style Symbolic Music Generation via Large-Scale Pre-training](https://arxiv.org/abs/2506.17497)."

_Conference on AI Music Creativity (AIMC)_, 2025
|
|
|
|
|
**Note:** Please find project details and usage instructions at our [GitHub repo](https://github.com/AndyWeasley2004/Generality-to-Mastery).
|
|
|
|
|
## Model Architecture
|
|
### "Generality" Stage
|
|
The model learns **general** musical patterns and knowledge from diverse genres of music.
|
|
- Model backbone: 12-layer Transformer with relative positional encoding
|
|
- Trainable parameters: 39.6M
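
To illustrate how relative positional encoding can enter a Transformer's self-attention, here is a minimal sketch of a Shaw-style learned relative-position bias. The specific scheme and the sizes (`max_dist`, `n_heads`) are illustrative assumptions for this sketch, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class RelPosBias(nn.Module):
    """Learned relative-position bias added to attention logits.

    Each (query, key) pair is scored by its clipped relative distance,
    looked up in a small embedding table (one value per attention head).
    The exact scheme here is an assumption, not the paper's design.
    """
    def __init__(self, max_dist: int, n_heads: int):
        super().__init__()
        self.max_dist = max_dist
        # Distances are clipped to [-max_dist, max_dist] -> 2*max_dist+1 buckets.
        self.bias = nn.Embedding(2 * max_dist + 1, n_heads)

    def forward(self, seq_len: int) -> torch.Tensor:
        pos = torch.arange(seq_len)
        # rel[i, j] = clipped distance from query i to key j, shifted to >= 0.
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist) + self.max_dist
        # (seq, seq, heads) -> (heads, seq, seq), ready to add to attention scores.
        return self.bias(rel).permute(2, 0, 1)
```

The returned tensor would be added to the pre-softmax attention scores of each layer, so attention depends on token distance rather than absolute position.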
|
|
|
|
|
### "Mastery" Stage
|
|
The model adapts its knowledge to specific composers' characteristics.
|
|
- Model backbone: 12-layer Transformer with relative positional encoding, plus adapter modules inserted after every two Transformer layers
|
|
- Trainable parameters: 46M
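
A common way to realize such adapter insertion is a small bottleneck layer with a residual connection (Houlsby-style). The sketch below assumes illustrative sizes (`d_model=512`, `bottleneck=64`) and a generic bottleneck design rather than the paper's actual configuration; only the placement (after every two layers of a 12-layer stack) comes from this card.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pre-trained representation intact.
        return x + self.up(self.act(self.down(x)))

# Illustrative sizes (not from the paper): attach one adapter after every
# second layer of a 12-layer stack, as described above.
d_model, bottleneck, n_layers = 512, 64, 12
adapters = {i: Adapter(d_model, bottleneck)
            for i in range(n_layers) if (i + 1) % 2 == 0}
```

During the mastery stage, only the adapter (and any composer-conditioning) parameters would need to be trained, which explains the modest growth from 39.6M to 46M trainable parameters.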
|
|
|
|
|
## Citation
|
|
If you find this project useful, please cite our paper:
|
|
```
@inproceedings{generalitymastery2025,
  author    = {Mingyang Yao and Ke Chen},
  title     = {From Generality to Mastery: Composer-Style Symbolic Music Generation via Large-Scale Pre-training},
  booktitle = {Proceedings of the AI Music Creativity, {AIMC}},
  year      = {2025}
}
```