Upload 4 files

- .gitattributes +1 -0
- README.md +101 -0
- gitattributes +36 -0
- notagen.png +3 -0
- weights_notagenx_p_size_16_p_length_1024_p_layers_20_h_size_1280.pth +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+notagen.png filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,101 @@
---
license: mit
tags:
- music
---

# 🎵 NotaGen: Advancing Musicality in Symbolic Music Generation with Large Language Model Training Paradigms

<p>
  <!-- ArXiv -->
  <a href="https://arxiv.org/abs/2502.18008">
    <img src="https://img.shields.io/badge/NotaGen_Paper-ArXiv-%23B31B1B?logo=arxiv&logoColor=white" alt="Paper">
  </a>
  <!-- GitHub -->
  <a href="https://github.com/ElectricAlexis/NotaGen">
    <img src="https://img.shields.io/badge/NotaGen_Code-GitHub-%23181717?logo=github&logoColor=white" alt="GitHub">
  </a>
  <!-- HuggingFace -->
  <a href="https://huggingface.co/ElectricAlexis/NotaGen">
    <img src="https://img.shields.io/badge/NotaGen_Weights-HuggingFace-%23FFD21F?logo=huggingface&logoColor=white" alt="Weights">
  </a>
  <!-- Web Demo -->
  <a href="https://electricalexis.github.io/notagen-demo/">
    <img src="https://img.shields.io/badge/NotaGen_Demo-Web-%23007ACC?logo=google-chrome&logoColor=white" alt="Demo">
  </a>
</p>

<p align="center">
  <img src="notagen.png" alt="NotaGen" width="50%">
</p>

## 📖 Overview

**NotaGen** is a symbolic music generation model that explores the potential of producing **high-quality classical sheet music**. Inspired by the success of Large Language Models (LLMs), NotaGen adopts a three-stage training paradigm:

- 🧠 **Pre-training** on 1.6M musical pieces
- 🎯 **Fine-tuning** on ~9K classical compositions with `period-composer-instrumentation` prompts
- 🚀 **Reinforcement learning** with our novel **CLaMP-DPO** method (no human annotations or pre-defined rewards required)

Check out our [demo page](https://electricalexis.github.io/notagen-demo/) and enjoy music composed by NotaGen!

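At the core of the CLaMP-DPO stage is the standard DPO preference objective; in CLaMP-DPO the chosen/rejected pairs are ranked automatically by the CLaMP score instead of by human annotators. The sketch below is a generic illustration of that loss, not the authors' implementation — the function name and the β = 0.1 value are my own choices here.

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    """Generic DPO loss on summed log-probs of a preferred (chosen)
    and dispreferred (rejected) sequence. Illustrative sketch only;
    not the CLaMP-DPO code from the NotaGen repository."""
    logits = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    # -log(sigmoid(x)) written stably as log(1 + exp(-x))
    return math.log1p(math.exp(-logits))

# When policy and reference agree, the implied preference is 0.5,
# so the loss is -log(0.5) = log(2)
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))  # ≈ 0.6931
```

The loss falls below log(2) when the policy favors the chosen sequence more than the reference does, which is exactly the direction the preference pairs push the model.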
## ⚙️ Environment Setup

```bash
conda create --name notagen python=3.10
conda activate notagen
conda install pytorch==2.3.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install accelerate
pip install optimum
pip install -r requirements.txt
```

## 🏋️ NotaGen Model Weights

### Pre-training

We provide pre-trained weights of different scales:

| Models | Parameters | Patch-level Decoder Layers | Character-level Decoder Layers | Hidden Size | Patch Length (Context Length) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| [NotaGen-small](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain_p_size_16_p_length_2048_p_layers_12_c_layers_3_h_size_768_lr_0.0002_batch_8.pth) | 110M | 12 | 3 | 768 | 2048 |
| [NotaGen-medium](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain_p_size_16_p_length_2048_p_layers_16_c_layers_3_h_size_1024_lr_0.0001_batch_4.pth) | 244M | 16 | 3 | 1024 | 2048 |
| [NotaGen-large](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain_p_size_16_p_length_1024_p_layers_20_c_layers_6_h_size_1280_lr_0.0001_batch_4.pth) | 516M | 20 | 6 | 1280 | 1024 |

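These hyperparameters are also embedded in the checkpoint filenames as `key_value` pairs (e.g. `p_size_16`, `h_size_1280`). As an unofficial convenience — the helper below is not part of the NotaGen codebase — a small regex scan can recover them:

```python
import re

def parse_ckpt_name(filename: str) -> dict:
    """Hypothetical helper: extract hyperparameters embedded in a
    NotaGen checkpoint filename as `key_value` pairs. Keys absent
    from the name are simply omitted from the result."""
    keys = ["p_size", "p_length", "p_layers", "c_layers", "h_size", "lr", "batch"]
    params = {}
    for key in keys:
        m = re.search(rf"{key}_(\d+(?:\.\d+)?(?:e-?\d+)?)", filename)
        if m:
            value = m.group(1)
            params[key] = float(value) if ("." in value or "e" in value) else int(value)
    return params

name = "weights_notagenx_p_size_16_p_length_1024_p_layers_20_h_size_1280.pth"
print(parse_ckpt_name(name))
# {'p_size': 16, 'p_length': 1024, 'p_layers': 20, 'h_size': 1280}
```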
### Fine-tuning

We fine-tuned NotaGen-large on a corpus of approximately 9K classical pieces. You can download the weights [here](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain-finetune_p_size_16_p_length_1024_p_layers_c_layers_6_20_h_size_1280_lr_1e-05_batch_1.pth).

### Reinforcement Learning

After pre-training and fine-tuning, we optimized NotaGen-large with three iterations of CLaMP-DPO. You can download the weights [here](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain-finetune-RL3_beta_0.1_lambda_10_p_size_16_p_length_1024_p_layers_20_c_layers_6_h_size_1280_lr_1e-06_batch_1.pth).

### 🌟 NotaGen-X

Inspired by DeepSeek-R1, we further optimized NotaGen's training procedure and released an improved version, [NotaGen-X](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagenx_p_size_16_p_length_1024_p_layers_20_h_size_1280.pth). Compared to the version in the paper, NotaGen-X incorporates the following improvements:

- We introduced a post-training stage between pre-training and fine-tuning, refining the model on a classical-style subset of the pre-training dataset.
- We removed key augmentation during fine-tuning, making the instrument ranges of the generated compositions more reasonable.
- After reinforcement learning, we used the resulting checkpoint to gather a new set of post-training data. Starting from the pre-trained checkpoint, we then conducted another round of post-training, fine-tuning, and reinforcement learning.

For the implementation of pre-training, fine-tuning, and reinforcement learning on NotaGen, please see our [GitHub page](https://github.com/ElectricAlexis/NotaGen).

## 📚 Citation

If you find **NotaGen** or **CLaMP-DPO** useful in your work, please cite our paper.

```bibtex
@misc{wang2025notagenadvancingmusicalitysymbolic,
      title={NotaGen: Advancing Musicality in Symbolic Music Generation with Large Language Model Training Paradigms},
      author={Yashan Wang and Shangda Wu and Jianhuai Hu and Xingjian Du and Yueqi Peng and Yongxin Huang and Shuai Fan and Xiaobing Li and Feng Yu and Maosong Sun},
      year={2025},
      eprint={2502.18008},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2502.18008},
}
```

gitattributes
ADDED
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
notagen.png filter=lfs diff=lfs merge=lfs -text
notagen.png
ADDED
Git LFS Details
weights_notagenx_p_size_16_p_length_1024_p_layers_20_h_size_1280.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3df0bc38d05ff8888551a0dfab4928bf028d2b95a45a5ee0f598b88778594cff
size 6189594398
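The committed `.pth` file is a Git LFS pointer: it records only the object's SHA-256 hash and byte size, while the ~6.2 GB weights live in LFS storage. As an unofficial sketch (the helper names are mine, not part of any Git LFS tooling), a downloaded copy can be checked against such a pointer like this:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse the key-value lines of a Git LFS pointer file
    (version / oid / size) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def matches_pointer(path: str, pointer: dict) -> bool:
    """Hash a local file in chunks and compare the digest and byte
    count to the pointer's oid and size fields."""
    algo, _, expected = pointer["oid"].partition(":")
    digest = hashlib.new(algo)
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected and size == int(pointer["size"])

pointer = parse_lfs_pointer("""\
version https://git-lfs.github.com/spec/v1
oid sha256:3df0bc38d05ff8888551a0dfab4928bf028d2b95a45a5ee0f598b88778594cff
size 6189594398""")
print(pointer["size"])  # 6189594398
```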