---
base_model: loubb/aria-medium-base
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:loubb/aria-medium-base
- lora
- transformers
license: apache-2.0
---
# Model

## Model Description

LoRA adapter for accompaniment generation, based on [Aria-Medium-Base](https://huggingface.co/loubb/aria-medium-base) and fine-tuned on ~30,000 (treble-clef, full-song) pairs. Try the model here: https://huggingface.co/spaces/xingjianll/symbolic-music-gen
## Get Started
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from ariautils.midi import MidiDict  # MidiDict comes from the aria-utils package

# Load the Aria tokenizer (custom code shipped with the base model).
tokenizer = AutoTokenizer.from_pretrained(
    "loubb/aria-medium-base",
    trust_remote_code=True,
    add_eos_token=True,
    add_dim_token=False,
)

# Tokenize the treble-clef (melody) MIDI file used as the prompt.
midi_dict = MidiDict.from_midi("input_midi_path")
tokens = tokenizer.tokenize(midi_dict, add_eos_token=True, add_dim_token=False)
token_ids = tokenizer._tokenizer.encode(tokens)
input_ids = torch.tensor([token_ids], device="cpu")

# Load the base model and apply the accompaniment LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(
    "loubb/aria-medium-base", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, "xingjianll/aria-accompaniment")

# Sample a continuation (the accompaniment) from the prompt.
continuation = model.generate(
    input_ids,
    max_length=1600,
    do_sample=True,
    temperature=1.0,
    top_p=0.95,
    use_cache=True,
)

# Decode only the newly generated tokens and write the result to MIDI.
midi_dict_output = tokenizer.decode(continuation[0][input_ids.shape[1]:].tolist())
midi_dict_output.to_midi().save("out_midi_path")
```
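The `generate` call above combines temperature scaling with nucleus (top-p) sampling. As a rough, self-contained illustration of what `top_p=0.95` does (toy logits, not the model's actual output), the filter keeps only the smallest set of highest-probability tokens whose cumulative probability exceeds 0.95 and masks the rest:

```python
import torch

def top_p_filter(logits: torch.Tensor, top_p: float) -> torch.Tensor:
    """Mask (set to -inf) logits outside the smallest set of tokens whose
    cumulative probability exceeds top_p, i.e. nucleus sampling."""
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cumprobs = torch.cumsum(probs, dim=-1)
    # Drop a token if the cumulative mass *before* it already exceeds top_p.
    drop = cumprobs - probs > top_p
    sorted_logits[drop] = float("-inf")
    # Scatter the filtered logits back into their original positions.
    filtered = torch.full_like(logits, float("-inf"))
    filtered.scatter_(0, sorted_idx, sorted_logits)
    return filtered

# Toy vocabulary of 4 tokens with one dominant logit.
logits = torch.tensor([4.0, 2.0, 1.0, -3.0])
filtered = top_p_filter(logits, top_p=0.95)
probs = torch.softmax(filtered, dim=-1)  # low-probability tokens now get 0
```

Raising `top_p` toward 1.0 admits more of the tail (more varied accompaniments); lowering it makes sampling more conservative.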