AudioCraft
AudioCraft is a PyTorch library for deep learning research on audio generation. AudioCraft contains inference and training code for state-of-the-art AI generative models producing high-quality audio, including AudioGen and MusicGen.
For MelodyFlow-specific local maintenance notes, including fork workflow and PyTorch 2.6 checkpoint compatibility guidance, see docs/MELODYFLOW.md.
Read This First For Local MelodyFlow Inference
Getting MelodyFlow to run locally required a few non-obvious fixes. If you are trying to run the released checkpoints outside the hosted demo, start here before debugging prompts or sampler settings.
What actually mattered:
- Use the published `MelodyFlow` class from the official MelodyFlow Space or a maintained fork of that codebase.
- Do not try to reconstruct MelodyFlow from an older generic AudioCraft checkout that predates MelodyFlow support.
- On PyTorch 2.6+, trusted local checkpoint loads may require `torch.load(..., weights_only=False)`.
- If local audio turns into buzz, hum, or other garbage, verify the code path and checkpoint-loading behavior before touching solver settings.
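The PyTorch 2.6 cutoff above can be gated programmatically rather than remembered. A minimal sketch, assuming only what the notes state (that the `weights_only` default changed in PyTorch 2.6); the helper name is hypothetical, not part of AudioCraft:

```python
# Hypothetical helper: decide from a version string whether torch.load
# needs an explicit weights_only=False for trusted local checkpoints.
# Assumption (from the notes above): the default flipped in PyTorch 2.6.
def needs_weights_only_false(torch_version: str) -> bool:
    # Version strings can look like "2.6.0+cu124"; keep only major.minor.
    major, minor = (int(part) for part in torch_version.split("+")[0].split(".")[:2])
    return (major, minor) >= (2, 6)
```

With `torch` installed, `needs_weights_only_false(torch.__version__)` tells you whether the explicit flag is required on your environment.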
Where these findings are documented:
- operational notes and fork workflow: docs/MELODYFLOW.md
- hub-facing model card notes: model_cards/MELODYFLOW_MODEL_CARD.md
Known Good Local Setup
This is the local setup that actually produced usable text-to-music output during recovery work:
- a Python environment dedicated to MelodyFlow inference
- the released checkpoint directory, for example `facebook/melodyflow-t24-30secs`
- the official MelodyFlow Space checkout or a maintained fork of that checkout
- code that imports `MelodyFlow` from that Space checkout and loads checkpoints with `weights_only=False` when required on PyTorch 2.6+
Operationally, the local flow looked like this:
- Set a model directory pointing at the released checkpoint folder.
- Set a code checkout pointing at the official Space or your maintained fork.
- Import `MelodyFlow` via `from audiocraft.models import MelodyFlow` from that checkout.
- Call `MelodyFlow.get_pretrained(...)` against the local checkpoint directory.
- Generate audio only after the code path and checkpoint-loading path are confirmed correct.
Example pattern:
```python
import sys
import torch
from pathlib import Path

# Path to the official MelodyFlow Space checkout (or your maintained fork).
space_repo = Path("path/to/MelodyFlowSpace-or-your-fork")
# Path to the released checkpoint directory.
model_dir = Path("path/to/melodyflow-t24-30secs")

# Make sure the import below resolves against the Space checkout,
# not an older generic AudioCraft install.
sys.path.insert(0, str(space_repo))

# Temporarily force weights_only=False for this trusted local checkpoint
# (PyTorch 2.6+ defaults to weights_only=True, which breaks these loads).
original_load = torch.load

def trusted_load(*args, **kwargs):
    kwargs.setdefault("weights_only", False)
    return original_load(*args, **kwargs)

torch.load = trusted_load
try:
    from audiocraft.models import MelodyFlow
    model = MelodyFlow.get_pretrained(str(model_dir), device="cuda")
finally:
    # Restore the original loader so the patch doesn't leak.
    torch.load = original_load
```
If your local run does not resemble that shape, fix the setup first and only then investigate prompts or solver tuning.
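The monkey-patch in the example above can also be packaged as a reusable context manager, so the patched loader cannot leak past the load even if `get_pretrained` raises. This is a sketch of the same idea, not part of the AudioCraft API:

```python
import contextlib

@contextlib.contextmanager
def trusted_checkpoint_load(torch_module):
    """Temporarily default weights_only=False on torch_module.load.

    Only use this for checkpoints you trust: weights_only=False allows
    arbitrary code execution during unpickling.
    """
    original_load = torch_module.load

    def trusted_load(*args, **kwargs):
        kwargs.setdefault("weights_only", False)
        return original_load(*args, **kwargs)

    torch_module.load = trusted_load
    try:
        yield
    finally:
        # Always restore the original loader.
        torch_module.load = original_load
```

Usage would then be `with trusted_checkpoint_load(torch): model = MelodyFlow.get_pretrained(...)`, with an explicit `weights_only=True` still honored for any untrusted load inside the block.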
Installation
AudioCraft requires Python 3.9 and PyTorch 2.1.0. To install AudioCraft, you can run the following:
```shell
# Best to make sure you have torch installed first, in particular before installing xformers.
# Don't run this if you already have PyTorch installed.
python -m pip install 'torch==2.1.0'
# You might need the following before trying to install the packages.
python -m pip install setuptools wheel
# Then proceed to one of the following:
python -m pip install -U audiocraft  # stable release
python -m pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft  # bleeding edge
python -m pip install -e .  # or if you cloned the repo locally (mandatory if you want to train)
python -m pip install -e '.[wm]'  # if you want to train a watermarking model
```
We also recommend having ffmpeg installed, either through your system or Anaconda:
```shell
sudo apt-get install ffmpeg
# Or if you are using Anaconda or Miniconda:
conda install "ffmpeg<5" -c conda-forge
```
Models
At the moment, AudioCraft contains the training code and inference code for:
- MusicGen: A state-of-the-art controllable text-to-music model.
- AudioGen: A state-of-the-art text-to-sound model.
- EnCodec: A state-of-the-art high fidelity neural audio codec.
- Multi Band Diffusion: An EnCodec compatible decoder using diffusion.
- MAGNeT: A state-of-the-art non-autoregressive model for text-to-music and text-to-sound.
- AudioSeal: A state-of-the-art audio watermarking model.
Training code
AudioCraft contains PyTorch components for deep learning research in audio and training pipelines for the developed models. For a general introduction of AudioCraft design principles and instructions to develop your own training pipeline, refer to the AudioCraft training documentation.
For reproducing existing work and using the developed training pipelines, refer to the instructions for each specific model that provides pointers to configuration, example grids and model/task-specific information and FAQ.
API documentation
We provide some API documentation for AudioCraft.
FAQ
Is the training code available?
Yes! We provide the training code for EnCodec, MusicGen and Multi Band Diffusion.
Where are the models stored?
Hugging Face stores the models in a specific location, which can be overridden by setting the `AUDIOCRAFT_CACHE_DIR` environment variable for the AudioCraft models.
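For example (the cache path below is illustrative, not a default):

```shell
# Point AudioCraft model downloads at a custom cache directory.
export AUDIOCRAFT_CACHE_DIR="$HOME/audiocraft_cache"
```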
In order to change the cache location of the other Hugging Face models, please check out the Hugging Face Transformers documentation for the cache setup.
Finally, if you use a model that relies on Demucs (e.g. musicgen-melody) and want to change the download location for Demucs, refer to the Torch Hub documentation.
License
- The code in this repository is released under the MIT license as found in the LICENSE file.
- The model weights in this repository are released under the CC-BY-NC 4.0 license as found in the LICENSE_weights file.
Citation
For the general framework of AudioCraft, please cite the following.
```bibtex
@inproceedings{copet2023simple,
  title={Simple and Controllable Music Generation},
  author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
}
```
When referring to a specific model, please cite as mentioned in the model-specific README, e.g., ./docs/MUSICGEN.md, ./docs/AUDIOGEN.md, etc.