---
license: mit
pipeline_tag: text-to-music
tags:
- music
---

<h1 align="center">ACE-Step 1.5</h1>
<h1 align="center">Pushing the Boundaries of Open-Source Music Generation</h1>
<p align="center">
  <a href="https://ace-step.github.io/ace-step-v1.5.github.io/">Project</a> |
  <a href="https://huggingface.co/collections/ACE-Step/ace-step-15">Hugging Face</a> |
  <a href="https://modelscope.cn/models/ACE-Step/ACE-Step-v1-5">ModelScope</a> |
  <a href="https://huggingface.co/spaces/ACE-Step/Ace-Step-v1.5">Space Demo</a> |
  <a href="https://discord.gg/PeWDxrkdj7">Discord</a> |
  <a href="">Technical Report</a>
</p>

![ACE-Step Framework](https://raw.githubusercontent.com/ace-step/ACE-Step-1.5/main/assets/acestep_v15_logo.png)

## Model Details

🚀 We present ACE-Step v1.5, a highly efficient open-source music foundation model that brings commercial-grade generation to consumer hardware. On commonly used evaluation metrics, ACE-Step v1.5 achieves quality beyond most commercial music models while remaining extremely fast: under 2 seconds per full song on an A100 and under 10 seconds on an RTX 3090. The model runs locally in less than 4 GB of VRAM and supports lightweight personalization: users can train a LoRA from just a few songs to capture their own style.

🌉 At its core lies a hybrid architecture in which the Language Model (LM) functions as an omni-capable planner: it transforms simple user queries into comprehensive song blueprints, scaling from short loops to 10-minute compositions, while synthesizing metadata, lyrics, and captions via Chain-of-Thought to guide the Diffusion Transformer (DiT). ⚡ This alignment is achieved through intrinsic reinforcement learning that relies solely on the model's internal mechanisms, eliminating the biases inherent in external reward models or human preferences. 🎚️

🔮 Beyond standard synthesis, ACE-Step v1.5 unifies precise stylistic control with versatile editing capabilities, such as cover generation, repainting, and vocal-to-BGM conversion, while maintaining strict adherence to prompts in 50+ languages. This paves the way for tools that integrate seamlessly into the creative workflows of music artists, producers, and content creators. 🎸

- **Developed by:** ACE-Step
- **Model type:** Text2Music
- **Language(s):** 50+ languages
- **License:** MIT

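The planner-then-DiT flow described above can be pictured with a toy blueprint. Everything in this sketch is illustrative: the field names and values are our own invention, not the model's actual schema.

```python
import json

# Hypothetical song blueprint an LM planner might emit before the DiT stage.
# Field names are illustrative only; the real schema is not documented here.
blueprint = {
    "duration_sec": 210,
    "metadata": {"genre": "synthwave", "bpm": 104, "key": "A minor"},
    "caption": "dreamy retro synth track with a driving bassline",
    "lyrics": ["[verse] neon lights across the bay", "[chorus] we ride the night"],
}

# The DiT stage would consume this as conditioning; here we just round-trip it.
serialized = json.dumps(blueprint, indent=2)
restored = json.loads(serialized)
assert restored == blueprint
```

The point is only that the LM's output is structured planning data (metadata, lyrics, captions) rather than audio; the DiT then renders audio conditioned on that plan.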
## Evaluation

![Evaluation](https://raw.githubusercontent.com/ace-step/ACE-Step-1.5/main/assets/evaluation.png)

## 🏗️ Architecture

![architecture](https://raw.githubusercontent.com/ace-step/ACE-Step-1.5/main/assets/acestep_v15_architecture.png)

## 🦁 Model Zoo

![Model Zoo](https://raw.githubusercontent.com/ace-step/ACE-Step-1.5/main/assets/acestep_v15_modelzoo.png)

### DiT Models
| DiT Model | Pre-Training | SFT | RL | CFG | Steps | Reference audio | Text2Music | Cover | Repaint | Extract | Lego | Complete | Quality | Diversity | Fine-Tunability | Hugging Face |
|-----------|:------------:|:---:|:--:|:---:|:-----:|:---------------:|:----------:|:-----:|:-------:|:-------:|:----:|:--------:|:-------:|:---------:|:---------------:|--------------|
| `acestep-v15-base` | ✅ | ❌ | ❌ | ✅ | 50 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Medium | High | Easy | [Link](https://huggingface.co/ACE-Step/acestep-v15-base) |
| `acestep-v15-sft` | ✅ | ✅ | ❌ | ✅ | 50 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | High | Medium | Easy | [Link](https://huggingface.co/ACE-Step/acestep-v15-sft) |
| `acestep-v15-turbo` | ✅ | ✅ | ❌ | ❌ | 8 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | Very High | Medium | Medium | [Link](https://huggingface.co/ACE-Step/Ace-Step1.5) |
| `acestep-v15-turbo-rl` | ✅ | ✅ | ✅ | ❌ | 8 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | Very High | Medium | Medium | To be released |
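The table above can also be read as a small decision rule. A minimal sketch, using the released checkpoint names and attributes from the table; the selection heuristic itself is our own, not an official recommendation:

```python
# Encode the released DiT checkpoints from the Model Zoo table
# (turbo-rl is omitted because it is not yet released).
DIT_MODELS = {
    "acestep-v15-base":  {"steps": 50, "cfg": True,  "quality": "Medium",
                          "diversity": "High",   "finetune": "Easy"},
    "acestep-v15-sft":   {"steps": 50, "cfg": True,  "quality": "High",
                          "diversity": "Medium", "finetune": "Easy"},
    "acestep-v15-turbo": {"steps": 8,  "cfg": False, "quality": "Very High",
                          "diversity": "Medium", "finetune": "Medium"},
}

def pick_dit(priority: str) -> str:
    """Illustrative heuristic: 'speed' favors fewest diffusion steps,
    'finetune' favors the first checkpoint marked Easy to fine-tune."""
    if priority == "speed":
        return min(DIT_MODELS, key=lambda name: DIT_MODELS[name]["steps"])
    if priority == "finetune":
        return next(n for n, m in DIT_MODELS.items() if m["finetune"] == "Easy")
    raise ValueError(f"unknown priority: {priority!r}")
```

For example, `pick_dit("speed")` selects the 8-step turbo checkpoint, while `pick_dit("finetune")` selects the base checkpoint, matching the table's Fine-Tunability column.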
### LM Models
| LM Model | Pretrained from | Pre-Training | SFT | RL | CoT metadata | Query rewrite | Audio Understanding | Composition Capability | Copy Melody | Hugging Face |
|----------|-----------------|:------------:|:---:|:--:|:------------:|:-------------:|:-------------------:|:----------------------:|:-----------:|--------------|
| `acestep-5Hz-lm-0.6B` | Qwen3-0.6B | ✅ | ✅ | ✅ | ✅ | ✅ | Medium | Medium | Weak | ✅ |
| `acestep-5Hz-lm-1.7B` | Qwen3-1.7B | ✅ | ✅ | ✅ | ✅ | ✅ | Medium | Medium | Medium | ✅ |
| `acestep-5Hz-lm-4B` | Qwen3-4B | ✅ | ✅ | ✅ | ✅ | ✅ | Strong | Strong | Strong | ✅ |
## 🙏 Acknowledgements
This project is co-led by ACE Studio and StepFun.
## 📖 Citation
If you find this project useful for your research, please consider citing:

```bibtex
@misc{gong2026acestep,
  title={ACE-Step 1.5: Pushing the Boundaries of Open-Source Music Generation},
  author={Junmin Gong and Yulin Song and Wenxiao Zhao and Sen Wang and Shengyuan Xu and Jing Guo},
  howpublished={\url{https://github.com/ace-step/ACE-Step-1.5}},
  year={2026},
  note={GitHub repository}
}
```