ChuxiJ committed
Commit 8b3373c · verified · 1 Parent(s): 65407b3

Update README.md

Files changed (1)
  1. README.md +15 -6
README.md CHANGED
@@ -15,8 +15,7 @@ tags:
  <a href="https://huggingface.co/collections/ACE-Step/ace-step-15">Hugging Face</a> |
  <a href="https://modelscope.cn/models/ACE-Step/ACE-Step-v1-5">ModelScope</a> |
  <a href="https://huggingface.co/spaces/ACE-Step/Ace-Step-v1.5">Space Demo</a> |
- <a href="https://discord.gg/PeWDxrkdj7">Discord</a> |
- <a href="">Technical Report</a>
+ <a href="https://discord.gg/PeWDxrkdj7">Discord</a>
  </p>


@@ -24,13 +23,24 @@ tags:

  ## Model Details

- 🚀 We present ACE-Step v1.5, a highly efficient open-source music foundation model that brings commercial-grade generation to consumer hardware. On commonly used evaluation metrics, ACE-Step v1.5 achieves quality beyond most commercial music models while remaining extremely fast—under 2 seconds per full song on an A100 and under 10 seconds on an RTX 3090. The model runs locally with less than 4GB of VRAM, and supports lightweight personalization: users can train a LoRA from just a few songs to capture their own style.
+ 🚀 **ACE-Step v1.5** is a highly efficient open-source music foundation model designed to bring commercial-grade music generation to consumer hardware.
+
+ ### Key Features
+
+ * **💰 Commercial-Ready:** Unlike many models trained on ambiguous datasets, ACE-Step v1.5 is designed for creators. You can strictly use the generated music for **commercial purposes**.
+ * **📚 Safe & Robust Training Data:** The model is trained on a massive, legally compliant dataset consisting of:
+   * **Licensed Data:** Professionally licensed music tracks.
+   * **Royalty-Free / No-Copyright Data:** A vast collection of public domain and royalty-free music.
+   * **Synthetic Data:** High-quality audio generated via advanced MIDI-to-Audio conversion.
+ * **⚡ Extreme Speed:** Generates a full song in under 2 seconds on an A100 and under 10 seconds on an RTX 3090.
+ * **🖥️ Consumer Hardware Friendly:** Runs locally with less than 4GB of VRAM.
+
+ ### Technical Capabilities

  🌉 At its core lies a novel hybrid architecture where the Language Model (LM) functions as an omni-capable planner: it transforms simple user queries into comprehensive song blueprints—scaling from short loops to 10-minute compositions—while synthesizing metadata, lyrics, and captions via Chain-of-Thought to guide the Diffusion Transformer (DiT). ⚡ Uniquely, this alignment is achieved through intrinsic reinforcement learning relying solely on the model's internal mechanisms, thereby eliminating the biases inherent in external reward models or human preferences. 🎚️

  🔮 Beyond standard synthesis, ACE-Step v1.5 unifies precise stylistic control with versatile editing capabilities—such as cover generation, repainting, and vocal-to-BGM conversion—while maintaining strict adherence to prompts across 50+ languages. This paves the way for powerful tools that seamlessly integrate into the creative workflows of music artists, producers, and content creators. 🎸

-
  - **Developed by:** [ACE-STEP]
  - **Model type:** [Text2Music]
  - **Language(s):** [50+ languages]
@@ -85,5 +95,4 @@ If you find this project useful for your research, please consider citing:
  howpublished={\url{https://github.com/ace-step/ACE-Step-1.5}},
  year={2026},
  note={GitHub repository}
- }
- ```
+ }
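
The two-stage flow described in the updated Model Details section (an LM planner that drafts a song blueprint, which then conditions a Diffusion Transformer) can be pictured with the rough sketch below. This is illustrative only; the names `SongBlueprint`, `plan_song`, and `render_audio` are hypothetical placeholders and are not part of the actual ACE-Step v1.5 codebase or API.

```python
# Rough sketch of the two-stage generation flow described in the README.
# NOTE: every name below (SongBlueprint, plan_song, render_audio) is a
# hypothetical placeholder, not the real ACE-Step v1.5 API.
from dataclasses import dataclass, field


@dataclass
class SongBlueprint:
    """Plan produced by the LM stage before any audio is synthesized."""
    duration_sec: float                           # from a short loop up to ~10 minutes
    metadata: dict = field(default_factory=dict)  # genre, bpm, key, structure, ...
    lyrics: str = ""                              # lyrics written or expanded by the LM
    caption: str = ""                             # dense caption that conditions the DiT


def plan_song(user_query: str) -> SongBlueprint:
    """Stage 1 (hypothetical): the LM planner expands a short user query into a
    full blueprint, reasoning over structure, lyrics, and captions (Chain-of-Thought)."""
    # A real system would call the language model here; this stub only shows the shape.
    return SongBlueprint(
        duration_sec=180.0,
        metadata={"genre": "synthwave", "bpm": 110, "key": "A minor"},
        lyrics="[verse] ...\n[chorus] ...",
        caption=f"A 3-minute synthwave track matching the request: {user_query}",
    )


def render_audio(blueprint: SongBlueprint) -> bytes:
    """Stage 2 (hypothetical): the Diffusion Transformer renders the blueprint into
    a waveform; this is the step the README times at under 2 seconds on an A100."""
    return b""  # placeholder for the generated audio bytes


if __name__ == "__main__":
    plan = plan_song("an upbeat synthwave song about night driving")
    audio = render_audio(plan)
    print(plan.caption, len(audio))
```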