Add pipeline tag, project page, usage example and models table

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +68 -4
README.md CHANGED
@@ -1,17 +1,81 @@
@@ -1,17 +1,81 @@
  ---
- license: apache-2.0
  language:
  - en
  - zh
  library_name: songbloom
  ---

  ## Introduction
  We propose SongBloom, a novel framework for full-length song generation that leverages an interleaved paradigm of autoregressive sketching and diffusion-based refinement. SongBloom employs an autoregressive diffusion model that combines the high fidelity of diffusion models with the scalability of language models. Specifically, it gradually extends a musical sketch from short to long and refines the details from coarse to fine-grained. The interleaved generation paradigm effectively integrates prior semantic and acoustic context to guide the generation process. Experimental results demonstrate that SongBloom outperforms existing methods across both subjective and objective metrics and achieves performance comparable to state-of-the-art commercial music generation platforms.

- ## Model Configuration
- TODO

  ## Papers
  * [Model Paper](https://huggingface.co/papers/2506.07634)
- * [Github Repo](https://github.com/Cypress-Yang/SongBloom)
  ---
  language:
  - en
  - zh
  library_name: songbloom
+ license: apache-2.0
+ pipeline_tag: text-to-audio
  ---

  ## Introduction
  We propose SongBloom, a novel framework for full-length song generation that leverages an interleaved paradigm of autoregressive sketching and diffusion-based refinement. SongBloom employs an autoregressive diffusion model that combines the high fidelity of diffusion models with the scalability of language models. Specifically, it gradually extends a musical sketch from short to long and refines the details from coarse to fine-grained. The interleaved generation paradigm effectively integrates prior semantic and acoustic context to guide the generation process. Experimental results demonstrate that SongBloom outperforms existing methods across both subjective and objective metrics and achieves performance comparable to state-of-the-art commercial music generation platforms.

+ ## Project Page
+ https://cypress-yang.github.io/SongBloom_demo/
+
+ ## Usage
+
+ ### Prepare the Environment
+
+ ```bash
+ conda create -n SongBloom python==3.8.12
+ conda activate SongBloom
+
+ # yum install libsndfile
+ # pip install torch==2.2.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118  # for a different CUDA version
+ pip install -r requirements.txt
+ ```
+
+ ### Data Preparation
+
+ Prepare a `.jsonl` file, where each line is a JSON object:
+
+ ```json
+ {
+   "idx": "The index of each sample",
+   "lyrics": "The lyrics to be generated",
+   "prompt_wav": "The path of the style prompt audio"
+ }
+ ```
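An input file matching this schema can be written with a few lines of Python — a minimal sketch; the field values and the `prompts/` path are illustrative, not taken from the repository:

```python
import json

# Illustrative sample; replace the values with your own lyrics and prompt path.
samples = [
    {
        "idx": "sample_000",                       # index of the sample
        "lyrics": "[verse] ...",                   # lyrics to be generated
        "prompt_wav": "prompts/style_prompt.wav",  # path to the style prompt audio
    }
]

# One JSON object per line, as the .jsonl format requires.
with open("test.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```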
+
+ An example can be found at [example/test.jsonl](example/test.jsonl).
+
+ The prompt wav should be a 10-second, 48 kHz audio clip.
+
+ The details of the lyric format can be found in [docs/lyric_format.md](docs/lyric_format.md).
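As a sanity check on the 10-second, 48 kHz requirement, a stdlib-only snippet like the following can validate a PCM WAV prompt (the helper name and demo file are illustrative; compressed or non-WAV audio would need a library such as torchaudio instead):

```python
import wave

def check_prompt_wav(path, expected_sr=48000, expected_dur=10.0):
    """Return True if the clip matches the stated prompt requirements."""
    with wave.open(path, "rb") as wf:
        sr = wf.getframerate()
        duration = wf.getnframes() / sr
    return sr == expected_sr and abs(duration - expected_dur) < 0.1

# Demo: synthesize a silent 10 s, 48 kHz, 16-bit mono clip and validate it.
with wave.open("style_prompt.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)  # 16-bit PCM
    wf.setframerate(48000)
    wf.writeframes(b"\x00\x00" * 48000 * 10)

print(check_prompt_wav("style_prompt.wav"))  # True for the synthetic clip
```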
+
+ ### Inference
+
+ ```bash
+ source set_env.sh
+
+ python3 infer.py --input-jsonl example/test.jsonl
+
+ # For GPUs with low VRAM, such as the RTX 4090, set the dtype to bfloat16:
+ python3 infer.py --input-jsonl example/test.jsonl --dtype bfloat16
+
+ # SongBloom also supports flash-attn (optional). To enable it, install
+ # flash-attn (v2.6.3 was used during training) manually and set
+ # os.environ['DISABLE_FLASH_ATTN'] = "0" in infer.py:8.
+ ```
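For scripted batch runs, the command line above can be assembled programmatically — a sketch using a hypothetical helper; only the `--input-jsonl` and `--dtype` flags shown above are assumed to exist:

```python
# build_infer_cmd is a hypothetical helper, not part of the SongBloom repo;
# it only mirrors the infer.py invocations shown in the README.
def build_infer_cmd(input_jsonl, dtype=None):
    cmd = ["python3", "infer.py", "--input-jsonl", input_jsonl]
    if dtype is not None:
        cmd += ["--dtype", dtype]  # e.g. "bfloat16" for low-VRAM GPUs
    return cmd

# The resulting list can be passed to subprocess.run(cmd, check=True).
print(build_infer_cmd("example/test.jsonl", dtype="bfloat16"))
```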
+
+ ## Models
+
+ | Name | Size | Max Length | Prompt type | 🤗 |
+ |---|---|---|---|---|
+ | songbloom_full_150s | 2B | 2m30s | 10s wav | [link](https://huggingface.co/CypressYang/SongBloom) |
+ | songbloom_mulan_150s | 2B | 2m30s | 10s wav / text description | coming soon |
+ | ... | | | | |

  ## Papers
  * [Model Paper](https://huggingface.co/papers/2506.07634)
+ * [Github Repo](https://github.com/Cypress-Yang/SongBloom)
+
+ ## Citation
+
+ ```bibtex
+ @article{yang2025songbloom,
+   title={SongBloom: Coherent Song Generation via Interleaved Autoregressive Sketching and Diffusion Refinement},
+   author={Yang, Chenyu and Wang, Shuai and Chen, Hangting and Tan, Wei and Yu, Jianwei and Li, Haizhou},
+   journal={arXiv preprint arXiv:2506.07634},
+   year={2025}
+ }
+ ```