Diffusers
Safetensors

Add pipeline tag and library name, and update content

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +172 -27
README.md CHANGED
@@ -1,14 +1,16 @@
  ---
- license: mit
  base_model:
  - stabilityai/stable-diffusion-3-medium
  ---

  # ⚡️Pyramid Flow⚡️

- [[Paper]](https://arxiv.org/abs/2410.05954) [[Project Page ✨]](https://pyramid-flow.github.io) [[Code 🚀]](https://github.com/jy0205/Pyramid-Flow)

- This is the official repository for Pyramid Flow, a training-efficient **Autoregressive Video Generation** method based on **Flow Matching**. By training only on open-source datasets, it generates high-quality 10-second videos at 768p resolution and 24 FPS, and naturally supports image-to-video generation.

  <table class="center" border="0" style="width: 100%; text-align: left;">
  <tr>
@@ -17,29 +19,108 @@ This is the official repository for Pyramid Flow, a training-efficient **Autoreg
  <th>Image-to-video</th>
  </tr>
  <tr>
- <td><video src="https://pyramid-flow.github.io/static/videos/t2v_10s/fireworks.mp4" autoplay muted loop playsinline></video></td>
- <td><video src="https://pyramid-flow.github.io/static/videos/t2v/trailer.mp4" autoplay muted loop playsinline></video></td>
- <td><video src="https://pyramid-flow.github.io/static/videos/i2v/sunday.mp4" autoplay muted loop playsinline></video></td>
  </tr>
  </table>

  ## News

- * `COMING SOON` ⚡️⚡️⚡️ Training code and new model checkpoints trained from scratch.
  * `2024.10.10` 🚀🚀🚀 We release the [technical report](https://arxiv.org/abs/2410.05954), [project page](https://pyramid-flow.github.io) and [model checkpoint](https://huggingface.co/rain1011/pyramid-flow-sd3) of Pyramid Flow.

- ## Usage

- You can directly download the model from [Huggingface](https://huggingface.co/rain1011/pyramid-flow-sd3). We provide both model checkpoints for 768p and 384p video generation. The 384p checkpoint supports 5-second video generation at 24FPS, while the 768p checkpoint supports up to 10-second video generation at 24FPS.

  ```python
  from huggingface_hub import snapshot_download

  model_path = 'PATH'  # The local directory to save downloaded checkpoint
- snapshot_download("rain1011/pyramid-flow-sd3", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
  ```

- To use our model, please follow the inference code in `video_generation_demo.ipynb` at [this link](https://github.com/jy0205/Pyramid-Flow/blob/main/video_generation_demo.ipynb). We further simplify it into the following two-step procedure. First, load the downloaded model:

  ```python
  import torch
@@ -48,36 +129,49 @@ from pyramid_dit import PyramidDiTForVideoGeneration
  from diffusers.utils import load_image, export_to_video

  torch.cuda.set_device(0)
- model_dtype, torch_dtype = 'bf16', torch.bfloat16  # Use bf16, fp16 or fp32

  model = PyramidDiTForVideoGeneration(
      'PATH',  # The downloaded checkpoint dir
-     model_dtype,
-     model_variant='diffusion_transformer_768p',  # 'diffusion_transformer_384p'
  )

- model.vae.to("cuda")
- model.dit.to("cuda")
- model.text_encoder.to("cuda")
  model.vae.enable_tiling()
  ```

- Then, you can try text-to-video generation on your own prompts:

  ```python
  prompt = "A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors"

  with torch.no_grad(), torch.cuda.amp.autocast(enabled=True, dtype=torch_dtype):
      frames = model.generate(
          prompt=prompt,
          num_inference_steps=[20, 20, 20],
          video_num_inference_steps=[10, 10, 10],
-         height=768,
-         width=1280,
          temp=16,  # temp=16: 5s, temp=31: 10s
-         guidance_scale=9.0,  # The guidance for the first frame
          video_guidance_scale=5.0,  # The guidance for the other video latent
          output_type="pil",
      )

  export_to_video(frames, "./text_to_video_sample.mp4", fps=24)
@@ -86,7 +180,15 @@ export_to_video(frames, "./text_to_video_sample.mp4", fps=24)
  As an autoregressive model, our model also supports (text conditioned) image-to-video generation:

  ```python
- image = Image.open('assets/the_great_wall.jpg').convert("RGB").resize((1280, 768))
  prompt = "FPV flying over the Great Wall"

  with torch.no_grad(), torch.cuda.amp.autocast(enabled=True, dtype=torch_dtype):
@@ -97,32 +199,75 @@ with torch.no_grad(), torch.cuda.amp.autocast(enabled=True, dtype=torch_dtype):
      temp=16,
      video_guidance_scale=4.0,
      output_type="pil",
  )

  export_to_video(frames, "./image_to_video_sample.mp4", fps=24)
  ```

- Usage tips:

  * The `guidance_scale` parameter controls the visual quality. We suggest using a guidance within [7, 9] for the 768p checkpoint during text-to-video generation, and 7 for the 384p checkpoint.
  * The `video_guidance_scale` parameter controls the motion. A larger value increases the dynamic degree and mitigates the autoregressive generation degradation, while a smaller value stabilizes the video.
  * For 10-second video generation, we recommend using a guidance scale of 7 and a video guidance scale of 5.

  ## Gallery

  The following video examples are generated at 5s, 768p, 24fps. For more results, please visit our [project page](https://pyramid-flow.github.io).

  <table class="center" border="0" style="width: 100%; text-align: left;">
  <tr>
- <td><video src="https://pyramid-flow.github.io/static/videos/t2v/tokyo.mp4" autoplay muted loop playsinline></video></td>
- <td><video src="https://pyramid-flow.github.io/static/videos/t2v/eiffel.mp4" autoplay muted loop playsinline></video></td>
  </tr>
  <tr>
- <td><video src="https://pyramid-flow.github.io/static/videos/t2v/waves.mp4" autoplay muted loop playsinline></video></td>
- <td><video src="https://pyramid-flow.github.io/static/videos/t2v/rail.mp4" autoplay muted loop playsinline></video></td>
  </tr>
  </table>

  ## Acknowledgement

  We are grateful for the following awesome projects when implementing Pyramid Flow:
 
  ---
  base_model:
  - stabilityai/stable-diffusion-3-medium
+ license: mit
+ library_name: diffusers
+ pipeline_tag: text-to-video
  ---

  # ⚡️Pyramid Flow⚡️

+ [[Paper]](https://arxiv.org/abs/2410.05954) [[Project Page ✨]](https://pyramid-flow.github.io) [[Code 🚀]](https://github.com/jy0205/Pyramid-Flow) [[Demo 🤗]](https://huggingface.co/spaces/Pyramid-Flow/pyramid-flow)

+ This is the official repository for Pyramid Flow, a training-efficient **Autoregressive Video Generation** method based on **Flow Matching**. By training only on **open-source datasets**, it can generate high-quality 10-second videos at 768p resolution and 24 FPS, and naturally supports image-to-video generation.

  <table class="center" border="0" style="width: 100%; text-align: left;">
  <tr>
  <th>Image-to-video</th>
  </tr>
  <tr>
+ <td><video src="https://github.com/user-attachments/assets/9935da83-ae56-4672-8747-0f46e90f7b2b" autoplay muted loop playsinline></video></td>
+ <td><video src="https://github.com/user-attachments/assets/3412848b-64db-4d9e-8dbf-11403f6d02c5" autoplay muted loop playsinline></video></td>
+ <td><video src="https://github.com/user-attachments/assets/3bd7251f-7b2c-4bee-951d-656fdb45f427" autoplay muted loop playsinline></video></td>
  </tr>
  </table>

  ## News

+ * `2024.11.13` 🚀🚀🚀 We release the [768p miniFLUX checkpoint](https://huggingface.co/rain1011/pyramid-flow-miniflux) (up to 10s).
+
+ > We have switched the model structure from SD3 to a mini FLUX to fix human structure issues. Please try our 1024p image checkpoint, 384p video checkpoint (up to 5s) and 768p video checkpoint (up to 10s). The new miniFLUX model shows great improvement in human structure and motion stability.
+
+ * `2024.10.29` ⚡️⚡️⚡️ We release [training code for VAE](#1-training-vae), [finetuning code for DiT](#2-finetuning-dit) and [new model checkpoints](https://huggingface.co/rain1011/pyramid-flow-miniflux) with FLUX structure trained from scratch.
+
+ * `2024.10.13` ✨✨✨ [Multi-GPU inference](#3-multi-gpu-inference) and [CPU offloading](#cpu-offloading) are supported. Use it with **less than 8GB** of GPU memory, with great speedup on multiple GPUs.
+
+ * `2024.10.11` 🤗🤗🤗 The [Hugging Face demo](https://huggingface.co/spaces/Pyramid-Flow/pyramid-flow) is available. Thanks [@multimodalart](https://huggingface.co/multimodalart) for the commit!

  * `2024.10.10` 🚀🚀🚀 We release the [technical report](https://arxiv.org/abs/2410.05954), [project page](https://pyramid-flow.github.io) and [model checkpoint](https://huggingface.co/rain1011/pyramid-flow-sd3) of Pyramid Flow.
+ ## Table of Contents
+
+ * [Introduction](#introduction)
+ * [Installation](#installation)
+ * [Inference](#inference)
+   1. [Quick Start with Gradio](#1-quick-start-with-gradio)
+   2. [Inference Code](#2-inference-code)
+   3. [Multi-GPU Inference](#3-multi-gpu-inference)
+   4. [Usage Tips](#4-usage-tips)
+ * [Training](#training)
+   1. [Training VAE](#1-training-vae)
+   2. [Finetuning DiT](#2-finetuning-dit)
+ * [Gallery](#gallery)
+ * [Comparison](#comparison)
+ * [Acknowledgement](#acknowledgement)
+ * [Citation](#citation)
+
+ ## Introduction
+
+ Existing video diffusion models operate at full resolution, spending a lot of computation on very noisy latents. By contrast, our method harnesses the flexibility of flow matching ([Lipman et al., 2023](https://openreview.net/forum?id=PqvMRDCJT9t); [Liu et al., 2023](https://openreview.net/forum?id=XVjTT1nw5z); [Albergo & Vanden-Eijnden, 2023](https://openreview.net/forum?id=li7qeBbCR1t)) to interpolate between latents of different resolutions and noise levels, allowing for simultaneous generation and decompression of visual content with better computational efficiency. The entire framework is end-to-end optimized with a single DiT ([Peebles & Xie, 2023](http://openaccess.thecvf.com/content/ICCV2023/html/Peebles_Scalable_Diffusion_Models_with_Transformers_ICCV_2023_paper.html)), generating high-quality 10-second videos at 768p resolution and 24 FPS within 20.7k A100 GPU training hours.
+
+ ## Installation
+
+ We recommend setting up the environment with conda. The codebase currently uses Python 3.8.10 and PyTorch 2.1.2 ([guide](https://pytorch.org/get-started/previous-versions/#v212)), and we are actively working to support a wider range of versions.
+
+ ```bash
+ git clone https://github.com/jy0205/Pyramid-Flow
+ cd Pyramid-Flow
+
+ # create env using conda
+ conda create -n pyramid python==3.8.10
+ conda activate pyramid
+ pip install -r requirements.txt
+ ```

+ Then, download the model from [Hugging Face](https://huggingface.co/rain1011) (there are two variants: [miniFLUX](https://huggingface.co/rain1011/pyramid-flow-miniflux) or [SD3](https://huggingface.co/rain1011/pyramid-flow-sd3)). The miniFLUX models support 1024p image, 384p and 768p video generation, and the SD3-based models support 768p and 384p video generation. The 384p checkpoint generates 5-second video at 24FPS, while the 768p checkpoint generates up to 10-second video at 24FPS.

  ```python
  from huggingface_hub import snapshot_download

  model_path = 'PATH'  # The local directory to save downloaded checkpoint
+ snapshot_download("rain1011/pyramid-flow-miniflux", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
  ```
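The variant-to-repo mapping described above can be sketched as a small lookup. This is a hypothetical helper for illustration only (not part of the official codebase); the repo ids and resolutions are the ones named in this README:

```python
# Hypothetical helper: map a checkpoint variant name to the Hugging Face
# repo id and base (width, height) named in this README. Not an official API.
VARIANTS = {
    "miniflux_768p": ("rain1011/pyramid-flow-miniflux", (1280, 768)),
    "miniflux_384p": ("rain1011/pyramid-flow-miniflux", (640, 384)),
    "sd3_768p": ("rain1011/pyramid-flow-sd3", (1280, 768)),
    "sd3_384p": ("rain1011/pyramid-flow-sd3", (640, 384)),
}

def resolve_checkpoint(variant: str):
    """Return (repo_id, (width, height)) for a known variant name."""
    try:
        return VARIANTS[variant]
    except KeyError:
        raise ValueError(f"Unknown variant {variant!r}; choose from {sorted(VARIANTS)}")

repo_id, (width, height) = resolve_checkpoint("miniflux_768p")
```

The returned `repo_id` can then be passed to `snapshot_download` as in the block above.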
+ ## Inference
+
+ ### 1. Quick start with Gradio
+
+ To get started, first install [Gradio](https://www.gradio.app/guides/quickstart), set your model path at [#L36](https://github.com/jy0205/Pyramid-Flow/blob/3777f8b84bddfa2aa2b497ca919b3f40567712e6/app.py#L36), and then run on your local machine:
+
+ ```bash
+ python app.py
+ ```
+
+ The Gradio demo will open in your browser. Thanks to [@tpc2233](https://github.com/tpc2233) for the commit; see [#48](https://github.com/jy0205/Pyramid-Flow/pull/48) for details.
+
+ Or, try it out effortlessly on the [Hugging Face Space 🤗](https://huggingface.co/spaces/Pyramid-Flow/pyramid-flow) created by [@multimodalart](https://huggingface.co/multimodalart). Due to GPU limits, this online demo can only generate 25 frames (export at 8FPS or 24FPS). Duplicate the space to generate longer videos.
+
+ #### Quick Start on Google Colab
+
+ To quickly try out Pyramid Flow on Google Colab, run the code below:
+
+ ```
+ # Setup
+ !git clone https://github.com/jy0205/Pyramid-Flow
+ %cd Pyramid-Flow
+ !pip install -r requirements.txt
+ !pip install gradio
+
+ # This code downloads miniFLUX
+ from huggingface_hub import snapshot_download
+
+ model_path = '/content/Pyramid-Flow'
+ snapshot_download("rain1011/pyramid-flow-miniflux", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
+
+ # Start
+ !python app.py
+ ```
+
+ ### 2. Inference Code
+
+ To use our model, please follow the inference code in `video_generation_demo.ipynb` at [this link](https://github.com/jy0205/Pyramid-Flow/blob/main/video_generation_demo.ipynb). We strongly recommend trying the latest pyramid-miniflux checkpoints, which show great improvement in human structure and motion stability; set the parameter `model_name` to `pyramid_flux` to use them. We further simplify it into the following two-step procedure. First, load the downloaded model:

  ```python
  import torch
  from pyramid_dit import PyramidDiTForVideoGeneration
  from diffusers.utils import load_image, export_to_video

  torch.cuda.set_device(0)
+ model_dtype, torch_dtype = 'bf16', torch.bfloat16  # Use bf16 (fp16 is not supported yet)

  model = PyramidDiTForVideoGeneration(
      'PATH',  # The downloaded checkpoint dir
+     model_name="pyramid_flux",
+     model_dtype=model_dtype,
+     model_variant='diffusion_transformer_768p',
  )

  model.vae.enable_tiling()
+ # model.vae.to("cuda")
+ # model.dit.to("cuda")
+ # model.text_encoder.to("cuda")
+
+ # If you are not using sequential CPU offloading below, uncomment the lines above
+ model.enable_sequential_cpu_offload()
  ```

+ Then, you can try text-to-video generation with your own prompts. Note that the 384p version only supports up to 5s for now (set `temp` up to 16).

  ```python
  prompt = "A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors"

+ # used for the 384p model variant
+ # width = 640
+ # height = 384
+
+ # used for the 768p model variant
+ width = 1280
+ height = 768
+
  with torch.no_grad(), torch.cuda.amp.autocast(enabled=True, dtype=torch_dtype):
      frames = model.generate(
          prompt=prompt,
          num_inference_steps=[20, 20, 20],
          video_num_inference_steps=[10, 10, 10],
+         height=height,
+         width=width,
          temp=16,  # temp=16: 5s, temp=31: 10s
+         guidance_scale=7.0,  # The guidance for the first frame, set it to 7 for the 384p variant
          video_guidance_scale=5.0,  # The guidance for the other video latent
          output_type="pil",
+         save_memory=True,  # If you have enough GPU memory, set it to `False` to improve VAE decoding speed
      )

  export_to_video(frames, "./text_to_video_sample.mp4", fps=24)
  ```

  As an autoregressive model, our model also supports (text conditioned) image-to-video generation:

  ```python
+ from PIL import Image  # needed for Image.open below
+
+ # used for the 384p model variant
+ # width = 640
+ # height = 384
+
+ # used for the 768p model variant
+ width = 1280
+ height = 768
+
+ image = Image.open('assets/the_great_wall.jpg').convert("RGB").resize((width, height))
  prompt = "FPV flying over the Great Wall"

  with torch.no_grad(), torch.cuda.amp.autocast(enabled=True, dtype=torch_dtype):
          temp=16,
          video_guidance_scale=4.0,
          output_type="pil",
+         save_memory=True,  # If you have enough GPU memory, set it to `False` to improve VAE decoding speed
      )

  export_to_video(frames, "./image_to_video_sample.mp4", fps=24)
  ```
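The `temp=16: 5s, temp=31: 10s` comments in the examples above suggest a simple relationship between `temp` (the number of latent frames) and clip duration. Assuming an 8x temporal VAE compression where the first latent frame decodes to 1 video frame and each later one to 8 video frames (this decoding factor is our assumption, inferred from the comments, not stated in this README), the duration at 24 FPS can be estimated as:

```python
# Estimate clip duration from the `temp` parameter. ASSUMPTION: the video
# VAE decodes the first latent frame to 1 video frame and each subsequent
# latent frame to 8 video frames, consistent with temp=16 -> ~5s at 24 FPS.
FPS = 24
TEMPORAL_COMPRESSION = 8

def estimated_duration_seconds(temp: int) -> float:
    num_frames = 1 + (temp - 1) * TEMPORAL_COMPRESSION  # 121 frames for temp=16
    return num_frames / FPS

print(round(estimated_duration_seconds(16)))  # about 5 seconds
print(round(estimated_duration_seconds(31)))  # about 10 seconds
```

Under this assumption, temp=16 gives 121 frames (about 5 seconds) and temp=31 gives 241 frames (about 10 seconds), matching the comments above.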
+ #### CPU offloading
+
+ We also support two types of CPU offloading to reduce GPU memory requirements. Note that they may sacrifice efficiency.
+ * Adding a `cpu_offloading=True` parameter to the generate function allows inference with **less than 12GB** of GPU memory. This feature was contributed by [@Ednaordinary](https://github.com/Ednaordinary), see [#23](https://github.com/jy0205/Pyramid-Flow/pull/23) for details.
+ * Calling `model.enable_sequential_cpu_offload()` before the above procedure allows inference with **less than 8GB** of GPU memory. This feature was contributed by [@rodjjo](https://github.com/rodjjo), see [#75](https://github.com/jy0205/Pyramid-Flow/pull/75) for details.
+
+ #### MPS backend
+
+ Thanks to [@niw](https://github.com/niw), Apple Silicon users (e.g. MacBook Pro with M2 24GB) can also try our model using the MPS backend! Please see [#113](https://github.com/jy0205/Pyramid-Flow/pull/113) for the details.
+
+ ### 3. Multi-GPU Inference
+
+ For users with multiple GPUs, we provide an [inference script](https://github.com/jy0205/Pyramid-Flow/blob/main/scripts/inference_multigpu.sh) that uses sequence parallelism to save memory on each GPU. This also brings a big speedup, taking only 2.5 minutes to generate a 5s, 768p, 24fps video on 4 A100 GPUs (vs. 5.5 minutes on a single A100 GPU). Run it on 2 GPUs with the following command:
+
+ ```bash
+ CUDA_VISIBLE_DEVICES=0,1 sh scripts/inference_multigpu.sh
+ ```
+
+ It currently supports 2 or 4 GPUs (for the SD3 version), with more configurations available in the original script. You can also launch a [multi-GPU Gradio demo](https://github.com/jy0205/Pyramid-Flow/blob/main/scripts/app_multigpu_engine.sh) created by [@tpc2233](https://github.com/tpc2233), see [#59](https://github.com/jy0205/Pyramid-Flow/pull/59) for details.
+
+ > Spoiler: We didn't even use sequence parallelism in training, thanks to our efficient pyramid flow designs.
+
+ ### 4. Usage tips

  * The `guidance_scale` parameter controls the visual quality. We suggest using a guidance within [7, 9] for the 768p checkpoint during text-to-video generation, and 7 for the 384p checkpoint.
  * The `video_guidance_scale` parameter controls the motion. A larger value increases the dynamic degree and mitigates the autoregressive generation degradation, while a smaller value stabilizes the video.
  * For 10-second video generation, we recommend using a guidance scale of 7 and a video guidance scale of 5.
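The tips above can be collected into one place. This is a hypothetical helper for illustration only (not an official API); the `guidance_scale` values come from the bullets above, and the default `video_guidance_scale` of 5.0 is taken from the text-to-video example rather than stated per variant:

```python
# Hypothetical helper gathering the suggested guidance values from the
# usage tips in this README. Not part of the official codebase.
def suggested_guidance(variant: str, ten_seconds: bool = False):
    """Return (guidance_scale, video_guidance_scale) per the README's tips."""
    if ten_seconds:
        return 7.0, 5.0  # recommended for 10-second generation
    if variant == "768p":
        return 9.0, 5.0  # guidance within [7, 9] for the 768p checkpoint
    if variant == "384p":
        return 7.0, 5.0  # guidance 7 suggested for the 384p checkpoint
    raise ValueError(f"unknown variant: {variant!r}")

guidance_scale, video_guidance_scale = suggested_guidance("768p")
```

The returned values can be passed straight to `model.generate` as in the inference examples above.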
+ ## Training
+
+ ### 1. Training VAE
+
+ The hardware requirement for training the VAE is at least 8 A100 GPUs. Please refer to [this document](https://github.com/jy0205/Pyramid-Flow/blob/main/docs/VAE.md). This is a [MAGVIT-v2](https://arxiv.org/abs/2310.05737)-like continuous 3D VAE, which should be quite flexible. Feel free to build your own video generative model on top of this VAE training code.
+
+ ### 2. Finetuning DiT
+
+ The hardware requirement for finetuning the DiT is at least 8 A100 GPUs. Please refer to [this document](https://github.com/jy0205/Pyramid-Flow/blob/main/docs/DiT.md). We provide instructions for both autoregressive and non-autoregressive versions of Pyramid Flow. The former is more research-oriented and the latter is more stable (but less efficient without the temporal pyramid).
+
  ## Gallery

  The following video examples are generated at 5s, 768p, 24fps. For more results, please visit our [project page](https://pyramid-flow.github.io).

  <table class="center" border="0" style="width: 100%; text-align: left;">
  <tr>
+ <td><video src="https://github.com/user-attachments/assets/5b44a57e-fa08-4554-84a2-2c7a99f2b343" autoplay muted loop playsinline></video></td>
+ <td><video src="https://github.com/user-attachments/assets/5afd5970-de72-40e2-900d-a20d18308e8e" autoplay muted loop playsinline></video></td>
  </tr>
  <tr>
+ <td><video src="https://github.com/user-attachments/assets/1d44daf8-017f-40e9-bf18-1e19c0a8983b" autoplay muted loop playsinline></video></td>
+ <td><video src="https://github.com/user-attachments/assets/7f5dd901-b7d7-48cc-b67a-3c5f9e1546d2" autoplay muted loop playsinline></video></td>
  </tr>
  </table>

+ ## Comparison
+
+ On VBench ([Huang et al., 2024](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard)), our method surpasses all the compared open-source baselines. Even with only public video data, it achieves comparable performance to commercial models like Kling ([Kuaishou, 2024](https://kling.kuaishou.com/en)) and Gen-3 Alpha ([Runway, 2024](https://runwayml.com/research/introducing-gen-3-alpha)), especially in the quality score (84.74 vs. 84.11 of Gen-3) and motion smoothness.
+
+ ![vbench](assets/vbench.jpg)
+
+ We conduct an additional user study with 20+ participants. As can be seen, our method is preferred over open-source models such as [Open-Sora](https://github.com/hpcaitech/Open-Sora) and [CogVideoX-2B](https://github.com/THUDM/CogVideo), especially in terms of motion smoothness.
+
+ ![user_study](assets/user_study.jpg)
+
  ## Acknowledgement

  We are grateful for the following awesome projects when implementing Pyramid Flow: