C4G-HKUST committed
Commit 1861663 · 1 parent: 4fad606

fix: quota

Files changed (1): app.py (+6, −6)
app.py CHANGED
@@ -605,8 +605,8 @@ def run_graio_demo(args):
     # Reference: https://huggingface.co/spaces/KlingTeam/LivePortrait/blob/main/app.py
     # The @spaces.GPU decorator handles GPU initialization automatically; no manual init is needed
 
-    # Fast generation mode: 240 s, fixed 12 denoising steps
-    @spaces.GPU(duration=240)
+    # Fast generation mode: 239 s, fixed 12 denoising steps
+    @spaces.GPU(duration=239)
     def gpu_wrapped_generate_video_fast(*args, **kwargs):
         # Always use 12 denoising steps, passed as a keyword argument
         kwargs['fixed_steps'] = 12
@@ -757,7 +757,7 @@ def run_graio_demo(args):
 
            with gr.Row():
                run_i2v_button_fast = gr.Button(
-                   "Generate Video (Fast - 240s, 12 steps)",
+                   "Generate Video (Fast - 239s, 12 steps)",
                    variant="secondary",
                    scale=1
                )
@@ -768,10 +768,10 @@ def run_graio_demo(args):
            )
            gr.Markdown("""
            **Generation Modes:**
-           - **Fast Mode (up to 240s GPU budget)**: Fixed 12 denoising steps for quick generation. Suitable for single-person videos or quick previews. The 240s is the maximum GPU allocation time, not the actual generation time.
+           - **Fast Mode (up to 239s GPU budget)**: Fixed 12 denoising steps for quick generation. Suitable for single-person videos or quick previews. The 239s is the maximum GPU allocation time, not the actual generation time.
            - **Quality Mode (up to 720s GPU budget)**: Custom denoising steps (adjustable via "Diffusion steps" slider). Recommended for multi-person videos that require higher quality. The 720s is the maximum GPU allocation time, not the actual generation time. With 40 denoising steps, approximately 10 seconds of video can be generated.
 
-           *Note: The GPU duration (240s/720s) represents the maximum budget allocated, not the actual generation time. Multi-person videos generally require longer duration and more Usage Quota for better quality.*
+           *Note: The GPU duration (239s/720s) represents the maximum budget allocated, not the actual generation time. Multi-person videos generally require longer duration and more Usage Quota for better quality.*
            """)
 
        with gr.Column(scale=2):
@@ -806,7 +806,7 @@ def run_graio_demo(args):
            )
 
 
-           # Fast generation button: 240 s, fixed 12 steps
+           # Fast generation button: 239 s, fixed 12 steps
            run_i2v_button_fast.click(
                fn=gpu_wrapped_generate_video_fast,
                inputs=[img2vid_image, img2vid_prompt, n_prompt, img2vid_audio_1, img2vid_audio_2, img2vid_audio_3, sd_steps, seed, guide_scale, person_num_selector, audio_mode_selector],
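The change above only lowers the declared `@spaces.GPU` budget from 240 s to 239 s; the wrapper pattern itself is unchanged. A minimal standalone sketch of that pattern, with `spaces.GPU` stubbed out and `generate_video` as a hypothetical stand-in for the real backend, looks like this:

```python
# Sketch of the fast-mode wrapper pattern from the diff. `GPU` is a local
# stand-in for spaces.GPU: on ZeroGPU Spaces the real decorator reserves a
# GPU worker for at most `duration` seconds per call. `generate_video` is
# a hypothetical backend used only to show the kwarg pinning.

def GPU(duration):  # stand-in for spaces.GPU(duration=...)
    def decorator(fn):
        def wrapper(*args, **kwargs):
            # The real decorator would schedule fn on a GPU worker with a
            # `duration`-second budget; here it just calls through.
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def generate_video(*args, **kwargs):
    # Hypothetical backend: report which step count it actually received,
    # preferring the pinned value over the UI slider's sd_steps.
    return kwargs.get("fixed_steps", kwargs.get("sd_steps"))

# Fast mode: 239 s GPU budget, denoising steps pinned to 12 regardless of
# what the "Diffusion steps" slider passed in.
@GPU(duration=239)
def gpu_wrapped_generate_video_fast(*args, **kwargs):
    kwargs["fixed_steps"] = 12
    return generate_video(*args, **kwargs)

print(gpu_wrapped_generate_video_fast(sd_steps=40))  # → 12
```

Because `fixed_steps` is set inside the wrapper, the slider value forwarded by `run_i2v_button_fast.click(...)` is overridden for the fast path while the quality path can still honor it.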