BitDance 14B (FP8 & FP16) for ComfyUI: Low VRAM Optimization

This repository contains the optimized FP8 and FP16 model files required to run the BitDance 14B model locally inside ComfyUI on consumer GPUs (12GB - 24GB VRAM).

These weights have been manually quantized, tested, and verified to prevent CUDA Out of Memory (OOM) errors and "Black Screen" crashes during video generation.

File Placement Instructions:

  • BitDance_14B_MainModel_FP8.safetensors: The main diffusion model. Place this in your ComfyUI/models/diffusion_models/ folder.
  • BitDance_TextEncoder_FP8.safetensors: The required text encoder. Place this in your ComfyUI/models/text_encoders/ folder.
  • BitDance_VAE_FP16.safetensors: The VAE for decoding. Place this in your ComfyUI/models/vae/ folder.
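The placement steps above can be sketched as a short shell snippet. The ComfyUI root path is an assumption (adjust `COMFYUI_DIR` to your install); the `mv` commands are shown commented out so you can run them once the files are downloaded:

```shell
# Create the expected ComfyUI model folders (root path is an assumption)
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
mkdir -p "$COMFYUI_DIR/models/diffusion_models" \
         "$COMFYUI_DIR/models/text_encoders" \
         "$COMFYUI_DIR/models/vae"

# Then move each downloaded file into its folder, e.g.:
# mv BitDance_14B_MainModel_FP8.safetensors "$COMFYUI_DIR/models/diffusion_models/"
# mv BitDance_TextEncoder_FP8.safetensors   "$COMFYUI_DIR/models/text_encoders/"
# mv BitDance_VAE_FP16.safetensors          "$COMFYUI_DIR/models/vae/"
```

If your models live on another drive, ComfyUI's `extra_model_paths.yaml` can point these folders elsewhere instead of moving files.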

Required Custom Node & Workflow (.json)

To run these models without missing-node errors, install the custom BitDance node and use the optimized low-VRAM node routing from the provided workflow .json.
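Custom-node installation typically follows the standard ComfyUI pattern: clone the node repository into `custom_nodes/` and install its Python requirements. A hedged sketch, assuming a default ComfyUI root; the repository URL below is a placeholder (take the real one from the workflow page), so the clone and pip steps are shown commented out:

```shell
# Locate (and if needed create) the ComfyUI custom_nodes folder; root path is an assumption
CUSTOM_NODES="${COMFYUI_DIR:-$HOME/ComfyUI}/custom_nodes"
mkdir -p "$CUSTOM_NODES"

# Hypothetical repo URL -- replace with the actual BitDance node repository:
# git clone https://github.com/<author>/ComfyUI-BitDance "$CUSTOM_NODES/ComfyUI-BitDance"
# pip install -r "$CUSTOM_NODES/ComfyUI-BitDance/requirements.txt"

echo "Custom nodes directory: $CUSTOM_NODES"
```

After installing, restart ComfyUI and load the workflow .json via drag-and-drop; any remaining red nodes indicate a node pack that is still missing.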


About the Creator

Tested, compiled, and maintained by Esha Sharma, Founder of AI Study Now. I build and document custom ComfyUI workflows, GGUF optimizations, and local AI solutions for consumer hardware.
