BitDance 14B (FP8 & FP16) for ComfyUI: Low VRAM Optimization
This repository contains the optimized FP8 and FP16 model files required to run the BitDance 14B model locally inside ComfyUI on consumer GPUs (12GB - 24GB VRAM).
These weights have been manually quantized, tested, and verified to prevent CUDA Out of Memory (OOM) errors and "Black Screen" crashes during video generation.
File Placement Instructions:
- BitDance_14B_MainModel_FP8.safetensors: the main diffusion model. Place it in your ComfyUI/models/diffusion_models/ folder.
- BitDance_TextEncoder_FP8.safetensors: the required text encoder. Place it in your ComfyUI/models/text_encoders/ folder.
- BitDance_VAE_FP16.safetensors: the VAE for decoding. Place it in your ComfyUI/models/vae/ folder.
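If you want to double-check that everything landed in the right place before loading the workflow, a quick sketch like the one below can help. This is not an official script; the ComfyUI root path is an assumption, so adjust it to your own install.

```python
# Minimal sketch: verify the BitDance files are in the folders listed above.
# COMFYUI_ROOT is an assumption -- point it at your ComfyUI installation.
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")  # assumed install location

expected_files = {
    "BitDance_14B_MainModel_FP8.safetensors": COMFYUI_ROOT / "models" / "diffusion_models",
    "BitDance_TextEncoder_FP8.safetensors": COMFYUI_ROOT / "models" / "text_encoders",
    "BitDance_VAE_FP16.safetensors": COMFYUI_ROOT / "models" / "vae",
}

for filename, folder in expected_files.items():
    status = "OK" if (folder / filename).is_file() else "MISSING"
    print(f"{status:8} {folder / filename}")
```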
Required Custom Node & Workflow (.json)
To run these models without node errors, you must install the custom BitDance node and use the optimized node routing for low VRAM (a quick way to check the workflow's node requirements is sketched after the links below).
- Install the Custom Node: ComfyUI-BitDance on GitHub
- Download the exact .json workflow and step-by-step guide: How to Run BitDance 14B in ComfyUI (Low VRAM Workflow Fix)
- YouTube Tutorial Guide: https://www.youtube.com/watch?v=4O9ATPbeQyg
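If ComfyUI still reports missing nodes after installing ComfyUI-BitDance, it can help to list which node types the downloaded workflow expects. The sketch below assumes ComfyUI's standard UI export format (a top-level "nodes" list); the filename is hypothetical, so use whatever name the downloaded workflow JSON has.

```python
# Minimal sketch: list the node types referenced by the workflow JSON so you
# can confirm the BitDance custom nodes resolved after installation.
import json

WORKFLOW_PATH = "bitdance_14b_low_vram.json"  # hypothetical filename

with open(WORKFLOW_PATH, "r", encoding="utf-8") as f:
    workflow = json.load(f)

node_types = sorted({node.get("type", "?") for node in workflow.get("nodes", [])})
print("Node types used by this workflow:")
for node_type in node_types:
    print(" -", node_type)
```

Any type that still shows up red in the ComfyUI graph after a restart usually means the custom node did not install correctly.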
About the Creator
Tested, compiled, and maintained by Esha Sharma, Founder of AI Study Now. I build and document custom ComfyUI workflows, GGUF optimizations, and local AI solutions for consumer hardware.
- Watch the video tutorial: @ComfyUIworkflows
- Follow for updates: X/Twitter