Commit `9d823d8` (verified, parent `fd4279d`) by **comfyuiblog**: Upload BitDance_VAE_FP16.safetensors
# BitDance 14B (FP8 & FP16) for ComfyUI: Low VRAM Optimization

This repository contains the optimized FP8 and FP16 model files required to run the **BitDance 14B** model locally inside **ComfyUI** on consumer GPUs (12–24 GB VRAM).

These weights have been manually quantized, tested, and verified to prevent CUDA Out of Memory (OOM) errors and "Black Screen" crashes during video generation.

### 📁 File Placement Instructions:
* **`BitDance_14B_MainModel_FP8.safetensors`**: The main diffusion model. Place this in your `ComfyUI/models/diffusion_models/` folder.
* **`BitDance_TextEncoder_FP8.safetensors`**: The required text encoder. Place this in your `ComfyUI/models/text_encoders/` folder.
* **`BitDance_VAE_FP16.safetensors`**: The VAE for decoding. Place this in your `ComfyUI/models/vae/` folder.
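The placement steps above can be sketched as a small script. This is a minimal sketch, not part of the official workflow: `COMFYUI_ROOT` and `DOWNLOADS` are assumptions you should point at your own install and download folders.

```python
# Sketch: sort the three downloaded .safetensors files into the correct
# ComfyUI subfolders. COMFYUI_ROOT and DOWNLOADS are assumed paths --
# adjust them to match your setup.
import shutil
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")  # assumed install location
DOWNLOADS = Path(".")           # assumed download location

# Mapping taken directly from the placement instructions above.
PLACEMENT = {
    "BitDance_14B_MainModel_FP8.safetensors": "models/diffusion_models",
    "BitDance_TextEncoder_FP8.safetensors": "models/text_encoders",
    "BitDance_VAE_FP16.safetensors": "models/vae",
}

def place_models(root: Path = COMFYUI_ROOT, src: Path = DOWNLOADS) -> None:
    """Move each downloaded file into its ComfyUI model subfolder."""
    for name, subdir in PLACEMENT.items():
        source = src / name
        if source.exists():  # skip files that are not downloaded yet
            dest = root / subdir
            dest.mkdir(parents=True, exist_ok=True)
            shutil.move(str(source), str(dest / name))
            print(f"moved {name} -> {dest}")

if __name__ == "__main__":
    place_models()
```

Files that have not been downloaded yet are simply skipped, so the script is safe to re-run.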

---

## 🚀 Required ComfyUI Workflow (.json) & Full Guide
To run these models without node errors, you must use node routing optimized for low VRAM.

**Download the exact `.json` workflow and read the step-by-step installation guide here:**
👉 [How to Run BitDance 14B in ComfyUI (Low VRAM Workflow Fix)](INSERT_YOUR_EXACT_AISTUDYNOW_ARTICLE_URL_HERE)
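Alongside the workflow itself, ComfyUI's built-in `--lowvram` launch flag helps on 12 GB-class cards by offloading model parts to system RAM. The sketch below is an assumption about your setup, not part of the linked guide: the `ComfyUI` checkout path is hypothetical, and `launch_lowvram` is an illustrative helper.

```python
# Sketch: launch ComfyUI in its low-VRAM mode from Python. The COMFYUI_ROOT
# path is an assumption; --lowvram is ComfyUI's standard flag for partially
# offloading models to system RAM on memory-constrained GPUs.
import subprocess
import sys
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")  # assumed location of your ComfyUI checkout

def lowvram_cmd() -> list[str]:
    """Build the launch command: run main.py with the low-VRAM flag."""
    return [sys.executable, "main.py", "--lowvram"]

def launch_lowvram(root: Path = COMFYUI_ROOT) -> None:
    """Start ComfyUI from its checkout directory in low-VRAM mode."""
    subprocess.run(lowvram_cmd(), cwd=root, check=True)
```

If generation still OOMs, ComfyUI also accepts `--novram` as a more aggressive fallback.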

---

### 🛡️ About the Creator (E-E-A-T Verification)
Tested, compiled, and maintained by **Esha Sharma**, Founder of **[AI Study Now](https://aistudynow.com)**. I build and document custom ComfyUI workflows, GGUF optimizations, and local AI solutions for consumer hardware.

* 📺 **Watch the video tutorial:** [@ComfyUIworkflows](https://www.youtube.com/@ComfyUIworkflows)
* 💬 **Follow for updates:** [X/Twitter](https://x.com/aistudynowcom)

Files changed (1):

1. `BitDance_VAE_FP16.safetensors` (added, +3 -0)

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3090d1d570583fde1940b82185b1210c0087d13d094b0201997f44c135946406
+size 920624046
```
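Since the LFS pointer above records the file's SHA-256 and byte size, you can verify your download against them; a corrupted or truncated download is a common cause of "Black Screen" decode failures. A minimal sketch, assuming the file sits in the current directory:

```python
# Sketch: verify the downloaded VAE against the SHA-256 and size recorded
# in the Git LFS pointer. The file location is an assumption -- point it
# at wherever you saved the weights.
import hashlib
from pathlib import Path

# Values taken from the LFS pointer file above.
EXPECTED_SHA256 = "3090d1d570583fde1940b82185b1210c0087d13d094b0201997f44c135946406"
EXPECTED_SIZE = 920_624_046  # bytes

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so the ~920 MB VAE never loads whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

if __name__ == "__main__":
    vae = Path("BitDance_VAE_FP16.safetensors")  # assumed download location
    if vae.exists():
        ok = (vae.stat().st_size == EXPECTED_SIZE
              and sha256_of(vae) == EXPECTED_SHA256)
        print("VAE checksum OK" if ok else "MISMATCH - re-download the file")
    else:
        print("VAE file not found - check the path")
```

A mismatch means the download should be repeated before placing the file in `ComfyUI/models/vae/`.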