# Wan2.1 VACE + Phantom (Finetune)
Author / Creator: Inner_Reflections_AI
Original Guide: Wan VACE + Phantom Merge – An Inner Reflections Guide
## About This Finetune
A regular VACE + Phantom merge (non-Causvid) prepared for WanGP.
Converted to pure FP16 for reliable loading and optional INT8 quantization.
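To verify the pure-FP16 claim without loading any weights, you can inspect the safetensors header: the file format begins with an 8-byte little-endian length followed by a JSON header that records each tensor's dtype. A minimal, stdlib-only sketch (the checkpoint path shown is the one listed below; adjust it if your filename differs):

```python
import json
import struct

def read_safetensors_dtypes(path):
    """Read only the JSON header of a .safetensors file and report
    each tensor's dtype, without loading any weight data."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian u64 giving the header length.
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    # Skip the optional "__metadata__" entry; everything else is a tensor.
    return {name: meta["dtype"]
            for name, meta in header.items()
            if name != "__metadata__"}

# Example: confirm every tensor in the pure-FP16 variant is F16.
# dtypes = read_safetensors_dtypes("ckpts/Wan2.1VACE_Phantom_fp16_pure.safetensors")
# assert set(dtypes.values()) == {"F16"}
```

This reads only the header, so it is fast even on a 14B checkpoint.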
- Architecture: `vace_14B`
- Mode: Image/Video conditioning with multi-image reference support (2-4 refs in custom WanGP builds)
- Variants: FP16 (pure) and quanto INT8
## Files
- `Wan2.1VACE_Phantom_fp16_pure.safetensors`
- `Wan2.1VACE_Phantom_quanto_fp16_int8.safetensors` (or `_quanto_bf16_int8` depending on your dtype selection in WanGP)
Replace these with your final filenames/links if different.
## Usage in WanGP
Place the finetune JSON in:
`app/finetunes/vace_phantom.json`
Example JSON (matching the regular VACE + Phantom merge):
```json
{
  "model": {
    "name": "VACE Phantom 14B",
    "architecture": "vace_14B",
    "description": "Regular VACE + Phantom merge by Inner_Reflections_AI, purified for WanGP. Multi-image references supported.",
    "URLs": [
      "ckpts/Wan2.1VACE_Phantom_fp16_pure.safetensors",
      "ckpts/Wan2.1VACE_Phantom_quanto_fp16_int8.safetensors"
    ],
    "modules": [],
    "auto_quantize": false
  }
}
```
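Before dropping the JSON into `app/finetunes/`, a quick sanity check can catch typos. The required-key set below is an assumption inferred from the example above, not WanGP's official schema:

```python
import json

# Assumed minimal key set, based on the example JSON in this card.
REQUIRED_MODEL_KEYS = {"name", "architecture", "URLs"}

def check_finetune_json(text):
    """Sanity-check a finetune JSON string: parse it, confirm the
    top-level 'model' object exists, and confirm the assumed required
    keys are present. Returns the parsed dict or raises ValueError."""
    cfg = json.loads(text)
    model = cfg.get("model")
    if not isinstance(model, dict):
        raise ValueError("top-level 'model' object is missing")
    missing = REQUIRED_MODEL_KEYS - model.keys()
    if missing:
        raise ValueError(f"missing keys in 'model': {sorted(missing)}")
    if not model["URLs"]:
        raise ValueError("'URLs' must list at least one checkpoint path")
    return cfg
```

Run it on the file before launching WanGP; a `ValueError` here is much easier to debug than a silent failure in the UI.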
## Notes
- This is an experimental finetune. Tune steps, guidance scale, and reference image setup to taste.
- If you see a Gradio dropdown error (`Value: on is not in the list...`), refresh the UI and reselect the option.
## Credits
- Merge & Guide: Inner_Reflections_AI
- WanGP Packaging: FP16 conversion and a finetune JSON layout compatible with WanGP.