
📖 Wan2.1 VACE + Phantom (Finetune)

Author / Creator: Inner_Reflections_AI
Original Guide: Wan VACE + Phantom Merge – An Inner Reflections Guide


🔹 About This Finetune

A regular VACE + Phantom merge (non-Causvid) prepared for WanGP.
Converted to pure FP16 for reliable loading, with optional INT8 quantization.

  • Architecture: vace_14B
  • Mode: Image/Video conditioning with multi-image reference support (2–4 refs in custom WanGP builds)
  • Variants: FP16 (pure) and quanto INT8
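Conceptually, the FP16 "purification" just casts every tensor in the checkpoint to float16. A minimal numpy sketch of that idea (an actual conversion would load and save `.safetensors` files, e.g. via the `safetensors` library, rather than plain arrays):

```python
import numpy as np

def purify_fp16(state_dict):
    """Cast every tensor in a checkpoint-like dict to float16.
    Sketch only: tensor names below are made up for illustration."""
    return {name: t.astype(np.float16) for name, t in state_dict.items()}

# Toy "checkpoint" with mixed precisions
ckpt = {
    "blocks.0.attn.weight": np.ones((4, 4), dtype=np.float32),
    "blocks.0.attn.bias": np.zeros(4, dtype=np.float64),
}
pure = purify_fp16(ckpt)
assert all(t.dtype == np.float16 for t in pure.values())
```

A uniform dtype avoids mixed-precision loading surprises; the INT8 variant is then produced separately (quanto) for lower-VRAM setups.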

🔹 Files

  • Wan2.1VACE_Phantom_fp16_pure.safetensors
  • Wan2.1VACE_Phantom_quanto_fp16_int8.safetensors (or _quanto_bf16_int8 depending on your dtype selection in WanGP)

Replace these with your final filenames/links if different.


🔹 Usage in WanGP

Place the finetune JSON in:

app/finetunes/vace_phantom.json

Example JSON (matching the regular VACE + Phantom merge):

{
  "model": {
    "name": "VACE Phantom 14B",
    "architecture": "vace_14B",
    "description": "Regular VACE + Phantom merge by Inner_Reflections_AI, purified for WanGP. Multi-image references supported.",
    "URLs": [
      "ckpts/Wan2.1VACE_Phantom_fp16_pure.safetensors",
      "ckpts/Wan2.1VACE_Phantom_quanto_fp16_int8.safetensors"
    ],
    "modules": [],
    "auto_quantize": false
  }
}
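A quick sanity check of the finetune JSON can catch typos before launching WanGP. This is a hedged sketch: the field names follow the example above, not a verified WanGP schema.

```python
import json

# Finetune spec as in the example above; "URLs" paths are assumed to be
# relative to the WanGP install directory.
spec = json.loads("""
{
  "model": {
    "name": "VACE Phantom 14B",
    "architecture": "vace_14B",
    "URLs": ["ckpts/Wan2.1VACE_Phantom_fp16_pure.safetensors"],
    "modules": [],
    "auto_quantize": false
  }
}
""")

model = spec["model"]
assert model["architecture"] == "vace_14B"
assert all(u.endswith(".safetensors") for u in model["URLs"])
```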

🔹 Notes

  • This is an experimental finetune. Tune steps, guidance scale, and reference image setup to taste.
  • If you see a Gradio dropdown error (Value: on is not in the list...), refresh the UI and reselect the option.

🔹 Credits

  • Merge & Guide: Inner_Reflections_AI
  • WanGP Packaging: FP16 conversion and finetune JSON layout for WanGP.