# Wan2.1 VACE + Phantom Lightx2V (Finetune)
Author / Creator: Ciro_Negrogni
Based on merge by: Inner_Reflections_AI
Original Guide: Wan VACE + Phantom Merge - Inner Reflections
## About This Finetune
This is the Lightx2V variant of the VACE + Phantom merge, created by Ciro_Negrogni and derived from Inner_Reflections_AI's original merge. It has been purified to FP16 for WanGP compatibility and is supplied with an optional INT8 quantization.
- Architecture: `vace_14B`
- Mode: Image/Video conditioning with multi-image reference support (2-4 refs in WanGP UI)
- Variants: FP16 (pure) and quanto INT8
## Files
- `Wan2.1_VACE_Phantom_Lightx2V_fp16_pure.safetensors`
- `Wan2.1_VACE_Phantom_Lightx2V_quanto_fp16_int8.safetensors` (or `_quanto_bf16_int8`, depending on dtype)
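Before launching WanGP, it can help to sanity-check that the downloaded weights landed in the expected `ckpts/` folder. A minimal illustrative sketch (the helper function and folder layout are assumptions for this example, not part of WanGP itself):

```python
from pathlib import Path

# Checkpoint filenames listed above; the JSON definition below
# references them relative to ckpts/.
EXPECTED = [
    "Wan2.1_VACE_Phantom_Lightx2V_fp16_pure.safetensors",
    "Wan2.1_VACE_Phantom_Lightx2V_quanto_fp16_int8.safetensors",
]

def available_variants(ckpt_dir="ckpts"):
    """Return the subset of expected checkpoint files present on disk."""
    root = Path(ckpt_dir)
    return [name for name in EXPECTED if (root / name).is_file()]
```

Only one of the two variants needs to be present; WanGP falls through the `URLs` list in the definition.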
## Usage in WanGP

Place the JSON definition in:

`app/finetunes/vace_phantom_lightx2v.json`
Example JSON snippet:

```json
{
  "model": {
    "name": "VACE Phantom Lightx2V 14B",
    "architecture": "vace_14B",
    "description": "VACE + Phantom Lightx2V merge by Ciro_Negrogni, based on Inner_Reflections_AI. Multi-image support enabled. FP16 + INT8 versions.",
    "URLs": [
      "ckpts/Wan2.1_VACE_Phantom_Lightx2V_fp16_pure.safetensors",
      "ckpts/Wan2.1_VACE_Phantom_Lightx2V_quanto_fp16_int8.safetensors"
    ],
    "modules": [],
    "auto_quantize": false
  }
}
```
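If you prefer to generate the definition file programmatically, the snippet above can be written out with a short script. The dictionary mirrors the example exactly and the output path follows the instruction above; the script itself is a sketch, not an official WanGP API:

```python
import json
from pathlib import Path

def write_finetune_definition(base_dir="app/finetunes"):
    """Write the VACE Phantom Lightx2V finetune definition for WanGP."""
    definition = {
        "model": {
            "name": "VACE Phantom Lightx2V 14B",
            "architecture": "vace_14B",
            "description": (
                "VACE + Phantom Lightx2V merge by Ciro_Negrogni, based on "
                "Inner_Reflections_AI. Multi-image support enabled. "
                "FP16 + INT8 versions."
            ),
            "URLs": [
                "ckpts/Wan2.1_VACE_Phantom_Lightx2V_fp16_pure.safetensors",
                "ckpts/Wan2.1_VACE_Phantom_Lightx2V_quanto_fp16_int8.safetensors",
            ],
            "modules": [],
            "auto_quantize": False,
        }
    }
    path = Path(base_dir) / "vace_phantom_lightx2v.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(definition, indent=2))
    return path
```

Restart (or refresh) WanGP after writing the file so the finetune appears in the model dropdown.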
## Notes
- Experimental merge: results may vary. Adjust steps, guidance, and reference images for best quality.
- Optimized for multi-image setups with clear subject separation.
- Refresh the UI if you see dropdown errors (`Value: on is not in the list...`).
## Credits
- Original Merge & Guide: Inner_Reflections_AI
- Lightx2V Variant: Ciro_Negrogni
- WanGP Packaging: Conversion to FP16/INT8 and JSON prep for WanGP finetune system.