---
license: openrail++
tags:
- image-to-video
- wan2.1
- finetune
- magref
model_type: image-to-video
base_model: Wan2_1-I2V
---

# MAGREF‑14B I2V (WanGP-Ready)

📁 This repo bundles the FP16 MAGREF-14B Image‑to‑Video model together with an INT8-quantized version optimized for WanGP.

---

## 📥 Files

- **`Wan2_1-Wan-I2V-MAGREF-14B_fp16_pure.safetensors`** – the original Wan2.1 FP16 MAGREF model, cleaned for WanGP loading.
- **`Wan2_1-Wan-I2V-MAGREF-14B_quanto_fp16_int8.safetensors`** – an INT8-quantized version of the same architecture, for faster inference and lower memory use.

> These weights are based on the **fp8 MAGREF checkpoint** published by **Kijai** at [WanVideo_comfy](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-Wan-I2V-MAGREF-14B_fp8_e4m3fn.safetensors).

---

## ⚙️ Features

- **Image‑to‑video pipeline** (i2v backbone)
- **Prompt-based subject preservation**
- **INT8-quantized version included for memory efficiency**

---

## 🛠 Installation & Usage (WanGP)

1. **Clone/download** this repo.
2. Ensure both `.safetensors` files are in the same directory.
3. Use this JSON as your finetune definition:

```jsonc
{
  "model": {
    "name": "MAGREF-14B I2V",
    "architecture": "i2v",
    "modules": [],
    "URLs": [
      "Wan2_1-Wan-I2V-MAGREF-14B_fp16_pure.safetensors",
      "Wan2_1-Wan-I2V-MAGREF-14B_quanto_fp16_int8.safetensors"
    ],
    "auto_quantize": false
  },
  "multi_images_gen_type": 1,
  "prompt_enhancer": ""
}
```

4. **Start WanGP** with multi-image support enabled:

```bash
python wgp.py --multiple-images
```

5. Choose **“MAGREF-14B I2V”**, upload your reference image(s), and generate!

---

## 📊 Model Info (Wan 2.1, MAGREF)

- MAGREF extends the Wan2.1 14B I2V model with **identity-preserving video generation**; this checkpoint was originally published by **Kijai**.
- Built on the **Wan 2.1 framework**, a state-of-the-art open-source video foundation model with a 14B-parameter backbone.
- MAGREF specializes in maintaining a consistent subject appearance across varied motions, automatically managing pose and depth.
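A malformed finetune definition is a common source of loading failures, so it can help to sanity-check the JSON before pointing WanGP at it. The sketch below is a minimal, standalone check: the required keys are inferred from the definition shown in step 3 above, and `check_finetune_def` is a hypothetical helper, not part of WanGP itself.

```python
import json

# Inline copy of the finetune definition from step 3 (in practice,
# read your .json file and pass its contents in).
definition = """
{
  "model": {
    "name": "MAGREF-14B I2V",
    "architecture": "i2v",
    "modules": [],
    "URLs": [
      "Wan2_1-Wan-I2V-MAGREF-14B_fp16_pure.safetensors",
      "Wan2_1-Wan-I2V-MAGREF-14B_quanto_fp16_int8.safetensors"
    ],
    "auto_quantize": false
  },
  "multi_images_gen_type": 1,
  "prompt_enhancer": ""
}
"""

def check_finetune_def(raw: str) -> list:
    """Return a list of problems found in a finetune definition (empty = OK)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    model = data.get("model")
    if not isinstance(model, dict):
        return ["missing 'model' object"]
    problems = []
    for key in ("name", "architecture", "URLs"):
        if key not in model:
            problems.append(f"model is missing '{key}'")
    for url in model.get("URLs", []):
        if not url.endswith(".safetensors"):
            problems.append(f"unexpected weight file: {url}")
    return problems

print(check_finetune_def(definition) or "definition looks OK")
# prints "definition looks OK"
```

If you rename the weight files or edit the definition by hand, re-running the check catches truncated JSON and missing keys before WanGP starts.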
---

## 🧠 Tips & Tricks

- 📸 Provide clear reference images; MAGREF handles blending, but input quality still matters.
- 🎯 Use prompt slots such as “**[Left: ref‑image 1 with X], [Right: ref‑image 2 with Y]**” for layered scene control.
- 🎥 Fine-tune motion through clip speed (slower = softer movement).
- ⚡ FP16 is slower; use the INT8 quant for faster generation.

---

## 🧩 Credits

- **MAGREF 14B I2V** checkpoint by **Kijai / WanVideo_comfy**
- **Wan 2.1** open-source video model framework
- **WanGP** UI integration for finetune loading

---

Download, drop the files into WanGP, and set your characters in motion!