---
license: openrail++
tags:
  - image-to-video
  - wan2.1
  - finetune
  - magref
model_type: image-to-video
base_model: Wan2_1-I2V
---

# MAGREF‑Video 14B I2V (WanGP-Ready)

📁 This repo bundles the FP16 MAGREF-14B Image‑to‑Video model with an INT8 quantized version optimized for WanGP.

---

## 📥 Files

- **`Wan2_1-Wan-I2V-MAGREF-14B_fp16_pure.safetensors`** – original Wan2.1 FP16 MAGREF model, cleaned for WanGP loading.  
- **`Wan2_1-Wan-I2V-MAGREF-14B_quanto_fp16_int8.safetensors`** – INT8 quantized version for faster inference, same architecture.

> These weights are based on the **fp8 MAGREF checkpoint** published by **Kijai** at [WanVideo_comfy](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-Wan-I2V-MAGREF-14B_fp8_e4m3fn.safetensors).

---

## ⚙️ Features

- **Image‑to‑Video pipeline** (i2v backbone)  
- **Optimized for prompt-based subject preservation**  
- **INT8 quantized version included for memory efficiency**
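To see why the INT8 variant helps with memory, weight storage scales with bytes per parameter. A back-of-the-envelope sketch for a 14B-parameter model (assuming every weight is quantized — real checkpoints keep some layers such as norms at higher precision, so actual file sizes differ):

```python
# Rough weight-memory estimate for a 14B-parameter backbone.
# Assumption: uniform precision across all weights (a simplification).
PARAMS = 14e9
fp16_gb = PARAMS * 2 / 1e9  # 2 bytes per FP16 weight
int8_gb = PARAMS * 1 / 1e9  # 1 byte per INT8 weight
print(f"FP16 ~{fp16_gb:.0f} GB, INT8 ~{int8_gb:.0f} GB")
```

Halving bytes per weight roughly halves the checkpoint footprint, which is the difference between fitting on a consumer GPU or not.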

---

## 🛠 Installation & Usage (WanGP)

1. **Clone/download** this repo.  
2. Ensure both `.safetensors` files are in the same directory.  
3. Use this JSON as your finetune definition:

   ```jsonc
   {
     "model": {
       "name": "MAGREF‑14B I2V",
       "architecture": "i2v",
       "modules": [],
       "URLs": [
         "Wan2_1-Wan-I2V-MAGREF-14B_fp16_pure.safetensors",
         "Wan2_1-Wan-I2V-MAGREF-14B_quanto_fp16_int8.safetensors"
       ],
       "auto_quantize": false
     },
     "multi_images_gen_type": 1,
     "prompt_enhancer": ""
   }
   ```
4. **Start WanGP** with multi-image support enabled:
   ```bash
   python wgp.py --multiple-images
   ```
5. Choose **“MAGREF‑14B I2V”**, upload reference image(s), then generate!
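If you prefer to generate the finetune definition programmatically, the steps above can be sketched as follows. The output folder name (`finetunes/`) and file name are assumptions — check where your WanGP version expects custom finetune JSON files:

```python
import json
from pathlib import Path

# Build the finetune definition shown above as a Python dict.
definition = {
    "model": {
        "name": "MAGREF-14B I2V",
        "architecture": "i2v",
        "modules": [],
        "URLs": [
            "Wan2_1-Wan-I2V-MAGREF-14B_fp16_pure.safetensors",
            "Wan2_1-Wan-I2V-MAGREF-14B_quanto_fp16_int8.safetensors",
        ],
        "auto_quantize": False,
    },
    "multi_images_gen_type": 1,
    "prompt_enhancer": "",
}

# Hypothetical target location -- adjust to your WanGP install.
out = Path("finetunes") / "magref_14b_i2v.json"
out.parent.mkdir(exist_ok=True)
out.write_text(json.dumps(definition, indent=2))
```

This writes plain JSON rather than JSONC, which is the safer choice since most loaders reject comments.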

---

## 📊 Model Info (Wan 2.1, MAGREF)

- MAGREF extends the Wan2.1 14B I2V backbone with **identity-preserving video generation**; the checkpoint used here was published by **Kijai**.  
- Built on the **Wan 2.1 framework**, a state-of-the-art open-source video foundation model with a 14B parameter backbone.  
- MAGREF specializes in maintaining consistent subject appearance across varied motions, automatically managing pose and depth.

---

## 🧠 Tips & Tricks

- 📸 Provide clear reference images—MAGREF handles blending but quality matters.  
- 🎯 Use prompt slots like: “**[Left: ref‑image 1 with X], [Right: ref‑image 2 with Y]**” for layered scene control.  
- 🎥 Fine-tune motion through clip speed (slower = softer movement).  
- ⚡ FP16 inference is slower and heavier on VRAM; use the INT8 quant when speed or memory is a constraint.

---

## 🧩 Credits

- **MAGREF 14B I2V** checkpoint by **Kijai / WanVideo_comfy**  
- **Wan 2.1** open-source video model framework  
- **WanGP** UI integration for finetune loading

---

Download, drop in WanGP, and go wild with your characters in motion!