Video-to-Video

Add pipeline tag and improve model card

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +31 -3
README.md CHANGED
@@ -1,7 +1,8 @@
-
 ---
 license: apache-2.0
+pipeline_tag: image-to-video
 ---
+
 # SAMA: Factorized Semantic Anchoring and Motion Alignment for Instruction-Guided Video Editing
 
 <div align="center">
@@ -12,13 +13,40 @@ license: apache-2.0
 <a href="https://github.com/Cynthiazxy123/SAMA" target="_blank"><img src="https://img.shields.io/badge/Code-111111.svg?logo=github&logoColor=white" height="22px"></a>
 </div>
 
+SAMA (factorized **S**emantic **A**nchoring and **M**otion **A**lignment) is an instruction-guided video editing framework. It factorizes video editing into two parts: semantic anchoring to establish structural planning and motion alignment to internalize temporal dynamics. This approach enables precise semantic modifications while faithfully preserving the original motion of the source video.
+
+## 🚀 Quick Start
+
+### 🛠️ Installation
+
+Recommended environment: Linux, NVIDIA GPU, CUDA 12.1, and Python 3.10.
+
+```bash
+git clone https://github.com/Cynthiazxy123/SAMA
+cd SAMA
+
+conda create -n sama python=3.10 -y
+conda activate sama
 
+pip install --upgrade pip
+pip install -r requirements.txt
+```
+
+### ▶️ Inference
+
+To run instruction-guided video editing, you will need the base `Wan2.1-T2V-14B` model and the SAMA checkpoint.
+
+The inference script is located at `infer_sh/run_sama.sh`. Edit the variables at the top of that script (such as `MODEL_ROOT`, `STATE_DICT`, `SRC_VIDEO`, and `PROMPT`) and then run:
+
+```bash
+bash infer_sh/run_sama.sh
+```
 
 ## 🤗 Available Models
 
 | Model | Status | Link |
 | --- | --- | --- |
-| SAMA-5B | Coming soon | Coming soon |
+| SAMA-5B | Coming soon | - |
 | SAMA-14B | Available | [syxbb/SAMA-14B](https://huggingface.co/syxbb/SAMA-14B) |
 
 ## 📚 Citation
@@ -33,4 +61,4 @@ license: apache-2.0
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2603.19228},
 }
-```
+```
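
The substantive change in this PR is the `pipeline_tag: image-to-video` line added to the README front matter, which is what lets the Hub index the model under a pipeline category. As a minimal illustration of what that metadata block looks like to a parser, here is a stdlib-only sketch; `parse_front_matter` is a hypothetical helper for this example, not a Hub tool:

```python
def parse_front_matter(text: str) -> dict:
    """Parse the simple 'key: value' block between the leading '---' fences."""
    lines = text.splitlines()
    if not lines or lines[0] != "---":
        return {}
    end = lines.index("---", 1)  # closing fence of the front matter
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta


# Front matter as it reads after this PR is merged
readme = """---
license: apache-2.0
pipeline_tag: image-to-video
---

# SAMA: Factorized Semantic Anchoring and Motion Alignment for Instruction-Guided Video Editing
"""

print(parse_front_matter(readme))
# {'license': 'apache-2.0', 'pipeline_tag': 'image-to-video'}
```

In real usage the Hub parses this block as YAML, so the simple splitter above only stands in for well-formed `key: value` lines like the two in this card.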