Add pipeline_tag, library_name, correct arXiv badge link, and add authors to model card
#1 by nielsr (HF Staff) · opened

README.md CHANGED
@@ -1,10 +1,12 @@
 ---
-license: mit
-language:
-- en
 base_model:
 - Qwen/Qwen-Image-Edit-2509
+language:
+- en
+license: mit
 base_model_relation: adapter
+pipeline_tag: image-to-image
+library_name: diffusers
 ---
 
 <p align="center">
@@ -13,12 +15,15 @@ base_model_relation: adapter
 
 # MotionEdit: Benchmarking and Learning Motion-Centric Image Editing
 
-[](https://
+[](https://arxiv.org/abs/2512.10284)
 [](https://github.com/elainew728/motion-edit/tree/main)
 [](https://huggingface.co/datasets/elaine1wan/MotionEdit-Bench)
 [](https://x.com/yixin_wan_?s=21&t=EqTxUZPAldbQnbhLN-CETA)
 [](https://motion-edit.github.io/) <br>
 
+[Yixin Wan](https://elainew728.github.io/)<sup>1,2</sup>, [Lei Ke](https://www.kelei.site/)<sup>1</sup>, [Wenhao Yu](https://wyu97.github.io/)<sup>1</sup>, [Kai-Wei Chang](https://web.cs.ucla.edu/~kwchang/)<sup>2</sup>, [Dong Yu](https://sites.google.com/view/dongyu888/)<sup>1</sup>
+
+<sup>1</sup>Tencent AI, Seattle <sup>2</sup>University of California, Los Angeles
 
 # ✨ Overview
 **MotionEdit** is a novel dataset and benchmark for motion-centric image editing. We also propose **MotionNFT** (Motion-guided Negative-aware FineTuning), a post-training framework with motion alignment rewards to guide models on the motion image editing task.
@@ -84,12 +89,6 @@
 --seed 42
 ```
 
-<!-- ## Authors
-[Yixin Wan](https://elainew728.github.io/)<sup>1,2</sup>, [Lei Ke](https://www.kelei.site/)<sup>1</sup>, [Wenhao Yu](https://wyu97.github.io/)<sup>1</sup>, [Kai-Wei Chang](https://web.cs.ucla.edu/~kwchang/)<sup>2</sup>, [Dong Yu](https://sites.google.com/view/dongyu888/)<sup>1</sup>
-
-<sup>1</sup>Tencent AI, Seattle <sup>2</sup>University of California, Los Angeles
--->
-
 # ✏️ Citing
 
 ```bibtex
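For reference, a minimal sketch of what the new metadata advertises: `library_name: diffusers` together with `base_model: Qwen/Qwen-Image-Edit-2509` and `base_model_relation: adapter` suggests weights that load on top of the base pipeline, and `pipeline_tag: image-to-image` describes the input/output contract. The snippet below assumes the adapter ships LoRA-format weights and uses a placeholder repo id; neither detail is confirmed by this diff.

```python
# Hedged sketch: load the declared base model with diffusers, then attach
# this repo's adapter weights. "<adapter-repo-id>" is a placeholder, and the
# LoRA format is an assumption implied by `base_model_relation: adapter`.
import torch
from PIL import Image
from diffusers import DiffusionPipeline

# Base model named in the card's `base_model` field; DiffusionPipeline
# resolves the concrete pipeline class from the repo's model_index.json.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Attach the adapter (assumed LoRA weights; placeholder repo id).
pipe.load_lora_weights("<adapter-repo-id>")

# `pipeline_tag: image-to-image`: a source image plus an edit instruction
# in, the edited image out.
source = Image.open("input.png").convert("RGB")
edited = pipe(image=source, prompt="make the dog jump over the fence").images[0]
edited.save("edited.png")
```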