nielsr (HF Staff) committed
Commit a44c44b · verified · 1 Parent(s): c7cf16c

Add pipeline_tag, library_name, correct Arxiv badge link, and add authors to model card

This PR enhances the model card by making the following improvements:

- **Adds `pipeline_tag: image-to-image`**: This will ensure the model is properly categorized and discoverable under the "Image-to-Image" filter on the Hugging Face Hub, reflecting its purpose in motion-centric image editing.
- **Adds `library_name: diffusers`**: Evidence from `adapter_config.json` (specifying `diffusers.models.transformers.transformer_qwenimage` as the parent library) and mentions in the GitHub README (regarding `diffusers` dependencies) confirm compatibility. This addition will enable an automated "How to use" code snippet for the `diffusers` library, making it easier for users to get started.
- **Corrects arXiv badge link**: The existing badge with "Arxiv-MotionEdit" text was incorrectly linking to the project page. This has been updated to correctly link to the arXiv paper URL: `https://arxiv.org/abs/2512.10284`.
- **Adds Authors section**: The author list from the GitHub README has been added to the model card content for better attribution and information.

No other changes were necessary; the remaining content and links (GitHub repo, project page, usage examples) are already present and correctly formatted.
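Taken together, the metadata edits produce the following README front matter (this matches the diff in this commit; note that YAML mapping keys are order-insensitive, so the reordering of `license` and `language` is cosmetic):

```yaml
---
base_model:
- Qwen/Qwen-Image-Edit-2509
language:
- en
license: mit
base_model_relation: adapter
pipeline_tag: image-to-image
library_name: diffusers
---
```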

Files changed (1)

1. README.md +9 -10

README.md CHANGED

@@ -1,10 +1,12 @@
 ---
-license: mit
-language:
-- en
 base_model:
 - Qwen/Qwen-Image-Edit-2509
+language:
+- en
+license: mit
 base_model_relation: adapter
+pipeline_tag: image-to-image
+library_name: diffusers
 ---
 
 <p align="center">
@@ -13,12 +15,15 @@ base_model_relation: adapter
 
 # MotionEdit: Benchmarking and Learning Motion-Centric Image Editing
 
-[![MotionEdit](https://img.shields.io/badge/Arxiv-MotionEdit-b31b1b.svg?logo=arXiv)](https://motion-edit.github.io/)
+[![MotionEdit](https://img.shields.io/badge/Arxiv-MotionEdit-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2512.10284)
 [![GitHub Stars](https://img.shields.io/github/stars/elainew728/motion-edit?style=flat&logo=github&logoColor=whitesmoke)](https://github.com/elainew728/motion-edit/tree/main)
 [![hf_dataset](https://img.shields.io/badge/🤗-HF_Dataset-red.svg)](https://huggingface.co/datasets/elaine1wan/MotionEdit-Bench)
 [![Twitter](https://img.shields.io/badge/-Twitter@yixin_wan_-black?logo=twitter&logoColor=1D9BF0)](https://x.com/yixin_wan_?s=21&t=EqTxUZPAldbQnbhLN-CETA)
 [![proj_page](https://img.shields.io/badge/Project_Page-ffcae2?style=flat-square)](https://motion-edit.github.io/) <br>
 
+[Yixin Wan](https://elainew728.github.io/)<sup>1,2</sup>, [Lei Ke](https://www.kelei.site/)<sup>1</sup>, [Wenhao Yu](https://wyu97.github.io/)<sup>1</sup>, [Kai-Wei Chang](https://web.cs.ucla.edu/~kwchang/)<sup>2</sup>, [Dong Yu](https://sites.google.com/view/dongyu888/)<sup>1</sup>
+
+<sup>1</sup>Tencent AI, Seattle &nbsp; <sup>2</sup>University of California, Los Angeles
 
 # ✨ Overview
 **MotionEdit** is a novel dataset and benchmark for motion-centric image editing. We also propose **MotionNFT** (Motion-guided Negative-aware FineTuning), a post-training framework with motion alignment rewards to guide models on motion image editing task.
@@ -84,12 +89,6 @@ python inference/run_image_editing.py \
   --seed 42
 ```
 
-<!-- ## Authors
-[Yixin Wan](https://elainew728.github.io/)<sup>1,2</sup>, [Lei Ke](https://www.kelei.site/)<sup>1</sup>, [Wenhao Yu](https://wyu97.github.io/)<sup>1</sup>, [Kai-Wei Chang](https://web.cs.ucla.edu/~kwchang/)<sup>2</sup>, [Dong Yu](https://sites.google.com/view/dongyu888/)<sup>1</sup>
-
-<sup>1</sup>Tencent AI, Seattle &nbsp; <sup>2</sup>University of California, Los Angeles
--->
-
 # ✏️ Citing
 
 ```bibtex