Add metadata to model card
#1
by nielsr (HF Staff), opened
README.md CHANGED

```diff
@@ -1,3 +1,8 @@
+---
+license: cc-by-nc-4.0
+library_name: transformers
+pipeline_tag: image-text-to-text
+---
 
 ### Introduction
 Paper: [Paper](https://arxiv.org/abs/2502.18411),
@@ -37,8 +42,4 @@ By integrating OmniAlign-V datasets in Supervised Fine-tuning(SFT) stage, we can
 
 For MM-AlignBench and WildVision, A/B denotes Winning Rate/Reward.
 ### How to use
-Please refer to our [Github](https://github.com/PhoenixZ810/OmniAlign-V) for more details about training and evaluation.
-
-
-
-
+Please refer to our [Github](https://github.com/PhoenixZ810/OmniAlign-V) for more details about training and evaluation.
```
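The YAML block this PR adds is front matter that the Hub reads to populate the model card's license, library, and pipeline fields. As a rough illustration of how such front matter is separated from the Markdown body, here is a minimal hand-rolled parser; it is only a sketch for flat `key: value` pairs and is not the Hub's actual implementation (which accepts full YAML):

```python
def parse_front_matter(readme_text: str) -> dict:
    """Extract flat key: value pairs from a ----delimited YAML front-matter block.

    Illustrative sketch only: handles simple scalar fields like the ones in
    this PR, not nested YAML. Returns {} if no complete block is found.
    """
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no opening marker: document has no front matter
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            return meta  # closing marker: block is complete
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return {}  # opening marker but no closing one: treat as malformed

card = """---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: image-text-to-text
---

### Introduction
"""
print(parse_front_matter(card)["pipeline_tag"])  # image-text-to-text
```

The `pipeline_tag: image-text-to-text` field is what lets the Hub file the model under the right task and wire up the inference widget; `library_name: transformers` tells it which loading snippet to show.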