Improve model card for MagicAssessor-7B: Add pipeline tag, library name, and links

#1 opened by nielsr (HF Staff)

Files changed (1): README.md (+11 −10)
README.md CHANGED
```diff
@@ -1,18 +1,19 @@
 ---
-license: mit
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
+license: mit
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
-Paper:
-MagicMirror: A Large-Scale Dataset and Benchmark for Fine-Grained Artifacts Assessment in Text-to-Image Generation
-https://arxiv.org/abs/2509.10260
-
-Dataset:
-https://huggingface.co/datasets/wj-inf/MagicData340k
-
-Model:
-https://huggingface.co/datasets/wj-inf/MagicAssessor-7B
-
-Benchmark:
-https://github.com/wj-inf/MagicMirror
+# MagicAssessor-7B
+
+MagicAssessor-7B is a Vision-Language Model (VLM) developed for fine-grained artifact assessment in text-to-image generation. It is a core component of the **MagicMirror** framework, which systematically evaluates the perceptual quality of generated images and identifies anatomical and structural flaws in them.
+
+The model was introduced in the paper [MagicMirror: A Large-Scale Dataset and Benchmark for Fine-Grained Artifacts Assessment in Text-to-Image Generation](https://arxiv.org/abs/2509.10260).
+
+* **Paper**: [arXiv:2509.10260](https://arxiv.org/abs/2509.10260) | [Hugging Face Papers: 2509.10260](https://huggingface.co/papers/2509.10260)
+* **Project Page**: https://wj-inf.github.io/MagicMirror-page/
+* **Code / GitHub Repository (MagicMirror Benchmark)**: https://github.com/wj-inf/MagicMirror
+* **Dataset (MagicData340K)**: https://huggingface.co/datasets/wj-inf/MagicData340k
+* **Model (MagicAssessor-7B - this repository)**: https://huggingface.co/wj-inf/MagicAssessor-7B
```
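Since the card now declares `library_name: transformers` and `pipeline_tag: image-text-to-text`, a short usage sketch may help readers. This is an assumption based on the base model's (Qwen/Qwen2.5-VL-7B-Instruct) standard chat interface, not an official snippet from the authors; the prompt wording and the placeholder image URL below are illustrative, not the model's actual evaluation protocol.

```python
# Hypothetical usage sketch for MagicAssessor-7B. Assumes the standard
# Qwen2.5-VL chat-message format used by transformers; the prompt text
# and image URL are illustrative placeholders.

def build_assessment_messages(image_url: str, caption: str) -> list[dict]:
    """Build a Qwen2.5-VL-style chat message asking for artifact assessment."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},
                {
                    "type": "text",
                    "text": (
                        "Assess this generated image for fine-grained artifacts "
                        f"(generation prompt: '{caption}'). List any anatomical "
                        "or structural flaws you find."
                    ),
                },
            ],
        }
    ]


messages = build_assessment_messages(
    "https://example.com/generated.png",  # placeholder image
    "a person playing guitar",
)

# Inference would then follow the usual Qwen2.5-VL pattern (not run here):
#
#   from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
#   model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#       "wj-inf/MagicAssessor-7B", torch_dtype="auto", device_map="auto"
#   )
#   processor = AutoProcessor.from_pretrained("wj-inf/MagicAssessor-7B")
#   text = processor.apply_chat_template(
#       messages, tokenize=False, add_generation_prompt=True
#   )
```

The message-building step is pure Python and model-free; only the commented-out loading and chat-template calls require the checkpoint to be downloaded.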