Update paper link in model card

#1
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +25 -24
README.md CHANGED
@@ -1,26 +1,27 @@
  ---
- library_name: transformers
- tags:
- - Diffusion_Multimodal_Large_Language_Model
- - MLLM
- - Discrete_Diffusion
- license: apache-2.0
+ base_model:
+ - Dream-org/Dream-v0-Instruct-7B
  datasets:
  - liuhaotian/LLaVA-CC3M-Pretrain-595K
  - lmms-lab/LLaVA-NeXT-Data
  language:
  - en
+ library_name: transformers
+ license: apache-2.0
  metrics:
  - accuracy
- base_model:
- - Dream-org/Dream-v0-Instruct-7B
  pipeline_tag: image-text-to-text
+ tags:
+ - Diffusion_Multimodal_Large_Language_Model
+ - MLLM
+ - Discrete_Diffusion
  ---
+
  <img src="https://cdn-uploads.huggingface.co/production/uploads/635364b3c41f548fe39db945/T6ffjtAkFkI76QjXmN6iR.png" alt="Dimple" style="width:100%;"/>


  <p align="center">
- 🤗 <a href="https://huggingface.co/rp-yu/Dimple-7B">Model</a>&nbsp&nbsp | &nbsp&nbsp 💬 <a href="https://huggingface.co/spaces/rp-yu/Dimple-7B">Demo: Chat with Dimple</a>&nbsp&nbsp | &nbsp&nbsp📑 <a href="https://arxiv.org/abs/">Paper</a>&nbsp&nbsp | &nbsp&nbsp ✨ <a href="https://github.com/yu-rp/Dimple">Code</a>&nbsp&nbsp
+ 🤗 <a href="https://huggingface.co/rp-yu/Dimple-7B">Model</a>&nbsp&nbsp | &nbsp&nbsp 💬 <a href="https://huggingface.co/spaces/rp-yu/Dimple-7B">Demo: Chat with Dimple</a>&nbsp&nbsp | &nbsp&nbsp📑 <a href="https://huggingface.co/papers/2505.16990">Paper</a>&nbsp&nbsp | &nbsp&nbsp ✨ <a href="https://github.com/yu-rp/Dimple">Code</a>&nbsp&nbsp
  </p>

  # 💧 Dimple-7B
@@ -50,20 +51,20 @@ Trained on the same dataset as LLaVA-NEXT, **Dimple-7B surpasses LLaVA-NEXT-7B b
  | **Training Samples** | 1.3M | 1.2M | 1.3M | 2.4M | 27.8M | 1.5B | - |
  | **Training Tokens** | 0.8B | - | - | - | - | - | 2.6T |
  | **Base LLM** | Dream (Qwen2.5) | Vicuna | Vicuna-1.5 | Vicuna | Qwen2.5 | Qwen | Qwen2.5 |
- | **GQA** | 59.2 | 62.0 | 64.8 | 64.9 | - | 59.3 | - |
- | **MMBench (en test)** | 74.6 | 64.3 | 68.7 | 68.4 | - | - | 83.5 |
- | **MME (Perception)** | 1514 | 1510 | 1519 | 1528 | - | - | - |
- | **MME (Cognition)** | 432 | - | 332 | - | - | - | - |
- | **MME (Total)** | 1946 | - | 1851 | - | - | - | 2347 |
- | **POPE** | 86.2 | 85.8 | 86.7 | 88.8 | - | - | - |
- | **MMMU (val)** | 45.2 | - | 35.8 | 36.3 | 56.1 | - | 58.6 |
- | **SQA (img)** | 77.1 | 66.8 | 72.8 | 70.0 | - | - | - |
- | **AI2D** | 74.4 | - | 65.4 | - | 83.9 | 62.3 | 83.9 |
- | **ChartQA** | 63.4 | - | 54.9 | 67.7 | 86.4 | 65.7 | 87.3 |
- | **TextVQA** | 61.6 | - | 64.8 | - | 83.0 | - | - |
- | **OCRBench** | 565 | - | 490 | 529 | - | - | - |
- | **MathVista (mini)** | 42.3 | - | 33.0 | - | 63.8 | 37.0 | 68.2 |
- | **MMVet** | 41.2 | 31.1 | 47.3 | - | 62.2 | - | 67.1 |
+ | **GQA** | 59.2 | 62.0 | 64.8 | 64.9 | - | 59.3 | - |
+ | **MMBench (en test)** | 74.6 | 64.3 | 68.7 | 68.4 | - | - | 83.5 |
+ | **MME (Perception)** | 1514 | 1510 | 1519 | 1528 | - | - | - |
+ | **MME (Cognition)** | 432 | - | 332 | - | - | - | - |
+ | **MME (Total)** | 1946 | - | 1851 | - | - | - | 2347 |
+ | **POPE** | 86.2 | 85.8 | 86.7 | 88.8 | - | - | - |
+ | **MMMU (val)** | 45.2 | - | 35.8 | 36.3 | 56.1 | - | 58.6 |
+ | **SQA (img)** | 77.1 | 66.8 | 72.8 | 70.0 | - | - | - |
+ | **AI2D** | 74.4 | - | 65.4 | - | 83.9 | 62.3 | 83.9 |
+ | **ChartQA** | 63.4 | - | 54.9 | 67.7 | 86.4 | 65.7 | 87.3 |
+ | **TextVQA** | 61.6 | - | 64.8 | - | 83.0 | - | - |
+ | **OCRBench** | 565 | - | 490 | 529 | - | - | - |
+ | **MathVista (mini)** | 42.3 | - | 33.0 | - | 63.8 | 37.0 | 68.2 |
+ | **MMVet** | 41.2 | 31.1 | 47.3 | - | 62.2 | - | 67.1 |

  ---

@@ -153,4 +154,4 @@ for j in range(len(messages)):
  ## 📚 Citation

  > Citation information will be provided soon.
- > Please stay tuned if you are interested in citing **Dimple** in your work.
+ > Please stay tuned if you are interested in citing **Dimple** in your work.
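
The front matter reordered above is what the Hub reads to wire the model into `transformers`: `library_name` selects the library, `pipeline_tag: image-text-to-text` files the repo under the right task, and `base_model` links back to Dream-org/Dream-v0-Instruct-7B. A minimal loading sketch, assuming Dimple-7B follows the common custom-code pattern for Hub MLLMs; the exact classes and generation API may differ, and the README's own snippet is authoritative:

```python
# Minimal sketch, not the card's official snippet: assumes the repo ships its
# own model/processor code behind trust_remote_code, as most custom MLLMs on
# the Hub do. Exact class names and generation arguments may differ.
from transformers import AutoModel, AutoProcessor

model_id = "rp-yu/Dimple-7B"  # model repo linked in the card header
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
```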