hellogroup-opensource committed
Commit f573561 · verified · 1 Parent(s): f3fa12d

update benchmark value

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -88,8 +88,8 @@ The compression pipeline operates in two stages:
 | Qwen-Image | 0.539 | **0.548** |
 | Z-Image | **0.546** | 0.535 |
 | Ovis-Image | 0.530 | 0.521 |
-| **Amber-Image-10B** | 0.489 | 0.470 |
-| **Amber-Image-6B** | 0.477 | 0.456 |
+| **Amber-Image-10B** | 0.504 | 0.502 |
+| **Amber-Image-6B** | 0.491 | 0.486 |
 
 ### Text Rendering
 
@@ -103,7 +103,7 @@ The compression pipeline operates in two stages:
 | Z-Image | 0.935 | 0.936 |
 | Ovis-Image | 0.922 | **0.964** |
 | **Amber-Image-10B** | 0.911 | 0.915 |
-| **Amber-Image-6B** | 0.838 | 0.828 |
+| **Amber-Image-6B** | 0.870 | 0.876 |
 
 **CVTG-2K** — Complex visual text generation. Amber-Image-10B achieves the highest CLIPScore among all compared models, indicating strong semantic alignment. Word accuracy remains stable across increasing region counts.