---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
language:
- zh
- en
configs:
- config_name: Full_page_ocr
data_files:
- split: test
path:
- full_page_ocr/easy/easy.parquet
- full_page_ocr/medium/medium.parquet
- full_page_ocr/hard/hard.parquet
- config_name: Intent
data_files:
- split: test
path:
- reasoning/intent/intent.parquet
- config_name: Bilingual
data_files:
- split: test
path:
- reasoning/bilingual/medium/bilingual_medium.parquet
- reasoning/bilingual/hard/bilingual_hard.parquet
- config_name: Author
data_files:
- split: test
path:
- choice/author/author.parquet
- config_name: Style
data_files:
- split: test
path:
- choice/style/style.parquet
- config_name: Layout
data_files:
- split: test
path:
- choice/layout/layout.parquet
- config_name: Region
data_files:
- split: test
path:
- region-wise/region.parquet
dataset_info:
- config_name: Full_page_ocr
features:
- name: id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: annotation
dtype: string
splits:
- name: test
- config_name: Intent
features:
- name: id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: annotation
dtype: string
splits:
- name: test
- config_name: Bilingual
features:
- name: id
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: annotation
dtype: string
splits:
- name: test
- config_name: Author
features:
- name: id
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
- name: annotation
dtype: string
splits:
- name: test
- config_name: Style
features:
- name: id
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
- name: annotation
dtype: string
splits:
- name: test
- config_name: Layout
features:
- name: id
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
- name: annotation
dtype: string
splits:
- name: test
- config_name: Region
features:
- name: id
dtype: string
- name: image
dtype: image
- name: region
dtype: string
- name: answer
dtype: string
- name: annotation
dtype: string
splits:
- name: test
tags:
- art
size_categories:
- 1K<n<10K
---
# 🧠 CalliReader: Contextualizing Chinese Calligraphy via an Embedding-aligned Vision Language Model
<div align="center">
<a href="https://github.com/LoYuXr/CalliReader">📂 Code</a>
<a href="https://arxiv.org/pdf/2503.06472">📄 Paper</a>
</div>
**CalliBench** is designed to comprehensively evaluate VLMs' performance on recognizing and understanding Chinese calligraphy.
## 📦 Dataset Summary
* **Samples**: 3,192 image–annotation pairs
* **Tasks**: **Full-page recognition** and **Contextual VQA** (choice of author/layout/style, bilingual interpretation, and intent analysis).
* **Annotations**:
* Metadata of author, layout, and style.
* Fine-grained annotations of **character-wise bounding boxes and labels**.
  * Contextual VQA annotations for selected samples.
## 🧪 How To Use
All **.parquet** files for the different tiers can be found in the sub-folders of **data**, and **pandas** can be used to parse and further process them.
For example, to load a sample and save its image as a .jpg file:
```python
import io

import pandas as pd
from PIL import Image

# Read one tier; each row's 'image' column is a dict holding the
# encoded image file under the 'bytes' key.
df = pd.read_parquet('./full_page_ocr/hard/hard.parquet')
image_data = df.iloc[0]['image']

# Decode the raw bytes into a PIL image and save it as a JPEG.
image = Image.open(io.BytesIO(image_data['bytes']))
image.save('output_image.jpg')
```
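The same decoding pattern can be tried without downloading anything: the sketch below builds a synthetic one-row frame that mimics the schema above (the id, answer, and annotation values here are placeholders, not real CalliBench samples), then round-trips the image bytes.

```python
import io

import pandas as pd
from PIL import Image

# Encode a small synthetic image to JPEG bytes, as stored in the parquet files.
buf = io.BytesIO()
Image.new('RGB', (64, 64), 'white').save(buf, format='JPEG')

# One mock row following the Full_page_ocr schema (id / image / answer / annotation).
df = pd.DataFrame([{
    'id': 'demo-0',                       # placeholder id
    'image': {'bytes': buf.getvalue()},   # dict with encoded bytes, as in the real files
    'answer': '示例',                      # placeholder transcription
    'annotation': '{}',                   # placeholder annotation
}])

# Decode the bytes back into a PIL image, exactly as for a real parquet row.
row = df.iloc[0]
image = Image.open(io.BytesIO(row['image']['bytes']))
print(image.size)  # (64, 64)
```

Alternatively, the Hugging Face `datasets` library can load a config by name (e.g. `load_dataset(repo_id, "Full_page_ocr", split="test")`, with `repo_id` being this dataset's Hub identifier), which decodes the `image` feature to PIL images automatically.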
## 🤗 License
Apache 2.0 – open for research and commercial use.