---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
language:
- zh
- en
configs:
- config_name: Full_page_ocr
  data_files:
  - split: test
    path:
    - full_page_ocr/easy/easy.parquet
    - full_page_ocr/medium/medium.parquet
    - full_page_ocr/hard/hard.parquet
- config_name: Intent
  data_files:
  - split: test
    path:
    - reasoning/intent/intent.parquet
- config_name: Bilingual
  data_files:
  - split: test
    path:
    - reasoning/bilingual/medium/bilingual_medium.parquet
    - reasoning/bilingual/hard/bilingual_hard.parquet
- config_name: Author
  data_files:
  - split: test
    path:
    - choice/author/author.parquet
- config_name: Style
  data_files:
  - split: test
    path:
    - choice/style/style.parquet
- config_name: Layout
  data_files:
  - split: test
    path:
    - choice/layout/layout.parquet
- config_name: Region
  data_files:
  - split: test
    path:
    - region-wise/region.parquet
dataset_info:
- config_name: Full_page_ocr
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: annotation
    dtype: string
  splits:
  - name: test
- config_name: Intent
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: annotation
    dtype: string
  splits:
  - name: test
- config_name: Bilingual
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: annotation
    dtype: string
  splits:
  - name: test
- config_name: Author
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: annotation
    dtype: string
  splits:
  - name: test
- config_name: Style
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: annotation
    dtype: string
  splits:
  - name: test
- config_name: Layout
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: annotation
    dtype: string
  splits:
  - name: test
- config_name: Region
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: region
    dtype: string
  - name: answer
    dtype: string
  - name: annotation
    dtype: string
  splits:
  - name: test
tags:
- art
size_categories:
- 1K<n<10K
---
# 🧠 CalliReader: Contextualizing Chinese Calligraphy via an Embedding-aligned Vision Language Model

CalliBench aims to comprehensively evaluate VLMs' performance on recognizing and understanding Chinese calligraphy.
## 📦 Dataset Summary

- **Samples:** 3,192 image–annotation pairs
- **Tasks:** full-page recognition and contextual VQA (author/layout/style multiple choice, bilingual interpretation, and intent analysis)
- **Annotations:**
  - Author, layout, and style metadata.
  - Fine-grained character-wise bounding boxes and labels.
  - Contextual VQA for certain samples.
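The task configs correspond one-to-one to parquet files on disk. As a sketch, the mapping below copies the paths from the card's `configs` metadata, and `load_config` is an illustrative pandas helper (not part of any released tooling) that concatenates all tiers of one config:

```python
import pandas as pd

# Config name -> parquet paths, mirroring the `configs` section of the card metadata.
CONFIG_FILES = {
    'Full_page_ocr': ['full_page_ocr/easy/easy.parquet',
                      'full_page_ocr/medium/medium.parquet',
                      'full_page_ocr/hard/hard.parquet'],
    'Intent': ['reasoning/intent/intent.parquet'],
    'Bilingual': ['reasoning/bilingual/medium/bilingual_medium.parquet',
                  'reasoning/bilingual/hard/bilingual_hard.parquet'],
    'Author': ['choice/author/author.parquet'],
    'Style': ['choice/style/style.parquet'],
    'Layout': ['choice/layout/layout.parquet'],
    'Region': ['region-wise/region.parquet'],
}

def load_config(name):
    """Read and concatenate all parquet tiers belonging to one config."""
    frames = [pd.read_parquet(path) for path in CONFIG_FILES[name]]
    return pd.concat(frames, ignore_index=True)
```

For example, `load_config('Full_page_ocr')` would return the easy, medium, and hard tiers as a single DataFrame.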
## 🧪 How To Use

All `.parquet` files for the different tiers can be found in the dataset's sub-folders and parsed (and further processed) with pandas.

For example, to load a sample and save its image as a `.jpg` file:
```python
import io

import pandas as pd
from PIL import Image

# Each row stores the image as a dict holding the raw encoded bytes.
df = pd.read_parquet('./full_page_ocr/hard/hard.parquet')
image_data = df.iloc[0]['image']
image = Image.open(io.BytesIO(image_data['bytes']))

# Convert first: JPEG cannot store an alpha channel.
image.convert('RGB').save('output_image.jpg')
```
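The choice configs (Author, Style, Layout) additionally expose a `question` column, so evaluation reduces to formatting each row as a multiple-choice prompt. A minimal sketch — `build_vqa_prompt` and the sample row below are illustrative stand-ins, not real dataset content:

```python
def build_vqa_prompt(row):
    """Format one choice-config row as a multiple-choice prompt for a VLM."""
    return f"{row['question']}\nAnswer with the option letter only."

# Hypothetical row mimicking the choice-config schema (id / question / answer);
# real questions come from the parquet files described above.
row = {
    'id': 'author_0001',
    'question': 'Who is the author of this calligraphy work?\n(A) Wang Xizhi\n(B) Su Shi',
    'answer': 'A',
}
prompt = build_vqa_prompt(row)
```

The model's reply can then be compared against the single-letter `answer` field to score the sample.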
## 🤗 License

Apache 2.0 – open for research and commercial use.