---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 441294141
      num_examples: 22403
    - name: test
      num_bytes: 20676100
      num_examples: 1000
  download_size: 475468625
  dataset_size: 461970241
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
task_categories:
  - image-to-text
language:
  - vi
tags:
  - ocr
  - text-recognition
  - handwriting
  - handwriting-recognition
  - vietnamese
size_categories:
  - 10K<n<100K
---

## Dataset Overview

From our experience, **open-source datasets and models from the global community have helped us greatly**. However, we also learned that to push these models further, **we need more high-quality local data** to develop stronger Vietnamese AI models 💪.

This dataset consists of 23,403 Vietnamese 🇻🇳 handwritten text images collected and curated for Handwritten Text Recognition research and applications.

The original images were crawled from public internet sources. For each image, only one bounding box containing handwritten text was selected. All samples were then manually annotated by human labelers, ensuring both high diversity and high transcription accuracy.

We are actively scaling to hundreds of thousands of **human-verified** samples using the same rigorous annotation process.

## Dataset Structure

The dataset is released with two splits (a loading sketch follows the list):

- **Train:** 22,403 samples
- **Test:** 1,000 samples
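A minimal loading sketch using the 🤗 `datasets` library; the repo id below is a placeholder, so substitute this dataset's actual path on the Hugging Face Hub:

```python
from datasets import load_dataset

# Placeholder repo id — replace with the dataset's real Hub path.
ds = load_dataset("htdung167/vietnamese-handwriting")

print(ds["train"].num_rows)   # 22403
print(ds["test"].num_rows)    # 1000
print(ds["train"].features)   # {'image': Image(...), 'text': Value('string')}
```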

## Examples

- Example 1: *cô đã tạo ra nhiều hoạt động sân chơi cho các con,* ("she created many playground activities for the children,")
- Example 2: *Câu 2: Phát biểu quy luật phân li và phân li độc lập (4đ)* ("Question 2: State the laws of segregation and independent assortment (4 points)")
- Example 3: *được chân thành cảm tạ các y, bác sĩ trong khoa, và đặc biệt* ("... sincerely thank the nurses and doctors in the department, and especially ...")
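Each record pairs a decoded image with its human-annotated transcription. Continuing from the loading sketch above, one way to inspect a sample:

```python
# Continues from the loading sketch above.
sample = ds["train"][0]
print(sample["text"])               # the handwritten line's transcription
sample["image"].save("sample.png")  # `image` decodes lazily to a PIL.Image
```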

## Citation

```bibtex
@misc{doan2024vintern1befficientmultimodallarge,
      title={Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese},
      author={Khang T. Doan and Bao G. Huynh and Dung T. Hoang and Thuc D. Pham and Nhat H. Pham and Quan T. M. Nguyen and Bang Q. Vo and Suong N. Hoang},
      year={2024},
      eprint={2408.12480},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2408.12480}
}
```