---
tags:
- ocr
- document-processing
- dots-ocr
- multilingual
- markdown
- uv-script
- generated
configs:
- config_name: firered-ocr
data_files:
- split: train
path: firered-ocr/train-*
dataset_info:
config_name: firered-ocr
features:
- name: image
dtype: image
- name: text
dtype: string
- name: image_name
dtype: string
- name: type
dtype: string
- name: source_dir
dtype: string
- name: markdown
dtype: string
- name: inference_info
dtype: string
splits:
- name: train
num_bytes: 5830923
num_examples: 10
download_size: 5805418
dataset_size: 5830923
---
# Document OCR using dots.ocr
This dataset contains OCR results from images in [NealCaren/InkBench](https://huggingface.co/datasets/NealCaren/InkBench), produced with [dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr), a compact 1.7B-parameter multilingual document parsing model.
## Processing Details
- **Source Dataset**: [NealCaren/InkBench](https://huggingface.co/datasets/NealCaren/InkBench)
- **Model**: [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)
- **Number of Samples**: 10
- **Processing Time**: 2.6 min
- **Processing Date**: 2026-03-05 21:00 UTC
### Configuration
- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 16
- **Prompt Mode**: ocr
- **Max Model Length**: 8,192 tokens
- **Max Output Tokens**: 8,192
- **GPU Memory Utilization**: 80.0%
## Model Information
dots.ocr is a compact multilingual document parsing model that excels at:
- 🌍 **100+ Languages** - Multilingual document support
- 📊 **Table extraction** - Structured data recognition
- 📐 **Formulas** - Mathematical notation preservation
- 📝 **Layout-aware** - Reading order and structure preservation
- 🎯 **Compact** - Only 1.7B parameters
## Dataset Structure
The dataset contains all original columns plus:
- `markdown`: The extracted text in markdown format
- `inference_info`: JSON list tracking all OCR models applied to this dataset
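The `inference_info` column can be inspected without loading the full dataset. A minimal sketch of the assumed structure, using the `column_name` and `model_id` fields referenced in the usage example below (any other fields an entry may carry are not shown):

```python
import json

# Assumed shape of inference_info: a JSON-encoded list with one entry
# per OCR pass applied to the dataset. The field names below match
# those used in the usage example; the values here are illustrative.
sample = json.dumps([
    {
        "column_name": "markdown",
        "model_id": "rednote-hilab/dots.ocr",
    }
])

entries = json.loads(sample)
for entry in entries:
    print(f"{entry['column_name']} <- {entry['model_id']}")
```

Because each pass appends an entry, the list grows if further OCR models are run over the same dataset.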
## Usage
```python
from datasets import load_dataset
import json
# Load the dataset
dataset = load_dataset("{output_dataset_id}", split="train")
# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break
# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```
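A common follow-up is exporting each example's markdown to disk, one file per source image. A minimal sketch using a stand-in list of rows with the dataset's `image_name` and `markdown` columns (in practice, iterate over the loaded dataset instead):

```python
import pathlib
import tempfile

# Stand-in rows mimicking the dataset schema (image column omitted).
rows = [
    {"image_name": "page_001.png", "markdown": "# Page 1\nHello"},
    {"image_name": "page_002.png", "markdown": "# Page 2\nWorld"},
]

out_dir = pathlib.Path(tempfile.mkdtemp())
for row in rows:
    # One .md file per source image, named after image_name.
    target = out_dir / (pathlib.Path(row["image_name"]).stem + ".md")
    target.write_text(row["markdown"], encoding="utf-8")

print(sorted(p.name for p in out_dir.iterdir()))
# ['page_001.md', 'page_002.md']
```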
## Reproduction
This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DoTS OCR script:
```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr.py \
NealCaren/InkBench \
<output-dataset> \
--image-column image \
--batch-size 16 \
--prompt-mode ocr \
--max-model-len 8192 \
--max-tokens 8192 \
--gpu-memory-utilization 0.8
```
Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)