# OCR to Markdown with Nanonets

Convert document images to structured markdown using [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) with vLLM acceleration.

## Quick Start

```bash
# Basic OCR conversion
uv run main.py document-images markdown-output

# With custom image column
uv run main.py scanned-docs extracted-text --image-column page

# Test with subset
uv run main.py large-dataset test-output --max-samples 100

# Run directly from Hub
uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/ocr-vllm/main.py \
  input-dataset output-dataset
```
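
Conceptually, the script turns each dataset row into a vLLM multimodal request and collects the generated markdown. A minimal sketch of the request-building step (the prompt text and payload shape are assumptions about vLLM's multimodal `generate()` inputs, not the script's actual code):

```python
# Illustrative sketch -- not the actual main.py source.
OCR_PROMPT = "Convert this document page to markdown."  # assumed prompt text

def build_requests(rows, image_column="image", prompt=OCR_PROMPT):
    """Turn dataset rows into vLLM-style multimodal generate() inputs."""
    return [
        {"prompt": prompt, "multi_modal_data": {"image": row[image_column]}}
        for row in rows
    ]
```

Passing `--image-column` simply changes which key is read from each row here.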

## Features

Nanonets-OCR-s excels at:
- **LaTeX equations**: Mathematical formulas preserved in LaTeX format
- **Tables**: Complex table structures converted to markdown
- **Document structure**: Headers, lists, and formatting maintained
- **Special elements**: Signatures, watermarks, and checkboxes detected
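
As a rough illustration of the kind of output these features produce (a hand-written example, not actual model output), a scanned page containing a formula, a small table, and checkboxes might convert to:

```markdown
## 2. Method

The loss is defined as $L = -\sum_i y_i \log \hat{y}_i$.

| Split | Samples | Accuracy |
|-------|---------|----------|
| Train | 10,000  | 0.94     |
| Test  | 1,000   | 0.91     |

☐ Reviewed  ☑ Approved
```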

## HF Jobs Deployment

Deploy on GPU infrastructure:

```bash
hfjobs run \
  --flavor l4x1 \
  --secret HF_TOKEN=$HF_TOKEN \
  ghcr.io/astral-sh/uv:latest \
  /bin/bash -c "
    uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/ocr-vllm/main.py \
      your-document-dataset \
      your-markdown-output \
      --batch-size 32 \
      --gpu-memory-utilization 0.8
  "
```

## Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--image-column` | `"image"` | Column containing images |
| `--batch-size` | `8` | Images per batch |
| `--model` | `nanonets/Nanonets-OCR-s` | OCR model to use |
| `--max-tokens` | `4096` | Max output tokens |
| `--gpu-memory-utilization` | `0.7` | Fraction of GPU memory vLLM may allocate (0–1) |
| `--split` | `"train"` | Dataset split |
| `--max-samples` | None | Limit samples (testing) |
| `--private` | False | Private output dataset |
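
The table above maps naturally onto an `argparse` interface. A sketch of how the CLI could be declared, mirroring the table's defaults (this is an illustration, not the script's actual source):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI sketch mirroring the parameter table above."""
    p = argparse.ArgumentParser(
        description="OCR a Hub image dataset to markdown"
    )
    p.add_argument("input_dataset")
    p.add_argument("output_dataset")
    p.add_argument("--image-column", default="image")
    p.add_argument("--batch-size", type=int, default=8)
    p.add_argument("--model", default="nanonets/Nanonets-OCR-s")
    p.add_argument("--max-tokens", type=int, default=4096)
    p.add_argument("--gpu-memory-utilization", type=float, default=0.7)
    p.add_argument("--split", default="train")
    p.add_argument("--max-samples", type=int, default=None)
    p.add_argument("--private", action="store_true")
    return p
```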

## Examples

### Scientific Papers
```bash
uv run main.py arxiv-papers arxiv-markdown \
  --max-tokens 8192  # Longer output for equations
```

### Scanned Documents
```bash
uv run main.py historical-scans extracted-text \
  --image-column scan \
  --batch-size 4  # Lower batch for high-res images
```

### Multi-page Documents
```bash
uv run main.py pdf-pages document-text \
  --image-column page_image \
  --batch-size 16
```

## Tips

- **Batch size**: Reduce `--batch-size` if you hit out-of-memory (OOM) errors
- **GPU memory**: Raise `--gpu-memory-utilization` for higher throughput when memory allows
- **Max tokens**: Raise `--max-tokens` for long or equation-heavy pages
- **Testing**: Run with `--max-samples` first to validate the pipeline on a small subset
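
The batch-size tip can also be automated: halve the batch on a memory error and retry. A generic sketch of that pattern (`run_batch` is a placeholder for whatever callable performs inference on one batch; in a real CUDA setup you would catch the GPU-specific OOM exception rather than `MemoryError`):

```python
def ocr_with_backoff(images, run_batch, batch_size=8, min_batch=1):
    """Process images in batches, halving batch_size on MemoryError."""
    results = []
    i = 0
    while i < len(images):
        batch = images[i:i + batch_size]
        try:
            results.extend(run_batch(batch))
            i += len(batch)  # advance only after a successful batch
        except MemoryError:
            if batch_size <= min_batch:
                raise  # cannot shrink further; surface the error
            batch_size = max(min_batch, batch_size // 2)
    return results
```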

## Model Details

Nanonets-OCR-s (576M parameters) is optimized for:
- High-quality markdown output
- Complex document understanding
- Efficient GPU inference
- Multi-language support

For more details, see the [model card](https://huggingface.co/nanonets/Nanonets-OCR-s).