---
license: apache-2.0
task_categories:
- image-to-text
- image-classification
language:
- en
size_categories:
- 1M<n<10M
tags:
- handwriting-recognition
- htr
- self-supervised-learning
- historical-documents
- writer-identification
pretty_name: SSL-HWD
---
# SSL-HWD (A Large-Scale Handwritten Image Dataset)
## Dataset Description
### Dataset Summary
**SSL-HWD** is a large-scale handwritten text dataset introduced in the paper ["Learning Beyond Labels: Self-Supervised Handwritten Text Recognition"](https://logo-ssl.github.io/) (WACV 2026). The dataset comprises **10 million word-level handwritten images** from **852 writers** across diverse domains including Physics, Computer Science, Biology, Mathematics, and more.
The dataset is specifically designed to support **self-supervised learning** approaches for Handwritten Text Recognition (HTR), addressing the critical challenge of reducing dependence on large volumes of labeled data.
### Dataset Composition
- **Total Words**: 10 million word-level images
- **Writers**: 852 unique contributors
- **Pages**: 81,280 scanned document pages
- **Domains**: 20+ domains including sciences, literature, and mathematics
- **Labeled Subset**: 2.08M words (20.8%) with ground truth transcriptions
- **Unlabeled Subset**: 7.92M words (79.2%) for self-supervised pretraining
- **Unique Vocabulary**: 107,813 unique words
### Key Features
**Diversity and Complexity**: The dataset includes challenging real-world scenarios:
- Texts with different font colors (varying ink and pen usage)
- Texts with difficult backgrounds (lines, noise interference)
- Texts with distorted characters (irregular strokes, structural inconsistencies)
- Texts with blurring effects (motion or focus issues)
- Texts with highlighted backgrounds (color markings obscuring content)
**Quality Assurance**:
- All samples automatically annotated using Amazon Textract with confidence ≥99%
- Labeled subset manually verified by language experts
- High-quality word segmentation with precise bounding boxes
### Comparison with Existing Datasets
| Dataset | Pages | Writers | Words | Unique Words |
|---------|-------|---------|-------|--------------|
| IAM | 1.5K | 657 | 115K | 10.5K |
| GNHK | 687 | - | 39K | 12.3K |
| IIIT-HW-English-Word | 20.8K | 1,215 | 757K | 174K |
| **SSL-HWD (Ours)** | **81.2K** | **852** | **10M** | **107K** |
### Vocabulary Distribution
| Category | SSL-HWD |
|----------|---------|
| Alphabetic Words | 61,088 |
| Numeric Words | 4,981 |
| Stop-words | 457 |
| Other Words | 41,287 |
| **Total Unique** | **107,813** |
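A minimal sketch of how a vocabulary might be bucketed into the categories above. The exact rules used by the authors are not specified; `STOP_WORDS` here is an illustrative subset, not the dataset's actual stop-word list.

```python
# Illustrative word categorization, mirroring the table above.
# The authors' exact criteria and stop-word list are assumptions here.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}

def categorize(word: str) -> str:
    w = word.lower()
    if w in STOP_WORDS:          # check stop-words first: they are also alphabetic
        return "stop-word"
    if w.isalpha():
        return "alphabetic"
    if w.isdigit():
        return "numeric"
    return "other"               # mixed alphanumerics, punctuation, symbols, etc.

print([categorize(w) for w in ["the", "Physics", "1942", "x^2"]])
```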
## Dataset Structure
### Data Organization
```
SSL-HWD/
├── labeled/ # 2.08M labeled samples
│ ├── writer1/
│ │ ├── writer1.csv # Ground truth transcriptions
│ │ ├── writer1_1.png
│ │ ├── writer1_2.png
│ │ └── ...
│ ├── writer2/
│ └── ...
└── unlabeled/ # 7.92M unlabeled samples
├── writer1/
│ ├── writer1_1.png
│ ├── writer1_2.png
│ └── ...
└── ...
```
### Data Fields
**CSV Format (for labeled data)**:
- `image_filename` (string): Name of the word image file
- `transcription` (string): Ground truth text transcription
**Example**:
```csv
image_filename,transcription
writer1_1.png,handwritten
writer1_2.png,recognition
writer1_3.png,dataset
```
### Data Splits
The labeled subset (2.08M samples) is divided as follows:
- **Training**: 60% (1.25M samples)
- **Testing**: 40% (0.83M samples)
The unlabeled subset (7.92M samples) is used for self-supervised pretraining.
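If official split files are unavailable, a 60/40 partition of a writer's labeled CSV can be approximated as below. This is a sketch: whether the paper splits per-writer or globally is an assumption, and the dummy DataFrame only stands in for a real `writer1.csv`.

```python
import pandas as pd

# Stand-in for one writer's labeled CSV (image_filename, transcription).
df = pd.DataFrame({
    "image_filename": [f"writer1_{i}.png" for i in range(1, 11)],
    "transcription": ["word"] * 10,
})

# Shuffle, then take the first 60% for training and the rest for testing.
shuffled = df.sample(frac=1.0, random_state=0).reset_index(drop=True)
cut = int(0.6 * len(shuffled))
train_df, test_df = shuffled.iloc[:cut], shuffled.iloc[cut:]
print(len(train_df), len(test_df))  # 6 4
```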
## Supported Tasks
### 1. Handwriting Text Recognition (HTR)
Train models to recognize handwritten text from word images.
### 2. Self-Supervised Learning
Use the large unlabeled subset for pretraining with methods like contrastive learning.
### 3. Writer Identification
Identify writers based on handwriting characteristics (852 unique writers).
### 4. Domain Adaptation
Transfer learning across different handwriting styles and domains.
### 5. Semi-Supervised Learning
Combine small labeled and large unlabeled subsets for improved performance.
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the full dataset
dataset = load_dataset("your-username/ssl-hwd")
# Access labeled data
labeled = dataset['labeled']
# Access unlabeled data for self-supervised learning
unlabeled = dataset['unlabeled']
```
### Basic Example
```python
from PIL import Image
import pandas as pd
from pathlib import Path
# Load a writer's data
writer_folder = Path("labeled/writer1")
df = pd.read_csv(writer_folder / "writer1.csv")
# Load first image and transcription
img_name = df.iloc[0]['image_filename']
transcription = df.iloc[0]['transcription']
image = Image.open(writer_folder / img_name)
print(f"Transcription: {transcription}")
```
### PyTorch DataLoader
```python
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image
from pathlib import Path
import pandas as pd

class SSLHWDDataset(Dataset):
    def __init__(self, writer_folder, transform=None):
        self.folder = Path(writer_folder)
        csv_file = next(self.folder.glob("*.csv"))  # e.g. writer1.csv
        self.data = pd.read_csv(csv_file)
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img_name = self.data.iloc[idx]['image_filename']
        transcription = self.data.iloc[idx]['transcription']
        image = Image.open(self.folder / img_name).convert('RGB')
        if self.transform:
            image = self.transform(image)
        return image, transcription

# Usage: word images vary in size, so pass a resizing transform
# before batching with the default collate function
dataset = SSLHWDDataset('labeled/writer1')
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
```
## Benchmark Results
### State-of-the-Art Performance (from LoGo-HTR paper)
**IAM Dataset**:
- With 80% labeled data: WER 11.93%, CER 2.31%
- With 100% labeled data: WER 10.27%, CER 2.01%
**GNHK Dataset**:
- With 80% labeled data: WER 19.69%, CER 9.05%
- With 100% labeled data: WER 12.07%, CER 7.20%
**RIMES Dataset**:
- With 80% labeled data: WER 6.15%, CER 1.89%
- With 100% labeled data: WER 5.50%, CER 1.78%
**LAM Dataset** (line-level):
- With 80% labeled data: WER 7.2%, CER 3.2%
- With 100% labeled data: WER 6.3%, CER 2.39%
### Cross-Dataset Generalization
Models pretrained on SSL-HWD transfer well when evaluated on other benchmarks (source → target):
- SSL-HWD → IAM: WER 13.2%, CER 2.9%
- SSL-HWD → GNHK: WER 10.1%, CER 6.8%
- SSL-HWD → RIMES: WER 11.2%, CER 3.5%
- SSL-HWD → LAM: WER 16.4%, CER 7.2%
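The WER and CER figures above are standard edit-distance metrics: the Levenshtein distance between prediction and ground truth, normalized by reference length, at the word and character level respectively. A minimal reference implementation (not the authors' evaluation code) might look like:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two sequences, single-row DP.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                               # deletion
                        dp[j - 1] + 1,                           # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))       # substitution
            prev = cur
    return dp[n]

def cer(ref, hyp):
    # Character Error Rate: edit distance over reference length.
    return edit_distance(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    # Word Error Rate: same metric over word sequences.
    return edit_distance(ref.split(), hyp.split()) / max(len(ref.split()), 1)

print(round(cer("handwritten", "handwriten"), 4))  # 0.0909 (one deletion / 11 chars)
```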
## Dataset Creation
### Source Data
The dataset was curated from publicly available digitized manuscripts collected from web sources, each selected for being fully or substantially handwritten. Documents span:
- Personal diaries
- Academic notes
- Historical correspondence
- Scientific manuscripts
- Mathematical writings
- Literature and more
### Data Quality
- **Diverse Sources**: 852 unique writers across 20+ domains
- **Real-world Challenges**: Includes blur, noise, distortions, and background interference
## Applications
### Self-Supervised Learning (Primary Use)
Use the 7.92M unlabeled samples for pretraining with methods like:
- Contrastive learning (SimCLR, MoCo)
- Masked image modeling
- Local-global objectives (as in LoGo-HTR)
### Semi-Supervised Learning
Combine labeled and unlabeled subsets for improved performance with limited annotations.
### Few-Shot Learning
Train models with minimal labeled data by leveraging pretrained representations.
### Transfer Learning
Pretrain on SSL-HWD and fine-tune on domain-specific datasets.
## Limitations and Considerations
### Known Limitations
- **Language**: Primarily English handwritten text
- **Geographic Bias**: Predominantly Western handwriting styles
- **Historical Period**: Concentrated in specific time periods
- **Domain Coverage**: While diverse, may not represent all handwriting variations
### Ethical Considerations
- Dataset contains historical documents and handwritten materials
- Personal information may be present in some samples
- Users should be aware of privacy considerations when using this data
## Citation
If you use the SSL-HWD dataset in your research, please cite:
```bibtex
@inproceedings{mitra2026learning,
title={Learning Beyond Labels: Self-Supervised Handwritten Text Recognition},
author={Mitra, Shree and Mondal, Ajoy and Jawahar, C. V.},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year={2026}
}
```
## Additional Resources
- **Project Website**: [https://logo-ssl.github.io/](https://logo-ssl.github.io/)
## License
This dataset is released under the **Apache License 2.0**.
## Acknowledgments
This work is supported by MeitY, Government of India, through the NLTM-Bhashini project.
## Contact
For questions or issues regarding the dataset:
- **Authors**: Shree Mitra, Ajoy Mondal, C.V. Jawahar
- **Institution**: IIIT Hyderabad
- **Email**: shree.mitra@research.iiit.ac.in
---
**Dataset Version**: 1.0
**Last Updated**: January 2026
**Status**: Active