---
license: apache-2.0
base_model:
- microsoft/resnet-50
tags:
- ocr
- chinese
- handwritten-chinese-calligraphy-ocr
- traditional-chinese-ocr
---
# Model Card: Chinese Calligraphy Character Classifier (ResNet50-based)
## Model Details
- Architecture: ResNet50 pretrained on ImageNet + custom classifier head
- Classes: 1200 Chinese calligraphy characters
- Input: 224x224 RGB images (grayscale converted to RGB)
- Framework: PyTorch
## Intended Use
- Handwritten Chinese calligraphy OCR and recognition
- For research, cultural preservation, and academic purposes
## Dataset
- EthicalSplit5508v3
- Train: 60,168 images | Val: 1,200 | Test: 1,200
- 1200 classes with fixed splits
## Training
- Batch size: 64, Learning rate: 3e-5 with OneCycleLR scheduler
- Epochs: up to 50, early stopping enabled
- Optimizer: Adam with weight decay 1e-4
- Loss: Cross-entropy with label smoothing (0.1)
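The hyperparameters above translate into a training setup along these lines. This is a minimal sketch, not the actual training script: the stand-in model and single-batch loader exist only so the snippet runs on its own; in the real setup they would be the ResNet50 classifier and the EthicalSplit5508v3 train loader, and early stopping is omitted.

```python
import torch
import torch.nn as nn

# Stand-ins so the sketch is self-contained (hypothetical, not the real data/model)
model = nn.Linear(224 * 224 * 3, 1200)
train_loader = [(torch.randn(8, 224 * 224 * 3), torch.randint(0, 1200, (8,)))]

epochs = 50  # upper bound; early stopping would cut this short
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=3e-5,
    steps_per_epoch=len(train_loader), epochs=epochs)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

for epoch in range(1):  # one epoch shown for brevity
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()  # OneCycleLR steps per batch, not per epoch
```

Note that `OneCycleLR` is stepped after every batch, not every epoch, which is why it is given `steps_per_epoch` and `epochs` up front.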
## Performance
- Validation loss reduced from ~5.7 to ~1.06
- Test accuracy: approximately 88%
- Model size: ~25M parameters
## Limitations
- May underperform on unseen handwriting styles or poor image quality
- Uses RGB input; grayscale-specific training not applied
- Dataset biases may affect generalization
## Ethical Considerations
- Dataset was collected and used in line with ethical guidelines; it contains no personally identifiable information (PII)
- Intended for cultural and academic use only
## Usage Example
```python
import torch
from PIL import Image

# ChineseClassifier, CalligraphyCharacterDataset, and idx2char are provided by
# this repository's code; idx2char maps class indices back to characters.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the trained model from a checkpoint
model = ChineseClassifier(embed_dim=512, num_classes=1200,
                          pretrainedEncoder=True, unfreezeEncoder=True)
checkpoint = torch.load("best_checkpoint.pth", map_location=device)
model.load_state_dict(checkpoint["model_state_dict"])
model.to(device)
model.eval()

# Preprocess: the dataset's default transform produces 224x224 RGB tensors
transform = CalligraphyCharacterDataset.defaultTransform()
img = Image.open("path_to_image.jpg").convert("RGB")
input_tensor = transform(img).unsqueeze(0).to(device)

# Predict the character class
with torch.no_grad():
    outputs = model(input_tensor)
pred_idx = torch.argmax(outputs, dim=1).item()
pred_char = idx2char[pred_idx]
```