Instructions for using viklofg/swedish-ocr-correction with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use viklofg/swedish-ocr-correction with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("viklofg/swedish-ocr-correction")
model = AutoModelForSeq2SeqLM.from_pretrained("viklofg/swedish-ocr-correction")
```
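Once the tokenizer and model are loaded, inference follows the standard Transformers seq2seq pattern: tokenize the noisy OCR text, call `generate`, and decode the result. A minimal sketch (the input sentence is an invented example of noisy Swedish OCR, and `max_new_tokens=128` is an assumed, not official, setting):

```python
# Sketch: correct a noisy Swedish OCR line with the loaded seq2seq model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("viklofg/swedish-ocr-correction")
model = AutoModelForSeq2SeqLM.from_pretrained("viklofg/swedish-ocr-correction")

# Hypothetical example of OCR-garbled Swedish text (assumption, not from the model card).
text = "Den i HandelstidniDgens g&rdagsnnmmer omtalade hvalfisken"

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # cap on generated length (assumed value)
corrected = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(corrected)
```

The same pattern works for batches by passing a list of strings to the tokenizer with `padding=True`.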
Very good job!
#2
by johnlockejrr - opened
Very good model for Swedish.
I tried (unsuccessfully) to train a ByT5 myself for Hebrew and Samaritan post-OCR correction. Would you kindly share any code for preparing the dataset and, most importantly, for training?
Thank you!