Instructions for using philschmid/lilt-en-funsd with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use philschmid/lilt-en-funsd with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="philschmid/lilt-en-funsd")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("philschmid/lilt-en-funsd")
model = AutoModelForTokenClassification.from_pretrained("philschmid/lilt-en-funsd")
```

- Notebooks
- Google Colab
- Kaggle
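Note that LiLT is a layout-aware model: when calling it directly (rather than through a pipeline), it expects a `bbox` tensor with one 0–1000-normalized bounding box per token in addition to the usual `input_ids`. The sketch below illustrates the call shape with dummy all-zero boxes; in a real pipeline the coordinates would come from an OCR engine, and the example words are made up for illustration.

```python
# Hedged sketch of a direct forward pass; the zero boxes are placeholders,
# real coordinates come from OCR (e.g. Tesseract) scaled to 0-1000.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("philschmid/lilt-en-funsd")
model = AutoModelForTokenClassification.from_pretrained("philschmid/lilt-en-funsd")

words = ["Invoice", "Number:", "12345"]  # illustrative OCR words
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# One 4-value box per token: shape (batch, seq_len, 4).
bbox = torch.zeros(encoding.input_ids.shape + (4,), dtype=torch.long)

with torch.no_grad():
    outputs = model(**encoding, bbox=bbox)

# Map the argmax class ids to the model's label names.
predictions = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]
print(labels)
```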
Inquiry Regarding Commercial Use of lilt-en-funsd Model
#4
by Ifyouknowthenyouknow - opened
Hi Phil (@philschmid),
Quick question about your lilt-en-funsd model on Hugging Face. It's under an MIT license, but I see it uses the LayoutLMv3 processor, which isn't licensed for commercial use. Can you clarify if your model is still okay for commercial projects?
Thanks!
The LayoutLMv3 code is licensed under Apache 2.0, since it is part of Transformers. For LayoutLMv3, only the weights are non-commercial.
Thank you for your prompt response. @philschmid
I'm looking to fine-tune the LILT model for Dutch and am in need of an annotated Dutch dataset. Do you know of any existing annotated Dutch datasets that could be useful for this purpose, before I start creating one from scratch?