Instructions for using mudes/en-large with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use mudes/en-large with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="mudes/en-large")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("mudes/en-large")
model = AutoModelForTokenClassification.from_pretrained("mudes/en-large")
```
- Notebooks
- Google Colab
- Kaggle
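
The token-classification pipeline above returns one prediction per token, each with character offsets (`start`, `end`), a `score`, and an `entity` label. A common follow-up step is merging adjacent toxic-token predictions into contiguous character spans. The helper below is a minimal sketch of that post-processing; the `merge_toxic_spans` name, the score threshold, and the sample prediction dicts are illustrative assumptions, not part of the model card.

```python
# Hypothetical helper: merge token-level predictions from a
# transformers token-classification pipeline into contiguous
# character spans. Prediction dicts follow the pipeline's output
# shape: {"start": int, "end": int, "score": float, "entity": str}.
def merge_toxic_spans(predictions, threshold=0.5):
    spans = []
    for p in sorted(predictions, key=lambda p: p["start"]):
        if p["score"] < threshold:
            continue  # drop low-confidence tokens (threshold is an assumption)
        if spans and p["start"] <= spans[-1][1] + 1:
            # token touches or overlaps the previous span: extend it
            spans[-1] = (spans[-1][0], max(spans[-1][1], p["end"]))
        else:
            spans.append((p["start"], p["end"]))
    return spans


# Example with mocked pipeline output (real usage would be
# merge_toxic_spans(pipe("some input text"))):
preds = [
    {"start": 0, "end": 4, "score": 0.91, "entity": "TOXIC"},
    {"start": 5, "end": 9, "score": 0.87, "entity": "TOXIC"},
    {"start": 20, "end": 25, "score": 0.95, "entity": "TOXIC"},
]
print(merge_toxic_spans(preds))  # adjacent tokens 0-4 and 5-9 merge into one span
```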
- Xet hash: `e69489a763b7c195e2c3edd8781e19e7a36748d7db579423c8479cf43c15b61a`
- Size of remote file: 1.42 GB
- SHA256: `cfd9b244bead415eff53c72a5bb227dd153a15b0f13ba0e67e200d04daf8e8f6`
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.