Instructions for using ran/c10 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ran/c10 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="ran/c10")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ran/c10")
model = AutoModelForSequenceClassification.from_pretrained("ran/c10")
```
- Notebooks
- Google Colab
- Kaggle
- Xet hash: `9a3f008c0de2016f1328f03973409bed061e78833940b19d6f582571609a48e4`
- Size of remote file: 438 MB
- SHA256: `ca99897ee8a1abc61334e2cdac432e5ed321a936f1ee13eb4fb00c0ce8b43947`
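After downloading, the file can be checked against the SHA256 listed above. A minimal sketch using Python's standard `hashlib`; the local filename `model.safetensors` is an assumption, substitute the actual path of the downloaded file:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model files don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "ca99897ee8a1abc61334e2cdac432e5ed321a936f1ee13eb4fb00c0ce8b43947"
# "model.safetensors" is a hypothetical path for the downloaded file:
# assert sha256_of("model.safetensors") == expected
```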
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.
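The chunk-level deduplication idea can be illustrated with a toy sketch: if two versions of a file share most of their chunks, only the new chunks need to be stored or transferred. This uses naive fixed-size chunking for clarity; Xet's actual implementation uses content-defined chunking and is not shown here:

```python
import hashlib

def chunk_ids(data: bytes, chunk_size: int = 4) -> list[str]:
    # Identify each fixed-size chunk by its hash; duplicate chunks
    # across file versions get the same ID and are stored only once.
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

v1 = b"AAAABBBBCCCC"
v2 = b"AAAABBBBDDDD"  # only the last chunk differs from v1
shared = set(chunk_ids(v1)) & set(chunk_ids(v2))
new = set(chunk_ids(v2)) - set(chunk_ids(v1))
# Two of the three chunks are shared, so updating v1 to v2
# only requires storing one new chunk.
```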