Instructions for using hf-tiny-model-private/tiny-random-LiltForSequenceClassification with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use hf-tiny-model-private/tiny-random-LiltForSequenceClassification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="hf-tiny-model-private/tiny-random-LiltForSequenceClassification")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("hf-tiny-model-private/tiny-random-LiltForSequenceClassification")
model = AutoModelForSequenceClassification.from_pretrained("hf-tiny-model-private/tiny-random-LiltForSequenceClassification")
```
- Notebooks
- Google Colab
- Kaggle
- Xet hash: f6731a420c9e9776484f2eab783ee41833e5b5fd3d760946d9a80b5a54e09651
- Size of remote file: 281 kB
- SHA256: 38075cd020d932548ceef987ff76b55a80541099b311515a2217a1c4ec7f2ea5
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks to accelerate uploads and downloads.