Instructions to use aliosm/ComVE-gpt2-medium with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use aliosm/ComVE-gpt2-medium with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="aliosm/ComVE-gpt2-medium")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("aliosm/ComVE-gpt2-medium")
model = AutoModel.from_pretrained("aliosm/ComVE-gpt2-medium")
```

- Notebooks
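The feature-extraction pipeline returns one embedding vector per token, so a sentence-level vector requires a pooling step. The sketch below shows simple mean pooling over the pipeline's nested-list output; to keep it runnable without downloading the 1.42 GB checkpoint, it substitutes dummy values shaped like the real output (GPT-2 medium's hidden size of 1024 is an assumption taken from the base architecture, not stated on this page).

```python
# pipe("some sentence") returns a nested list shaped [1, seq_len, hidden_size].
# Dummy stand-in for a real pipeline call (avoids the 1.42 GB download);
# hidden_size=1024 assumes the GPT-2 medium architecture.
features = [[[0.1] * 1024 for _ in range(5)]]  # 1 sequence, 5 tokens

token_embeddings = features[0]  # (seq_len, hidden_size)

# Mean pooling: average each hidden dimension across tokens.
sentence_embedding = [
    sum(dim_values) / len(token_embeddings)
    for dim_values in zip(*token_embeddings)
]

print(len(sentence_embedding))  # 1024
```

With the real pipeline, `features = pipe("your sentence")` drops in place of the dummy list.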
- Google Colab
- Kaggle
- Xet hash: 25c6be3362170232735cc13d8160b5c9ec4f10dbb7e78ebd7a2f16ab7b7e3e73
- Size of remote file: 1.42 GB
- SHA256: 56a425df451d83cb1a107ffb80469d468e6c96636fcfc0b746a285ab6a81aa7e
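The SHA256 above can be used to verify a downloaded checkpoint. A minimal sketch with Python's standard `hashlib`, streaming the file in chunks so a 1.42 GB file never needs to fit in memory (the local filename in the usage comment is a placeholder, not taken from this page):

```python
import hashlib

EXPECTED_SHA256 = "56a425df451d83cb1a107ffb80469d468e6c96636fcfc0b746a285ab6a81aa7e"

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading 1 MiB at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (path is a placeholder for wherever the file was downloaded):
# assert sha256_of("downloaded_model_file") == EXPECTED_SHA256
```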
Xet efficiently stores large files inside Git by intelligently splitting files into unique chunks, accelerating uploads and downloads.
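The chunk-level deduplication idea can be illustrated with a toy sketch: split a byte string into chunks, key each chunk by its hash, and store identical chunks only once. This uses fixed-size chunks for simplicity, whereas Xet's actual implementation uses content-defined chunk boundaries; the function names are illustrative, not part of any Xet API.

```python
import hashlib

def chunk_and_dedupe(data, chunk_size=4):
    """Toy fixed-size chunker (real Xet uses content-defined boundaries)."""
    store = {}   # chunk hash -> chunk bytes; identical chunks stored once
    recipe = []  # ordered hashes needed to reconstruct the original file
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)
        recipe.append(h)
    return store, recipe

store, recipe = chunk_and_dedupe(b"abcdabcdabcdXYZ!")
# Four 4-byte chunks, but "abcd" repeats: only 2 unique chunks are stored.
print(len(recipe), len(store))  # 4 2

# The recipe rebuilds the original bytes from the deduplicated store.
assert b"".join(store[h] for h in recipe) == b"abcdabcdabcdXYZ!"
```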