Instructions for using geodesic-research/nemotron-instruct-tokenizer with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use geodesic-research/nemotron-instruct-tokenizer with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("geodesic-research/nemotron-instruct-tokenizer", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Xet hash: 32569e44702186b12870b6b1de657b222323b32cad882ab71cf35b8e50e31325
- Size of remote file: 17.1 MB
- SHA256: 623c34567aebb18582765289fbe23d901c62704d6518d71866e0e58db892b5b7
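The SHA256 digest above can be used to verify a downloaded copy of the file. A minimal sketch using Python's `hashlib`; the local filename in the commented check is a hypothetical example, not the repository's actual file name:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published digest (local path is illustrative):
# expected = "623c34567aebb18582765289fbe23d901c62704d6518d71866e0e58db892b5b7"
# assert sha256_of_file("downloaded_file.bin") == expected
```

Reading in chunks keeps memory use constant even for multi-gigabyte files.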
Xet efficiently stores large files inside Git by splitting them into unique chunks, deduplicating storage and accelerating uploads and downloads. More info.
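The chunk-based deduplication described above can be illustrated with a toy sketch. Note the assumptions: real Xet uses content-defined chunk boundaries, while this example uses fixed-size chunks, and the in-memory `dict` stands in for actual chunk storage:

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # toy fixed-size chunks; Xet derives boundaries from content

def chunk_hashes(data: bytes, size: int = CHUNK_SIZE) -> list[str]:
    """Split data into fixed-size chunks and return each chunk's SHA256."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def dedup_store(files: dict[str, bytes]) -> dict[str, bytes]:
    """Store each unique chunk exactly once, keyed by its hash."""
    store: dict[str, bytes] = {}
    for data in files.values():
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            store.setdefault(hashlib.sha256(chunk).hexdigest(), chunk)
    return store
```

Two versions of a file that share most of their bytes share most of their chunk hashes, so only the changed chunks need to be uploaded or downloaded.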