Instructions for using GD/cq-bert-model-repo with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use GD/cq-bert-model-repo with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="GD/cq-bert-model-repo")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("GD/cq-bert-model-repo")
model = AutoModelForSequenceClassification.from_pretrained("GD/cq-bert-model-repo")
```

- Notebooks
- Google Colab
- Kaggle
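When loading the model directly instead of through the pipeline, the classification head returns raw logits that still need to be converted into label probabilities. A minimal sketch of that step, using plain Python so it runs without downloading the model — the logit values and the `id2label` mapping below are assumptions for illustration; the real mapping comes from the repo's `config.json`:

```python
import math

# Hypothetical logits for one input, as AutoModelForSequenceClassification
# would return them (values chosen for illustration only).
logits = [-1.2, 2.3]

# Assumed default mapping; the actual labels are defined by the
# model repo's config.json id2label field.
id2label = {0: "LABEL_0", 1: "LABEL_1"}

# Softmax over the logits to get class probabilities.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Pick the highest-probability class, mirroring what the
# text-classification pipeline reports as its top result.
best = max(range(len(probs)), key=probs.__getitem__)
print(id2label[best], round(probs[best], 4))
```

The pipeline performs this softmax-and-argmax step internally, which is why its output is a list of `{"label": ..., "score": ...}` dicts rather than raw logits.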