Instructions to use nvidia/groupvit-gcc-yfcc with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers

How to use nvidia/groupvit-gcc-yfcc with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="nvidia/groupvit-gcc-yfcc")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = AutoModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
```

- Inference
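Since GroupViT is a CLIP-style text-image model, the processor and model loaded above can also be used for zero-shot image-text matching. A minimal sketch follows; the solid-color placeholder image and the label texts are illustrative assumptions, not part of the model card:

```python
# Zero-shot image-text matching with GroupViT (a CLIP-style model).
from PIL import Image
import torch
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = AutoModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

# Placeholder image; substitute a real photo in practice.
image = Image.new("RGB", (224, 224), color="red")
texts = ["a photo of a cat", "a photo of a dog"]  # assumed labels

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores, shape (num_images, num_texts);
# softmax over the text axis turns them into label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```

The `feature-extraction` pipeline shown earlier only returns embeddings; computing the similarity logits directly, as here, is what enables zero-shot classification.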
- Notebooks
- Google Colab
- Kaggle
Add TF weights (#1), opened by ariG23498
Model converted with the transformers `pt_to_tf` CLI. All converted model outputs and hidden layers were validated against their PyTorch counterparts.
- Maximum crossload output difference: 9.537e-06
- Maximum crossload hidden-layer difference: 2.441e-04
- Maximum conversion output difference: 9.537e-06
- Maximum conversion hidden-layer difference: 2.441e-04
CAUTION: The maximum admissible error was manually increased to 0.001!
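The conversion described above can be sketched as a single CLI invocation. This is a hypothetical reconstruction, not the exact command from the PR; the `--model-name` and `--max-error` flags are assumptions based on the CLI's interface at the time, and the command has since been removed from newer transformers releases:

```shell
# Hypothetical pt_to_tf invocation; --max-error raises the admissible
# output difference between the PyTorch and TensorFlow weights.
transformers-cli pt-to-tf --model-name nvidia/groupvit-gcc-yfcc --max-error 0.001
```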
Tagging @nielsr @sgugger @joaogante
PR in Transformers: https://github.com/huggingface/transformers/pull/18020
sgugger changed pull request status to merged