How to use nvidia/omnivinci with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="nvidia/omnivinci", trust_remote_code=True)
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("nvidia/omnivinci", trust_remote_code=True, dtype="auto")
```
Can this model be loaded for inference on a Jetson AGX Orin?
Yes, it can, but it takes about 15 seconds per image on average to run inference.
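If you want to reproduce that per-image number on your own hardware, a minimal sketch of a timing helper is below. `time_inference` is a hypothetical helper name (not part of Transformers); it works with any callable, e.g. the `pipe` object from the snippet above, and uses a warm-up call so one-time model setup cost is not counted.

```python
import time

def time_inference(fn, inputs, warmup=1):
    """Return average seconds per input for an inference callable.

    `fn` can be any callable (e.g. the `pipe` object above); a dummy
    callable is used in the example so the sketch runs without the model.
    """
    for _ in range(warmup):
        fn(inputs[0])  # warm-up run to exclude one-time setup cost
    start = time.perf_counter()
    for x in inputs:
        fn(x)
    return (time.perf_counter() - start) / len(inputs)

# Example with a dummy callable standing in for the real pipeline:
avg = time_inference(lambda x: x.upper(), ["hello", "world"])
```

On a Jetson, replace the dummy callable with `pipe` and pass a list of real inputs; averaging over several images smooths out per-call jitter.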