How to use UCSC-VLAA/openvision-vit-base-patch8-384 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-feature-extraction", model="UCSC-VLAA/openvision-vit-base-patch8-384")
```
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("UCSC-VLAA/openvision-vit-base-patch8-384", dtype="auto")
```
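The pipeline returns per-token features rather than a single vector. A common next step is to mean-pool over the token axis to get one embedding per image; the sketch below simulates that with random data, assuming the shapes typical of a ViT-B/8 at 384x384 input, i.e. (384/8)^2 = 2304 patch tokens plus a CLS token and a hidden size of 768 (check the model config to confirm).

```python
import numpy as np

# Simulated pipeline output for one image: (num_tokens, hidden_dim).
# Assumed shape for ViT-B/8 at 384px: 2304 patch tokens + 1 CLS token, hidden size 768.
features = np.random.rand(2305, 768).astype(np.float32)

# Mean-pool over tokens to get a single image embedding, then L2-normalize
# so embeddings can be compared with cosine similarity.
embedding = features.mean(axis=0)
embedding /= np.linalg.norm(embedding)

print(embedding.shape)
```

The same pooling works on the real pipeline output after converting it to a NumPy array.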
This PR adds a model card to document the OpenVision model, including links to the paper, project page, and code repository.