How to use OpenFace-CQUPT/Human_LLaVA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("visual-question-answering", model="OpenFace-CQUPT/Human_LLaVA")
```
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("OpenFace-CQUPT/Human_LLaVA")
model = AutoModelForImageTextToText.from_pretrained("OpenFace-CQUPT/Human_LLaVA")
```