```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("ColorfulAI/LSTP-Chat", dtype="auto")
```

## Quick Links
# LSTP-Chat: Language-guided Spatial-Temporal Prompt Learning for Video Chat

Available models:

- LSTP-FlanT5xl
- LSTP-Chat-7B (Vicuna-7B)

For more details, please refer to our official repository.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="ColorfulAI/LSTP-Chat")
```
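As a sketch of how such a pipeline might then be invoked: recent `transformers` versions accept a chat-style message list for `image-text-to-text` pipelines. The image URL and question below are placeholders, not part of this model card; whether LSTP-Chat loads through the generic pipeline may depend on the model's custom code.

```python
# Hypothetical invocation sketch: builds the chat-style input for an
# image-text-to-text pipeline. The URL and question are placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/frame.jpg"},  # placeholder frame
            {"type": "text", "text": "What is happening in this frame?"},
        ],
    }
]

# With a loaded pipeline, generation would then look like:
#   out = pipe(text=messages)
```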