LoRA weights

#2 · opened by lesan20

Hello,

Thank you for your work on InternVideo2 — the project is very interesting and useful.

I’m currently working with the InternVideo2-CLIP-1B-224p-f8 model and noticed that the provided 1B_clip.pth checkpoint appears to be missing the LoRA adapter weights (lora_A / lora_B). As a result, the text encoder produces nearly identical embeddings for different inputs.
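For anyone wanting to reproduce the check: a minimal sketch of how one might scan a checkpoint's state-dict keys for LoRA adapter parameters. The key names below are hypothetical illustrations; only `1B_clip.pth` and the `lora_A` / `lora_B` naming come from the report above.

```python
# Sketch: detect whether a checkpoint contains LoRA adapter weights.
# The filtering helper is pure Python; the torch.load usage at the
# bottom is an assumption about how 1B_clip.pth would be inspected.

def find_lora_keys(keys):
    """Return the state-dict key names that look like LoRA adapter
    parameters (containing 'lora_A' or 'lora_B')."""
    return sorted(k for k in keys if "lora_A" in k or "lora_B" in k)

# Hypothetical key names illustrating both outcomes:
full_ckpt = [
    "text_encoder.layers.0.attn.q_proj.weight",
    "text_encoder.layers.0.attn.q_proj.lora_A",
    "text_encoder.layers.0.attn.q_proj.lora_B",
]
stripped_ckpt = ["text_encoder.layers.0.attn.q_proj.weight"]

print(find_lora_keys(full_ckpt))      # adapter keys present
print(find_lora_keys(stripped_ckpt))  # [] -> adapters missing

# Assumed usage against the actual file (requires PyTorch):
# import torch
# state_dict = torch.load("1B_clip.pth", map_location="cpu")
# print(find_lora_keys(state_dict.keys()) or "no LoRA weights found")
```

If the scan returns an empty list, the checkpoint was likely saved after merging or without the adapters, which would explain the degenerate text embeddings.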

Could you please clarify whether a full checkpoint with trained LoRA adapters is available?

Thank you in advance!
