How to use infgrad/Jasper-Token-Compression-600M with Adapters:
```python
from adapters import AutoAdapterModel

# "undefined" is what the original snippet contains; replace it with the
# base model checkpoint the adapter should be loaded onto.
model = AutoAdapterModel.from_pretrained("undefined")
model.load_adapter("infgrad/Jasper-Token-Compression-600M", set_active=True)
```
How to use infgrad/Jasper-Token-Compression-600M with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "infgrad/Jasper-Token-Compression-600M", trust_remote_code=True
)

sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
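The `model.similarity` call above returns pairwise cosine similarity between the embeddings. A minimal sketch of the equivalent computation in NumPy, using random vectors as stand-ins for the real embeddings (the dimension 1024 is an arbitrary placeholder, not the model's actual output size):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for model.encode(sentences): 4 sentences, hypothetical dim 1024.
embeddings = rng.normal(size=(4, 1024))

# L2-normalize each row, then a matrix product gives cosine similarities.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarities = normed @ normed.T

print(similarities.shape)  # (4, 4)
```

With real embeddings, the diagonal is 1 (each sentence is identical to itself) and the off-diagonal entries rank the sentences by semantic closeness.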
At each training stage (stages 1 ~ 4), which parameters did you freeze? - Did you use LoRA? If yes, what is the LoRA config?
Thank you for your nice work.
Hi, thank you for your interest. I don't freeze any parameters at any training stage.