How to use thenlper/gte-small with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("thenlper/gte-small")

sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]

embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
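For reference, `model.similarity` defaults to cosine similarity between the embedding rows. A minimal NumPy sketch of that computation (toy 2-D vectors stand in for the model's real 384-dimensional embeddings):

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    # Normalize each row to unit length; pairwise dot products of
    # unit vectors are cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Toy 2-D "embeddings" for illustration only.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (3, 3)
```

The diagonal is always 1 (every vector has cosine similarity 1 with itself), which is a quick sanity check on the `[4, 4]` matrix returned above.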
I'm curious how the GTE model achieves state-of-the-art (SOTA) performance at such a small model size. I couldn't find any related research papers; could you please give a brief introduction?