BAAI/bge-multilingual-gemma2

Feature Extraction · sentence-transformers · Safetensors · Transformers · gemma2 · sentence-similarity · mteb · Eval Results (legacy)

Instructions for using BAAI/bge-multilingual-gemma2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use BAAI/bge-multilingual-gemma2 with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    # Load the model from the Hugging Face Hub
    model = SentenceTransformer("BAAI/bge-multilingual-gemma2")
    
    sentences = [
        "The weather is lovely today.",
        "It's so sunny outside!",
        "He drove to the stadium."
    ]
    # Encode each sentence into a dense embedding
    embeddings = model.encode(sentences)
    
    # Pairwise similarity matrix (cosine similarity by default)
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([3, 3])
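    The similarity call above computes pairwise cosine similarity between the embeddings (the sentence-transformers default). As a minimal, self-contained sketch of that underlying operation, using toy 4-dimensional vectors as stand-ins for real model output:

    ```python
    import numpy as np

    def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
        """Pairwise cosine similarity: L2-normalize rows, then take dot products."""
        norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
        normalized = embeddings / norms
        return normalized @ normalized.T

    # Toy "embeddings" standing in for model.encode() output.
    embeddings = np.array([
        [1.0, 0.0, 0.0, 0.0],
        [0.8, 0.6, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
    ])
    similarities = cosine_similarity_matrix(embeddings)
    print(similarities.shape)  # (3, 3)
    ```

    Each diagonal entry is 1.0 (every vector is maximally similar to itself), and off-diagonal entries fall in [-1, 1].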
  • Transformers

    How to use BAAI/bge-multilingual-gemma2 with Transformers:

    # Use a pipeline as a high-level helper
    from transformers import pipeline
    
    pipe = pipeline("feature-extraction", model="BAAI/bge-multilingual-gemma2")
    
    # Load model directly
    from transformers import AutoTokenizer, AutoModel
    
    tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-multilingual-gemma2")
    model = AutoModel.from_pretrained("BAAI/bge-multilingual-gemma2")
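    Note that both the pipeline and the raw AutoModel return per-token hidden states rather than one vector per sentence, so a pooling step is needed. BGE's Gemma-based embedder is generally described as using last-token pooling followed by L2 normalization; the sketch below illustrates that pooling on dummy tensors (the shapes and the pooling choice are assumptions for illustration, not taken from the model card):

    ```python
    import torch

    def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        """Select the hidden state of each sequence's last non-padding token."""
        # Index of the last attended (non-padding) token per sequence.
        lengths = attention_mask.sum(dim=1) - 1
        batch = torch.arange(hidden_states.size(0))
        pooled = hidden_states[batch, lengths]
        # L2-normalize so dot products act as cosine similarities.
        return torch.nn.functional.normalize(pooled, p=2, dim=1)

    # Dummy batch: 2 sequences, 5 tokens, hidden size 8 (stand-in for model output).
    hidden = torch.randn(2, 5, 8)
    mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
    emb = last_token_pool(hidden, mask)
    print(emb.shape)  # torch.Size([2, 8])
    ```

    With real inputs, `hidden` would be `model(**tokenizer(...)).last_hidden_state` and `mask` the tokenizer's `attention_mask`.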
  • Inference
  • Notebooks
  • Google Colab
  • Kaggle
Community discussions

  • #17 add AIBOM (opened 11 months ago by RiccardoDav)
  • #16 Update config.json (opened over 1 year ago by michaelfeil)
  • #15 Could not reproduce the MTEB-Fr retrieval results (👍 2; opened over 1 year ago by tulifu)
  • #14 return value for one string (opened over 1 year ago by rudykierbel)
  • #13 Update instructions for usage with infinity (opened over 1 year ago by michaelfeil)
  • #12 Further fine-tuning for domain adaptation (1 reply; opened over 1 year ago by al-h)
  • #11 Create requirements.txt (opened over 1 year ago by x-one-poznan)
  • #9 unable to use custom inference endpoint (👍 1; 1 reply; opened over 1 year ago by HonestAnnie)
  • #8 Could JAX framework support be provided? (opened over 1 year ago by fbs0)
  • #7 2B Model (👍 3; opened almost 2 years ago by lemon-mint)
  • #6 Results of running the example in a PyTorch 2.1.0 environment differ too much (opened almost 2 years ago by shizue)
  • #5 Importing FlagLLMModel raises an error (1 reply; opened almost 2 years ago by Abey0422)
  • #4 GPU issue when calling FlagEmbedding (1 reply; opened almost 2 years ago by labniz)
  • #3 The speed of obtaining embeddings on CPU/GPU (👍 1; 1 reply; opened almost 2 years ago by hiauiarau)
  • #2 FlagEmbedding example returns [[nan nan] [nan nan]] (❤️ 1; 4 replies; opened almost 2 years ago by hcnhcn012)