Nuf-hugginface/modernbert-embed-quickb

Tags: Sentence Similarity · sentence-transformers · Safetensors · English · modernbert · feature-extraction · Generated from Trainer · dataset_size:127 · loss:MatryoshkaLoss · loss:MultipleNegativesRankingLoss · Eval Results (legacy) · text-embeddings-inference

Instructions for using Nuf-hugginface/modernbert-embed-quickb with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use Nuf-hugginface/modernbert-embed-quickb with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb")
    
    sentences = [
        "What is the difference between traditional programming and ML?",
        "Over the past few years, the field of ML has advanced rapidly, especially in the area of Natural Language Processing (NLP)—the ability of machines to understand and generate human language. At the forefront of this progress are Large Language Models (LLMs), such as OpenAI’s GPT (Generative Pre-trained Transformer), Google’s PaLM, and Meta’s LLaMA",
        ". For example, integrating an LLM into a customer support chatbot might involve connecting it to a company’s internal knowledge base, enabling it to answer customer questions using accurate, up-to-date information.",
        "A major subset of AI is Machine Learning (ML), which involves algorithms that learn from data rather than being explicitly programmed. Instead of writing detailed instructions for every task, ML models find patterns in large datasets and use these patterns to make predictions or decisions"
    ]
    # Encode each sentence into a dense embedding vector.
    embeddings = model.encode(sentences)
    
    # Pairwise similarity matrix (cosine similarity by default).
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([4, 4])
  • Notebooks
  • Google Colab
  • Kaggle
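The tags above show the model was trained with MatryoshkaLoss, which encourages the leading embedding dimensions to carry most of the information, so embeddings can be truncated for cheaper storage and search. The sketch below illustrates the truncation step with NumPy stand-ins for real model outputs; the dimensions (768 full, 256 truncated) are illustrative assumptions, not values read from this model's config.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for model.encode(...): 4 unit-normalized 768-dim embeddings.
# (768 is an assumption; check the model's config.json for the true size.)
emb = rng.normal(size=(4, 768))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def cosine_sim(a, b):
    # Rows are unit vectors, so the dot product is the cosine similarity.
    return a @ b.T

full_sim = cosine_sim(emb, emb)

# Matryoshka-style truncation: keep only the leading 256 dims, re-normalize.
trunc = emb[:, :256]
trunc /= np.linalg.norm(trunc, axis=1, keepdims=True)
trunc_sim = cosine_sim(trunc, trunc)

print(full_sim.shape, trunc_sim.shape)  # (4, 4) (4, 4)
```

With sentence-transformers, the same effect is available at load time via the `truncate_dim` argument, e.g. `SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb", truncate_dim=256)`; verify that your installed sentence-transformers release supports it before relying on it.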
modernbert-embed-quickb (600 MB)
  • 1 contributor
  • History: 6 commits
  • Latest commit by Nuf-hugginface: Add new SentenceTransformer model (30ab736, verified, about 1 year ago)
  • 1_Pooling/ · Add new SentenceTransformer model · about 1 year ago
  • .gitattributes · 1.52 kB · initial commit · about 1 year ago
  • README.md · 28.9 kB · Add new SentenceTransformer model · about 1 year ago
  • config.json · 1.34 kB · Add new SentenceTransformer model · about 1 year ago
  • config_sentence_transformers.json · 212 Bytes · Add new SentenceTransformer model · about 1 year ago
  • model.safetensors · 596 MB · Add new SentenceTransformer model · about 1 year ago
  • modules.json · 368 Bytes · Add new SentenceTransformer model · about 1 year ago
  • sentence_bert_config.json · 57 Bytes · Add new SentenceTransformer model · about 1 year ago
  • special_tokens_map.json · 731 Bytes · Add new SentenceTransformer model · about 1 year ago
  • tokenizer.json · 3.58 MB · Add new SentenceTransformer model · about 1 year ago
  • tokenizer_config.json · 21.8 kB · Add new SentenceTransformer model · about 1 year ago