zacCMU/miniLM2-ENG2

Tags: Sentence Similarity · sentence-transformers · Safetensors · bert · feature-extraction · dense · Generated from Trainer · dataset_size:637 · loss:TripletLoss · text-embeddings-inference

Instructions for using zacCMU/miniLM2-ENG2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • sentence-transformers

    How to use zacCMU/miniLM2-ENG2 with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    model = SentenceTransformer("zacCMU/miniLM2-ENG2")
    
    sentences = [
        "We are the data controller in respect of your personal data and will handle your data in accordance with our obligations under the Privacy Laws. We will use this information solely in connection with administering the Championship and exploiting the rights granted to us pursuant to any separate agreement entered into with your team or otherwise. We are entitled to do so on the basis of our legitimate interests, namely to enable us to operate the Championship and promote and exploit your participation in the same.",
        "The aerodynamic design of the new F1 car's rear wing has been optimized to reduce drag and improve downforce, allowing drivers to reach higher speeds on the straights.",
        "As the data controller, we will manage your personal information in accordance with privacy laws, using it solely to administer the Formula 1 Championship and promote your participation.",
        "The engine's ability to produce power is directly related to the pressure of the fuel-air mixture it receives. As the pressure increases, so does the potential for power output, with atmospheric pressure serving as the maximum threshold for normally aspirated engines."
    ]
    embeddings = model.encode(sentences)
    
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([4, 4])
  • Notebooks
  • Google Colab
  • Kaggle
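In recent sentence-transformers releases, `model.similarity` defaults to cosine similarity unless the model's configuration selects another score function. The score matrix from the snippet above can be reproduced with plain NumPy; this is a sketch, and the helper name and toy embeddings are illustrative, not taken from the repository:

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity, matching model.similarity's default behavior."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms          # unit-length rows
    return normalized @ normalized.T         # dot products of unit vectors

# Toy 4 x 3 embeddings standing in for model.encode(sentences).
emb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (4, 4)
```

Each entry lies in [-1, 1], with the diagonal equal to 1 since every sentence is identical to itself.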
1 contributor: zacCMU
History: 2 commits
Latest commit: Initial upload of MiniLM2 model fine-tuned with TripletLoss (c15dad7, verified, 5 months ago)
  • 1_Pooling: Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • .gitattributes (1.52 kB): initial commit (5 months ago)
  • README.md (25 kB): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • config.json (611 Bytes): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • config_sentence_transformers.json (283 Bytes): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • model.safetensors (90.9 MB): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • modules.json (349 Bytes): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • sentence_bert_config.json (57 Bytes): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • special_tokens_map.json (695 Bytes): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • tokenizer.json (712 kB): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • tokenizer_config.json (1.46 kB): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
  • vocab.txt (232 kB): Initial upload of MiniLM2 model fine-tuned with TripletLoss (5 months ago)
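The 1_Pooling directory holds the configuration that collapses BERT token embeddings into a single sentence vector. For MiniLM-style sentence-transformers models this is commonly masked mean pooling, though the exact mode is set in 1_Pooling/config.json. A minimal sketch of that step, using NumPy and dummy tensors (the function name and toy shapes are illustrative, not from the repository):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over the sequence axis, ignoring padding positions."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, hidden)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid divide-by-zero
    return summed / counts

# Dummy batch: 2 sequences, 4 tokens each, hidden size 3; second sequence is padded.
tokens = np.arange(24, dtype=np.float64).reshape(2, 4, 3)
mask = np.array([[1, 1, 1, 1],
                 [1, 1, 0, 0]])
pooled = mean_pool(tokens, mask)
print(pooled.shape)  # (2, 3)
```

Only non-padding tokens contribute, so the second sequence's vector is the mean of its first two token embeddings.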