
vineet10/new_model

Tags: Sentence Similarity · sentence-transformers · Safetensors · bert · feature-extraction · Generated from Trainer · dataset_size:26 · loss:MultipleNegativesRankingLoss · loss:MatryoshkaLoss · text-embeddings-inference
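The loss:MatryoshkaLoss tag suggests the embeddings were trained so that a truncated prefix of each vector remains a usable (lower-dimensional) embedding. A minimal numpy sketch of the idea, using dummy vectors rather than this model's real output dimensionality:

```python
import numpy as np

# Hypothetical full-dimension embeddings (8 dims here just to illustrate;
# the real model's output dimension is set by its BERT backbone).
emb = np.random.default_rng(0).normal(size=(4, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Matryoshka-style truncation: keep only the first k dimensions,
# then re-normalize so cosine similarity remains meaningful.
k = 4
truncated = emb[:, :k]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

print(truncated.shape)  # (4, 4)
```

In recent sentence-transformers versions the same effect can be requested at load time via the `truncate_dim` argument to `SentenceTransformer`.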

Instructions for using vineet10/new_model with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • sentence-transformers

    How to use vineet10/new_model with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    # Download the model from the Hugging Face Hub
    model = SentenceTransformer("vineet10/new_model")
    
    sentences = [
        "The Employee agrees to diligently, honestly, and to the best of their abilities, perform all",
        "What are the Payment Terms for the Batteries?",
        "What are the general obligations of the Employee?",
        "according to the MOU?"
    ]
    # Encode the sentences into dense embedding vectors
    embeddings = model.encode(sentences)
    
    # Pairwise similarity scores between all sentences
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # [4, 4]
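    Unless the model's config overrides it, `model.similarity` defaults to cosine similarity, which is equivalent to row-normalizing the embeddings and taking their inner-product matrix. A self-contained sketch with dummy embeddings standing in for `model.encode(...)` output:

```python
import numpy as np

# Dummy stand-ins for model.encode(...) output (4 sentences, 8 dims).
embeddings = np.random.default_rng(1).normal(size=(4, 8))

# Cosine similarity: normalize each row, then take the inner-product matrix.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarities = normed @ normed.T

print(similarities.shape)  # (4, 4)
# The diagonal is each sentence's similarity with itself, i.e. 1.0.
print(np.allclose(np.diag(similarities), 1.0))  # True
```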
  • Notebooks
  • Google Colab
  • Kaggle
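The repo's 1_Pooling folder and modules.json alongside a BERT backbone suggest sentence embeddings are produced by pooling BERT's token embeddings. Assuming the common mean-pooling setup, here is a minimal numpy sketch of mean pooling with an attention mask so padding tokens are excluded (dummy tensors, not the model's real output):

```python
import numpy as np

# Dummy BERT output: batch of 2 sequences, 5 tokens, hidden size 8.
token_embeddings = np.random.default_rng(2).normal(size=(2, 5, 8))
# Attention mask: 1 for real tokens, 0 for padding.
attention_mask = np.array([[1, 1, 1, 0, 0],
                           [1, 1, 1, 1, 1]])

# Mean pooling: average token embeddings, counting only unmasked tokens.
mask = attention_mask[:, :, None]               # (2, 5, 1)
summed = (token_embeddings * mask).sum(axis=1)  # (2, 8)
counts = mask.sum(axis=1)                       # (2, 1)
sentence_embeddings = summed / counts

print(sentence_embeddings.shape)  # (2, 8)
```

When loading through the sentence-transformers library as shown above, this pooling step is applied automatically by the model's pooling module.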
new_model (439 MB)
  • 1 contributor
History: 2 commits
Latest commit by vineet10: Add new SentenceTransformer model. (e8c1077, verified, almost 2 years ago)
All files below were added in the commit "Add new SentenceTransformer model." almost 2 years ago, except .gitattributes (initial commit).

  • 1_Pooling/
  • .gitattributes (1.52 kB)
  • README.md (15.3 kB)
  • config.json (740 Bytes)
  • config_sentence_transformers.json (201 Bytes)
  • model.safetensors (438 MB)
  • modules.json (349 Bytes)
  • sentence_bert_config.json (52 Bytes)
  • special_tokens_map.json (695 Bytes)
  • tokenizer.json (712 kB)
  • tokenizer_config.json (1.24 kB)
  • vocab.txt (232 kB)