imaneb942/MNLP_M2_document_encoder

Tags: Sentence Similarity, sentence-transformers, PyTorch, ONNX, Safetensors, OpenVINO, English, bert, mteb, Sentence Transformers, Eval Results (legacy), text-embeddings-inference

Instructions for using imaneb942/MNLP_M2_document_encoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use imaneb942/MNLP_M2_document_encoder with sentence-transformers (a short retrieval sketch follows this list):

    from sentence_transformers import SentenceTransformer

    # Download the model from the Hub (cached after the first run)
    model = SentenceTransformer("imaneb942/MNLP_M2_document_encoder")

    # Encode a batch of sentences into dense embeddings
    sentences = [
        "That is a happy person",
        "That is a happy dog",
        "That is a very happy person",
        "Today is a sunny day"
    ]
    embeddings = model.encode(sentences)

    # Compute pairwise similarity scores between all embeddings
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([4, 4])
  • Notebooks
  • Google Colab
  • Kaggle
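As a quick follow-up to the snippet above, the same encoder can rank a small corpus of documents against a query. This is a minimal sketch, not part of the model card; the corpus and query strings are invented for illustration:

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("imaneb942/MNLP_M2_document_encoder")

    # Hypothetical corpus and query, for illustration only
    corpus = [
        "The mitochondrion is the powerhouse of the cell.",
        "Paris is the capital of France.",
        "Gradient descent iteratively updates parameters to minimize a loss.",
    ]
    query = "Which optimization method updates parameters step by step?"

    corpus_embeddings = model.encode(corpus)
    query_embedding = model.encode([query])

    # similarity() returns a (1, len(corpus)) tensor of scores
    scores = model.similarity(query_embedding, corpus_embeddings)
    best = scores.argmax().item()
    print(corpus[best])  # expected: the gradient descent sentence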
MNLP_M2_document_encoder (5.36 GB)
  • 1 contributor
History: 2 commits
imaneb942: Upload 24 files (90e6ade, verified, 11 months ago)
  • 1_Pooling (Upload 24 files, 11 months ago)
  • onnx (Upload 24 files, 11 months ago; a loading sketch for the ONNX and OpenVINO exports follows this list)
  • openvino (Upload 24 files, 11 months ago)
  • .gitattributes (1.55 kB; Upload 24 files, 11 months ago)
  • README.md (70.6 kB; Upload 24 files, 11 months ago)
  • config.json (644 Bytes; Upload 24 files, 11 months ago)
  • model.safetensors (670 MB; Upload 24 files, 11 months ago)
  • modules.json (404 Bytes; Upload 24 files, 11 months ago)
  • pytorch_model.bin (670 MB; Upload 24 files, 11 months ago)
    Detected pickle imports (4): torch.HalfStorage, collections.OrderedDict, torch._utils._rebuild_tensor_v2, torch.LongStorage. A pickle import is a Python object that the pickled file references when it is loaded; because unpickling can execute arbitrary code, prefer the model.safetensors weights above when both formats are available.
  • sentence_bert_config.json (60 Bytes; Upload 24 files, 11 months ago)
  • special_tokens_map.json (132 Bytes; Upload 24 files, 11 months ago)
  • tokenizer.json (742 kB; Upload 24 files, 11 months ago)
  • tokenizer_config.json (355 Bytes; Upload 24 files, 11 months ago)
  • vocab.txt (262 kB; Upload 24 files, 11 months ago)
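The onnx and openvino folders above hold exported copies of the weights for accelerated inference. The following is a minimal sketch, not taken from this model card, assuming sentence-transformers v3.2 or later, where the backend argument chooses which export to load:

    from sentence_transformers import SentenceTransformer

    # Load the ONNX export instead of the default PyTorch weights
    # (assumes sentence-transformers >= 3.2 with onnxruntime installed)
    model = SentenceTransformer("imaneb942/MNLP_M2_document_encoder", backend="onnx")

    # backend="openvino" would select the OpenVINO export instead
    # (requires the openvino package)

    embeddings = model.encode(["That is a happy person"])
    print(embeddings.shape)  # (1, embedding dimension)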