
KennethTM/MiniLM-L6-danish-encoder-v2

Tags: Sentence Similarity · sentence-transformers · Safetensors · Danish · bert · feature-extraction · text-embeddings-inference

Instructions for using KennethTM/MiniLM-L6-danish-encoder-v2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use KennethTM/MiniLM-L6-danish-encoder-v2 with sentence-transformers (a plain transformers alternative is sketched after this list):

    from sentence_transformers import SentenceTransformer
    
    # Load the Danish sentence encoder from the Hugging Face Hub
    model = SentenceTransformer("KennethTM/MiniLM-L6-danish-encoder-v2")
    
    sentences = [
        # "Are there bicycles riding on the road?"
        "Kører der cykler på vejen?",
        # "In Denmark, bicycles are a common means of transport and have just as much
        # right to use the roads as motorists. Cyclists must, however, follow the
        # traffic rules and show consideration for other road users."
        "I Danmark er cykler et almindeligt transportmiddel, og de har lige så stor ret til at bruge vejene som bilister. Cyklister skal dog følge færdselsreglerne og vise hensyn til andre trafikanter.",
        # "The sun is shining and the sky is blue. There is no wind and the temperature
        # is perfect. It is the perfect day for a trip to the countryside to enjoy the fresh air."
        "Solen skinner, og himlen er blå. Der er ingen vind, og temperaturen er perfekt. Det er den perfekte dag til at tage en tur på landet og nyde den friske luft."
    ]
    # Encode all sentences into dense embeddings, one vector per sentence
    embeddings = model.encode(sentences)
    
    # Pairwise similarity scores between all sentence embeddings
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([3, 3])
  • Notebooks
  • Google Colab
  • Kaggle
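
The repository also contains plain BERT weights, a tokenizer, and a pooling configuration (the 1_Pooling folder in the file list below), so the encoder can be used with the transformers library directly. The snippet below is a minimal sketch, assuming the standard mean-pooling approach used by sentence-transformers models; the mean_pooling helper is illustrative and not part of the repository.

    import torch
    import torch.nn.functional as F
    from transformers import AutoTokenizer, AutoModel
    
    # Illustrative helper: average the token embeddings, ignoring padding tokens
    def mean_pooling(model_output, attention_mask):
        token_embeddings = model_output[0]  # last hidden state
        mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)
    
    tokenizer = AutoTokenizer.from_pretrained("KennethTM/MiniLM-L6-danish-encoder-v2")
    model = AutoModel.from_pretrained("KennethTM/MiniLM-L6-danish-encoder-v2")
    
    sentences = ["Kører der cykler på vejen?", "Solen skinner, og himlen er blå."]
    encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    
    with torch.no_grad():
        model_output = model(**encoded)
    
    # Mean-pool and L2-normalize so dot products equal cosine similarities
    embeddings = mean_pooling(model_output, encoded["attention_mask"])
    embeddings = F.normalize(embeddings, p=2, dim=1)
    print(embeddings.shape)

The normalized embeddings can then be compared with a simple matrix product, mirroring the model.similarity call in the sentence-transformers example above.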
Files: MiniLM-L6-danish-encoder-v2 (92.3 MB)
  • 1 contributor
History: 5 commits
Latest commit: Update README.md by KennethTM (c9b4eeb, verified, almost 2 years ago)
  • 1_Pooling (Upload 9 files, almost 2 years ago)
  • .gitattributes (1.52 kB, initial commit, almost 2 years ago)
  • README.md (5.16 kB, Update README.md, almost 2 years ago)
  • config.json (678 Bytes, Upload 9 files, almost 2 years ago)
  • config_sentence_transformers.json (194 Bytes, Upload 9 files, almost 2 years ago)
  • model.safetensors (90.9 MB, Upload 9 files, almost 2 years ago)
  • modules.json (349 Bytes, Upload 9 files, almost 2 years ago)
  • sentence_bert_config.json (53 Bytes, Upload 9 files, almost 2 years ago)
  • special_tokens_map.json (695 Bytes, Upload 9 files, almost 2 years ago)
  • tokenizer.json (725 kB, Upload 9 files, almost 2 years ago)
  • tokenizer_config.json (661 kB, Upload 9 files, almost 2 years ago)
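
For fully local use, the files listed above can be downloaded once and the model loaded from disk. A minimal sketch using huggingface_hub; the local_dir path is illustrative:

    from huggingface_hub import snapshot_download
    from sentence_transformers import SentenceTransformer
    
    # Download the full repository (weights, tokenizer, pooling config) to a local folder
    local_path = snapshot_download(
        repo_id="KennethTM/MiniLM-L6-danish-encoder-v2",
        local_dir="minilm-l6-danish-encoder-v2",  # illustrative target directory
    )
    
    # Load from disk; no network access is needed afterwards
    model = SentenceTransformer(local_path)
    print(model.encode(["Kører der cykler på vejen?"]).shape)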