
G-UDS/disaster_ko-bert

Tags: Sentence Similarity, sentence-transformers, Safetensors, bert, feature-extraction, Generated from Trainer, dataset_size:1022, loss:MultipleNegativesRankingLoss, text-embeddings-inference
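The loss:MultipleNegativesRankingLoss tag indicates the model was fine-tuned with an in-batch contrastive objective: each (anchor, positive) pair in a batch treats the other pairs' positives as negatives, and a cross-entropy over scaled cosine scores pushes matching pairs together. A minimal NumPy sketch of that objective on toy vectors (illustrative only, not the actual training code; the `scale` default mirrors the library's usual value):

```python
import numpy as np

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """In-batch contrastive loss: row i of `positives` is the positive
    for row i of `anchors`; every other row serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                 # (batch, batch) cosine scores
    # Cross-entropy with the diagonal (the matching pair) as the target class.
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()

# Perfectly aligned pairs give a near-zero loss; mismatched pairs do not.
pairs = np.eye(4)
loss_aligned = multiple_negatives_ranking_loss(pairs, pairs)
loss_mismatched = multiple_negatives_ranking_loss(pairs, np.roll(pairs, 1, axis=0))
print(loss_aligned, loss_mismatched)  # near 0.0 vs. roughly the scale value
```

The loss drives every sentence's embedding away from the other sentences in the batch, which is why larger batches act as harder training signals for this objective.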

G-UDS/disaster_ko-bert can be used with the sentence-transformers library, as well as with compatible inference providers, notebooks, and local apps.


    How to use G-UDS/disaster_ko-bert with sentence-transformers:

    from sentence_transformers import SentenceTransformer

    # Download the model from the Hugging Face Hub (or load it from the local cache).
    model = SentenceTransformer("G-UDS/disaster_ko-bert")

    sentences = [
        # Korean: "Long-Term Stability Assessment of a Railway Bridge Abutment
        # Transition Zone Reinforced with Geosynthetic Tubes"
        "토목섬유튜브로 보강한 철도 교대 접속부 구조의 장기안정성 평가",
        "A Study on Mechanism of Fire Spread between Rooms",
        "Assessement of Long Term Stability of Railway Bridge Abutment Using Geosynthetics Tube",
        "Analysis on Reliability for the Storm Sewer considering Sedimentation"
    ]
    # Encode each sentence into a dense embedding vector.
    embeddings = model.encode(sentences)

    # Pairwise similarity (cosine by default) between all sentence pairs.
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([4, 4])
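For reference, model.similarity defaults to cosine similarity, so the same matrix can be reproduced directly from the embeddings. A minimal NumPy sketch on toy vectors (the values here are illustrative stand-ins, not this model's actual embeddings):

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    """Pairwise cosine similarity, matching the default behavior of
    model.similarity(embeddings, embeddings) in sentence-transformers."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

# Toy 4 x 3 embedding matrix standing in for model.encode(sentences).
emb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (4, 4)
```

Each entry sims[i, j] lies in [-1, 1], the diagonal is 1.0 (every sentence is identical to itself), and the matrix is symmetric.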
1 contributor: GloryKingsman
History: 2 commits; latest commit 3ab56a0 (verified, over 1 year ago): "Upload 11 files"
  • 1_Pooling/ (Upload 11 files, over 1 year ago)
  • .gitattributes, 1.52 kB (initial commit, over 1 year ago)
  • README.md, 12.8 kB (Upload 11 files, over 1 year ago)
  • config.json, 678 Bytes (Upload 11 files, over 1 year ago)
  • config_sentence_transformers.json, 199 Bytes (Upload 11 files, over 1 year ago)
  • model.safetensors, 90.9 MB (Upload 11 files, over 1 year ago)
  • modules.json, 349 Bytes (Upload 11 files, over 1 year ago)
  • sentence_bert_config.json, 53 Bytes (Upload 11 files, over 1 year ago)
  • special_tokens_map.json, 695 Bytes (Upload 11 files, over 1 year ago)
  • tokenizer.json, 712 kB (Upload 11 files, over 1 year ago)
  • tokenizer_config.json, 1.46 kB (Upload 11 files, over 1 year ago)
  • vocab.txt, 232 kB (Upload 11 files, over 1 year ago)
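The 1_Pooling directory holds the pooling configuration that turns per-token BERT outputs into one sentence vector; sentence-transformers BERT models commonly use masked mean pooling (whether this checkpoint does is defined by its 1_Pooling config). A minimal NumPy sketch of masked mean pooling on toy shapes (hypothetical values, not this model's tensors):

```python
import numpy as np

def mean_pooling(token_embeddings, attention_mask):
    """Average token embeddings, counting only non-padding positions.
    token_embeddings: (batch, seq_len, hidden); attention_mask: (batch, seq_len)."""
    mask = attention_mask[:, :, None].astype(float)  # broadcast over hidden dim
    summed = (token_embeddings * mask).sum(axis=1)   # sum of real tokens only
    counts = mask.sum(axis=1)                        # real-token count per sentence
    return summed / counts

# Toy batch: 2 sentences, 3 token positions, hidden size 4.
tokens = np.ones((2, 3, 4))
tokens[1, 2] = 5.0                       # a padded position we want to ignore
mask = np.array([[1, 1, 1], [1, 1, 0]])  # second sentence has one pad token
pooled = mean_pooling(tokens, mask)
print(pooled.shape)  # (2, 4)
```

Masking before averaging is the key detail: without it, padding tokens would drag sentence vectors toward the padding embedding, especially for short sentences in long batches.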