
Vira21/finetuned_arctic

Tags: Sentence Similarity · sentence-transformers · Safetensors · bert · feature-extraction · Generated from Trainer · dataset_size:600 · loss:MatryoshkaLoss · loss:MultipleNegativesRankingLoss · Eval Results (legacy) · text-embeddings-inference
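The tags indicate the model was trained with MatryoshkaLoss, which typically lets you truncate each embedding to a prefix of its dimensions with only a modest quality loss. The card does not state which dimensions were used in training, so the 768 base dimension and the 256 target below are assumptions, not confirmed values; this is a minimal NumPy sketch of prefix truncation, not the library's own implementation:

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep only the first `dim` dimensions of each embedding and
    re-normalize rows to unit length so cosine similarity still works."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / norms

# Toy stand-in for model output; 768 is an assumed base dimension.
emb = np.random.rand(4, 768)
small = truncate_embeddings(emb, 256)  # 256 is an assumed Matryoshka dimension
print(small.shape)  # (4, 256)
```

Recent versions of sentence-transformers also accept a `truncate_dim` argument when constructing `SentenceTransformer`, which performs the same truncation automatically.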

Instructions for using Vira21/finetuned_arctic with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • sentence-transformers

    How to use Vira21/finetuned_arctic with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    model = SentenceTransformer("Vira21/finetuned_arctic")
    
    sentences = [
        "What are the potential risks associated with the impersonation and cyber-attacks mentioned in the context?",
        "Technology Engagement Center \nUber Technologies \nUniversity of Pittsburgh \nUndergraduate Student \nCollaborative \nUpturn \nUS Technology Policy Committee \nof the Association of Computing \nMachinery \nVirginia Puccio \nVisar Berisha and Julie Liss \nXR Association \nXR Safety Initiative \n• As an additional effort to reach out to stakeholders regarding the RFI, OSTP conducted two listening sessions\nfor members of the public. The listening sessions together drew upwards of 300 participants. The Science and\nTechnology Policy Institute produced a synopsis of both the RFI submissions and the feedback at the listening\nsessions.115\n61",
        "across all subgroups, which could leave the groups facing underperformance with worse outcomes than \nif no GAI system were used. Disparate or reduced performance for lower-resource languages also \npresents challenges to model adoption, inclusion, and accessibility, and may make preservation of \nendangered languages more difficult if GAI systems become embedded in everyday processes that would \notherwise have been opportunities to use these languages.  \nBias is mutually reinforcing with the problem of undesired homogenization, in which GAI systems \nproduce skewed distributions of outputs that are overly uniform (for example, repetitive aesthetic styles",
        "impersonation, cyber-attacks, and weapons creation. \nCBRN Information or Capabilities; \nInformation Security \nMS-2.6-007 Regularly evaluate GAI system vulnerabilities to possible circumvention of safety \nmeasures.  \nCBRN Information or Capabilities; \nInformation Security \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV"
    ]
    # Encode the sentences into dense vectors (one embedding per sentence).
    embeddings = model.encode(sentences)
    
    # Compute pairwise similarity scores (cosine similarity by default).
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # [4, 4]
  • Notebooks
  • Google Colab
  • Kaggle
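The `model.similarity` call in the snippet above computes cosine similarity by default. As a self-contained illustration of that computation, here is the same operation on toy vectors (not model output) using NumPy:

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity: normalize rows to unit length,
    then the matrix product of the rows gives the cosine scores."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Three toy 2-d "embeddings": orthogonal, orthogonal, and in between.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (3, 3)
```

Each diagonal entry is 1.0 (a vector compared with itself), and off-diagonal entries range from -1 to 1.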
finetuned_arctic (439 MB)

1 contributor · History: 2 commits
Latest commit: Vira21, "Add new SentenceTransformer model" (72e99b8, verified, over 1 year ago)
  • 1_Pooling · Add new SentenceTransformer model · over 1 year ago
  • .gitattributes · 1.52 kB · initial commit · over 1 year ago
  • README.md · 35.1 kB · Add new SentenceTransformer model · over 1 year ago
  • config.json · 683 Bytes · Add new SentenceTransformer model · over 1 year ago
  • config_sentence_transformers.json · 304 Bytes · Add new SentenceTransformer model · over 1 year ago
  • model.safetensors · 438 MB · Add new SentenceTransformer model · over 1 year ago
  • modules.json · 368 Bytes · Add new SentenceTransformer model · over 1 year ago
  • sentence_bert_config.json · 56 Bytes · Add new SentenceTransformer model · over 1 year ago
  • special_tokens_map.json · 732 Bytes · Add new SentenceTransformer model · over 1 year ago
  • tokenizer.json · 712 kB · Add new SentenceTransformer model · over 1 year ago
  • tokenizer_config.json · 1.47 kB · Add new SentenceTransformer model · over 1 year ago
  • vocab.txt · 232 kB · Add new SentenceTransformer model · over 1 year ago