Cheselle/finetuned-arctic

Tags: Sentence Similarity · sentence-transformers · Safetensors · bert · feature-extraction · Generated from Trainer · dataset_size:600 · loss:MatryoshkaLoss · loss:MultipleNegativesRankingLoss · Eval Results · text-embeddings-inference
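The loss tags indicate this model was fine-tuned (on roughly 600 pairs, per the `dataset_size:600` tag) with MatryoshkaLoss wrapping MultipleNegativesRankingLoss. As a rough, framework-free illustration of what those two losses compute (not the library's actual implementation, and using toy vectors and made-up truncation dims), the idea can be sketched in plain numpy: each anchor is scored against every in-batch positive, cross-entropy pushes the matching pair to the top, and the Matryoshka wrapper averages that loss over truncated prefixes of the embedding.

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """Multiple-negatives ranking loss: the matching row is each anchor's
    positive; every other in-batch positive serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                       # (batch, batch) scaled cosines
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # cross-entropy, labels on the diagonal

def matryoshka_mnr_loss(anchors, positives, dims=(16, 8, 4)):
    """Matryoshka wrapper: average the base loss over truncated prefixes,
    so short prefixes of the embedding remain useful on their own."""
    return float(np.mean([mnr_loss(anchors[:, :d], positives[:, :d]) for d in dims]))

# Toy batch: positives are noisy copies of their anchors.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 16))
positives = anchors + 0.1 * rng.normal(size=(4, 16))
print(matryoshka_mnr_loss(anchors, positives))
```

Mismatched anchor/positive pairs drive the loss up, which is what steers the encoder toward ranking the true pair highest.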
Instructions for using Cheselle/finetuned-arctic with libraries, inference providers, notebooks, and local apps:

  • Libraries
  • sentence-transformers

    How to use Cheselle/finetuned-arctic with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    # Download the model from the Hugging Face Hub
    model = SentenceTransformer("Cheselle/finetuned-arctic")
    
    sentences = [
        "What are the existing regulatory safety requirements mentioned in the context for medical devices?",
        "47 \nAppendix A. Primary GAI Considerations \nThe following primary considerations were derived as overarching themes from the GAI PWG \nconsultation process. These considerations (Governance, Pre-Deployment Testing, Content Provenance, \nand Incident Disclosure) are relevant for voluntary use by any organization designing, developing, and \nusing GAI and also inform the Actions to Manage GAI risks. Information included about the primary \nconsiderations is not exhaustive, but highlights the most relevant topics derived from the GAI PWG.  \nAcknowledgments: These considerations could not have been surfaced without the helpful analysis and \ncontributions from the community and NIST staff GAI PWG leads: George Awad, Luca Belli, Harold Booth, \nMat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz, Martin Stanley, and Kyra Yee. \nA.1. Governance \nA.1.1. Overview \nLike any other technology system, governance principles and techniques can be used to manage risks",
        "behavior or outcomes of a GAI model or system, how they could occur, and stress test safeguards”. AI \nred-teaming can be performed before or after AI models or systems are made available to the broader \npublic; this section focuses on red-teaming in pre-deployment contexts.  \nThe quality of AI red-teaming outputs is related to the background and expertise of the AI red team \nitself. Demographically and interdisciplinarily diverse AI red teams can be used to identify flaws in the \nvarying contexts where GAI will be used. For best results, AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural aspects within the deployment context. AI red-teaming results \nshould be given additional analysis before they are incorporated into organizational governance and \ndecision making, policy and procedural updates, and AI risk management efforts. \nVarious types of AI red-teaming may be appropriate, depending on the use case: \n•",
        "SECTION TITLE\n \n \n \n \n \n \nApplying The Blueprint for an AI Bill of Rights \nRELATIONSHIP TO EXISTING LAW AND POLICY\nThere are regulatory safety requirements for medical devices, as well as sector-, population-, or technology-spe­\ncific privacy and security protections. Ensuring some of the additional protections proposed in this framework \nwould require new laws to be enacted or new policies and practices to be adopted. In some cases, exceptions to \nthe principles described in the Blueprint for an AI Bill of Rights may be necessary to comply with existing law, \nconform to the practicalities of a specific use case, or balance competing public interests. In particular, law \nenforcement, and other regulatory contexts may require government actors to protect civil rights, civil liberties, \nand privacy in a manner consistent with, but using alternate mechanisms to, the specific principles discussed in"
    ]
    # One embedding per input sentence
    embeddings = model.encode(sentences)
    
    # Pairwise similarity scores (cosine by default)
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # [4, 4]
  • Notebooks
  • Google Colab
  • Kaggle
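Because the model was trained with MatryoshkaLoss, its embeddings should tolerate truncation to a prefix of their dimensions (sentence-transformers exposes this directly via the `truncate_dim` argument to `SentenceTransformer`). A minimal offline sketch of the truncate-then-renormalize step, using synthetic stand-ins for `model.encode(...)` output (the 768-dim width and the truncation sizes here are illustrative assumptions):

```python
import numpy as np

def truncate_and_normalize(embeddings, dim):
    """Keep only the first `dim` components, then L2-normalize so that
    plain dot products are again cosine similarities."""
    cut = embeddings[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

# Synthetic stand-ins for model.encode(sentences) output.
rng = np.random.default_rng(42)
full = rng.normal(size=(4, 768))

for dim in (768, 256, 64):
    small = truncate_and_normalize(full, dim)
    similarities = small @ small.T   # (4, 4) cosine-similarity matrix
    print(dim, similarities.shape)
```

With the real model, passing e.g. `SentenceTransformer("Cheselle/finetuned-arctic", truncate_dim=256)` performs the equivalent truncation inside `encode`, trading a little accuracy for smaller, cheaper vectors.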
finetuned-arctic
439 MB
1 contributor
History: 2 commits
Latest: Cheselle, "Add new SentenceTransformer model." (e675356, verified, over 1 year ago)
  • 1_Pooling · Add new SentenceTransformer model. · over 1 year ago
  • .gitattributes · 1.52 kB · initial commit · over 1 year ago
  • README.md · 36.7 kB · Add new SentenceTransformer model. · over 1 year ago
  • config.json · 657 Bytes · Add new SentenceTransformer model. · over 1 year ago
  • config_sentence_transformers.json · 277 Bytes · Add new SentenceTransformer model. · over 1 year ago
  • model.safetensors · 438 MB · Add new SentenceTransformer model. · over 1 year ago
  • modules.json · 349 Bytes · Add new SentenceTransformer model. · over 1 year ago
  • sentence_bert_config.json · 53 Bytes · Add new SentenceTransformer model. · over 1 year ago
  • special_tokens_map.json · 695 Bytes · Add new SentenceTransformer model. · over 1 year ago
  • tokenizer.json · 712 kB · Add new SentenceTransformer model. · over 1 year ago
  • tokenizer_config.json · 1.38 kB · Add new SentenceTransformer model. · over 1 year ago
  • vocab.txt · 232 kB · Add new SentenceTransformer model. · over 1 year ago