
JoeNoss1998/Noss

Tags: Sentence Similarity · sentence-transformers · Safetensors · bert · feature-extraction · Generated from Trainer · dataset_size:800 · loss:MatryoshkaLoss · loss:MultipleNegativesRankingLoss · Eval Results (legacy) · text-embeddings-inference

Instructions for using JoeNoss1998/Noss with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use JoeNoss1998/Noss with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    model = SentenceTransformer("JoeNoss1998/Noss")
    
    sentences = [
        "How can bias testing influence the design and launch of automated systems?",
        "reinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of",
        "the severity of certain diseases in Black Americans. Instances of discriminatory practices built into and \nresulting from AI and other automated systems exist across many industries, areas, and contexts. While automated \nsystems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination \nprotections should be built into their design, deployment, and ongoing use. \nMany companies, non-profits, and federal government agencies are already taking steps to ensure the public \nis protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product \nquality assessment and launch procedures, and in some cases this testing has led products to be changed or not",
        "accuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging \ngeneration of artificially intelligent partners.95 The National Science Foundation’s program on Fairness in \nArtificial Intelligence also includes a specific interest in research foundations for explainable AI.96\n45"
    ]
    # Encode the sentences into dense embeddings
    embeddings = model.encode(sentences)

    # Pairwise similarity matrix (cosine similarity by default)
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([4, 4])
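The tags above include `loss:MatryoshkaLoss`, which usually means the leading dimensions of each embedding were trained to work as standalone lower-dimensional embeddings. A minimal NumPy sketch of that idea follows; the vectors here are random stand-ins for `model.encode` output, and both the 768-dim full size and the 256-dim truncation size are assumptions not confirmed by this model card:

```python
import numpy as np

# Hypothetical full-size embeddings standing in for model.encode(...) output
# (a BERT-based sentence-transformers model typically emits 768-dim vectors).
rng = np.random.default_rng(0)
full = rng.normal(size=(4, 768))

# Matryoshka-style truncation: keep only the leading dimensions,
# then re-normalize each row so cosine similarity remains meaningful.
truncated = full[:, :256]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

# Cosine similarity matrix, analogous to model.similarity(embeddings, embeddings)
similarities = truncated @ truncated.T
print(similarities.shape)  # (4, 4)
```

With recent sentence-transformers releases (2.7+), the truncation can likely be requested directly via `SentenceTransformer("JoeNoss1998/Noss", truncate_dim=256)`, though whether 256 is a useful size for this particular model is an assumption.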
  • Notebooks
  • Google Colab
  • Kaggle
Noss (439 MB)
  • 1 contributor
History: 2 commits
Latest commit by JoeNoss1998: "Add new SentenceTransformer model." (1a513fd, verified, over 1 year ago)
  • 1_Pooling/: Add new SentenceTransformer model. (over 1 year ago)
  • .gitattributes (1.52 kB): initial commit (over 1 year ago)
  • README.md (28.9 kB): Add new SentenceTransformer model. (over 1 year ago)
  • config.json (657 Bytes): Add new SentenceTransformer model. (over 1 year ago)
  • config_sentence_transformers.json (281 Bytes): Add new SentenceTransformer model. (over 1 year ago)
  • model.safetensors (438 MB): Add new SentenceTransformer model. (over 1 year ago)
  • modules.json (349 Bytes): Add new SentenceTransformer model. (over 1 year ago)
  • sentence_bert_config.json (53 Bytes): Add new SentenceTransformer model. (over 1 year ago)
  • special_tokens_map.json (695 Bytes): Add new SentenceTransformer model. (over 1 year ago)
  • tokenizer.json (712 kB): Add new SentenceTransformer model. (over 1 year ago)
  • tokenizer_config.json (1.38 kB): Add new SentenceTransformer model. (over 1 year ago)
  • vocab.txt (232 kB): Add new SentenceTransformer model. (over 1 year ago)