---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:64
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: '1. What is the first step to take when implementing architecture
as code according to the provided context?
2. How should the content of each file be formatted when outputting code?'
sentences:
- architecture is, in the end, implemented as code.\\n\\nThink step by step and
reason yourself to the right decisions to make sure we get it right.\\nYou will
first lay out the names of the core classes, functions, methods that will be necessary,
as well as a quick comment on their purpose.\\n\\nThen you will output the content
of each file including ALL code.\\nEach file must strictly follow a markdown code
block format, where the following tokens must be replaced such that\\nFILENAME
is the lowercase file name including the file extension,\\nLANG is the markup
code block language for the code\'s language, and CODE is the code:\\n\\nFILENAME\\n\`\`\`LANG\\nCODE\\n\`\`\`\\n\\nYou
will start with the \\"entrypoint\\" file, then go to the
- 'Stream tokens:
for message, metadata in graph.stream( {"question": "What is Task Decomposition?"},
stream_mode="messages"): print(message.content, end="|")
|Task| decomposition| is| the| process| of| breaking| down| complex| tasks| into|
smaller|,| more| manageable| steps|.| It| can| be| achieved| through| techniques|
like| Chain| of| Thought| (|Co|T|)| prompting|,| which| encourages| the| model|
to| think| step| by| step|,| or| through| more| structured| methods| like| the|
Tree| of| Thoughts|.| This| approach| not| only| simplifies| task| execution|
but| also| provides| insights| into| the| model|''s| reasoning| process|.||
tipFor async invocations, use:result = await graph.ainvoke(...)andasync for step
in graph.astream(...):'
- 'return {"answer": response.content}graph_builder = StateGraph(State).add_sequence([analyze_query,
retrieve, generate])graph_builder.add_edge(START, "analyze_query")graph = graph_builder.compile()'
- source_sentence: "1. What is the purpose of the DocumentTransformer object in the\
\ context provided? \n2. Where can one find detailed documentation on how to\
\ use DocumentTransformers?"
sentences:
- 'Learn more about splitting text using different methods by reading the how-to
docs
Code (py or js)
Scientific papers
Interface: API reference for the base interface.
DocumentTransformer: Object that performs a transformation on a list
of Document objects.
Docs: Detailed documentation on how to use DocumentTransformers
Integrations
Interface: API reference for the base interface.'
- '{''retrieve'': {''context'': [Document(id=''a42dc78b-8f76-472a-9e25-180508af74f3'',
metadata={''source'': ''https://lilianweng.github.io/posts/2023-06-23-agent/'',
''start_index'': 1585}, page_content=''Fig. 1. Overview of a LLM-powered autonomous
agent system.\nComponent One: Planning#\nA complicated task usually involves many
steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain
of thought (CoT; Wei et al. 2022) has become a standard prompting technique for
enhancing model performance on complex tasks. The model is instructed to “think
step by step” to utilize more test-time computation to decompose hard tasks into
smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks
and shed lights into'
- 'Do I need to use LangGraph?LangGraph is not required to build a RAG application.
Indeed, we can implement the same application logic through invocations of the
individual components:question = "..."retrieved_docs = vector_store.similarity_search(question)docs_content
= "\n\n".join(doc.page_content for doc in retrieved_docs)prompt = prompt.invoke({"question":
question, "context": docs_content})answer = llm.invoke(prompt)The benefits of
LangGraph include:
Support for multiple invocation modes: this logic would need to be rewritten if
we wanted to stream output tokens, or stream the results of individual steps;
Automatic support for tracing via LangSmith and deployments via LangGraph Platform;'
- source_sentence: '1. What mode did the agent move into after the clarifications
were made?
2. What instructions were given to the agent regarding the code writing process?'
sentences:
- '= RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)all_splits
= text_splitter.split_documents(docs)# Update metadata (illustration purposes)total_documents
= len(all_splits)third = total_documents // 3for i, document in enumerate(all_splits): if
i < third: document.metadata["section"] = "beginning" elif i < 2 * third: document.metadata["section"]
= "middle" else: document.metadata["section"] = "end"# Index chunksvector_store
= InMemoryVectorStore(embeddings)_ = vector_store.add_documents(all_splits)# Define
schema for searchclass Search(TypedDict): """Search query.""" query: Annotated[str,
..., "Search query to run."] section: Annotated[ Literal["beginning",
"middle", "end"],'
- 'limitations:''), Document(id=''ca7f06e4-2c2e-4788-9a81-2418d82213d9'', metadata={''source'':
''https://lilianweng.github.io/posts/2023-06-23-agent/'', ''start_index'': 32942,
''section'': ''end''}, page_content=''}\n]\nThen after these clarification, the
agent moved into the code writing mode with a different system message.\nSystem
message:''), Document(id=''1fcc2736-30f4-4ef6-90f2-c64af92118cb'', metadata={''source'':
''https://lilianweng.github.io/posts/2023-06-23-agent/'', ''start_index'': 35127,
''section'': ''end''}, page_content=''"content": "You will get instructions for
code to write.\\nYou will write a very long answer. Make sure that every detail
of the architecture is, in the end, implemented as code.\\nMake sure that every
detail of the architecture is,'
- 'Build a Retrieval Augmented Generation (RAG) App: Part 1 | 🦜️🔗 LangChain'
- source_sentence: '1. What is the purpose of the `getpass` module in the provided
context?
2. How is the chat model initialized in the given code snippet?'
sentences:
- 'Select chat model:Groq▾GroqOpenAIAnthropicAzureGoogle VertexAWSCohereNVIDIAFireworks
AIMistral AITogether AIIBM watsonxDatabrickspip install -qU "langchain[groq]"import
getpassimport osif not os.environ.get("GROQ_API_KEY"): os.environ["GROQ_API_KEY"]
= getpass.getpass("Enter API key for Groq: ")from langchain.chat_models import
init_chat_modelllm = init_chat_model("llama3-8b-8192", model_provider="groq")'
- 'One of the most powerful applications enabled by LLMs is sophisticated question-answering
(Q&A) chatbots. These are applications that can answer questions about specific
source information. These applications use a technique known as Retrieval Augmented
Generation, or RAG.
This is a multi-part tutorial:'
- 'user''s request in a straightforward manner. Then describe the task process and
show your analysis and model inference results to the user in the first person.
If inference results contain a file path, must tell the user the complete file
path.")]}}----------------{''generate'': {''answer'': ''Task decomposition is
the process of breaking down a complex task into smaller, more manageable steps.
This technique, often enhanced by methods like Chain of Thought (CoT) or Tree
of Thoughts, allows models to reason through tasks systematically and improves
performance by clarifying the thought process. It can be achieved through simple
prompts, task-specific instructions, or human inputs.''}}----------------'
- source_sentence: "1. How do chat models utilize the state of the graph to recover\
\ sources for generated answers? \n2. What is the significance of the \"context\"\
\ field in the state when returning sources?"
sentences:
- 'Docs: Detailed documentation on how to use embeddings.
Integrations: 30+ integrations to choose from.
Interface: API reference for the base interface.
VectorStore: Wrapper around a vector database, used for storing and
querying embeddings.
Docs: Detailed documentation on how to use vector stores.
Integrations: 40+ integrations to choose from.
Interface: API reference for the base interface.'
- 'Returning sources​
Note that by storing the retrieved context in the state of the graph, we recover
sources for the model''s generated answer in the "context" field of the state.
See this guide on returning sources for more detail.
Go deeper​
Chat models take in a sequence of messages and return a message.'
- display(Image(graph.get_graph().draw_mermaid_png()))
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 1.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 1.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1
name: Cosine Precision@10
- type: cosine_recall@1
value: 1.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 1.0
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 1.0
name: Cosine Mrr@10
- type: cosine_map@100
value: 1.0
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
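Reading off the modules above: a BERT encoder, CLS-token pooling, then L2 normalization. For intuition only, here is a minimal sketch of that pipeline using `transformers` directly; it assumes the checkpoint's transformer weights load via `AutoModel` (the supported path is the Sentence Transformers API in the next section).
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Rsr2425/simplify-ft-arctic-embed-l")
encoder = AutoModel.from_pretrained("Rsr2425/simplify-ft-arctic-embed-l")

inputs = tokenizer(["What is Task Decomposition?"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (batch, seq_len, 1024)

cls = hidden[:, 0]                                      # CLS pooling, per the Pooling config
embedding = torch.nn.functional.normalize(cls, dim=-1)  # Normalize() module
print(embedding.shape)  # torch.Size([1, 1024])
```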
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l")
# Run inference
sentences = [
'1. How do chat models utilize the state of the graph to recover sources for generated answers? \n2. What is the significance of the "context" field in the state when returning sources?',
'Returning sources\u200b\nNote that by storing the retrieved context in the state of the graph, we recover sources for the model\'s generated answer in the "context" field of the state. See this guide on returning sources for more detail.\nGo deeper\u200b\nChat models take in a sequence of messages and return a message.',
'Docs: Detailed documentation on how to use embeddings.\nIntegrations: 30+ integrations to choose from.\nInterface: API reference for the base interface.\n\nVectorStore: Wrapper around a vector database, used for storing and\nquerying embeddings.\n\nDocs: Detailed documentation on how to use vector stores.\nIntegrations: 40+ integrations to choose from.\nInterface: API reference for the base interface.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
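Since the model is tuned for retrieval-style similarity, a typical pattern is to embed one query against a small set of passages and rank by cosine score. A short sketch, with a made-up query and passages:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l")

# Hypothetical query and passages, for illustration only
query = "How can I return sources for a RAG answer?"
passages = [
    "By storing the retrieved context in the state of the graph, we recover sources for the answer.",
    "DocumentTransformer: Object that performs a transformation on a list of Document objects.",
]

# model.similarity applies the model's configured similarity function (cosine here)
scores = model.similarity(model.encode([query]), model.encode(passages))[0].tolist()
for passage, score in sorted(zip(passages, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {passage}")
```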
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **1.0** |
| cosine_mrr@10 | 1.0 |
| cosine_map@100 | 1.0 |
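The evaluation set behind these numbers is not published with the card, so the uniformly perfect scores cannot be re-derived here; the same metrics can, however, be computed on your own data. A minimal sketch with hypothetical queries and corpus:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l")

# Hypothetical toy data; replace with your own queries, corpus, and relevance judgments
queries = {"q1": "What is task decomposition?"}
corpus = {
    "d1": "Task decomposition breaks a complex task into smaller, manageable steps.",
    "d2": "VectorStore: wrapper around a vector database for storing embeddings.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```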
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 64 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 64 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 37.42 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 153.86 tokens</li><li>max: 286 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>1. How do chat models utilize the state of the graph to recover sources for generated answers? <br>2. What is the significance of the "context" field in the state when returning sources?</code> | <code>Returning sources​<br>Note that by storing the retrieved context in the state of the graph, we recover sources for the model's generated answer in the "context" field of the state. See this guide on returning sources for more detail.<br>Go deeper​<br>Chat models take in a sequence of messages and return a message.</code> |
| <code>1. What is the purpose of the indexing process in the data pipeline?<br>2. How does the retrieval and generation phase utilize the indexed data to respond to user queries?</code> | <code>Indexing: a pipeline for ingesting data from a source and indexing it. This usually happens offline.<br>Retrieval and generation: the actual RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes that to the model.<br>Note: the indexing portion of this tutorial will largely follow the semantic search tutorial.<br>The most common full sequence from raw data to answer looks like:<br>Indexing​</code> |
| <code>1. What is task decomposition and how does it help in problem-solving?<br>2. Can you explain the methods used in task decomposition, such as chain of thought prompting and the tree of thoughts approach?</code> | <code>user's request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. If inference results contain a file path, must tell the user the complete file path.")]Answer: Task decomposition is a technique used to break down complex tasks into smaller, manageable steps, allowing for more efficient problem-solving. This can be achieved through methods like chain of thought prompting or the tree of thoughts approach, which explores multiple reasoning possibilities at each step. It can be initiated through simple prompts, task-specific instructions, or human inputs.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
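MatryoshkaLoss optimizes each listed prefix of the embedding so that truncated vectors remain usable on their own. A sketch of loading the model at one of the trained sizes via `truncate_dim` (supported by recent Sentence Transformers versions; 256 here is just one of the dimensions listed above):
```python
from sentence_transformers import SentenceTransformer

# Emit 256-dimensional embeddings, one of the Matryoshka sizes trained above
model = SentenceTransformer("Rsr2425/simplify-ft-arctic-embed-l", truncate_dim=256)

embeddings = model.encode(["What is Task Decomposition?"])
print(embeddings.shape)  # (1, 256)
```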
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
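A minimal sketch of a comparable training run with these non-default hyperparameters, using the Sentence Transformers trainer API; the one-pair dataset below is a hypothetical stand-in, since the actual 64 training pairs are not published:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Hypothetical (question, passage) pair standing in for the real training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is task decomposition?"],
    "sentence_1": ["Task decomposition breaks a complex task into smaller steps."],
})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, matching the card's loss config
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="simplify-ft-arctic-embed-l",
    num_train_epochs=10,
    per_device_train_batch_size=16,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```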
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 4 | 1.0 |
| 2.0 | 8 | 1.0 |
| 3.0 | 12 | 1.0 |
| 4.0 | 16 | 1.0 |
| 5.0 | 20 | 1.0 |
| 6.0 | 24 | 1.0 |
| 7.0 | 28 | 1.0 |
| 8.0 | 32 | 1.0 |
| 9.0 | 36 | 1.0 |
| 10.0 | 40 | 1.0 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->