|
|
--- |
|
|
tags: |
|
|
- sentence-transformers |
|
|
- sentence-similarity |
|
|
- feature-extraction |
|
|
- generated_from_trainer |
|
|
- dataset_size:53851 |
|
|
- loss:MultipleNegativesRankingLoss |
|
|
base_model: BAAI/bge-base-en-v1.5 |
|
|
widget: |
|
|
- source_sentence: A certain junior class has 1000 students and a certain senior class |
|
|
has 900 students. Among these students, there are 60 siblings pairs each consisting |
|
|
of 1 junior and 1 senior. If 1 student is to be selected at random from each class, |
|
|
what is the probability that the 2 students selected will be a sibling pair? |
|
|
sentences: |
|
|
- Let's see Pick 60/1000 first Then we can only pick 1 other pair from the 800 So |
|
|
total will be 60 / 900 *1000 Simplify and you get 2/30000 |
|
|
- To maximize number of hot dogs with 300$ Total number of hot dogs bought in 250-pack |
|
|
= 22.95*13 =298.35$ Amount remaining = 300 - 298.35 = 1.65$ This amount is too |
|
|
less to buy any 8- pack . Greatest number of hot dogs one can buy with 300 $ = |
|
|
250*13 = 3250 |
|
|
- artificial leg |
|
|
- source_sentence: A stock trader originally bought 300 shares of stock from a company |
|
|
at a total cost of m dollars. If each share was sold at 80% above the original |
|
|
cost per share of stock, then interns of m for how many dollars was each share |
|
|
sold? |
|
|
sentences: |
|
|
- Let Cost of 300 shares be $ 3000 So, Cost of 1 shares be $ 10 =>m/300 Selling |
|
|
price per share = (100+80)/100 * m/300 Or, Selling price per share = 9/5 * m/300 |
|
|
=> 9m/1500 |
|
|
- The prognostic value of p53 nuclear accumulation in gastric cancer is still unclear, |
|
|
as shown by the discordant results still reported in the literature. In this study, |
|
|
we evaluated the correlation between p53 accumulation and long-term survival of |
|
|
patients resected for intestinal and diffuse-type gastric cancer. Eighty-three |
|
|
patients with carcinoma of the intestinal type and 53 patients with carcinoma |
|
|
of the diffuse type were included in the study. Immunohistochemical staining of |
|
|
the paraffin sections was performed by using monoclonal antibody DO1; cases were |
|
|
considered positive when nuclear immunostaining was observed in 10% or more of |
|
|
the tumor cells. Prognostic significance of different variables was investigated |
|
|
by univariate and multivariate analysis. p53 positivity was found in 51.8% of |
|
|
intestinal-type and 50.9% of diffuse-type cases. No significant correlation between |
|
|
the rate of p53 overexpression and age, sex, tumor location, tumor size, depth |
|
|
of invasion, lymph node involvement, distant metastases, and surgical radicality |
|
|
was found in the two groups of patients. A statistically significant difference |
|
|
in survival rate was observed between p53-negative and p53-positive cases in the |
|
|
intestinal type (P < .05), confirmed by multivariate analysis (P < .005; relative |
|
|
risk = 3.09). On the contrary, no correlation with survival was found in diffuse-type |
|
|
cases according to p53 overexpression. |
|
|
- Many animal behaviors occur in a regular cycle. Two types of cyclic behaviors |
|
|
are circadian rhythms and migration. |
|
|
- source_sentence: Are lactate levels in severe malarial anaemia associated with haemozoin-containing |
|
|
neutrophils and low levels of IL-12? |
|
|
sentences: |
|
|
- Hyperlactataemia is often associated with a poor outcome in severe malaria in |
|
|
African children. To unravel the complex pathophysiology of this condition the |
|
|
relationship between plasma lactate levels, parasite density, pro- and anti-inflammatory |
|
|
cytokines, and haemozoin-containing leucocytes was studied in children with severe |
|
|
falciparum malarial anaemia. Twenty-six children with a primary diagnosis of severe |
|
|
malarial anaemia with any asexual Plasmodium falciparum parasite density and Hb |
|
|
< 5 g/dL were studied and the association of plasma lactate levels and haemozoin-containing |
|
|
leucocytes, parasite density, pro- and anti-inflammatory cytokines was measured. |
|
|
The same associations were measured in non-severe malaria controls (N = 60). Parasite |
|
|
density was associated with lactate levels on admission (r = 0.56, P < 0.005). |
|
|
Moreover, haemozoin-containing neutrophils and IL-12 were strongly associated |
|
|
with plasma lactate levels, independently of parasite density (r = 0.60, P = 0.003 |
|
|
and r = -0.46, P = 0.02, respectively). These associations were not found in controls |
|
|
with uncomplicated malarial anaemia. |
|
|
- one of two female reproductive organs that produces eggs and secretes estrogen. |
|
|
- hydrogen |
|
|
- source_sentence: Does phosphatidylethanol mediate its effects on the vascular endothelial |
|
|
growth factor via HDL receptor in endothelial cells? |
|
|
sentences: |
|
|
- 'Patients having previous bariatric surgery are at risk for weight regain and |
|
|
return of co-morbidities. If an anatomic basis for the failure is identified, |
|
|
many surgeons advocate revision or conversion to a Roux-en-Y gastric bypass. The |
|
|
aim of this study was to determine whether revisional bariatric surgery leads |
|
|
to sufficient weight loss and co-morbidity remission. From 2005-2012, patients |
|
|
undergoing revision were entered into a prospectively maintained database. Perioperative |
|
|
outcomes, including complications, weight loss, and co-morbidity remission, were |
|
|
examined for all patients with a history of a previous vertical banded gastroplasty |
|
|
(VBG) or Roux-en-Y gastric bypass (RYGB). Twenty-two patients with a history of |
|
|
RYGB and 56 with a history of VBG were identified. Following the revisional procedure, |
|
|
the RYGB group experienced 35.8% excess weight loss (%EWL) and a 31.8% morbidity |
|
|
rate. For the VBG group, patients experienced a 46.2% %EWL from their weight before |
|
|
the revisional operation with a 51.8% morbidity rate. Co-morbidity remission rate |
|
|
was excellent. Diabetes (VBG:100%, RYGB: 85.7%), gastroesophageal reflux disease |
|
|
(VBG: 94.4%, RYGB: 80%), and hypertension (VBG: 74.2%, RYGB:60%) demonstrated |
|
|
significant improvement.' |
|
|
- 'Explanation: Let A, B, C represent their respective weights. Then, we have: A |
|
|
+ B + C = (45 x 3) = 135 …. (i) A + B = (40 x 2) = 80 …. (ii) B + C = (44 x 2) |
|
|
= 88 ….(iii) Adding (ii) and (iii), we get: A + 2B + C = 168 …. (iv) Subtracting |
|
|
(i) from (iv), we get : B = 33. B’s weight = 33 kg.' |
|
|
- Previous epidemiological studies have shown that light to moderate alcohol consumption |
|
|
has protective effects against coronary heart disease but the mechanisms of the |
|
|
beneficial effect of alcohol are not known. Ethanol may increase high density |
|
|
lipoprotein (HDL) cholesterol concentration, augment the reverse cholesterol transport, |
|
|
or regulate growth factors or adhesion molecules. To study whether qualitative |
|
|
changes in HDL phospholipids mediate part of the beneficial effects of alcohol |
|
|
on atherosclerosis by HDL receptor, we investigated whether phosphatidylethanol |
|
|
(PEth) in HDL particles affects the secretion of vascular endothelial growth factor |
|
|
(VEGF) by a human scavenger receptor CD36 and LIMPII analog-I (CLA-1)-mediated |
|
|
pathway. Human EA.hy 926 endothelial cells were incubated in the presence of native |
|
|
HDL or PEth-HDL. VEGF concentration and CLA-1 protein expression were measured. |
|
|
Human CLA-1 receptor-mediated mechanisms in endothelial cells were studied using |
|
|
CLA-1 blocking antibody and protein kinase inhibitors. Phosphatidylethanol-containing |
|
|
HDL particles caused a 6-fold increase in the expression of CLA-1 in endothelial |
|
|
cells compared with the effect of native HDL. That emergent effect was mediated |
|
|
mainly through protein kinase C and p44/42 mitogen-activated protein kinase pathways. |
|
|
PEth increased the secretion of VEGF and that increase could be abolished by a |
|
|
CLA-1 blocking antibody. |
|
|
- source_sentence: Said to go hand-in-hand with science, what evolves as new materials, |
|
|
designs, and processes are invented? |
|
|
sentences: |
|
|
- Technology evolves as new materials, designs, and processes are invented. |
|
|
- Technological design constraints may be physical or social. |
|
|
- let x=44444444,then 44444445=x+1 88888885=2x-3 44444442=x-2 44444438=x-6 44444444^2=x^2 |
|
|
then substitute it in equation (x+1)(2x-3)(x-2)+(x-6)/x^2 ans is 2x-5 i.e 88888883 |
|
|
pipeline_tag: sentence-similarity |
|
|
library_name: sentence-transformers |
|
|
--- |
|
|
|
|
|
# SentenceTransformer based on BAAI/bge-base-en-v1.5 |
|
|
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
|
|
|
|
|
## Model Details |
|
|
|
|
|
### Model Description |
|
|
- **Model Type:** Sentence Transformer |
|
|
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> |
|
|
- **Maximum Sequence Length:** 512 tokens |
|
|
- **Output Dimensionality:** 768 dimensions |
|
|
- **Similarity Function:** Cosine Similarity |
|
|
<!-- - **Training Dataset:** Unknown --> |
|
|
<!-- - **Language:** Unknown --> |
|
|
<!-- - **License:** Unknown --> |
|
|
|
|
|
### Model Sources |
|
|
|
|
|
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
|
|
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
|
|
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) |
|
|
|
|
|
### Full Model Architecture |
|
|
|
|
|
``` |
|
|
SentenceTransformer( |
|
|
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel |
|
|
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
|
|
(2): Normalize() |
|
|
) |
|
|
``` |
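
The module dump above corresponds to CLS-token pooling followed by L2 normalization. As a rough sketch of what these modules compute (not a substitute for `SentenceTransformer.encode`, which also handles batching and device placement), the same embedding can be reproduced with 🤗 Transformers directly:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("danthepol/MNLP_M3_document_encoder")
model = AutoModel.from_pretrained("danthepol/MNLP_M3_document_encoder")
model.eval()

inputs = tokenizer(
    ["Technology evolves as new materials, designs, and processes are invented."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling, matching pooling_mode_cls_token=True above
cls_embedding = outputs.last_hidden_state[:, 0]
# L2 normalization, matching the Normalize() module
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```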
|
|
|
|
|
## Usage |
|
|
|
|
|
### Direct Usage (Sentence Transformers) |
|
|
|
|
|
First install the Sentence Transformers library: |
|
|
|
|
|
```bash |
|
|
pip install -U sentence-transformers |
|
|
``` |
|
|
|
|
|
Then you can load this model and run inference. |
|
|
```python |
|
|
from sentence_transformers import SentenceTransformer |
|
|
|
|
|
# Download from the 🤗 Hub |
|
|
model = SentenceTransformer("danthepol/MNLP_M3_document_encoder") |
|
|
# Run inference |
|
|
sentences = [ |
|
|
'Said to go hand-in-hand with science, what evolves as new materials, designs, and processes are invented?', |
|
|
'Technology evolves as new materials, designs, and processes are invented.', |
|
|
'Technological design constraints may be physical or social.', |
|
|
] |
|
|
embeddings = model.encode(sentences) |
|
|
print(embeddings.shape) |
|
|
# (3, 768)
|
|
|
|
|
# Get the similarity scores for the embeddings |
|
|
similarities = model.similarity(embeddings, embeddings) |
|
|
print(similarities.shape) |
|
|
# torch.Size([3, 3])
|
|
``` |
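
Because this checkpoint is intended as a document encoder, a retrieval-style example may be more representative. The sketch below reuses the widget sentences above as a toy corpus; `util.semantic_search` ranks documents against the query by cosine similarity (the embeddings are already L2-normalized by the model):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("danthepol/MNLP_M3_document_encoder")

query = "Said to go hand-in-hand with science, what evolves as new materials, designs, and processes are invented?"
documents = [
    "Technology evolves as new materials, designs, and processes are invented.",
    "Technological design constraints may be physical or social.",
    "Many animal behaviors occur in a regular cycle.",
]

# Encode query and corpus to torch tensors
query_embedding = model.encode(query, convert_to_tensor=True)
document_embeddings = model.encode(documents, convert_to_tensor=True)

# Rank all documents against the query by cosine similarity
hits = util.semantic_search(query_embedding, document_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {documents[hit['corpus_id']]}")
```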
|
|
|
|
|
<!-- |
|
|
### Direct Usage (Transformers) |
|
|
|
|
|
<details><summary>Click to see the direct usage in Transformers</summary> |
|
|
|
|
|
</details> |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
### Downstream Usage (Sentence Transformers) |
|
|
|
|
|
You can finetune this model on your own dataset. |
|
|
|
|
|
<details><summary>Click to expand</summary> |
|
|
|
|
|
</details> |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
### Out-of-Scope Use |
|
|
|
|
|
*List how the model may foreseeably be misused and address what users ought not to do with the model.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
## Bias, Risks and Limitations |
|
|
|
|
|
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
### Recommendations |
|
|
|
|
|
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
|
|
--> |
|
|
|
|
|
## Training Details |
|
|
|
|
|
### Training Dataset |
|
|
|
|
|
#### Unnamed Dataset |
|
|
|
|
|
* Size: 53,851 training samples |
|
|
* Columns: <code>sentence_0</code> and <code>sentence_1</code> |
|
|
* Approximate statistics based on the first 1000 samples: |
|
|
| | sentence_0 | sentence_1 | |
|
|
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| |
|
|
| type | string | string | |
|
|
| details | <ul><li>min: 8 tokens</li><li>mean: 31.16 tokens</li><li>max: 143 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 160.39 tokens</li><li>max: 512 tokens</li></ul> | |
|
|
* Samples: |
|
|
| sentence_0 | sentence_1 | |
|
|
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |
|
|
| <code>For integers U and V, when U is divided by V, the remainder is odd. Which of the following must be true?</code> | <code>At least one of U and V is odd</code> | |
|
|
| <code>A mailman puts .05% of letters in the wrong mailbox. How many deliveries must he make to misdeliver 2 items?</code> | <code>Let the number of total deliveries be x Then, .05% of x=2 (5/100)*(1/100)*x=2 x=4000</code> | |
|
|
| <code>A certain ball team has an equal number of right- and left-handed players. On a certain day, two-thirds of the players were absent from practice. Of the players at practice that day, two-third were left handed. What is the ratio of the number of right-handed players who were not at practice that day to the number of lefthanded players who were not at practice?</code> | <code>Say the total number of players is 18, 9 right-handed and 9 left-handed. On a certain day, two-thirds of the players were absent from practice --> 12 absent and 6 present. Of the players at practice that day, one-third were left-handed --> 6*2/3=4 were left-handed and 2 right-handed. The number of right-handed players who were not at practice that day is 9-2=7. The number of left-handed players who were not at practice that days is 9-4=5. The ratio = 7/5.</code> | |
|
|
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: |
|
|
```json |
|
|
{ |
|
|
"scale": 20.0, |
|
|
"similarity_fct": "cos_sim" |
|
|
} |
|
|
``` |
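
For reference, a minimal sketch of constructing this loss with the listed parameters (which are also the library defaults). With this loss, each `sentence_1` in a batch acts as the positive for its own `sentence_0` and as an in-batch negative for every other pair:

```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# scale=20.0 and cosine similarity match the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```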
|
|
|
|
|
### Training Hyperparameters |
|
|
#### Non-Default Hyperparameters |
|
|
|
|
|
- `per_device_train_batch_size`: 32 |
|
|
- `per_device_eval_batch_size`: 32 |
|
|
- `multi_dataset_batch_sampler`: round_robin |
|
|
|
|
|
#### All Hyperparameters |
|
|
<details><summary>Click to expand</summary> |
|
|
|
|
|
- `overwrite_output_dir`: False |
|
|
- `do_predict`: False |
|
|
- `eval_strategy`: no |
|
|
- `prediction_loss_only`: True |
|
|
- `per_device_train_batch_size`: 32 |
|
|
- `per_device_eval_batch_size`: 32 |
|
|
- `per_gpu_train_batch_size`: None |
|
|
- `per_gpu_eval_batch_size`: None |
|
|
- `gradient_accumulation_steps`: 1 |
|
|
- `eval_accumulation_steps`: None |
|
|
- `torch_empty_cache_steps`: None |
|
|
- `learning_rate`: 5e-05 |
|
|
- `weight_decay`: 0.0 |
|
|
- `adam_beta1`: 0.9 |
|
|
- `adam_beta2`: 0.999 |
|
|
- `adam_epsilon`: 1e-08 |
|
|
- `max_grad_norm`: 1 |
|
|
- `num_train_epochs`: 3 |
|
|
- `max_steps`: -1 |
|
|
- `lr_scheduler_type`: linear |
|
|
- `lr_scheduler_kwargs`: {} |
|
|
- `warmup_ratio`: 0.0 |
|
|
- `warmup_steps`: 0 |
|
|
- `log_level`: passive |
|
|
- `log_level_replica`: warning |
|
|
- `log_on_each_node`: True |
|
|
- `logging_nan_inf_filter`: True |
|
|
- `save_safetensors`: True |
|
|
- `save_on_each_node`: False |
|
|
- `save_only_model`: False |
|
|
- `restore_callback_states_from_checkpoint`: False |
|
|
- `no_cuda`: False |
|
|
- `use_cpu`: False |
|
|
- `use_mps_device`: False |
|
|
- `seed`: 42 |
|
|
- `data_seed`: None |
|
|
- `jit_mode_eval`: False |
|
|
- `use_ipex`: False |
|
|
- `bf16`: False |
|
|
- `fp16`: False |
|
|
- `fp16_opt_level`: O1 |
|
|
- `half_precision_backend`: auto |
|
|
- `bf16_full_eval`: False |
|
|
- `fp16_full_eval`: False |
|
|
- `tf32`: None |
|
|
- `local_rank`: 0 |
|
|
- `ddp_backend`: None |
|
|
- `tpu_num_cores`: None |
|
|
- `tpu_metrics_debug`: False |
|
|
- `debug`: [] |
|
|
- `dataloader_drop_last`: False |
|
|
- `dataloader_num_workers`: 0 |
|
|
- `dataloader_prefetch_factor`: None |
|
|
- `past_index`: -1 |
|
|
- `disable_tqdm`: False |
|
|
- `remove_unused_columns`: True |
|
|
- `label_names`: None |
|
|
- `load_best_model_at_end`: False |
|
|
- `ignore_data_skip`: False |
|
|
- `fsdp`: [] |
|
|
- `fsdp_min_num_params`: 0 |
|
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
|
- `tp_size`: 0 |
|
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
|
- `deepspeed`: None |
|
|
- `label_smoothing_factor`: 0.0 |
|
|
- `optim`: adamw_torch |
|
|
- `optim_args`: None |
|
|
- `adafactor`: False |
|
|
- `group_by_length`: False |
|
|
- `length_column_name`: length |
|
|
- `ddp_find_unused_parameters`: None |
|
|
- `ddp_bucket_cap_mb`: None |
|
|
- `ddp_broadcast_buffers`: False |
|
|
- `dataloader_pin_memory`: True |
|
|
- `dataloader_persistent_workers`: False |
|
|
- `skip_memory_metrics`: True |
|
|
- `use_legacy_prediction_loop`: False |
|
|
- `push_to_hub`: False |
|
|
- `resume_from_checkpoint`: None |
|
|
- `hub_model_id`: None |
|
|
- `hub_strategy`: every_save |
|
|
- `hub_private_repo`: None |
|
|
- `hub_always_push`: False |
|
|
- `gradient_checkpointing`: False |
|
|
- `gradient_checkpointing_kwargs`: None |
|
|
- `include_inputs_for_metrics`: False |
|
|
- `include_for_metrics`: [] |
|
|
- `eval_do_concat_batches`: True |
|
|
- `fp16_backend`: auto |
|
|
- `push_to_hub_model_id`: None |
|
|
- `push_to_hub_organization`: None |
|
|
- `mp_parameters`: |
|
|
- `auto_find_batch_size`: False |
|
|
- `full_determinism`: False |
|
|
- `torchdynamo`: None |
|
|
- `ray_scope`: last |
|
|
- `ddp_timeout`: 1800 |
|
|
- `torch_compile`: False |
|
|
- `torch_compile_backend`: None |
|
|
- `torch_compile_mode`: None |
|
|
- `include_tokens_per_second`: False |
|
|
- `include_num_input_tokens_seen`: False |
|
|
- `neftune_noise_alpha`: None |
|
|
- `optim_target_modules`: None |
|
|
- `batch_eval_metrics`: False |
|
|
- `eval_on_start`: False |
|
|
- `use_liger_kernel`: False |
|
|
- `eval_use_gather_object`: False |
|
|
- `average_tokens_across_devices`: False |
|
|
- `prompts`: None |
|
|
- `batch_sampler`: batch_sampler |
|
|
- `multi_dataset_batch_sampler`: round_robin |
|
|
|
|
|
</details> |
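
Combining the non-default values above, a training run along these lines would reproduce this configuration. This is a hedged sketch: the actual 53,851-pair dataset is not published with this card, so `train_dataset` below is an illustrative stand-in with the same `sentence_0`/`sentence_1` columns:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Illustrative stand-in for the unnamed 53,851-pair training dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["A query or question ..."],
    "sentence_1": ["A matching document or answer ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="MNLP_M3_document_encoder",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=5e-5,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```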
|
|
|
|
|
### Training Logs |
|
|
| Epoch | Step | Training Loss | |
|
|
|:------:|:----:|:-------------:| |
|
|
| 0.2971 | 500 | 0.1286 | |
|
|
| 0.5942 | 1000 | 0.0769 | |
|
|
| 0.8913 | 1500 | 0.0682 | |
|
|
| 1.1884 | 2000 | 0.0416 | |
|
|
| 1.4854 | 2500 | 0.0369 | |
|
|
| 1.7825 | 3000 | 0.0326 | |
|
|
| 2.0796 | 3500 | 0.0331 | |
|
|
| 2.3767 | 4000 | 0.0213 | |
|
|
| 2.6738 | 4500 | 0.0211 | |
|
|
| 2.9709 | 5000 | 0.0207 | |
|
|
|
|
|
|
|
|
### Framework Versions |
|
|
- Python: 3.12.8 |
|
|
- Sentence Transformers: 3.4.1 |
|
|
- Transformers: 4.51.3 |
|
|
- PyTorch: 2.5.1+cu124 |
|
|
- Accelerate: 1.3.0 |
|
|
- Datasets: 3.2.0 |
|
|
- Tokenizers: 0.21.0 |
|
|
|
|
|
## Citation |
|
|
|
|
|
### BibTeX |
|
|
|
|
|
#### Sentence Transformers |
|
|
```bibtex |
|
|
@inproceedings{reimers-2019-sentence-bert, |
|
|
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", |
|
|
author = "Reimers, Nils and Gurevych, Iryna", |
|
|
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", |
|
|
month = "11", |
|
|
year = "2019", |
|
|
publisher = "Association for Computational Linguistics", |
|
|
url = "https://arxiv.org/abs/1908.10084", |
|
|
} |
|
|
``` |
|
|
|
|
|
#### MultipleNegativesRankingLoss |
|
|
```bibtex |
|
|
@misc{henderson2017efficient, |
|
|
title={Efficient Natural Language Response Suggestion for Smart Reply}, |
|
|
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, |
|
|
year={2017}, |
|
|
eprint={1705.00652}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CL} |
|
|
} |
|
|
``` |
|
|
|
|
|
<!-- |
|
|
## Glossary |
|
|
|
|
|
*Clearly define terms in order to be accessible across audiences.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
## Model Card Authors |
|
|
|
|
|
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
## Model Card Contact |
|
|
|
|
|
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
|
|
--> |