bcwarner committed
Commit a37f124 · verified · 1 parent: 1a0e14d

Uploading README

Files changed (1):
  1. README.md +38 -126
README.md CHANGED
@@ -1,126 +1,38 @@
- ---
- pipeline_tag: sentence-similarity
- tags:
- - sentence-transformers
- - feature-extraction
- - sentence-similarity
- - transformers
-
- ---
-
- # {MODEL_NAME}
-
- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
- <!--- Describe your model here -->
-
- ## Usage (Sentence-Transformers)
-
- Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
- ```
- pip install -U sentence-transformers
- ```
-
- Then you can use the model like this:
-
- ```python
- from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]
-
- model = SentenceTransformer('{MODEL_NAME}')
- embeddings = model.encode(sentences)
- print(embeddings)
- ```
-
-
-
- ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
-
- ```python
- from transformers import AutoTokenizer, AutoModel
- import torch
-
-
- #Mean Pooling - Take attention mask into account for correct averaging
- def mean_pooling(model_output, attention_mask):
-     token_embeddings = model_output[0] #First element of model_output contains all token embeddings
-     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
- # Sentences we want sentence embeddings for
- sentences = ['This is an example sentence', 'Each sentence is converted']
-
- # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
- model = AutoModel.from_pretrained('{MODEL_NAME}')
-
- # Tokenize sentences
- encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
- # Compute token embeddings
- with torch.no_grad():
-     model_output = model(**encoded_input)
-
- # Perform pooling. In this case, mean pooling.
- sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
- print("Sentence embeddings:")
- print(sentence_embeddings)
- ```
-
-
-
- ## Evaluation Results
-
- <!--- Describe how your model was evaluated -->
-
- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
-
-
- ## Training
- The model was trained with the parameters:
-
- **DataLoader**:
-
- `torch.utils.data.dataloader.DataLoader` of length 1641 with parameters:
- ```
- {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
- ```
-
- **Loss**:
-
- `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
-
- Parameters of the fit()-Method:
- ```
- {
-     "epochs": 10,
-     "evaluation_steps": 0,
-     "evaluator": "NoneType",
-     "max_grad_norm": 1,
-     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
-     "optimizer_params": {
-         "lr": 2e-05
-     },
-     "scheduler": "WarmupLinear",
-     "steps_per_epoch": null,
-     "warmup_steps": 100,
-     "weight_decay": 0.01
- }
- ```
-
-
- ## Full Model Architecture
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MegatronBertModel
-   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
- )
- ```
-
- ## Citing & Authors
-
- <!--- Describe where people can find more information -->
 
+ ---
+ license: mit
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-similarity
+ - sentence-transformers
+ - medical
+ model_name: gatortron-base-sts-combined
+ ---
+ # gatortron-base-sts-combined
+
+ This repo contains a fine-tuned version of UFNLP/gatortron-base to generate semantic textual similarity pairs, primarily for use in the `sts-select` feature selection package detailed [here](https://github.com/bcwarner/sts-select).
+ Details about the model and vocabulary can be found in the paper [here](https://huggingface.co/papers/2308.09892).
+
+ ## Citation
+
+ If you use this model for STS-based feature selection, please cite the following paper:
+
+ ```
+ @misc{warner2023utilizing,
+       title={Utilizing Semantic Textual Similarity for Clinical Survey Data Feature Selection},
+       author={Benjamin C. Warner and Ziqi Xu and Simon Haroutounian and Thomas Kannampallil and Chenyang Lu},
+       year={2023},
+       eprint={2308.09892},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+
+ ```
+ Additionally, the original model and fine-tuning papers should be cited as follows:
+ ```
+ @article{Gu_Tinn_Cheng_Lucas_Usuyama_Liu_Naumann_Gao_Poon_2021, title={Domain-specific language model pretraining for biomedical natural language processing}, volume={3}, number={1}, journal={ACM Transactions on Computing for Healthcare (HEALTH)}, publisher={ACM New York, NY}, author={Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao, Jianfeng and Poon, Hoifung}, year={2021}, pages={1–23} }
+
+ @inproceedings{Cer_Diab_Agirre_Lopez-Gazpio_Specia_2017, address={Vancouver, Canada}, title={SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation}, url={https://aclanthology.org/S17-2001}, DOI={10.18653/v1/S17-2001}, booktitle={Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)}, publisher={Association for Computational Linguistics}, author={Cer, Daniel and Diab, Mona and Agirre, Eneko and Lopez-Gazpio, Iñigo and Specia, Lucia}, year={2017}, month=aug, pages={1–14} }
+ @article{Chiu_Pyysalo_Vulić_Korhonen_2018, title={Bio-SimVerb and Bio-SimLex: wide-coverage evaluation sets of word similarity in biomedicine}, volume={19}, number={1}, journal={BMC bioinformatics}, publisher={BioMed Central}, author={Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna}, year={2018}, pages={1–13} }
+ @inproceedings{May_2021, title={Machine translated multilingual STS benchmark dataset.}, url={https://github.com/PhilipMay/stsb-multi-mt}, author={May, Philip}, year={2021} }
+ @article{Pedersen_Pakhomov_Patwardhan_Chute_2007, title={Measures of semantic similarity and relatedness in the biomedical domain}, volume={40}, number={3}, journal={Journal of biomedical informatics}, publisher={Elsevier}, author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G}, year={2007}, pages={288–299} }
+ ```
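
For reference, a minimal sketch of how the fine-tuned model can be loaded and queried for STS scoring, following the sentence-transformers usage from the removed template above. The Hub id `bcwarner/gatortron-base-sts-combined` and the example strings are assumptions made for illustration, not taken from the model card:

```python
# Minimal sketch: load the fine-tuned STS model and score sentence pairs.
# "bcwarner/gatortron-base-sts-combined" is the presumed Hub id for this repo; adjust if it differs.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bcwarner/gatortron-base-sts-combined")

# Illustrative feature names and a target description, in the spirit of sts-select-style
# feature selection (these strings are made up for the example).
features = ["age at admission", "history of chronic pain", "daily step count"]
target = "post-surgical pain outcome"

# Encode to 1024-dimensional embeddings (mean pooling, per the model architecture above).
feature_embeddings = model.encode(features, convert_to_tensor=True)
target_embedding = model.encode(target, convert_to_tensor=True)

# Cosine similarity between each feature name and the target; higher scores indicate
# semantically closer pairs.
scores = util.cos_sim(feature_embeddings, target_embedding)
print(scores)
```

In practice, the `sts-select` package linked in the new README wraps this kind of pairwise scoring to rank and select features, so the snippet is only meant to show how the model itself is queried.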