laura.vasquezrodriguez committed on
Commit d0c0109 · 1 Parent(s): 799d3d4

Update paper link in README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -3,7 +3,7 @@ license: cc-by-4.0
  ---
  
  
- ## Prompt-based learning for Lexical Simplification: prompt-ls-en-2
+ ## UoM&MMU at TSAR-2022 Shared Task - Prompt Learning for Lexical Simplification: prompt-ls-en-2
  
  We present **PromptLS**, a method for fine-tuning large pre-trained masked language models to perform the task of Lexical Simplification.
  This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/)
@@ -32,7 +32,7 @@ For the zero-shot setting, we used the original models with no further training.
  ## Results
  
  We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage the users to also check our results in the development set, which show an increased performance for Spanish and Portuguese.
- You can find more details in our [paper](https://drive.google.com/file/d/10nOMKuM62khIfRea8-XHdG6jsyMXsZtP/view?usp=share_link).
+ You can find more details in our [paper](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link).
  
  | Language | # | Model | Setting | Prompt1 | Prompt2 | w | k | Acc@1 | A@3 | M@3 | P@3 |
  |------------|---|-------|--------------|---------|---------|---|---|-------|-----|-----|-------------|
@@ -50,11 +50,11 @@ You can find more details in our [paper](https://drive.google.com/file/d/10nOMKu
  ## Citation
  
  If you use our results and scripts in your research, please cite our work:
- "[UoM&MMU at TSAR-2022 Shared Task — PromptLS: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/10nOMKuM62khIfRea8-XHdG6jsyMXsZtP/view?usp=share_link)".
+ "[UoM&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification](https://drive.google.com/file/d/1x5dRxgcSGAaCCrjsgpCHnYek9G-TmZff/view?usp=share_link)".
  
  ```
  @inproceedings{vasquez-rodriguez-etal-2022-prompt-ls,
- title = "UoM\&MMU at TSAR-2022 Shared Task — PromptLS: Prompt Learning for Lexical Simplification",
+ title = "UoM\&MMU at TSAR-2022 Shared Task: Prompt Learning for Lexical Simplification",
  author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
  Nguyen, Nhung T. H. and
  Shardlow, Matthew and
@@ -63,4 +63,4 @@ If you use our results and scripts in your research, please cite our work:
  month = dec,
  year = "2022",
  }
- ```
+ ```
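
The README in this diff describes PromptLS as prompting a fine-tuned masked language model for simpler substitutes of a complex word. As a rough, non-authoritative sketch of that workflow: the prompt wording, the helper names, and the `prompt-ls-en-2` pipeline call below are illustrative assumptions, not the exact template or API usage defined in the paper or the model card.

```python
# Illustrative sketch of masked-LM prompting for lexical simplification.
# NOTE: the prompt template below is an assumption, not the exact PromptLS template.

def build_prompt(context: str, complex_word: str, mask_token: str = "<mask>") -> str:
    """Build a fill-mask prompt asking the model for a simpler substitute."""
    return f"{context} A simpler word for '{complex_word}' is {mask_token}."

def rank_candidates(scores: dict[str, float], k: int = 3) -> list[str]:
    """Keep the top-k substitutes by score (metrics like Acc@1 use the first)."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

prompt = build_prompt("The cat perched on the mat.", "perched")

# With the transformers library and the checkpoint available, scores would come
# from a fill-mask pipeline (repository id assumed), e.g.:
#   from transformers import pipeline
#   fill = pipeline("fill-mask", model="prompt-ls-en-2")
#   scores = {c["token_str"]: c["score"] for c in fill(prompt, top_k=10)}
# Stand-in scores so this sketch runs without downloading any model:
scores = {"sat": 0.62, "rested": 0.21, "stood": 0.09, "balanced": 0.05}

print(prompt)
print(rank_candidates(scores))  # top-3 substitutes by score
```

The ranking step is deliberately separated from generation, since the shared-task metrics (Acc@1, A@3, M@3, P@3) evaluate ordered candidate lists of different lengths.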