laura.vasquezrodriguez committed on
Commit 0800ed8 · 1 Parent(s): 2f8d012

Add model files for the English model

README.md CHANGED
@@ -1,3 +1,65 @@
  ---
  license: cc-by-4.0
  ---
+
+
+ ## Prompt-based learning for Lexical Simplification: prompt-ls-en-1
+
+ We present **PromptLS**, a method for fine-tuning large pre-trained masked language models on the task of Lexical Simplification.
+ This model is part of a series of models presented at the [TSAR-2022 Shared Task](https://taln.upf.edu/pages/tsar2022-st/)
+ by the University of Manchester and Manchester Metropolitan University (UoM&MMU) team for English, Spanish, and Portuguese.
+ You can find more details about the project on our [GitHub](https://github.com/lmvasque/ls-prompt-tsar2022).
+
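+ Below is a minimal usage sketch. It assumes the standard `transformers` fill-mask pipeline; the prompt template is only an illustration built from the prompt words reported in the Results table below ("simple", "word"), not necessarily the exact template from our paper:
+
+ ```
+ from transformers import pipeline
+
+ # Load the fine-tuned masked language model from the Hub.
+ fill_mask = pipeline("fill-mask", model="lmvasque/prompt-ls-en-1")
+
+ # Example context and target word (illustrative only).
+ sentence = "The cat perched on the mat."
+ complex_word = "perched"
+
+ # Hypothetical prompt template combining the context, the complex word,
+ # and the prompt words around the mask token.
+ prompt = f"{sentence} A simple word for {complex_word} is {fill_mask.tokenizer.mask_token}."
+
+ # Top candidate substitutions for the masked slot.
+ for candidate in fill_mask(prompt, top_k=5):
+     print(candidate["token_str"], round(candidate["score"], 4))
+ ```
+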
+ ## Models
+
+ Our models were fine-tuned using prompt learning for **Lexical Simplification**. These are the available models (the current model page is shown in bold):
+
+ | Model Name | Run # | Language | Setting |
+ |----------------------------------------------------------------------|-------|:-----------:|---------------|
+ | **[prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1)** | **1** | **English** | **fine-tune** |
+ | [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune |
+ | [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot |
+ | [prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1) | 1 | Spanish | fine-tune |
+ | [prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2) | 2 | Spanish | fine-tune |
+ | [prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3) | 3 | Spanish | fine-tune |
+ | [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune |
+ | [prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2) | 2 | Portuguese | fine-tune |
+ | [prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3) | 3 | Portuguese | fine-tune |
+
+ For the zero-shot setting, we used the original pre-trained models with no further training. Links to these models are also included in the table above.
+
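+ As an illustration of the zero-shot setting, the same prompting idea can be applied to the unmodified checkpoint. This sketch assumes the English run-3 prompt words ("easier", "word"); note that RoBERTa uses `<mask>` rather than `[MASK]`:
+
+ ```
+ from transformers import pipeline
+
+ # Zero-shot: the original pre-trained checkpoint, with no further training.
+ fill_mask = pipeline("fill-mask", model="roberta-large")
+
+ # Hypothetical template built from the run-3 prompt words "easier" and "word".
+ prompt = "An easier word for fortuitous is <mask>."
+ for candidate in fill_mask(prompt, top_k=5):
+     print(candidate["token_str"], round(candidate["score"], 4))
+ ```
+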
+ ## Results
+
+ We include the [official results](https://github.com/LaSTUS-TALN-UPF/TSAR-2022-Shared-Task/tree/main/results/official) from the competition test set as a reference. However, we encourage users to also check our results on the development set, which show increased performance for Spanish and Portuguese.
+ You can find more details in our [paper]().
+
+ | Language   | # | Model     | Setting      | Prompt1  | Prompt2 | w  | k  | Acc@1      | Acc@3      | MAP@3      | Potential@3 |
+ |------------|---|-----------|--------------|----------|---------|----|----|------------|------------|------------|-------------|
+ | English    | 1 | RoBERTa-L | fine-tune    | simple   | word    | 5  | 5  | **0.6353** | **0.5308** | **0.4244** | **0.8739**  |
+ | English    | 2 | mBERT     | multilingual | easier   | word    | 10 | 10 | 0.4959     | 0.4235     | 0.3273     | 0.7560      |
+ | English    | 3 | RoBERTa-L | zero-shot    | easier   | word    | 5  | -  | 0.2654     | 0.2680     | 0.1820     | 0.4906      |
+ | Spanish    | 1 | BERTIN    | fine-tune    | sinónimo | fácil   | -  | 3  | 0.3451     | **0.2907** | **0.2238** | **0.5543**  |
+ | Spanish    | 2 | BERTIN    | fine-tune    | palabra  | simple  | -  | 10 | 0.3614     | **0.2907** | 0.2225     | 0.5380      |
+ | Spanish    | 3 | BERTIN    | fine-tune    | sinónimo | fácil   | 10 | 10 | **0.3668** | 0.2690     | 0.2128     | 0.5326      |
+ | Portuguese | 1 | BR_BERTo  | fine-tune    | palavra  | simples | -  | 8  | **0.1711** | 0.1096     | 0.1011     | 0.2486      |
+ | Portuguese | 2 | BR_BERTo  | fine-tune    | sinônimo | fácil   | -  | 10 | 0.1363     | 0.0962     | 0.0944     | 0.2379      |
+ | Portuguese | 3 | BR_BERTo  | fine-tune    | sinônimo | simples | 5  | 10 | 0.1577     | **0.1283** | **0.1071** | **0.2834**  |
+
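+ Acc@1, Acc@3, MAP@3, and Potential@3 above are the shared task's official metrics. The sketch below shows one common reading of their definitions; the official evaluation scripts linked above are the authoritative reference:
+
+ ```
+ def acc_at_1(preds, gold):
+     # Fraction of instances whose top-ranked candidate is a gold substitution.
+     return sum(p[0] in g for p, g in zip(preds, gold)) / len(preds)
+
+ def potential_at_k(preds, gold, k=3):
+     # Fraction of instances with at least one gold substitution in the top k.
+     return sum(any(c in g for c in p[:k]) for p, g in zip(preds, gold)) / len(preds)
+
+ def map_at_k(preds, gold, k=3):
+     # Mean average precision over the top-k ranked candidates.
+     total = 0.0
+     for p, g in zip(preds, gold):
+         hits, ap = 0, 0.0
+         for i, c in enumerate(p[:k]):
+             if c in g:
+                 hits += 1
+                 ap += hits / (i + 1)
+         total += ap / k
+     return total / len(preds)
+
+ # preds: ranked candidate lists per instance; gold: gold substitution sets.
+ preds = [["easy", "simple", "plain"]]
+ gold = [{"simple", "easy"}]
+ print(acc_at_1(preds, gold), potential_at_k(preds, gold), map_at_k(preds, gold))
+ ```
+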
+ ## Citation
+
+ If you use our results and scripts in your research, please cite our work: "[UoM&MMU at TSAR-2022 Shared Task]()" (to be published).
+
+ ```
+ @inproceedings{vasquez-rodriguez-etal-2022-prompt-ls,
+     title = "UoM\&MMU at TSAR-2022 Shared Task",
+     author = "V{\'a}squez-Rodr{\'\i}guez, Laura  and
+       Nguyen, Nhung T. H.  and
+       Shardlow, Matthew  and
+       Ananiadou, Sophia",
+     booktitle = "Shared Task on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
+     month = dec,
+     year = "2022",
+ }
+ ```
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "bert-base-multilingual-uncased",
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.19.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 105879
+ }
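The config above declares a `BertForMaskedLM` model initialized from `bert-base-multilingual-uncased`. As a minimal sketch, the committed configuration can be inspected with the standard `transformers` API:

```
from transformers import AutoConfig, AutoModelForMaskedLM

# Inspect the architecture declared in config.json.
config = AutoConfig.from_pretrained("lmvasque/prompt-ls-en-1")
print(config.architectures)  # ["BertForMaskedLM"]
print(config.vocab_size)     # 105879

# Instantiate the masked-LM model this config declares.
model = AutoModelForMaskedLM.from_pretrained("lmvasque/prompt-ls-en-1")
```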
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e69d24a7077d49a6ef7d840c5a9e47a35866c80c57ed9c6a37317a136bc9ff12
+ size 669926955
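The file above is a Git LFS pointer: the repository stores only the content hash and size, while the actual ~670 MB weight file lives in LFS storage. As a sketch, it can be fetched directly with the `huggingface_hub` client:

```
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and downloads the real weight file (~670 MB).
path = hf_hub_download(repo_id="lmvasque/prompt-ls-en-1", filename="pytorch_model.bin")
print(path)
```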
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-multilingual-uncased", "tokenizer_class": "BertTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff