calpt committed
Commit c811d8a · verified · 1 Parent(s): c61aaa3

Add adapter xlm-roberta-base_formality_classify_gyafc_pfeiffer version 1

README.md ADDED
@@ -0,0 +1,74 @@
---
tags:
- adapter-transformers
- xlm-roberta
- text-classification
- adapterhub:formality_classify/gyafc
datasets:
- gyafc
license: "apache-2.0"
---

# Adapter `xlm-roberta-base_formality_classify_gyafc_pfeiffer` for xlm-roberta-base

**Note: This adapter was not trained by the AdapterHub team, but by these author(s): Kalpesh Krishna. See author details below.**

This adapter has been trained on the English GYAFC formality classification dataset and tested with other language adapters (e.g. Hindi) for zero-shot transfer. For best results, detokenize the input text, lowercase it, and remove trailing punctuation.
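The preprocessing advice above can be sketched as a small helper. This is a hypothetical function, not part of the adapter; the "detokenize" step here only rejoins punctuation that whitespace tokenization split off, which is a minimal approximation:

```python
import re

def preprocess(text: str) -> str:
    """Normalize input as suggested above: detokenize, lowercase,
    and strip trailing punctuation. (Hypothetical helper; assumes
    tokenization only inserted spaces around punctuation.)"""
    text = re.sub(r"\s+([.,!?;:')\]])", r"\1", text)  # re-attach trailing punctuation
    text = re.sub(r"([(\[])\s+", r"\1", text)         # re-attach opening brackets
    text = text.lower().strip()
    return text.rstrip(".!?,;:")                      # drop trailing punctuation

print(preprocess("How Are You , Friend ?"))  # how are you, friend
```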
**This adapter was created for usage with the [Adapters](https://github.com/Adapter-Hub/adapters) library.**

## Usage

First, install `adapters`:

```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("xlm-roberta-base")
adapter_name = model.load_adapter("AdapterHub/xlm-roberta-base_formality_classify_gyafc_pfeiffer")
model.set_active_adapters(adapter_name)
```
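A forward pass through the activated model yields two logits under the generic labels `LABEL_0`/`LABEL_1` (see the accompanying `head_config.json`). Turning logits into a prediction can be sketched in plain Python with hypothetical example logits; which index corresponds to "formal" is not documented here and should be verified on known examples:

```python
import math

# Label map from head_config.json. Which of LABEL_0/LABEL_1 means
# "formal" is not documented; check against known examples.
id2label = {0: "LABEL_0", 1: "LABEL_1"}

def predict(logits):
    """Softmax over the two logits; return (label, probability)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]

# e.g. logits = model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()
print(predict([0.3, 1.9]))  # LABEL_1 wins with higher probability
```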

## Architecture & Training

- Adapter architecture: pfeiffer
- Prediction head: classification
- Dataset: [Grammarly's Yahoo Answers Formality Corpus (GYAFC)](https://github.com/raosudha89/GYAFC-corpus)
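The pfeiffer architecture adds one bottleneck adapter after each transformer layer. With `hidden_size: 768` and `reduction_factor: 16` from the accompanying `adapter_config.json`, the bottleneck width and a rough parameter count work out as below. This is a back-of-the-envelope sketch; the exact count depends on details such as layer norms:

```python
hidden_size = 768      # from adapter_config.json
reduction_factor = 16  # from adapter_config.json
num_layers = 12        # xlm-roberta-base (assumed)

bottleneck = hidden_size // reduction_factor  # 48

# Down-projection and up-projection, weights plus biases, per layer
per_layer = (hidden_size * bottleneck + bottleneck) + (bottleneck * hidden_size + hidden_size)
total = per_layer * num_layers

print(bottleneck)  # 48
print(total)       # 894528 params, ~3.6 MB in float32 -- close to pytorch_adapter.bin's size
```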

## Author Information

- Author name(s): Kalpesh Krishna
- Author email: kalpesh@cs.umass.edu
- Author links: [Website](https://martiansideofthemoon.github.io/), [GitHub](https://github.com/martiansideofthemoon), [Twitter](https://twitter.com/@kalpeshk2011)

## Citation

```bibtex
@inproceedings{krishna-etal-2020-reformulating,
    title = "Reformulating Unsupervised Style Transfer as Paraphrase Generation",
    author = "Krishna, Kalpesh  and
      Wieting, John  and
      Iyyer, Mohit",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.55",
    doi = "10.18653/v1/2020.emnlp-main.55",
    pages = "737--762",
}
```

*This adapter has been auto-imported from https://github.com/Adapter-Hub/Hub/blob/master/adapters/martiansideofthemoon/xlm-roberta-base_formality_classify_gyafc_pfeiffer.yaml*.
adapter_config.json ADDED
@@ -0,0 +1,41 @@
{
  "config": {
    "adapter_residual_before_ln": false,
    "cross_adapter": false,
    "dropout": 0.0,
    "factorized_phm_W": true,
    "factorized_phm_rule": false,
    "hypercomplex_nonlinearity": "glorot-uniform",
    "init_weights": "bert",
    "inv_adapter": null,
    "inv_adapter_reduction_factor": null,
    "is_parallel": false,
    "learn_phm": true,
    "leave_out": [],
    "ln_after": false,
    "ln_before": false,
    "mh_adapter": false,
    "non_linearity": "relu",
    "original_ln_after": true,
    "original_ln_before": true,
    "output_adapter": true,
    "phm_bias": true,
    "phm_c_init": "normal",
    "phm_dim": 4,
    "phm_init_range": 0.0001,
    "phm_layer": false,
    "phm_rank": 1,
    "reduction_factor": 16,
    "residual_before_ln": true,
    "scaling": 1.0,
    "shared_W_phm": false,
    "shared_phm_rule": true,
    "use_gating": false
  },
  "hidden_size": 768,
  "model_class": "XLMRobertaAdapterModel",
  "model_name": "xlm-roberta-base",
  "model_type": "xlm-roberta",
  "name": "gyafc",
  "version": "0.2.0"
}
head_config.json ADDED
@@ -0,0 +1,21 @@
{
  "config": {
    "activation_function": "tanh",
    "bias": true,
    "dropout_prob": null,
    "head_type": "classification",
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layers": 2,
    "num_labels": 2,
    "use_pooler": false
  },
  "hidden_size": 768,
  "model_class": "XLMRobertaAdapterModel",
  "model_name": "xlm-roberta-base",
  "model_type": "xlm-roberta",
  "name": "gyafc",
  "version": "0.2.0"
}
pytorch_adapter.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b35f10d6d81f3095dbd96a548806aba73568baaceef2a4f6e9e8d9150c22c60b
size 3594918
pytorch_model_head.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b44ac7795cf4e3c3109c429a37b6b141ee87c7c5949c53303d7b6e4ff98e171
size 2370600