| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
ACL
|
Program Transfer for Answering Complex Questions over Knowledge Bases
|
Program induction for answering complex questions over knowledge bases (KBs) aims to decompose a question into a multi-step program, whose execution against the KB produces the final answer. Learning to induce programs relies on a large number of parallel question-program pairs for the given KB. However, for most KBs, the gold program annotations are usually lacking, making learning difficult. In this paper, we propose the approach of program transfer, which aims to leverage the valuable program annotations on the rich-resourced KBs as external supervision signals to aid program induction for the low-resourced KBs that lack program annotations. For program transfer, we design a novel two-stage parsing framework with an efficient ontology-guided pruning strategy. First, a sketch parser translates the question into a high-level program sketch, which is the composition of functions. Second, given the question and sketch, an argument parser searches the detailed arguments from the KB for functions. During the searching, we incorporate the KB ontology to prune the search space. The experiments on ComplexWebQuestions and WebQuestionSP show that our method outperforms SOTA methods significantly, demonstrating the effectiveness of program transfer and our framework. Our codes and datasets can be obtained from https://github.com/THU-KEG/ProgramTransfer.
|
febd573ada9568c635f6d8aeada27ec5
| 2022
|
[
"program induction for answering complex questions over knowledge bases ( kbs ) aims to decompose a question into a multi - step program , whose execution against the kb produces the final answer .",
"learning to induce programs relies on a large number of parallel question - program pairs for the given kb .",
"however , for most kbs , the gold program annotations are usually lacking , making learning difficult .",
"in this paper , we propose the approach of program transfer , which aims to leverage the valuable program annotations on the rich - resourced kbs as external supervision signals to aid program induction for the low - resourced kbs that lack program annotations .",
"for program transfer , we design a novel two - stage parsing framework with an efficient ontology - guided pruning strategy .",
"first , a sketch parser translates the question into a high - level program sketch , which is the composition of functions .",
"second , given the question and sketch , an argument parser searches the detailed arguments from the kb for functions .",
"during the searching , we incorporate the kb ontology to prune the search space .",
"the experiments on complexwebquestions and webquestionsp show that our method outperforms sota methods significantly , demonstrating the effectiveness of program transfer and our framework .",
"our codes and datasets can be obtained from https : / / github . com / thu - keg / programtransfer ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "program induction",
"tokens": [
"program",
"induction"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
14
],
"text": "decompose",
"tokens": [
"decompose"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
61,
62,
63
],
"text": "gold program annotations",
"tokens": [
"gold",
"program",
"annotations"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
66
],
"text": "lacking",
"tokens": [
"lacking"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
66
],
"text": "lacking",
"tokens": [
"lacking"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
76
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
79,
80,
81,
82
],
"text": "approach of program transfer",
"tokens": [
"approach",
"of",
"program",
"transfer"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
77
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
89,
90,
91,
92,
93,
94,
95,
96,
7,
8
],
"text": "valuable program annotations on the rich - resourced kbs",
"tokens": [
"valuable",
"program",
"annotations",
"on",
"the",
"rich",
"-",
"resourced",
"knowledge",
"bases"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
103
],
"text": "aid",
"tokens": [
"aid"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
87
],
"text": "leverage",
"tokens": [
"leverage"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
104,
105
],
"text": "program induction",
"tokens": [
"program",
"induction"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
103
],
"text": "aid",
"tokens": [
"aid"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
121
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
118,
119
],
"text": "program transfer",
"tokens": [
"program",
"transfer"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
125,
126,
127,
128,
129
],
"text": "two - stage parsing framework",
"tokens": [
"two",
"-",
"stage",
"parsing",
"framework"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
122
],
"text": "design",
"tokens": [
"design"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
146
],
"text": "question",
"tokens": [
"question"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
149,
150,
151,
152,
153
],
"text": "high - level program sketch",
"tokens": [
"high",
"-",
"level",
"program",
"sketch"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
144
],
"text": "translates",
"tokens": [
"translates"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
175,
176
],
"text": "detailed arguments",
"tokens": [
"detailed",
"arguments"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "DST",
"offsets": [
7,
8
],
"text": "kb",
"tokens": [
"knowledge",
"bases"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
181
],
"text": "functions",
"tokens": [
"functions"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
173
],
"text": "searches",
"tokens": [
"searches"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
166
],
"text": "question",
"tokens": [
"question"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
168
],
"text": "sketch",
"tokens": [
"sketch"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
164
],
"text": "given",
"tokens": [
"given"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
183,
184,
185
],
"text": "during the searching",
"tokens": [
"during",
"the",
"searching"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
187
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
7,
8,
191
],
"text": "kb ontology",
"tokens": [
"knowledge",
"bases",
"ontology"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
193
],
"text": "prune",
"tokens": [
"prune"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
188
],
"text": "incorporate",
"tokens": [
"incorporate"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
195,
196
],
"text": "search space",
"tokens": [
"search",
"space"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
193
],
"text": "prune",
"tokens": [
"prune"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
208
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
204
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
125,
126,
127,
128,
129
],
"text": "two - stage parsing framework",
"tokens": [
"two",
"-",
"stage",
"parsing",
"framework"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
208
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
209,
210
],
"text": "sota methods",
"tokens": [
"sota",
"methods"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
208
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
213
],
"text": "demonstrating",
"tokens": [
"demonstrating"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
204
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
215,
216,
217,
218
],
"text": "effectiveness of program transfer",
"tokens": [
"effectiveness",
"of",
"program",
"transfer"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
215,
216,
125,
126,
127,
128,
129
],
"text": "effectiveness of our framework",
"tokens": [
"effectiveness",
"of",
"two",
"-",
"stage",
"parsing",
"framework"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
213
],
"text": "demonstrating",
"tokens": [
"demonstrating"
]
}
}
] |
[
"program",
"induction",
"for",
"answering",
"complex",
"questions",
"over",
"knowledge",
"bases",
"(",
"kbs",
")",
"aims",
"to",
"decompose",
"a",
"question",
"into",
"a",
"multi",
"-",
"step",
"program",
",",
"whose",
"execution",
"against",
"the",
"kb",
"produces",
"the",
"final",
"answer",
".",
"learning",
"to",
"induce",
"programs",
"relies",
"on",
"a",
"large",
"number",
"of",
"parallel",
"question",
"-",
"program",
"pairs",
"for",
"the",
"given",
"kb",
".",
"however",
",",
"for",
"most",
"kbs",
",",
"the",
"gold",
"program",
"annotations",
"are",
"usually",
"lacking",
",",
"making",
"learning",
"difficult",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"the",
"approach",
"of",
"program",
"transfer",
",",
"which",
"aims",
"to",
"leverage",
"the",
"valuable",
"program",
"annotations",
"on",
"the",
"rich",
"-",
"resourced",
"kbs",
"as",
"external",
"supervision",
"signals",
"to",
"aid",
"program",
"induction",
"for",
"the",
"low",
"-",
"resourced",
"kbs",
"that",
"lack",
"program",
"annotations",
".",
"for",
"program",
"transfer",
",",
"we",
"design",
"a",
"novel",
"two",
"-",
"stage",
"parsing",
"framework",
"with",
"an",
"efficient",
"ontology",
"-",
"guided",
"pruning",
"strategy",
".",
"first",
",",
"a",
"sketch",
"parser",
"translates",
"the",
"question",
"into",
"a",
"high",
"-",
"level",
"program",
"sketch",
",",
"which",
"is",
"the",
"composition",
"of",
"functions",
".",
"second",
",",
"given",
"the",
"question",
"and",
"sketch",
",",
"an",
"argument",
"parser",
"searches",
"the",
"detailed",
"arguments",
"from",
"the",
"kb",
"for",
"functions",
".",
"during",
"the",
"searching",
",",
"we",
"incorporate",
"the",
"kb",
"ontology",
"to",
"prune",
"the",
"search",
"space",
".",
"the",
"experiments",
"on",
"complexwebquestions",
"and",
"webquestionsp",
"show",
"that",
"our",
"method",
"outperforms",
"sota",
"methods",
"significantly",
",",
"demonstrating",
"the",
"effectiveness",
"of",
"program",
"transfer",
"and",
"our",
"framework",
".",
"our",
"codes",
"and",
"datasets",
"can",
"be",
"obtained",
"from",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"thu",
"-",
"keg",
"/",
"programtransfer",
"."
] |
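The `events` cell above stores span annotations whose `offsets` index into the row's `document` token list. A minimal sketch of that invariant, using an abridged copy of the first record (the field names and values are taken directly from the row above; the helper names `check_span`/`check_event` are illustrative, not part of the dataset):

```python
# Abridged first 15 tokens of the record's `document` column.
document = ["program", "induction", "for", "answering", "complex",
            "questions", "over", "knowledge", "bases", "(", "kbs", ")",
            "aims", "to", "decompose"]

# First event of the record, copied verbatim from the `events` column.
event = {
    "arguments": [
        {"argument_type": "Target", "nugget_type": "TAK",
         "offsets": [0, 1], "text": "program induction",
         "tokens": ["program", "induction"]}
    ],
    "event_type": "ITT",
    "trigger": {"offsets": [14], "text": "decompose",
                "tokens": ["decompose"]},
}

def check_span(span, document):
    # The tokens stored on a trigger/argument must equal the document
    # tokens at the recorded offsets.
    return [document[i] for i in span["offsets"]] == span["tokens"]

def check_event(event, document):
    spans = [event["trigger"]] + event.get("arguments", [])
    return all(check_span(s, document) for s in spans)

print(check_event(event, document))  # True
```

Note that `text` is not always the whitespace join of `tokens`: some arguments in these rows resolve an abbreviation to its antecedent, so only the offsets-to-tokens mapping is a safe invariant to check.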
ACL
|
Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition
|
Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by “some” as entailments. For some presupposition triggers like “only”, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.
|
1a6285faf0918175c1ea9e0b7c8ea82e
| 2020
|
[
"natural language inference ( nli ) is an increasingly important task for natural language understanding , which requires one to infer whether a sentence entails another .",
"however , the ability of nli models to make pragmatic inferences remains understudied .",
"we create an implicature and presupposition diagnostic dataset ( imppres ) , consisting of 32k semi - automatically generated sentence pairs illustrating well - studied pragmatic inference types .",
"we use imppres to evaluate whether bert , infersent , and bow nli models trained on multinli ( williams et al . , 2018 ) learn to make pragmatic inferences .",
"although multinli appears to contain very few pairs illustrating these inference types , we find that bert learns to draw pragmatic inferences .",
"it reliably treats scalar implicatures triggered by “ some ” as entailments .",
"for some presupposition triggers like “ only ” , bert reliably recognizes the presupposition as an entailment , even when the trigger is embedded under an entailment canceling operator like negation .",
"bow and infersent show weaker evidence of pragmatic reasoning .",
"we conclude that nli training encourages models to learn some , but not all , pragmatic inferences ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "natural language inference",
"tokens": [
"natural",
"language",
"inference"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "task",
"tokens": [
"task"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
41
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
44,
45,
46,
47,
48
],
"text": "implicature and presupposition diagnostic dataset",
"tokens": [
"implicature",
"and",
"presupposition",
"diagnostic",
"dataset"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
42
],
"text": "create",
"tokens": [
"create"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
70
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
76
],
"text": "bert",
"tokens": [
"bert"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
78
],
"text": "infersent",
"tokens": [
"infersent"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
81,
0,
1,
2,
83
],
"text": "bow nli models",
"tokens": [
"bow",
"natural",
"language",
"inference",
"models"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
84,
85,
86
],
"text": "trained on multinli",
"tokens": [
"trained",
"on",
"multinli"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
44,
45,
46,
47,
48
],
"text": "implicature and presupposition diagnostic dataset",
"tokens": [
"implicature",
"and",
"presupposition",
"diagnostic",
"dataset"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
74
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
114
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
118
],
"text": "learns",
"tokens": [
"learns"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
125,
126
],
"text": "reliably treats",
"tokens": [
"reliably",
"treats"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
147,
148
],
"text": "reliably recognizes",
"tokens": [
"reliably",
"recognizes"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
115
],
"text": "find",
"tokens": [
"find"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
117
],
"text": "bert",
"tokens": [
"bert"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
120
],
"text": "draw",
"tokens": [
"draw"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
118
],
"text": "learns",
"tokens": [
"learns"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
121,
122
],
"text": "pragmatic inferences",
"tokens": [
"pragmatic",
"inferences"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
120
],
"text": "draw",
"tokens": [
"draw"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
169
],
"text": "bow",
"tokens": [
"bow"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
171
],
"text": "infersent",
"tokens": [
"infersent"
]
},
{
"argument_type": "Object",
"nugget_type": "WEA",
"offsets": [
173,
174,
175,
176,
177
],
"text": "weaker evidence of pragmatic reasoning",
"tokens": [
"weaker",
"evidence",
"of",
"pragmatic",
"reasoning"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
172
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
179
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
184
],
"text": "encourages",
"tokens": [
"encourages"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
180
],
"text": "conclude",
"tokens": [
"conclude"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
0,
1,
2,
183
],
"text": "nli training",
"tokens": [
"natural",
"language",
"inference",
"training"
]
},
{
"argument_type": "Object",
"nugget_type": "APP",
"offsets": [
185
],
"text": "models",
"tokens": [
"models"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
187
],
"text": "learn",
"tokens": [
"learn"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
184
],
"text": "encourages",
"tokens": [
"encourages"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
188,
189,
190,
191,
192,
193,
194,
195
],
"text": "some , but not all , pragmatic inferences",
"tokens": [
"some",
",",
"but",
"not",
"all",
",",
"pragmatic",
"inferences"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
187
],
"text": "learn",
"tokens": [
"learn"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
39
],
"text": "understudied",
"tokens": [
"understudied"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
33
],
"text": "nli models",
"tokens": [
"natural",
"language",
"inference",
"models"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
38
],
"text": "remains",
"tokens": [
"remains"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
117
],
"text": "bert",
"tokens": [
"bert"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
127,
128,
129,
130,
131,
132,
133
],
"text": "scalar implicatures triggered by “ some ”",
"tokens": [
"scalar",
"implicatures",
"triggered",
"by",
"“",
"some",
"”"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
135
],
"text": "entailments",
"tokens": [
"entailments"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
125,
126
],
"text": "reliably treats",
"tokens": [
"reliably",
"treats"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
137,
138,
139,
140,
141,
142,
143,
144
],
"text": "for some presupposition triggers like “ only ”",
"tokens": [
"for",
"some",
"presupposition",
"triggers",
"like",
"“",
"only",
"”"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
146
],
"text": "bert",
"tokens": [
"bert"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
150
],
"text": "presupposition",
"tokens": [
"presupposition"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
153
],
"text": "entailment",
"tokens": [
"entailment"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
147,
148
],
"text": "reliably recognizes",
"tokens": [
"reliably",
"recognizes"
]
}
}
] |
[
"natural",
"language",
"inference",
"(",
"nli",
")",
"is",
"an",
"increasingly",
"important",
"task",
"for",
"natural",
"language",
"understanding",
",",
"which",
"requires",
"one",
"to",
"infer",
"whether",
"a",
"sentence",
"entails",
"another",
".",
"however",
",",
"the",
"ability",
"of",
"nli",
"models",
"to",
"make",
"pragmatic",
"inferences",
"remains",
"understudied",
".",
"we",
"create",
"an",
"implicature",
"and",
"presupposition",
"diagnostic",
"dataset",
"(",
"imppres",
")",
",",
"consisting",
"of",
"32k",
"semi",
"-",
"automatically",
"generated",
"sentence",
"pairs",
"illustrating",
"well",
"-",
"studied",
"pragmatic",
"inference",
"types",
".",
"we",
"use",
"imppres",
"to",
"evaluate",
"whether",
"bert",
",",
"infersent",
",",
"and",
"bow",
"nli",
"models",
"trained",
"on",
"multinli",
"(",
"williams",
"et",
"al",
".",
",",
"2018",
")",
"learn",
"to",
"make",
"pragmatic",
"inferences",
".",
"although",
"multinli",
"appears",
"to",
"contain",
"very",
"few",
"pairs",
"illustrating",
"these",
"inference",
"types",
",",
"we",
"find",
"that",
"bert",
"learns",
"to",
"draw",
"pragmatic",
"inferences",
".",
"it",
"reliably",
"treats",
"scalar",
"implicatures",
"triggered",
"by",
"“",
"some",
"”",
"as",
"entailments",
".",
"for",
"some",
"presupposition",
"triggers",
"like",
"“",
"only",
"”",
",",
"bert",
"reliably",
"recognizes",
"the",
"presupposition",
"as",
"an",
"entailment",
",",
"even",
"when",
"the",
"trigger",
"is",
"embedded",
"under",
"an",
"entailment",
"canceling",
"operator",
"like",
"negation",
".",
"bow",
"and",
"infersent",
"show",
"weaker",
"evidence",
"of",
"pragmatic",
"reasoning",
".",
"we",
"conclude",
"that",
"nli",
"training",
"encourages",
"models",
"to",
"learn",
"some",
",",
"but",
"not",
"all",
",",
"pragmatic",
"inferences",
"."
] |
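One schema property visible in the rows above is worth making explicit: `text` holds the surface form at the mention site, while `tokens` hold the canonical tokens looked up via `offsets` into `document`. When the mention is an abbreviation ("kb"), the offsets point back to its expansion ("knowledge bases"), so joining the tokens does not reproduce `text`. A hedged sketch, abridging the "kb ontology" argument from the first record (the two-offset form here is a simplified illustration, not a verbatim copy):

```python
# Opening tokens of the first record's `document` column.
document = ["program", "induction", "for", "answering", "complex",
            "questions", "over", "knowledge", "bases"]

# Abridged argument: surface form "kb", offsets resolving to its expansion.
argument = {
    "argument_type": "Content", "nugget_type": "TAK",
    "offsets": [7, 8],                # point at "knowledge", "bases"
    "text": "kb",                     # surface form in the sentence
    "tokens": ["knowledge", "bases"]  # canonical tokens from `document`
}

# Resolving offsets recovers the canonical tokens, not the surface text.
resolved = [document[i] for i in argument["offsets"]]
print(resolved)                                # ['knowledge', 'bases']
print(" ".join(resolved) == argument["text"])  # False
```

Consumers of this dataset should therefore treat `tokens`/`offsets` as the ground-truth span and `text` as a display string.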
ACL
|
Leveraging Task Transferability to Meta-learning for Clinical Section Classification with Limited Data
|
Identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note-writing tasks. Most state-of-the-art text classification systems require thousands of in-domain text data to achieve high performance. However, collecting in-domain and recent clinical note data with section labels is challenging given the high level of privacy and sensitivity. The present paper proposes an algorithmic way to improve the task transferability of meta-learning-based text classification in order to address the issue of low-resource target data. Specifically, we explore how to make the best use of the source dataset and propose a unique task transferability measure named Normalized Negative Conditional Entropy (NNCE). Leveraging the NNCE, we develop strategies for selecting clinical categories and sections from source task data to boost cross-domain meta-learning accuracy. Experimental results show that our task selection strategies improve section classification accuracy significantly compared to meta-learning algorithms.
|
6bab1cf097070e6d457c9c8fd0e74e57
| 2022
|
[
"identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note - writing tasks .",
"most state - of - the - art text classification systems require thousands of in - domain text data to achieve high performance .",
"however , collecting in - domain and recent clinical note data with section labels is challenging given the high level of privacy and sensitivity .",
"the present paper proposes an algorithmic way to improve the task transferability of meta - learning - based text classification in order to address the issue of low - resource target data .",
"specifically , we explore how to make the best use of the source dataset and propose a unique task transferability measure named normalized negative conditional entropy ( nnce ) .",
"leveraging the nnce , we develop strategies for selecting clinical categories and sections from source task data to boost cross - domain meta - learning accuracy .",
"experimental results show that our task selection strategies improve section classification accuracy significantly compared to meta - learning algorithms ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
28,
29,
30,
31,
32,
33,
34,
35,
36,
37
],
"text": "state - of - the - art text classification systems",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"text",
"classification",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
38
],
"text": "require",
"tokens": [
"require"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
81,
82
],
"text": "algorithmic way",
"tokens": [
"algorithmic",
"way"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
79
],
"text": "proposes",
"tokens": [
"proposes"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
86,
87,
88,
89,
90,
91,
92,
93,
94,
95
],
"text": "task transferability of meta - learning - based text classification",
"tokens": [
"task",
"transferability",
"of",
"meta",
"-",
"learning",
"-",
"based",
"text",
"classification"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
99
],
"text": "address",
"tokens": [
"address"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
84
],
"text": "improve",
"tokens": [
"improve"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
101,
102,
103,
104,
105,
106,
107
],
"text": "issue of low - resource target data",
"tokens": [
"issue",
"of",
"low",
"-",
"resource",
"target",
"data"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
99
],
"text": "address",
"tokens": [
"address"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
111
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
121,
122
],
"text": "source dataset",
"tokens": [
"source",
"dataset"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
115,
116,
117,
118,
119
],
"text": "make the best use of",
"tokens": [
"make",
"the",
"best",
"use",
"of"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
111
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
131,
132,
133,
134
],
"text": "normalized negative conditional entropy",
"tokens": [
"normalized",
"negative",
"conditional",
"entropy"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
124
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
143
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
145
],
"text": "strategies",
"tokens": [
"strategies"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
157
],
"text": "boost",
"tokens": [
"boost"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
144
],
"text": "develop",
"tokens": [
"develop"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
158,
159,
160,
161,
162,
163,
164
],
"text": "cross - domain meta - learning accuracy",
"tokens": [
"cross",
"-",
"domain",
"meta",
"-",
"learning",
"accuracy"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
157
],
"text": "boost",
"tokens": [
"boost"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
174
],
"text": "improve",
"tokens": [
"improve"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
168
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
174
],
"text": "improve",
"tokens": [
"improve"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
175,
176,
177
],
"text": "section classification accuracy",
"tokens": [
"section",
"classification",
"accuracy"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
178
],
"text": "significantly",
"tokens": [
"significantly"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
174
],
"text": "improve",
"tokens": [
"improve"
]
}
}
] |
[
"identifying",
"sections",
"is",
"one",
"of",
"the",
"critical",
"components",
"of",
"understanding",
"medical",
"information",
"from",
"unstructured",
"clinical",
"notes",
"and",
"developing",
"assistive",
"technologies",
"for",
"clinical",
"note",
"-",
"writing",
"tasks",
".",
"most",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"text",
"classification",
"systems",
"require",
"thousands",
"of",
"in",
"-",
"domain",
"text",
"data",
"to",
"achieve",
"high",
"performance",
".",
"however",
",",
"collecting",
"in",
"-",
"domain",
"and",
"recent",
"clinical",
"note",
"data",
"with",
"section",
"labels",
"is",
"challenging",
"given",
"the",
"high",
"level",
"of",
"privacy",
"and",
"sensitivity",
".",
"the",
"present",
"paper",
"proposes",
"an",
"algorithmic",
"way",
"to",
"improve",
"the",
"task",
"transferability",
"of",
"meta",
"-",
"learning",
"-",
"based",
"text",
"classification",
"in",
"order",
"to",
"address",
"the",
"issue",
"of",
"low",
"-",
"resource",
"target",
"data",
".",
"specifically",
",",
"we",
"explore",
"how",
"to",
"make",
"the",
"best",
"use",
"of",
"the",
"source",
"dataset",
"and",
"propose",
"a",
"unique",
"task",
"transferability",
"measure",
"named",
"normalized",
"negative",
"conditional",
"entropy",
"(",
"nnce",
")",
".",
"leveraging",
"the",
"nnce",
",",
"we",
"develop",
"strategies",
"for",
"selecting",
"clinical",
"categories",
"and",
"sections",
"from",
"source",
"task",
"data",
"to",
"boost",
"cross",
"-",
"domain",
"meta",
"-",
"learning",
"accuracy",
".",
"experimental",
"results",
"show",
"that",
"our",
"task",
"selection",
"strategies",
"improve",
"section",
"classification",
"accuracy",
"significantly",
"compared",
"to",
"meta",
"-",
"learning",
"algorithms",
"."
] |
ACL
|
Generate, Delete and Rewrite: A Three-Stage Framework for Improving Persona Consistency of Dialogue Generation
|
Maintaining a consistent personality in conversations is quite natural for human beings, but is still a non-trivial task for machines. The persona-based dialogue generation task is thus introduced to tackle the personality-inconsistent problem by incorporating explicit persona text into dialogue generation models. Despite the success of existing persona-based models on generating human-like responses, their one-stage decoding framework can hardly avoid the generation of inconsistent persona words. In this work, we introduce a three-stage framework that employs a generate-delete-rewrite mechanism to delete inconsistent words from a generated response prototype and further rewrite it to a personality-consistent one. We carry out evaluations by both human and automatic metrics. Experiments on the Persona-Chat dataset show that our approach achieves good performance.
|
71ff0f02bc14a28822f0cdf6c508aae2
| 2020
|
[
"maintaining a consistent personality in conversations is quite natural for human beings , but is still a non - trivial task for machines .",
"the persona - based dialogue generation task is thus introduced to tackle the personality - inconsistent problem by incorporating explicit persona text into dialogue generation models .",
"despite the success of existing persona - based models on generating human - like responses , their one - stage decoding framework can hardly avoid the generation of inconsistent persona words .",
"in this work , we introduce a three - stage framework that employs a generate - delete - rewrite mechanism to delete inconsistent words from a generated response prototype and further rewrite it to a personality - consistent one .",
"we carry out evaluations by both human and automatic metrics .",
"experiments on the persona - chat dataset show that our approach achieves good performance ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3
],
"text": "consistent personality",
"tokens": [
"consistent",
"personality"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
0
],
"text": "maintaining",
"tokens": [
"maintaining"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
47,
48,
49
],
"text": "dialogue generation models",
"tokens": [
"dialogue",
"generation",
"models"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
43,
44,
45
],
"text": "explicit persona text",
"tokens": [
"explicit",
"persona",
"text"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
42
],
"text": "incorporating",
"tokens": [
"incorporating"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
68,
69,
70,
71,
72
],
"text": "one - stage decoding framework",
"tokens": [
"one",
"-",
"stage",
"decoding",
"framework"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
74,
75
],
"text": "hardly avoid",
"tokens": [
"hardly",
"avoid"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
74,
75
],
"text": "hardly avoid",
"tokens": [
"hardly",
"avoid"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
87
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
90,
91,
92,
93
],
"text": "three - stage framework",
"tokens": [
"three",
"-",
"stage",
"framework"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
104
],
"text": "delete",
"tokens": [
"delete"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
88
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
105,
106,
107,
108,
109,
110,
111
],
"text": "inconsistent words from a generated response prototype",
"tokens": [
"inconsistent",
"words",
"from",
"a",
"generated",
"response",
"prototype"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
104
],
"text": "delete",
"tokens": [
"delete"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
118,
119,
120,
121
],
"text": "personality - consistent one",
"tokens": [
"personality",
"-",
"consistent",
"one"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
105,
106
],
"text": "inconsistent words",
"tokens": [
"inconsistent",
"words"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
113,
114
],
"text": "further rewrite",
"tokens": [
"further",
"rewrite"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
123
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
129,
132
],
"text": "human metrics",
"tokens": [
"human",
"metrics"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
131,
132
],
"text": "automatic metrics",
"tokens": [
"automatic",
"metrics"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
126
],
"text": "evaluations",
"tokens": [
"evaluations"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
145
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
141
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
90,
91,
92,
93
],
"text": "three - stage framework",
"tokens": [
"three",
"-",
"stage",
"framework"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
146,
147
],
"text": "good performance",
"tokens": [
"good",
"performance"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
145
],
"text": "achieves",
"tokens": [
"achieves"
]
}
}
] |
[
"maintaining",
"a",
"consistent",
"personality",
"in",
"conversations",
"is",
"quite",
"natural",
"for",
"human",
"beings",
",",
"but",
"is",
"still",
"a",
"non",
"-",
"trivial",
"task",
"for",
"machines",
".",
"the",
"persona",
"-",
"based",
"dialogue",
"generation",
"task",
"is",
"thus",
"introduced",
"to",
"tackle",
"the",
"personality",
"-",
"inconsistent",
"problem",
"by",
"incorporating",
"explicit",
"persona",
"text",
"into",
"dialogue",
"generation",
"models",
".",
"despite",
"the",
"success",
"of",
"existing",
"persona",
"-",
"based",
"models",
"on",
"generating",
"human",
"-",
"like",
"responses",
",",
"their",
"one",
"-",
"stage",
"decoding",
"framework",
"can",
"hardly",
"avoid",
"the",
"generation",
"of",
"inconsistent",
"persona",
"words",
".",
"in",
"this",
"work",
",",
"we",
"introduce",
"a",
"three",
"-",
"stage",
"framework",
"that",
"employs",
"a",
"generate",
"-",
"delete",
"-",
"rewrite",
"mechanism",
"to",
"delete",
"inconsistent",
"words",
"from",
"a",
"generated",
"response",
"prototype",
"and",
"further",
"rewrite",
"it",
"to",
"a",
"personality",
"-",
"consistent",
"one",
".",
"we",
"carry",
"out",
"evaluations",
"by",
"both",
"human",
"and",
"automatic",
"metrics",
".",
"experiments",
"on",
"the",
"persona",
"-",
"chat",
"dataset",
"show",
"that",
"our",
"approach",
"achieves",
"good",
"performance",
"."
] |
ACL
|
An In-depth Study on Internal Structure of Chinese Words
|
Unlike English letters, Chinese characters have rich and specific meanings. Usually, the meaning of a word can be derived from its constituent characters in some way. Several previous works on syntactic parsing propose to annotate shallow word-internal structures for better utilizing character-level information. This work proposes to model the deep internal structures of Chinese words as dependency trees with 11 labels for distinguishing syntactic relationships. First, based on newly compiled annotation guidelines, we manually annotate a word-internal structure treebank (WIST) consisting of over 30K multi-char words from Chinese Penn Treebank. To guarantee quality, each word is independently annotated by two annotators and inconsistencies are handled by a third senior annotator. Second, we present detailed and interesting analysis on WIST to reveal insights on Chinese word formation. Third, we propose word-internal structure parsing as a new task, and conduct benchmark experiments using a competitive dependency parser. Finally, we present two simple ways to encode word-internal structures, leading to promising gains on the sentence-level syntactic parsing task.
|
636dd0c8ece0788d40d37b9f500026d8
| 2,021
|
[
"unlike english letters , chinese characters have rich and specific meanings .",
"usually , the meaning of a word can be derived from its constituent characters in some way .",
"several previous works on syntactic parsing propose to annotate shallow word - internal structures for better utilizing character - level information .",
"this work proposes to model the deep internal structures of chinese words as dependency trees with 11 labels for distinguishing syntactic relationships .",
"first , based on newly compiled annotation guidelines , we manually annotate a word - internal structure treebank ( wist ) consisting of over 30k multi - char words from chinese penn treebank .",
"to guarantee quality , each word is independently annotated by two annotators and inconsistencies are handled by a third senior annotator .",
"second , we present detailed and interesting analysis on wist to reveal insights on chinese word formation .",
"third , we propose word - internal structure parsing as a new task , and conduct benchmark experiments using a competitive dependency parser .",
"finally , we present two simple ways to encode word - internal structures , leading to promising gains on the sentence - level syntactic parsing task ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5
],
"text": "chinese characters",
"tokens": [
"chinese",
"characters"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
],
"text": "have",
"tokens": [
"have"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
31,
32,
33,
34,
35
],
"text": "previous works on syntactic parsing",
"tokens": [
"previous",
"works",
"on",
"syntactic",
"parsing"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
45,
46
],
"text": "better utilizing",
"tokens": [
"better",
"utilizing"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
39,
40,
41,
42,
43
],
"text": "shallow word - internal structures",
"tokens": [
"shallow",
"word",
"-",
"internal",
"structures"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
38
],
"text": "annotate",
"tokens": [
"annotate"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
47,
48,
49,
50
],
"text": "character - level information",
"tokens": [
"character",
"-",
"level",
"information"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
45,
46
],
"text": "better utilizing",
"tokens": [
"better",
"utilizing"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
71
],
"text": "distinguishing",
"tokens": [
"distinguishing"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
58,
59,
60,
61,
62,
63
],
"text": "deep internal structures of chinese words",
"tokens": [
"deep",
"internal",
"structures",
"of",
"chinese",
"words"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
67,
68,
69
],
"text": "with 11 labels",
"tokens": [
"with",
"11",
"labels"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
65,
66
],
"text": "dependency trees",
"tokens": [
"dependency",
"trees"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
56
],
"text": "model",
"tokens": [
"model"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
72,
73
],
"text": "syntactic relationships",
"tokens": [
"syntactic",
"relationships"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
71
],
"text": "distinguishing",
"tokens": [
"distinguishing"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
77,
78,
79,
80,
81,
82
],
"text": "based on newly compiled annotation guidelines",
"tokens": [
"based",
"on",
"newly",
"compiled",
"annotation",
"guidelines"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
84
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
88,
89,
90,
91,
92
],
"text": "word - internal structure treebank",
"tokens": [
"word",
"-",
"internal",
"structure",
"treebank"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
85,
86
],
"text": "manually annotate",
"tokens": [
"manually",
"annotate"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
142
],
"text": "reveal",
"tokens": [
"reveal"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
88,
89,
90,
91,
92
],
"text": "word - internal structure treebank",
"tokens": [
"word",
"-",
"internal",
"structure",
"treebank"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
133
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
135,
138
],
"text": "detailed analysis",
"tokens": [
"detailed",
"analysis"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
137,
138
],
"text": "interesting analysis",
"tokens": [
"interesting",
"analysis"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
134
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
143,
144,
145,
146,
147
],
"text": "insights on chinese word formation",
"tokens": [
"insights",
"on",
"chinese",
"word",
"formation"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
142
],
"text": "reveal",
"tokens": [
"reveal"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
151
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
153,
154,
155,
156,
157
],
"text": "word - internal structure parsing",
"tokens": [
"word",
"-",
"internal",
"structure",
"parsing"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
152
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
151
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
169,
170,
171
],
"text": "competitive dependency parser",
"tokens": [
"competitive",
"dependency",
"parser"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
164
],
"text": "conduct",
"tokens": [
"conduct"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
167
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
165,
166
],
"text": "benchmark experiments",
"tokens": [
"benchmark",
"experiments"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
164
],
"text": "conduct",
"tokens": [
"conduct"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
175
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
177,
178,
179
],
"text": "two simple ways",
"tokens": [
"two",
"simple",
"ways"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
181
],
"text": "encode",
"tokens": [
"encode"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
176
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
182,
183,
184,
185
],
"text": "word - internal structures",
"tokens": [
"word",
"-",
"internal",
"structures"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
181
],
"text": "encode",
"tokens": [
"encode"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
177,
178,
179
],
"text": "two simple ways",
"tokens": [
"two",
"simple",
"ways"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
191,
192,
193,
194,
195,
196,
197,
198
],
"text": "on the sentence - level syntactic parsing task",
"tokens": [
"on",
"the",
"sentence",
"-",
"level",
"syntactic",
"parsing",
"task"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
189,
190
],
"text": "promising gains",
"tokens": [
"promising",
"gains"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
187
],
"text": "leading",
"tokens": [
"leading"
]
}
}
] |
[
"unlike",
"english",
"letters",
",",
"chinese",
"characters",
"have",
"rich",
"and",
"specific",
"meanings",
".",
"usually",
",",
"the",
"meaning",
"of",
"a",
"word",
"can",
"be",
"derived",
"from",
"its",
"constituent",
"characters",
"in",
"some",
"way",
".",
"several",
"previous",
"works",
"on",
"syntactic",
"parsing",
"propose",
"to",
"annotate",
"shallow",
"word",
"-",
"internal",
"structures",
"for",
"better",
"utilizing",
"character",
"-",
"level",
"information",
".",
"this",
"work",
"proposes",
"to",
"model",
"the",
"deep",
"internal",
"structures",
"of",
"chinese",
"words",
"as",
"dependency",
"trees",
"with",
"11",
"labels",
"for",
"distinguishing",
"syntactic",
"relationships",
".",
"first",
",",
"based",
"on",
"newly",
"compiled",
"annotation",
"guidelines",
",",
"we",
"manually",
"annotate",
"a",
"word",
"-",
"internal",
"structure",
"treebank",
"(",
"wist",
")",
"consisting",
"of",
"over",
"30k",
"multi",
"-",
"char",
"words",
"from",
"chinese",
"penn",
"treebank",
".",
"to",
"guarantee",
"quality",
",",
"each",
"word",
"is",
"independently",
"annotated",
"by",
"two",
"annotators",
"and",
"inconsistencies",
"are",
"handled",
"by",
"a",
"third",
"senior",
"annotator",
".",
"second",
",",
"we",
"present",
"detailed",
"and",
"interesting",
"analysis",
"on",
"wist",
"to",
"reveal",
"insights",
"on",
"chinese",
"word",
"formation",
".",
"third",
",",
"we",
"propose",
"word",
"-",
"internal",
"structure",
"parsing",
"as",
"a",
"new",
"task",
",",
"and",
"conduct",
"benchmark",
"experiments",
"using",
"a",
"competitive",
"dependency",
"parser",
".",
"finally",
",",
"we",
"present",
"two",
"simple",
"ways",
"to",
"encode",
"word",
"-",
"internal",
"structures",
",",
"leading",
"to",
"promising",
"gains",
"on",
"the",
"sentence",
"-",
"level",
"syntactic",
"parsing",
"task",
"."
] |
ACL
|
Preview, Attend and Review: Schema-Aware Curriculum Learning for Multi-Domain Dialogue State Tracking
|
Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset. In this paper, we propose to use curriculum learning (CL) to better leverage both the curriculum structure and schema structure for task-oriented dialogs. Specifically, we propose a model-agnostic framework called Schema-aware Curriculum Learning for Dialog State Tracking (SaCLog), which consists of a preview module that pre-trains a DST model with schema information, a curriculum module that optimizes the model with CL, and a review module that augments mispredicted data to reinforce the CL training. We show that our proposed approach improves DST performance over both a transformer-based and RNN-based DST model (TripPy and TRADE) and achieves new state-of-the-art results on WOZ2.0 and MultiWOZ2.1.
|
a0fd29c17984ed8d2e2b7f86831cb0a4
| 2,021
|
[
"existing dialog state tracking ( dst ) models are trained with dialog data in a random order , neglecting rich structural information in a dataset .",
"in this paper , we propose to use curriculum learning ( cl ) to better leverage both the curriculum structure and schema structure for task - oriented dialogs .",
"specifically , we propose a model - agnostic framework called schema - aware curriculum learning for dialog state tracking ( saclog ) , which consists of a preview module that pre - trains a dst model with schema information , a curriculum module that optimizes the model with cl , and a review module that augments mispredicted data to reinforce the cl training .",
"we show that our proposed approach improves dst performance over both a transformer - based and rnn - based dst model ( trippy and trade ) and achieves new state - of - the - art results on woz2 . 0 and multiwoz2 . 1 ."
] |
[
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
34,
35
],
"text": "curriculum learning",
"tokens": [
"curriculum",
"learning"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
44,
45
],
"text": "curriculum structure",
"tokens": [
"curriculum",
"structure"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
47,
48
],
"text": "schema structure",
"tokens": [
"schema",
"structure"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
50,
51,
52,
53
],
"text": "task - oriented dialogs",
"tokens": [
"task",
"-",
"oriented",
"dialogs"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
40,
41
],
"text": "better leverage",
"tokens": [
"better",
"leverage"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
57
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
65,
66,
67,
68,
69,
70,
71,
72,
73
],
"text": "schema - aware curriculum learning for dialog state tracking",
"tokens": [
"schema",
"-",
"aware",
"curriculum",
"learning",
"for",
"dialog",
"state",
"tracking"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
58
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
119
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
125
],
"text": "improves",
"tokens": [
"improves"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
146
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
120
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
125
],
"text": "improves",
"tokens": [
"improves"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
1,
2,
3,
127
],
"text": "dst performance",
"tokens": [
"dialog",
"state",
"tracking",
"performance"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
65,
66,
67,
68,
69,
70,
71,
72,
73
],
"text": "schema - aware curriculum learning for dialog state tracking",
"tokens": [
"schema",
"-",
"aware",
"curriculum",
"learning",
"for",
"dialog",
"state",
"tracking"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
131,
132,
133,
138,
139
],
"text": "transformer - based dst model",
"tokens": [
"transformer",
"-",
"based",
"dst",
"model"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
125
],
"text": "improves",
"tokens": [
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
138,
7
],
"text": "dialog state tracking ( dst ) models",
"tokens": [
"dst",
"models"
]
},
{
"argument_type": "Fault",
"nugget_type": "FEA",
"offsets": [
19,
20,
21
],
"text": "rich structural information",
"tokens": [
"rich",
"structural",
"information"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
18
],
"text": "neglecting",
"tokens": [
"neglecting"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
147,
148,
149,
150,
151,
152,
153,
154,
155
],
"text": "new state - of - the - art results",
"tokens": [
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
156,
157,
158,
159,
160,
161,
162,
163
],
"text": "on woz2 . 0 and multiwoz2 . 1",
"tokens": [
"on",
"woz2",
".",
"0",
"and",
"multiwoz2",
".",
"1"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
65,
66,
67,
68,
69,
70,
71,
72,
73
],
"text": "schema - aware curriculum learning for dialog state tracking",
"tokens": [
"schema",
"-",
"aware",
"curriculum",
"learning",
"for",
"dialog",
"state",
"tracking"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
146
],
"text": "achieves",
"tokens": [
"achieves"
]
}
}
] |
[
"existing",
"dialog",
"state",
"tracking",
"(",
"dst",
")",
"models",
"are",
"trained",
"with",
"dialog",
"data",
"in",
"a",
"random",
"order",
",",
"neglecting",
"rich",
"structural",
"information",
"in",
"a",
"dataset",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"to",
"use",
"curriculum",
"learning",
"(",
"cl",
")",
"to",
"better",
"leverage",
"both",
"the",
"curriculum",
"structure",
"and",
"schema",
"structure",
"for",
"task",
"-",
"oriented",
"dialogs",
".",
"specifically",
",",
"we",
"propose",
"a",
"model",
"-",
"agnostic",
"framework",
"called",
"schema",
"-",
"aware",
"curriculum",
"learning",
"for",
"dialog",
"state",
"tracking",
"(",
"saclog",
")",
",",
"which",
"consists",
"of",
"a",
"preview",
"module",
"that",
"pre",
"-",
"trains",
"a",
"dst",
"model",
"with",
"schema",
"information",
",",
"a",
"curriculum",
"module",
"that",
"optimizes",
"the",
"model",
"with",
"cl",
",",
"and",
"a",
"review",
"module",
"that",
"augments",
"mispredicted",
"data",
"to",
"reinforce",
"the",
"cl",
"training",
".",
"we",
"show",
"that",
"our",
"proposed",
"approach",
"improves",
"dst",
"performance",
"over",
"both",
"a",
"transformer",
"-",
"based",
"and",
"rnn",
"-",
"based",
"dst",
"model",
"(",
"trippy",
"and",
"trade",
")",
"and",
"achieves",
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"on",
"woz2",
".",
"0",
"and",
"multiwoz2",
".",
"1",
"."
] |
ACL
|
Self-Attentional Models for Lattice Inputs
|
Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent multiple linguistic analyses. Previous work has extended recurrent neural networks to model lattice inputs and achieved improvements in various tasks, but these models suffer from very slow computation speeds. This paper extends the recently proposed paradigm of self-attention to handle lattice inputs. Self-attention is a sequence modeling technique that relates inputs to one another by computing pairwise similarities and has gained popularity for both its strong results and its computational efficiency. To extend such models to handle lattices, we introduce probabilistic reachability masks that incorporate lattice structure into the model and support lattice scores if available. We also propose a method for adapting positional embeddings to lattice structures. We apply the proposed model to a speech translation task and find that it outperforms all examined baselines while being much faster to compute than previous neural lattice models during both training and inference.
|
8e057b24ffe8ed4a5448b19bb7b9c2bf
| 2,019
|
[
"lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks , for example to compactly capture multiple speech recognition hypotheses , or to represent multiple linguistic analyses .",
"previous work has extended recurrent neural networks to model lattice inputs and achieved improvements in various tasks , but these models suffer from very slow computation speeds .",
"this paper extends the recently proposed paradigm of self - attention to handle lattice inputs .",
"self - attention is a sequence modeling technique that relates inputs to one another by computing pairwise similarities and has gained popularity for both its strong results and its computational efficiency .",
"to extend such models to handle lattices , we introduce probabilistic reachability masks that incorporate lattice structure into the model and support lattice scores if available .",
"we also propose a method for adapting positional embeddings to lattice structures .",
"we apply the proposed model to a speech translation task and find that it outperforms all examined baselines while being much faster to compute than previous neural lattice models during both training and inference ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0
],
"text": "lattices",
"tokens": [
"lattices"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
13,
14,
15,
16,
17
],
"text": "in natural language processing tasks",
"tokens": [
"in",
"natural",
"language",
"processing",
"tasks"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
],
"text": "method",
"tokens": [
"method"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
36,
37
],
"text": "previous work",
"tokens": [
"previous",
"work"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
59,
60,
61,
62
],
"text": "very slow computation speeds",
"tokens": [
"very",
"slow",
"computation",
"speeds"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
57
],
"text": "suffer",
"tokens": [
"suffer"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
76
],
"text": "handle",
"tokens": [
"handle"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
68,
69,
70,
71,
72,
73,
74
],
"text": "recently proposed paradigm of self - attention",
"tokens": [
"recently",
"proposed",
"paradigm",
"of",
"self",
"-",
"attention"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
66
],
"text": "extends",
"tokens": [
"extends"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "MOD",
"offsets": [
77,
78
],
"text": "lattice inputs",
"tokens": [
"lattice",
"inputs"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
76
],
"text": "handle",
"tokens": [
"handle"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
120
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
113
],
"text": "extend",
"tokens": [
"extend"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
122,
123,
124
],
"text": "probabilistic reachability masks",
"tokens": [
"probabilistic",
"reachability",
"masks"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
121
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
146,
147
],
"text": "positional embeddings",
"tokens": [
"positional",
"embeddings"
]
},
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
149,
150
],
"text": "lattice structures",
"tokens": [
"lattice",
"structures"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
145
],
"text": "adapting",
"tokens": [
"adapting"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
152
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
166
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
175
],
"text": "compute",
"tokens": [
"compute"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
163
],
"text": "find",
"tokens": [
"find"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
166
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
167,
168,
169
],
"text": "all examined baselines",
"tokens": [
"all",
"examined",
"baselines"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
166
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
177,
178,
179,
180
],
"text": "previous neural lattice models",
"tokens": [
"previous",
"neural",
"lattice",
"models"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
181,
182,
183,
184,
185
],
"text": "during both training and inference",
"tokens": [
"during",
"both",
"training",
"and",
"inference"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
173
],
"text": "faster",
"tokens": [
"faster"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
172
],
"text": "much",
"tokens": [
"much"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
122,
123,
124
],
"text": "probabilistic reachability masks",
"tokens": [
"probabilistic",
"reachability",
"masks"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
175
],
"text": "compute",
"tokens": [
"compute"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
127,
128
],
"text": "lattice structure",
"tokens": [
"lattice",
"structure"
]
},
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
131
],
"text": "model",
"tokens": [
"model"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
126
],
"text": "incorporate",
"tokens": [
"incorporate"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
136,
137
],
"text": "if available",
"tokens": [
"if",
"available"
]
},
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
115
],
"text": "models",
"tokens": [
"models"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
113
],
"text": "extend",
"tokens": [
"extend"
]
}
}
] |
[
"lattices",
"are",
"an",
"efficient",
"and",
"effective",
"method",
"to",
"encode",
"ambiguity",
"of",
"upstream",
"systems",
"in",
"natural",
"language",
"processing",
"tasks",
",",
"for",
"example",
"to",
"compactly",
"capture",
"multiple",
"speech",
"recognition",
"hypotheses",
",",
"or",
"to",
"represent",
"multiple",
"linguistic",
"analyses",
".",
"previous",
"work",
"has",
"extended",
"recurrent",
"neural",
"networks",
"to",
"model",
"lattice",
"inputs",
"and",
"achieved",
"improvements",
"in",
"various",
"tasks",
",",
"but",
"these",
"models",
"suffer",
"from",
"very",
"slow",
"computation",
"speeds",
".",
"this",
"paper",
"extends",
"the",
"recently",
"proposed",
"paradigm",
"of",
"self",
"-",
"attention",
"to",
"handle",
"lattice",
"inputs",
".",
"self",
"-",
"attention",
"is",
"a",
"sequence",
"modeling",
"technique",
"that",
"relates",
"inputs",
"to",
"one",
"another",
"by",
"computing",
"pairwise",
"similarities",
"and",
"has",
"gained",
"popularity",
"for",
"both",
"its",
"strong",
"results",
"and",
"its",
"computational",
"efficiency",
".",
"to",
"extend",
"such",
"models",
"to",
"handle",
"lattices",
",",
"we",
"introduce",
"probabilistic",
"reachability",
"masks",
"that",
"incorporate",
"lattice",
"structure",
"into",
"the",
"model",
"and",
"support",
"lattice",
"scores",
"if",
"available",
".",
"we",
"also",
"propose",
"a",
"method",
"for",
"adapting",
"positional",
"embeddings",
"to",
"lattice",
"structures",
".",
"we",
"apply",
"the",
"proposed",
"model",
"to",
"a",
"speech",
"translation",
"task",
"and",
"find",
"that",
"it",
"outperforms",
"all",
"examined",
"baselines",
"while",
"being",
"much",
"faster",
"to",
"compute",
"than",
"previous",
"neural",
"lattice",
"models",
"during",
"both",
"training",
"and",
"inference",
"."
] |
ACL
|
Joint Effects of Context and User History for Predicting Online Conversation Re-entries
|
As the online world continues its exponential growth, interpersonal communication has come to play an increasingly central role in opinion formation and change. In order to help users better engage with each other online, we study a challenging problem of re-entry prediction foreseeing whether a user will come back to a conversation they once participated in. We hypothesize that both the context of the ongoing conversations and the users’ previous chatting history will affect their continued interests in future engagement. Specifically, we propose a neural framework with three main layers, each modeling context, user history, and interactions between them, to explore how the conversation context and user chatting history jointly result in their re-entry behavior. We experiment with two large-scale datasets collected from Twitter and Reddit. Results show that our proposed framework with bi-attention achieves an F1 score of 61.1 on Twitter conversations, outperforming the state-of-the-art methods from previous work.
|
54dc18f3c81976ab42c7f5f4bd591db4
| 2,019
|
[
"as the online world continues its exponential growth , interpersonal communication has come to play an increasingly central role in opinion formation and change .",
"in order to help users better engage with each other online , we study a challenging problem of re - entry prediction foreseeing whether a user will come back to a conversation they once participated in .",
"we hypothesize that both the context of the ongoing conversations and the users ’ previous chatting history will affect their continued interests in future engagement .",
"specifically , we propose a neural framework with three main layers , each modeling context , user history , and interactions between them , to explore how the conversation context and user chatting history jointly result in their re - entry behavior .",
"we experiment with two large - scale datasets collected from twitter and reddit .",
"results show that our proposed framework with bi - attention achieves an f1 score of 61 . 1 on twitter conversations , outperforming the state - of - the - art methods from previous work ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
9,
10
],
"text": "interpersonal communication",
"tokens": [
"interpersonal",
"communication"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
14
],
"text": "play",
"tokens": [
"play"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
28
],
"text": "help",
"tokens": [
"help"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
37
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
40,
41,
42,
43,
44,
45,
46
],
"text": "challenging problem of re - entry prediction",
"tokens": [
"challenging",
"problem",
"of",
"re",
"-",
"entry",
"prediction"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
38
],
"text": "study",
"tokens": [
"study"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
29,
30,
31,
32,
33,
34,
35
],
"text": "users better engage with each other online",
"tokens": [
"users",
"better",
"engage",
"with",
"each",
"other",
"online"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
28
],
"text": "help",
"tokens": [
"help"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
62
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86
],
"text": "both the context of the ongoing conversations and the users ’ previous chatting history will affect their continued interests in future engagement",
"tokens": [
"both",
"the",
"context",
"of",
"the",
"ongoing",
"conversations",
"and",
"the",
"users",
"’",
"previous",
"chatting",
"history",
"will",
"affect",
"their",
"continued",
"interests",
"in",
"future",
"engagement"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
63
],
"text": "hypothesize",
"tokens": [
"hypothesize"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
90
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
93,
94,
95,
96,
97,
98
],
"text": "neural framework with three main layers",
"tokens": [
"neural",
"framework",
"with",
"three",
"main",
"layers"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
113
],
"text": "explore",
"tokens": [
"explore"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
91
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129
],
"text": "how the conversation context and user chatting history jointly result in their re - entry behavior",
"tokens": [
"how",
"the",
"conversation",
"context",
"and",
"user",
"chatting",
"history",
"jointly",
"result",
"in",
"their",
"re",
"-",
"entry",
"behavior"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
113
],
"text": "explore",
"tokens": [
"explore"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
134,
135,
136,
137,
138,
139,
140,
141,
142,
143
],
"text": "two large - scale datasets collected from twitter and reddit",
"tokens": [
"two",
"large",
"-",
"scale",
"datasets",
"collected",
"from",
"twitter",
"and",
"reddit"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
131
],
"text": "we",
"tokens": [
"we"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
132
],
"text": "experiment",
"tokens": [
"experiment"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
155
],
"text": "achieves",
"tokens": [
"achieves"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
167
],
"text": "outperforming",
"tokens": [
"outperforming"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
146
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
93,
94,
95,
96,
97,
98,
151,
152,
153,
154
],
"text": "our proposed framework with bi - attention",
"tokens": [
"neural",
"framework",
"with",
"three",
"main",
"layers",
"with",
"bi",
"-",
"attention"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
157,
158
],
"text": "f1 score",
"tokens": [
"f1",
"score"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
160,
161,
162
],
"text": "61 . 1",
"tokens": [
"61",
".",
"1"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
163,
164,
165
],
"text": "on twitter conversations",
"tokens": [
"on",
"twitter",
"conversations"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
155
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
167
],
"text": "outperforming",
"tokens": [
"outperforming"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
93,
94
],
"text": "neural framework",
"tokens": [
"neural",
"framework"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
167
],
"text": "outperforming",
"tokens": [
"outperforming"
]
}
}
] |
[
"as",
"the",
"online",
"world",
"continues",
"its",
"exponential",
"growth",
",",
"interpersonal",
"communication",
"has",
"come",
"to",
"play",
"an",
"increasingly",
"central",
"role",
"in",
"opinion",
"formation",
"and",
"change",
".",
"in",
"order",
"to",
"help",
"users",
"better",
"engage",
"with",
"each",
"other",
"online",
",",
"we",
"study",
"a",
"challenging",
"problem",
"of",
"re",
"-",
"entry",
"prediction",
"foreseeing",
"whether",
"a",
"user",
"will",
"come",
"back",
"to",
"a",
"conversation",
"they",
"once",
"participated",
"in",
".",
"we",
"hypothesize",
"that",
"both",
"the",
"context",
"of",
"the",
"ongoing",
"conversations",
"and",
"the",
"users",
"’",
"previous",
"chatting",
"history",
"will",
"affect",
"their",
"continued",
"interests",
"in",
"future",
"engagement",
".",
"specifically",
",",
"we",
"propose",
"a",
"neural",
"framework",
"with",
"three",
"main",
"layers",
",",
"each",
"modeling",
"context",
",",
"user",
"history",
",",
"and",
"interactions",
"between",
"them",
",",
"to",
"explore",
"how",
"the",
"conversation",
"context",
"and",
"user",
"chatting",
"history",
"jointly",
"result",
"in",
"their",
"re",
"-",
"entry",
"behavior",
".",
"we",
"experiment",
"with",
"two",
"large",
"-",
"scale",
"datasets",
"collected",
"from",
"twitter",
"and",
"reddit",
".",
"results",
"show",
"that",
"our",
"proposed",
"framework",
"with",
"bi",
"-",
"attention",
"achieves",
"an",
"f1",
"score",
"of",
"61",
".",
"1",
"on",
"twitter",
"conversations",
",",
"outperforming",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"methods",
"from",
"previous",
"work",
"."
] |
ACL
|
Probing for Predicate Argument Structures in Pretrained Language Models
|
Thanks to the effectiveness and wide availability of modern pretrained language models (PLMs), recently proposed approaches have achieved remarkable results in dependency- and span-based, multilingual and cross-lingual Semantic Role Labeling (SRL). These results have prompted researchers to investigate the inner workings of modern PLMs with the aim of understanding how, where, and to what extent they encode information about SRL. In this paper, we follow this line of research and probe for predicate argument structures in PLMs. Our study shows that PLMs do encode semantic structures directly into the contextualized representation of a predicate, and also provides insights into the correlation between predicate senses and their structures, the degree of transferability between nominal and verbal structures, and how such structures are encoded across languages. Finally, we look at the practical implications of such insights and demonstrate the benefits of embedding predicate argument structure information into an SRL model.
|
81cb7fa52062f9d17d9e93a1e4567dec
| 2,022
|
[
"thanks to the effectiveness and wide availability of modern pretrained language models ( plms ) , recently proposed approaches have achieved remarkable results in dependency - and span - based , multilingual and cross - lingual semantic role labeling ( srl ) .",
"these results have prompted researchers to investigate the inner workings of modern plms with the aim of understanding how , where , and to what extent they encode information about srl .",
"in this paper , we follow this line of research and probe for predicate argument structures in plms .",
"our study shows that plms do encode semantic structures directly into the contextualized representation of a predicate , and also provides insights into the correlation between predicate senses and their structures , the degree of transferability between nominal and verbal structures , and how such structures are encoded across languages .",
"finally , we look at the practical implications of such insights and demonstrate the benefits of embedding predicate argument structure information into an srl model ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
36,
37,
38
],
"text": "semantic role labeling",
"tokens": [
"semantic",
"role",
"labeling"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
20
],
"text": "achieved",
"tokens": [
"achieved"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
79
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
88,
89,
90
],
"text": "predicate argument structures",
"tokens": [
"predicate",
"argument",
"structures"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
91,
9,
10,
11
],
"text": "in plms",
"tokens": [
"in",
"pretrained",
"language",
"models"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
86
],
"text": "probe",
"tokens": [
"probe"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "MOD",
"offsets": [
101,
102
],
"text": "semantic structures",
"tokens": [
"semantic",
"structures"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
104,
105,
106,
107,
108,
109,
110
],
"text": "into the contextualized representation of a predicate",
"tokens": [
"into",
"the",
"contextualized",
"representation",
"of",
"a",
"predicate"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
9,
10,
11
],
"text": "pretrained language models",
"tokens": [
"pretrained",
"language",
"models"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
100
],
"text": "encode",
"tokens": [
"encode"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
9,
10,
11
],
"text": "pretrained language models",
"tokens": [
"pretrained",
"language",
"models"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
127,
128,
129
],
"text": "degree of transferability",
"tokens": [
"degree",
"of",
"transferability"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
130,
131,
132,
133,
134
],
"text": "between nominal and verbal structures",
"tokens": [
"between",
"nominal",
"and",
"verbal",
"structures"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
139
],
"text": "structures",
"tokens": [
"structures"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
115,
116,
117,
118
],
"text": "insights into the correlation",
"tokens": [
"insights",
"into",
"the",
"correlation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
119,
120,
121,
122,
123,
124
],
"text": "between predicate senses and their structures",
"tokens": [
"between",
"predicate",
"senses",
"and",
"their",
"structures"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
114
],
"text": "provides",
"tokens": [
"provides"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
147
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
151,
152,
153,
154,
155
],
"text": "practical implications of such insights",
"tokens": [
"practical",
"implications",
"of",
"such",
"insights"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
148,
149
],
"text": "look at",
"tokens": [
"look",
"at"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
147
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "STR",
"offsets": [
159,
160,
161,
162,
163,
164,
165,
166,
167,
36,
37,
38,
169
],
"text": "benefits of embedding predicate argument structure information into an srl model",
"tokens": [
"benefits",
"of",
"embedding",
"predicate",
"argument",
"structure",
"information",
"into",
"an",
"semantic",
"role",
"labeling",
"model"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
157
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
100
],
"text": "encode",
"tokens": [
"encode"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
96
],
"text": "shows",
"tokens": [
"shows"
]
}
}
] |
[
"thanks",
"to",
"the",
"effectiveness",
"and",
"wide",
"availability",
"of",
"modern",
"pretrained",
"language",
"models",
"(",
"plms",
")",
",",
"recently",
"proposed",
"approaches",
"have",
"achieved",
"remarkable",
"results",
"in",
"dependency",
"-",
"and",
"span",
"-",
"based",
",",
"multilingual",
"and",
"cross",
"-",
"lingual",
"semantic",
"role",
"labeling",
"(",
"srl",
")",
".",
"these",
"results",
"have",
"prompted",
"researchers",
"to",
"investigate",
"the",
"inner",
"workings",
"of",
"modern",
"plms",
"with",
"the",
"aim",
"of",
"understanding",
"how",
",",
"where",
",",
"and",
"to",
"what",
"extent",
"they",
"encode",
"information",
"about",
"srl",
".",
"in",
"this",
"paper",
",",
"we",
"follow",
"this",
"line",
"of",
"research",
"and",
"probe",
"for",
"predicate",
"argument",
"structures",
"in",
"plms",
".",
"our",
"study",
"shows",
"that",
"plms",
"do",
"encode",
"semantic",
"structures",
"directly",
"into",
"the",
"contextualized",
"representation",
"of",
"a",
"predicate",
",",
"and",
"also",
"provides",
"insights",
"into",
"the",
"correlation",
"between",
"predicate",
"senses",
"and",
"their",
"structures",
",",
"the",
"degree",
"of",
"transferability",
"between",
"nominal",
"and",
"verbal",
"structures",
",",
"and",
"how",
"such",
"structures",
"are",
"encoded",
"across",
"languages",
".",
"finally",
",",
"we",
"look",
"at",
"the",
"practical",
"implications",
"of",
"such",
"insights",
"and",
"demonstrate",
"the",
"benefits",
"of",
"embedding",
"predicate",
"argument",
"structure",
"information",
"into",
"an",
"srl",
"model",
"."
] |
ACL
|
Where to Go for the Holidays: Towards Mixed-Type Dialogs for Clarification of User Goals
|
Most dialog systems posit that users have figured out clear and specific goals before starting an interaction. For example, users have determined the departure, the destination, and the travel time for booking a flight. However, in many scenarios, limited by experience and knowledge, users may know what they need, but still struggle to figure out clear and specific goals by determining all the necessary slots. In this paper, we identify this challenge, and make a step forward by collecting a new human-to-human mixed-type dialog corpus. It contains 5k dialog sessions and 168k utterances for 4 dialog types and 5 domains. Within each session, an agent first provides user-goal-related knowledge to help figure out clear and specific goals, and then help achieve them. Furthermore, we propose a mixed-type dialog model with a novel Prompt-based continual learning mechanism. Specifically, the mechanism enables the model to continually strengthen its ability on any specific type by utilizing existing dialog corpora effectively.
|
e6819b3ce223923478bb9d3b63e830a6
| 2,022
|
[
"most dialog systems posit that users have figured out clear and specific goals before starting an interaction .",
"for example , users have determined the departure , the destination , and the travel time for booking a flight .",
"however , in many scenarios , limited by experience and knowledge , users may know what they need , but still struggle to figure out clear and specific goals by determining all the necessary slots .",
"in this paper , we identify this challenge , and make a step forward by collecting a new human - to - human mixed - type dialog corpus .",
"it contains 5k dialog sessions and 168k utterances for 4 dialog types and 5 domains .",
"within each session , an agent first provides user - goal - related knowledge to help figure out clear and specific goals , and then help achieve them .",
"furthermore , we propose a mixed - type dialog model with a novel prompt - based continual learning mechanism .",
"specifically , the mechanism enables the model to continually strengthen its ability on any specific type by utilizing existing dialog corpora effectively ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2
],
"text": "dialog systems",
"tokens": [
"dialog",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
3
],
"text": "posit",
"tokens": [
"posit"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
59,
60,
61,
62,
63
],
"text": "still struggle to figure out",
"tokens": [
"still",
"struggle",
"to",
"figure",
"out"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
64,
65,
66,
67
],
"text": "clear and specific goals",
"tokens": [
"clear",
"and",
"specific",
"goals"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
59,
60,
61,
62,
63
],
"text": "still struggle to figure out",
"tokens": [
"still",
"struggle",
"to",
"figure",
"out"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
79
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "WEA",
"offsets": [
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73
],
"text": "still struggle to figure out clear and specific goals by determining all the necessary slots",
"tokens": [
"still",
"struggle",
"to",
"figure",
"out",
"clear",
"and",
"specific",
"goals",
"by",
"determining",
"all",
"the",
"necessary",
"slots"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
80
],
"text": "identify",
"tokens": [
"identify"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
79
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"text": "human - to - human mixed - type dialog corpus",
"tokens": [
"human",
"-",
"to",
"-",
"human",
"mixed",
"-",
"type",
"dialog",
"corpus"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
90
],
"text": "collecting",
"tokens": [
"collecting"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
128,
129,
130,
131,
132,
133
],
"text": "user - goal - related knowledge",
"tokens": [
"user",
"-",
"goal",
"-",
"related",
"knowledge"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
136,
137
],
"text": "figure out",
"tokens": [
"figure",
"out"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
146
],
"text": "achieve",
"tokens": [
"achieve"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
127
],
"text": "provides",
"tokens": [
"provides"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
138,
139,
140,
141
],
"text": "clear and specific goals",
"tokens": [
"clear",
"and",
"specific",
"goals"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
136,
137
],
"text": "figure out",
"tokens": [
"figure",
"out"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
138,
139,
140,
141
],
"text": "clear and specific goals",
"tokens": [
"clear",
"and",
"specific",
"goals"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
146
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
151
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167
],
"text": "mixed - type dialog model with a novel prompt - based continual learning mechanism",
"tokens": [
"mixed",
"-",
"type",
"dialog",
"model",
"with",
"a",
"novel",
"prompt",
"-",
"based",
"continual",
"learning",
"mechanism"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
152
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
162,
163,
164,
165,
166,
167
],
"text": "prompt - based continual learning mechanism",
"tokens": [
"prompt",
"-",
"based",
"continual",
"learning",
"mechanism"
]
},
{
"argument_type": "Object",
"nugget_type": "APP",
"offsets": [
154,
155,
156,
157,
158
],
"text": "mixed - type dialog model",
"tokens": [
"mixed",
"-",
"type",
"dialog",
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
177,
178
],
"text": "continually strengthen",
"tokens": [
"continually",
"strengthen"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
181,
182,
183,
184
],
"text": "on any specific type",
"tokens": [
"on",
"any",
"specific",
"type"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
186,
187,
188,
189,
190
],
"text": "utilizing existing dialog corpora effectively",
"tokens": [
"utilizing",
"existing",
"dialog",
"corpora",
"effectively"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
173
],
"text": "enables",
"tokens": [
"enables"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
154,
155,
156,
157,
158,
180
],
"text": "its ability",
"tokens": [
"mixed",
"-",
"type",
"dialog",
"model",
"ability"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
177,
178
],
"text": "continually strengthen",
"tokens": [
"continually",
"strengthen"
]
}
}
] |
[
"most",
"dialog",
"systems",
"posit",
"that",
"users",
"have",
"figured",
"out",
"clear",
"and",
"specific",
"goals",
"before",
"starting",
"an",
"interaction",
".",
"for",
"example",
",",
"users",
"have",
"determined",
"the",
"departure",
",",
"the",
"destination",
",",
"and",
"the",
"travel",
"time",
"for",
"booking",
"a",
"flight",
".",
"however",
",",
"in",
"many",
"scenarios",
",",
"limited",
"by",
"experience",
"and",
"knowledge",
",",
"users",
"may",
"know",
"what",
"they",
"need",
",",
"but",
"still",
"struggle",
"to",
"figure",
"out",
"clear",
"and",
"specific",
"goals",
"by",
"determining",
"all",
"the",
"necessary",
"slots",
".",
"in",
"this",
"paper",
",",
"we",
"identify",
"this",
"challenge",
",",
"and",
"make",
"a",
"step",
"forward",
"by",
"collecting",
"a",
"new",
"human",
"-",
"to",
"-",
"human",
"mixed",
"-",
"type",
"dialog",
"corpus",
".",
"it",
"contains",
"5k",
"dialog",
"sessions",
"and",
"168k",
"utterances",
"for",
"4",
"dialog",
"types",
"and",
"5",
"domains",
".",
"within",
"each",
"session",
",",
"an",
"agent",
"first",
"provides",
"user",
"-",
"goal",
"-",
"related",
"knowledge",
"to",
"help",
"figure",
"out",
"clear",
"and",
"specific",
"goals",
",",
"and",
"then",
"help",
"achieve",
"them",
".",
"furthermore",
",",
"we",
"propose",
"a",
"mixed",
"-",
"type",
"dialog",
"model",
"with",
"a",
"novel",
"prompt",
"-",
"based",
"continual",
"learning",
"mechanism",
".",
"specifically",
",",
"the",
"mechanism",
"enables",
"the",
"model",
"to",
"continually",
"strengthen",
"its",
"ability",
"on",
"any",
"specific",
"type",
"by",
"utilizing",
"existing",
"dialog",
"corpora",
"effectively",
"."
] |
ACL
|
DVD: A Diagnostic Dataset for Multi-step Reasoning in Video Grounded Dialogue
|
A video-grounded dialogue system is required to understand both dialogue, which contains semantic dependencies from turn to turn, and video, which contains visual cues of spatial and temporal scene variations. Building such dialogue systems is a challenging problem, involving various reasoning types on both visual and language inputs. Existing benchmarks do not have enough annotations to thoroughly analyze dialogue systems and understand their capabilities and limitations in isolation. These benchmarks are also not explicitly designed to minimise biases that models can exploit without actual reasoning. To address these limitations, in this paper, we present DVD, a Diagnostic Dataset for Video-grounded Dialogue. The dataset is designed to contain minimal biases and has detailed annotations for the different types of reasoning over the spatio-temporal space of video. Dialogues are synthesized over multiple question turns, each of which is injected with a set of cross-turn semantic relationships. We use DVD to analyze existing approaches, providing interesting insights into their abilities and limitations. In total, DVD is built from 11k CATER synthetic videos and contains 10 instances of 10-round dialogues for each video, resulting in more than 100k dialogues and 1M question-answer pairs. Our code and dataset are publicly available.
|
d45cd0ddedda4f5e033a5ce54cd0afb9
| 2,021
|
[
"a video - grounded dialogue system is required to understand both dialogue , which contains semantic dependencies from turn to turn , and video , which contains visual cues of spatial and temporal scene variations .",
"building such dialogue systems is a challenging problem , involving various reasoning types on both visual and language inputs .",
"existing benchmarks do not have enough annotations to thoroughly analyze dialogue systems and understand their capabilities and limitations in isolation .",
"these benchmarks are also not explicitly designed to minimise biases that models can exploit without actual reasoning .",
"to address these limitations , in this paper , we present dvd , a diagnostic dataset for video - grounded dialogue .",
"the dataset is designed to contain minimal biases and has detailed annotations for the different types of reasoning over the spatio - temporal space of video .",
"dialogues are synthesized over multiple question turns , each of which is injected with a set of cross - turn semantic relationships .",
"we use dvd to analyze existing approaches , providing interesting insights into their abilities and limitations .",
"in total , dvd is built from 11k cater synthetic videos and contains 10 instances of 10 - round dialogues for each video , resulting in more than 100k dialogues and 1m question - answer pairs .",
"our code and dataset are publicly available ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
1,
2,
3,
4,
5
],
"text": "video - grounded dialogue system",
"tokens": [
"video",
"-",
"grounded",
"dialogue",
"system"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
7
],
"text": "required",
"tokens": [
"required"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
56,
57
],
"text": "existing benchmarks",
"tokens": [
"existing",
"benchmarks"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
59,
60,
61,
62
],
"text": "not have enough annotations",
"tokens": [
"not",
"have",
"enough",
"annotations"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
65
],
"text": "analyze",
"tokens": [
"analyze"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
59,
60,
61,
62
],
"text": "not have enough annotations",
"tokens": [
"not",
"have",
"enough",
"annotations"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
66,
67
],
"text": "dialogue systems",
"tokens": [
"dialogue",
"systems"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
65
],
"text": "analyze",
"tokens": [
"analyze"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
74,
75
],
"text": "in isolation",
"tokens": [
"in",
"isolation"
]
},
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
66,
67,
71
],
"text": "their capabilities",
"tokens": [
"dialogue",
"systems",
"capabilities"
]
},
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
66,
67,
73
],
"text": "their limitations",
"tokens": [
"dialogue",
"systems",
"limitations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
69
],
"text": "understand",
"tokens": [
"understand"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
78
],
"text": "benchmarks",
"tokens": [
"benchmarks"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
85,
86
],
"text": "minimise biases",
"tokens": [
"minimise",
"biases"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
81,
82,
83
],
"text": "not explicitly designed",
"tokens": [
"not",
"explicitly",
"designed"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
104
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
109,
110
],
"text": "diagnostic dataset",
"tokens": [
"diagnostic",
"dataset"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
112,
113,
114,
115
],
"text": "video - grounded dialogue",
"tokens": [
"video",
"-",
"grounded",
"dialogue"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
105
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "DST",
"offsets": [
118
],
"text": "dataset",
"tokens": [
"dataset"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
123,
124
],
"text": "minimal biases",
"tokens": [
"minimal",
"biases"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
127,
128
],
"text": "detailed annotations",
"tokens": [
"detailed",
"annotations"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
122
],
"text": "contain",
"tokens": [
"contain"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
148,
149,
150
],
"text": "multiple question turns",
"tokens": [
"multiple",
"question",
"turns"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
159,
160,
161,
162,
163,
164,
165
],
"text": "set of cross - turn semantic relationships",
"tokens": [
"set",
"of",
"cross",
"-",
"turn",
"semantic",
"relationships"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
156
],
"text": "injected",
"tokens": [
"injected"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
167
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
176,
177
],
"text": "interesting insights",
"tokens": [
"interesting",
"insights"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
175
],
"text": "providing",
"tokens": [
"providing"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
192,
193,
194
],
"text": "cater synthetic videos",
"tokens": [
"cater",
"synthetic",
"videos"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
200,
201,
202,
203
],
"text": "10 - round dialogues",
"tokens": [
"10",
"-",
"round",
"dialogues"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
213
],
"text": "dialogues",
"tokens": [
"dialogues"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
216,
217,
218,
219
],
"text": "question - answer pairs",
"tokens": [
"question",
"-",
"answer",
"pairs"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
208
],
"text": "resulting",
"tokens": [
"resulting"
]
}
}
] |
[
"a",
"video",
"-",
"grounded",
"dialogue",
"system",
"is",
"required",
"to",
"understand",
"both",
"dialogue",
",",
"which",
"contains",
"semantic",
"dependencies",
"from",
"turn",
"to",
"turn",
",",
"and",
"video",
",",
"which",
"contains",
"visual",
"cues",
"of",
"spatial",
"and",
"temporal",
"scene",
"variations",
".",
"building",
"such",
"dialogue",
"systems",
"is",
"a",
"challenging",
"problem",
",",
"involving",
"various",
"reasoning",
"types",
"on",
"both",
"visual",
"and",
"language",
"inputs",
".",
"existing",
"benchmarks",
"do",
"not",
"have",
"enough",
"annotations",
"to",
"thoroughly",
"analyze",
"dialogue",
"systems",
"and",
"understand",
"their",
"capabilities",
"and",
"limitations",
"in",
"isolation",
".",
"these",
"benchmarks",
"are",
"also",
"not",
"explicitly",
"designed",
"to",
"minimise",
"biases",
"that",
"models",
"can",
"exploit",
"without",
"actual",
"reasoning",
".",
"to",
"address",
"these",
"limitations",
",",
"in",
"this",
"paper",
",",
"we",
"present",
"dvd",
",",
"a",
"diagnostic",
"dataset",
"for",
"video",
"-",
"grounded",
"dialogue",
".",
"the",
"dataset",
"is",
"designed",
"to",
"contain",
"minimal",
"biases",
"and",
"has",
"detailed",
"annotations",
"for",
"the",
"different",
"types",
"of",
"reasoning",
"over",
"the",
"spatio",
"-",
"temporal",
"space",
"of",
"video",
".",
"dialogues",
"are",
"synthesized",
"over",
"multiple",
"question",
"turns",
",",
"each",
"of",
"which",
"is",
"injected",
"with",
"a",
"set",
"of",
"cross",
"-",
"turn",
"semantic",
"relationships",
".",
"we",
"use",
"dvd",
"to",
"analyze",
"existing",
"approaches",
",",
"providing",
"interesting",
"insights",
"into",
"their",
"abilities",
"and",
"limitations",
".",
"in",
"total",
",",
"dvd",
"is",
"built",
"from",
"11k",
"cater",
"synthetic",
"videos",
"and",
"contains",
"10",
"instances",
"of",
"10",
"-",
"round",
"dialogues",
"for",
"each",
"video",
",",
"resulting",
"in",
"more",
"than",
"100k",
"dialogues",
"and",
"1m",
"question",
"-",
"answer",
"pairs",
".",
"our",
"code",
"and",
"dataset",
"are",
"publicly",
"available",
"."
] |
ACL
|
MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation
|
The advent of large pre-trained language models has given rise to rapid progress in the field of Natural Language Processing (NLP). While the performance of these models on standard benchmarks has scaled with size, compression techniques such as knowledge distillation have been key in making them practical. We present MATE-KD, a novel text-based adversarial training algorithm which improves the performance of knowledge distillation. MATE-KD first trains a masked language model-based generator to perturb text by maximizing the divergence between teacher and student logits. Then using knowledge distillation a student is trained on both the original and the perturbed training samples. We evaluate our algorithm, using BERT-based models, on the GLUE benchmark and demonstrate that MATE-KD outperforms competitive adversarial learning and data augmentation baselines. On the GLUE test set our 6 layer RoBERTa based model outperforms BERT-large.
|
bcf2a5086a3b7ab9ae680289f38dad5f
| 2,021
|
[
"the advent of large pre - trained language models has given rise to rapid progress in the field of natural language processing ( nlp ) .",
"while the performance of these models on standard benchmarks has scaled with size , compression techniques such as knowledge distillation have been key in making them practical .",
"we present mate - kd , a novel text - based adversarial training algorithm which improves the performance of knowledge distillation .",
"mate - kd first trains a masked language model - based generator to perturb text by maximizing the divergence between teacher and student logits .",
"then using knowledge distillation a student is trained on both the original and the perturbed training samples .",
"we evaluate our algorithm , using bert - based models , on the glue benchmark and demonstrate that mate - kd outperforms competitive adversarial learning and data augmentation baselines .",
"on the glue test set our 6 layer roberta based model outperforms bert - large ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
19,
20,
21
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
11
],
"text": "rise",
"tokens": [
"rise"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
54
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
69
],
"text": "improves",
"tokens": [
"improves"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
62,
63,
64,
65,
66,
67
],
"text": "text - based adversarial training algorithm",
"tokens": [
"text",
"-",
"based",
"adversarial",
"training",
"algorithm"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
55
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
71,
72,
73,
74
],
"text": "performance of knowledge distillation",
"tokens": [
"performance",
"of",
"knowledge",
"distillation"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
69
],
"text": "improves",
"tokens": [
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
89
],
"text": "perturb",
"tokens": [
"perturb"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
82,
83,
84,
85,
86,
87
],
"text": "masked language model - based generator",
"tokens": [
"masked",
"language",
"model",
"-",
"based",
"generator"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
94,
95,
96,
97,
98,
99
],
"text": "divergence between teacher and student logits",
"tokens": [
"divergence",
"between",
"teacher",
"and",
"student",
"logits"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
92
],
"text": "maximizing",
"tokens": [
"maximizing"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
90
],
"text": "text",
"tokens": [
"text"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
89
],
"text": "perturb",
"tokens": [
"perturb"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
102,
103,
104
],
"text": "using knowledge distillation",
"tokens": [
"using",
"knowledge",
"distillation"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
106
],
"text": "student",
"tokens": [
"student"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
112,
116,
117
],
"text": "original training samples",
"tokens": [
"original",
"training",
"samples"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
115,
116,
117
],
"text": "perturbed training samples",
"tokens": [
"perturbed",
"training",
"samples"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
108
],
"text": "trained",
"tokens": [
"trained"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
137,
138,
139
],
"text": "mate - kd",
"tokens": [
"mate",
"-",
"kd"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
140
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
141,
142,
143
],
"text": "competitive adversarial learning",
"tokens": [
"competitive",
"adversarial",
"learning"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
140
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
154,
155,
156,
157,
158,
159
],
"text": "our 6 layer roberta based model",
"tokens": [
"our",
"6",
"layer",
"roberta",
"based",
"model"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
160
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
161,
162,
163
],
"text": "bert - large",
"tokens": [
"bert",
"-",
"large"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
160
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
119
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
125,
126,
127,
128
],
"text": "bert - based models",
"tokens": [
"bert",
"-",
"based",
"models"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
120
],
"text": "evaluate",
"tokens": [
"evaluate"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
130,
131,
132,
133
],
"text": "on the glue benchmark",
"tokens": [
"on",
"the",
"glue",
"benchmark"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
124
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
62,
63,
64,
65,
66,
67
],
"text": "text - based adversarial training algorithm",
"tokens": [
"text",
"-",
"based",
"adversarial",
"training",
"algorithm"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
120
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
140
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
135
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
}
] |
[
"the",
"advent",
"of",
"large",
"pre",
"-",
"trained",
"language",
"models",
"has",
"given",
"rise",
"to",
"rapid",
"progress",
"in",
"the",
"field",
"of",
"natural",
"language",
"processing",
"(",
"nlp",
")",
".",
"while",
"the",
"performance",
"of",
"these",
"models",
"on",
"standard",
"benchmarks",
"has",
"scaled",
"with",
"size",
",",
"compression",
"techniques",
"such",
"as",
"knowledge",
"distillation",
"have",
"been",
"key",
"in",
"making",
"them",
"practical",
".",
"we",
"present",
"mate",
"-",
"kd",
",",
"a",
"novel",
"text",
"-",
"based",
"adversarial",
"training",
"algorithm",
"which",
"improves",
"the",
"performance",
"of",
"knowledge",
"distillation",
".",
"mate",
"-",
"kd",
"first",
"trains",
"a",
"masked",
"language",
"model",
"-",
"based",
"generator",
"to",
"perturb",
"text",
"by",
"maximizing",
"the",
"divergence",
"between",
"teacher",
"and",
"student",
"logits",
".",
"then",
"using",
"knowledge",
"distillation",
"a",
"student",
"is",
"trained",
"on",
"both",
"the",
"original",
"and",
"the",
"perturbed",
"training",
"samples",
".",
"we",
"evaluate",
"our",
"algorithm",
",",
"using",
"bert",
"-",
"based",
"models",
",",
"on",
"the",
"glue",
"benchmark",
"and",
"demonstrate",
"that",
"mate",
"-",
"kd",
"outperforms",
"competitive",
"adversarial",
"learning",
"and",
"data",
"augmentation",
"baselines",
".",
"on",
"the",
"glue",
"test",
"set",
"our",
"6",
"layer",
"roberta",
"based",
"model",
"outperforms",
"bert",
"-",
"large",
"."
] |
ACL
|
An Automated Framework for Fast Cognate Detection and Bayesian Phylogenetic Inference in Computational Historical Linguistics
|
We present a fully automated workflow for phylogenetic reconstruction on large datasets, consisting of two novel methods, one for fast detection of cognates and one for fast Bayesian phylogenetic inference. Our results show that the methods take less than a few minutes to process language families that have so far required large amounts of time and computational power. Moreover, the cognates and the trees inferred from the method are quite close, both to gold standard cognate judgments and to expert language family trees. Given its speed and ease of application, our framework is specifically useful for the exploration of very large datasets in historical linguistics.
|
db2fff29a55036937a41cdace0266be9
| 2,019
|
[
"we present a fully automated workflow for phylogenetic reconstruction on large datasets , consisting of two novel methods , one for fast detection of cognates and one for fast bayesian phylogenetic inference .",
"our results show that the methods take less than a few minutes to process language families that have so far required large amounts of time and computational power .",
"moreover , the cognates and the trees inferred from the method are quite close , both to gold standard cognate judgments and to expert language family trees .",
"given its speed and ease of application , our framework is specifically useful for the exploration of very large datasets in historical linguistics ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
3,
4,
5
],
"text": "fully automated workflow",
"tokens": [
"fully",
"automated",
"workflow"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
7,
8
],
"text": "phylogenetic reconstruction",
"tokens": [
"phylogenetic",
"reconstruction"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
1
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
39
],
"text": "take",
"tokens": [
"take"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
35
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
15,
16,
17
],
"text": "two novel methods",
"tokens": [
"two",
"novel",
"methods"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
46
],
"text": "process",
"tokens": [
"process"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
40,
41,
42,
43,
44
],
"text": "less than a few minutes",
"tokens": [
"less",
"than",
"a",
"few",
"minutes"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
39
],
"text": "take",
"tokens": [
"take"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
47,
48
],
"text": "language families",
"tokens": [
"language",
"families"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
46
],
"text": "process",
"tokens": [
"process"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
79,
80,
81,
82
],
"text": "gold standard cognate judgments",
"tokens": [
"gold",
"standard",
"cognate",
"judgments"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
85,
86,
87,
88
],
"text": "expert language family trees",
"tokens": [
"expert",
"language",
"family",
"trees"
]
},
{
"argument_type": "Subject",
"nugget_type": "FEA",
"offsets": [
65
],
"text": "cognates",
"tokens": [
"cognates"
]
},
{
"argument_type": "Subject",
"nugget_type": "FEA",
"offsets": [
68
],
"text": "trees",
"tokens": [
"trees"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
74,
75
],
"text": "quite close",
"tokens": [
"quite",
"close"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
3,
4,
5
],
"text": "fully automated workflow",
"tokens": [
"fully",
"automated",
"workflow"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
110,
111,
112
],
"text": "in historical linguistics",
"tokens": [
"in",
"historical",
"linguistics"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
105,
106,
107,
108,
109
],
"text": "exploration of very large datasets",
"tokens": [
"exploration",
"of",
"very",
"large",
"datasets"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
101,
102
],
"text": "specifically useful",
"tokens": [
"specifically",
"useful"
]
}
}
] |
[
"we",
"present",
"a",
"fully",
"automated",
"workflow",
"for",
"phylogenetic",
"reconstruction",
"on",
"large",
"datasets",
",",
"consisting",
"of",
"two",
"novel",
"methods",
",",
"one",
"for",
"fast",
"detection",
"of",
"cognates",
"and",
"one",
"for",
"fast",
"bayesian",
"phylogenetic",
"inference",
".",
"our",
"results",
"show",
"that",
"the",
"methods",
"take",
"less",
"than",
"a",
"few",
"minutes",
"to",
"process",
"language",
"families",
"that",
"have",
"so",
"far",
"required",
"large",
"amounts",
"of",
"time",
"and",
"computational",
"power",
".",
"moreover",
",",
"the",
"cognates",
"and",
"the",
"trees",
"inferred",
"from",
"the",
"method",
"are",
"quite",
"close",
",",
"both",
"to",
"gold",
"standard",
"cognate",
"judgments",
"and",
"to",
"expert",
"language",
"family",
"trees",
".",
"given",
"its",
"speed",
"and",
"ease",
"of",
"application",
",",
"our",
"framework",
"is",
"specifically",
"useful",
"for",
"the",
"exploration",
"of",
"very",
"large",
"datasets",
"in",
"historical",
"linguistics",
"."
] |
ACL
|
Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization
|
Despite recent progress in abstractive summarization, systems still suffer from faithfulness errors. While prior work has proposed models that improve faithfulness, it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfulness is to make summarization models more extractive. In this work, we present a framework for evaluating the effective faithfulness of summarization systems, by generating a faithfulness-abstractiveness trade-off curve that serves as a control at different operating points on the abstractiveness spectrum. We then show that the Maximum Likelihood Estimation (MLE) baseline as well as recently proposed methods for improving faithfulness, fail to consistently improve over the control at the same level of abstractiveness. Finally, we learn a selector to identify the most faithful and abstractive summary for a given document, and show that this system can attain higher faithfulness scores in human evaluations while being more abstractive than the baseline system on two datasets. Moreover, we show that our system is able to achieve a better faithfulness-abstractiveness trade-off than the control at the same level of abstractiveness.
|
959fbecee82a093efd41a9a4608a4728
| 2,022
|
[
"despite recent progress in abstractive summarization , systems still suffer from faithfulness errors .",
"while prior work has proposed models that improve faithfulness , it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfulness is to make summarization models more extractive .",
"in this work , we present a framework for evaluating the effective faithfulness of summarization systems , by generating a faithfulness - abstractiveness trade - off curve that serves as a control at different operating points on the abstractiveness spectrum .",
"we then show that the maximum likelihood estimation ( mle ) baseline as well as recently proposed methods for improving faithfulness , fail to consistently improve over the control at the same level of abstractiveness .",
"finally , we learn a selector to identify the most faithful and abstractive summary for a given document , and show that this system can attain higher faithfulness scores in human evaluations while being more abstractive than the baseline system on two datasets .",
"moreover , we show that our system is able to achieve a better faithfulness - abstractiveness trade - off than the control at the same level of abstractiveness ."
] |
[
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
7
],
"text": "systems",
"tokens": [
"systems"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
11,
12
],
"text": "faithfulness errors",
"tokens": [
"faithfulness",
"errors"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
9
],
"text": "suffer",
"tokens": [
"suffer"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
60
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
63
],
"text": "framework",
"tokens": [
"framework"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
65
],
"text": "evaluating",
"tokens": [
"evaluating"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
61
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
67,
68,
69,
70,
71
],
"text": "effective faithfulness of summarization systems",
"tokens": [
"effective",
"faithfulness",
"of",
"summarization",
"systems"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
65
],
"text": "evaluating",
"tokens": [
"evaluating"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
76,
77,
78,
79,
80,
81,
82
],
"text": "faithfulness - abstractiveness trade - off curve",
"tokens": [
"faithfulness",
"-",
"abstractiveness",
"trade",
"-",
"off",
"curve"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
87,
88,
89,
90,
91,
92,
93,
94,
95
],
"text": "control at different operating points on the abstractiveness spectrum",
"tokens": [
"control",
"at",
"different",
"operating",
"points",
"on",
"the",
"abstractiveness",
"spectrum"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
74
],
"text": "generating",
"tokens": [
"generating"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
102,
103,
104,
105,
106,
107,
108
],
"text": "maximum likelihood estimation ( mle ) baseline",
"tokens": [
"maximum",
"likelihood",
"estimation",
"(",
"mle",
")",
"baseline"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
112,
113,
114
],
"text": "recently proposed methods",
"tokens": [
"recently",
"proposed",
"methods"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
125
],
"text": "control",
"tokens": [
"control"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
126,
127,
128,
129,
130,
131
],
"text": "at the same level of abstractiveness",
"tokens": [
"at",
"the",
"same",
"level",
"of",
"abstractiveness"
]
},
{
"argument_type": "Result",
"nugget_type": "WEA",
"offsets": [
119
],
"text": "fail",
"tokens": [
"fail"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
122
],
"text": "improve",
"tokens": [
"improve"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
135
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
138
],
"text": "selector",
"tokens": [
"selector"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
140
],
"text": "identify",
"tokens": [
"identify"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
136
],
"text": "learn",
"tokens": [
"learn"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
142,
143,
144,
145,
146
],
"text": "most faithful and abstractive summary",
"tokens": [
"most",
"faithful",
"and",
"abstractive",
"summary"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
147,
148,
149,
150
],
"text": "for a given document",
"tokens": [
"for",
"a",
"given",
"document"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
140
],
"text": "identify",
"tokens": [
"identify"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
159,
160,
161
],
"text": "higher faithfulness scores",
"tokens": [
"higher",
"faithfulness",
"scores"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
163,
164
],
"text": "human evaluations",
"tokens": [
"human",
"evaluations"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
158
],
"text": "attain",
"tokens": [
"attain"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
171,
172
],
"text": "baseline system",
"tokens": [
"baseline",
"system"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
174,
175
],
"text": "two datasets",
"tokens": [
"two",
"datasets"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
167,
168
],
"text": "more abstractive",
"tokens": [
"more",
"abstractive"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
179
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
187
],
"text": "achieve",
"tokens": [
"achieve"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
180
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
189,
190,
191,
192,
193,
194,
195
],
"text": "better faithfulness - abstractiveness trade - off",
"tokens": [
"better",
"faithfulness",
"-",
"abstractiveness",
"trade",
"-",
"off"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
198
],
"text": "control",
"tokens": [
"control"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
199,
200,
201,
202,
203,
204
],
"text": "at the same level of abstractiveness",
"tokens": [
"at",
"the",
"same",
"level",
"of",
"abstractiveness"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
187
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5
],
"text": "abstractive summarization",
"tokens": [
"abstractive",
"summarization"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
2
],
"text": "progress",
"tokens": [
"progress"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
97
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
122
],
"text": "improve",
"tokens": [
"improve"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
99
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
167,
168
],
"text": "more abstractive",
"tokens": [
"more",
"abstractive"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
153
],
"text": "show",
"tokens": [
"show"
]
}
}
] |
[
"despite",
"recent",
"progress",
"in",
"abstractive",
"summarization",
",",
"systems",
"still",
"suffer",
"from",
"faithfulness",
"errors",
".",
"while",
"prior",
"work",
"has",
"proposed",
"models",
"that",
"improve",
"faithfulness",
",",
"it",
"is",
"unclear",
"whether",
"the",
"improvement",
"comes",
"from",
"an",
"increased",
"level",
"of",
"extractiveness",
"of",
"the",
"model",
"outputs",
"as",
"one",
"naive",
"way",
"to",
"improve",
"faithfulness",
"is",
"to",
"make",
"summarization",
"models",
"more",
"extractive",
".",
"in",
"this",
"work",
",",
"we",
"present",
"a",
"framework",
"for",
"evaluating",
"the",
"effective",
"faithfulness",
"of",
"summarization",
"systems",
",",
"by",
"generating",
"a",
"faithfulness",
"-",
"abstractiveness",
"trade",
"-",
"off",
"curve",
"that",
"serves",
"as",
"a",
"control",
"at",
"different",
"operating",
"points",
"on",
"the",
"abstractiveness",
"spectrum",
".",
"we",
"then",
"show",
"that",
"the",
"maximum",
"likelihood",
"estimation",
"(",
"mle",
")",
"baseline",
"as",
"well",
"as",
"recently",
"proposed",
"methods",
"for",
"improving",
"faithfulness",
",",
"fail",
"to",
"consistently",
"improve",
"over",
"the",
"control",
"at",
"the",
"same",
"level",
"of",
"abstractiveness",
".",
"finally",
",",
"we",
"learn",
"a",
"selector",
"to",
"identify",
"the",
"most",
"faithful",
"and",
"abstractive",
"summary",
"for",
"a",
"given",
"document",
",",
"and",
"show",
"that",
"this",
"system",
"can",
"attain",
"higher",
"faithfulness",
"scores",
"in",
"human",
"evaluations",
"while",
"being",
"more",
"abstractive",
"than",
"the",
"baseline",
"system",
"on",
"two",
"datasets",
".",
"moreover",
",",
"we",
"show",
"that",
"our",
"system",
"is",
"able",
"to",
"achieve",
"a",
"better",
"faithfulness",
"-",
"abstractiveness",
"trade",
"-",
"off",
"than",
"the",
"control",
"at",
"the",
"same",
"level",
"of",
"abstractiveness",
"."
] |
ACL
|
Enhancing the generalization for Intent Classification and Out-of-Domain Detection in SLU
|
Intent classification is a major task in spoken language understanding (SLU). Since most models are built with pre-collected in-domain (IND) training utterances, their ability to detect unsupported out-of-domain (OOD) utterances has a critical effect in practical use. Recent works have shown that using extra data and labels can improve the OOD detection performance, yet it could be costly to collect such data. This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection. Our method designs a novel domain-regularized module (DRM) to reduce the overconfident phenomenon of a vanilla classifier, achieving a better generalization in both cases. Besides, DRM can be used as a drop-in replacement for the last layer in any neural network-based intent classifier, providing a low-cost strategy for a significant improvement. The evaluation on four datasets shows that our method built on BERT and RoBERTa models achieves state-of-the-art performance against existing approaches and the strong baselines we created for the comparisons.
|
bbe87249393bc09725f2b0dcfda04997
| 2,021
|
[
"intent classification is a major task in spoken language understanding ( slu ) .",
"since most models are built with pre - collected in - domain ( ind ) training utterances , their ability to detect unsupported out - of - domain ( ood ) utterances has a critical effect in practical use .",
"recent works have shown that using extra data and labels can improve the ood detection performance , yet it could be costly to collect such data .",
"this paper proposes to train a model with only ind data while supporting both ind intent classification and ood detection .",
"our method designs a novel domain - regularized module ( drm ) to reduce the overconfident phenomenon of a vanilla classifier , achieving a better generalization in both cases .",
"besides , drm can be used as a drop - in replacement for the last layer in any neural network - based intent classifier , providing a low - cost strategy for a significant improvement .",
"the evaluation on four datasets shows that our method built on bert and roberta models achieves state - of - the - art performance against existing approaches and the strong baselines we created for the comparisons ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "intent classification",
"tokens": [
"intent",
"classification"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
5
],
"text": "task",
"tokens": [
"task"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
75
],
"text": "costly",
"tokens": [
"costly"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
60,
61
],
"text": "extra data",
"tokens": [
"extra",
"data"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
75
],
"text": "costly",
"tokens": [
"costly"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
87,
88,
89,
90,
91
],
"text": "model with only ind data",
"tokens": [
"model",
"with",
"only",
"ind",
"data"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
93,
94,
95,
96,
97,
98,
99,
100
],
"text": "supporting both ind intent classification and ood detection",
"tokens": [
"supporting",
"both",
"ind",
"intent",
"classification",
"and",
"ood",
"detection"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
85
],
"text": "train",
"tokens": [
"train"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
106,
107,
108,
109,
110
],
"text": "novel domain - regularized module",
"tokens": [
"novel",
"domain",
"-",
"regularized",
"module"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
115
],
"text": "reduce",
"tokens": [
"reduce"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
104
],
"text": "designs",
"tokens": [
"designs"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
117,
118
],
"text": "overconfident phenomenon",
"tokens": [
"overconfident",
"phenomenon"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
115
],
"text": "reduce",
"tokens": [
"reduce"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
128,
129,
130
],
"text": "in both cases",
"tokens": [
"in",
"both",
"cases"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
126,
127
],
"text": "better generalization",
"tokens": [
"better",
"generalization"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
124
],
"text": "achieving",
"tokens": [
"achieving"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
146,
147,
148,
149,
150,
151,
152,
153,
154,
155
],
"text": "last layer in any neural network - based intent classifier",
"tokens": [
"last",
"layer",
"in",
"any",
"neural",
"network",
"-",
"based",
"intent",
"classifier"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
107,
108,
109,
110
],
"text": "domain - regularized module",
"tokens": [
"domain",
"-",
"regularized",
"module"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
137
],
"text": "used",
"tokens": [
"used"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
183
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
173
],
"text": "shows",
"tokens": [
"shows"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
193,
194
],
"text": "existing approaches",
"tokens": [
"existing",
"approaches"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
176,
177,
178,
179,
180,
181,
182
],
"text": "method built on bert and roberta models",
"tokens": [
"method",
"built",
"on",
"bert",
"and",
"roberta",
"models"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
197,
198
],
"text": "strong baselines",
"tokens": [
"strong",
"baselines"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
184,
185,
186,
187,
188,
189,
190,
191
],
"text": "state - of - the - art performance",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
183
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
99,
100
],
"text": "ood detection",
"tokens": [
"ood",
"detection"
]
},
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
95,
96,
97
],
"text": "ind intent classification",
"tokens": [
"ind",
"intent",
"classification"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
93
],
"text": "supporting",
"tokens": [
"supporting"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
15,
16
],
"text": "most models",
"tokens": [
"most",
"models"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30
],
"text": "pre - collected in - domain ( ind ) training utterances",
"tokens": [
"pre",
"-",
"collected",
"in",
"-",
"domain",
"(",
"ind",
")",
"training",
"utterances"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
18
],
"text": "built",
"tokens": [
"built"
]
}
}
] |
[
"intent",
"classification",
"is",
"a",
"major",
"task",
"in",
"spoken",
"language",
"understanding",
"(",
"slu",
")",
".",
"since",
"most",
"models",
"are",
"built",
"with",
"pre",
"-",
"collected",
"in",
"-",
"domain",
"(",
"ind",
")",
"training",
"utterances",
",",
"their",
"ability",
"to",
"detect",
"unsupported",
"out",
"-",
"of",
"-",
"domain",
"(",
"ood",
")",
"utterances",
"has",
"a",
"critical",
"effect",
"in",
"practical",
"use",
".",
"recent",
"works",
"have",
"shown",
"that",
"using",
"extra",
"data",
"and",
"labels",
"can",
"improve",
"the",
"ood",
"detection",
"performance",
",",
"yet",
"it",
"could",
"be",
"costly",
"to",
"collect",
"such",
"data",
".",
"this",
"paper",
"proposes",
"to",
"train",
"a",
"model",
"with",
"only",
"ind",
"data",
"while",
"supporting",
"both",
"ind",
"intent",
"classification",
"and",
"ood",
"detection",
".",
"our",
"method",
"designs",
"a",
"novel",
"domain",
"-",
"regularized",
"module",
"(",
"drm",
")",
"to",
"reduce",
"the",
"overconfident",
"phenomenon",
"of",
"a",
"vanilla",
"classifier",
",",
"achieving",
"a",
"better",
"generalization",
"in",
"both",
"cases",
".",
"besides",
",",
"drm",
"can",
"be",
"used",
"as",
"a",
"drop",
"-",
"in",
"replacement",
"for",
"the",
"last",
"layer",
"in",
"any",
"neural",
"network",
"-",
"based",
"intent",
"classifier",
",",
"providing",
"a",
"low",
"-",
"cost",
"strategy",
"for",
"a",
"significant",
"improvement",
".",
"the",
"evaluation",
"on",
"four",
"datasets",
"shows",
"that",
"our",
"method",
"built",
"on",
"bert",
"and",
"roberta",
"models",
"achieves",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"against",
"existing",
"approaches",
"and",
"the",
"strong",
"baselines",
"we",
"created",
"for",
"the",
"comparisons",
"."
] |
ACL
|
PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks
|
This paper focuses on the Data Augmentation for low-resource Natural Language Understanding (NLU) tasks. We propose Prompt-based Data Augmentation model (PromDA) which only trains small-scale Soft Prompt (i.e., a set of trainable vectors) in the frozen Pre-trained Language Models (PLMs). This avoids human effort in collecting unlabeled in-domain data and maintains the quality of generated synthetic data. In addition, PromDA generates synthetic data via two different views and filters out the low-quality data using NLU models. Experiments on four benchmarks show that synthetic data produced by PromDA successfully boost up the performance of NLU models which consistently outperform several competitive baseline models, including a state-of-the-art semi-supervised model using unlabeled in-domain data. The synthetic data from PromDA are also complementary with unlabeled in-domain data. The NLU models can be further improved when they are combined for training.
|
04e5995f7999d5daad821408248f8262
| 2,022
|
[
"this paper focuses on the data augmentation for low - resource natural language understanding ( nlu ) tasks .",
"we propose prompt - based data augmentation model ( promda ) which only trains small - scale soft prompt ( i . e . , a set of trainable vectors ) in the frozen pre - trained language models ( plms ) .",
"this avoids human effort in collecting unlabeled in - domain data and maintains the quality of generated synthetic data .",
"in addition , promda generates synthetic data via two different views and filters out the low - quality data using nlu models .",
"experiments on four benchmarks show that synthetic data produced by promda successfully boost up the performance of nlu models which consistently outperform several competitive baseline models , including a state - of - the - art semi - supervised model using unlabeled in - domain data .",
"the synthetic data from promda are also complementary with unlabeled in - domain data .",
"the nlu models can be further improved when they are combined for training ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
8,
9,
10,
11,
12,
13,
17
],
"text": "low - resource natural language understanding ( nlu ) tasks",
"tokens": [
"low",
"-",
"resource",
"natural",
"language",
"understanding",
"tasks"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
2
],
"text": "focuses",
"tokens": [
"focuses"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
19
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
21,
22,
23,
24,
25,
26
],
"text": "prompt - based data augmentation model",
"tokens": [
"prompt",
"-",
"based",
"data",
"augmentation",
"model"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
20
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
33,
34,
35,
36,
37
],
"text": "small - scale soft prompt",
"tokens": [
"small",
"-",
"scale",
"soft",
"prompt"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
52,
53,
54,
55,
56,
57
],
"text": "frozen pre - trained language models",
"tokens": [
"frozen",
"pre",
"-",
"trained",
"language",
"models"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
32
],
"text": "trains",
"tokens": [
"trains"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
64,
65
],
"text": "human effort",
"tokens": [
"human",
"effort"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
67,
68
],
"text": "collecting unlabeled",
"tokens": [
"collecting",
"unlabeled"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
69,
70,
71,
72
],
"text": "in - domain data",
"tokens": [
"in",
"-",
"domain",
"data"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
63
],
"text": "avoids",
"tokens": [
"avoids"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
76,
77,
78,
79,
80
],
"text": "quality of generated synthetic data",
"tokens": [
"quality",
"of",
"generated",
"synthetic",
"data"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
74
],
"text": "maintains",
"tokens": [
"maintains"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "DST",
"offsets": [
87,
88
],
"text": "synthetic data",
"tokens": [
"synthetic",
"data"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
21,
22,
23,
24,
25,
26
],
"text": "prompt - based data augmentation model",
"tokens": [
"prompt",
"-",
"based",
"data",
"augmentation",
"model"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
89,
90,
91,
92
],
"text": "via two different views",
"tokens": [
"via",
"two",
"different",
"views"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
86
],
"text": "generates",
"tokens": [
"generates"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "DST",
"offsets": [
97,
98,
99,
100
],
"text": "low - quality data",
"tokens": [
"low",
"-",
"quality",
"data"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
101,
11,
12,
13,
103
],
"text": "using nlu models",
"tokens": [
"using",
"natural",
"language",
"understanding",
"models"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
94,
95
],
"text": "filters out",
"tokens": [
"filters",
"out"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
120,
121,
11,
12,
13,
123
],
"text": "performance of nlu models",
"tokens": [
"performance",
"of",
"natural",
"language",
"understanding",
"models"
]
},
{
"argument_type": "Subject",
"nugget_type": "DST",
"offsets": [
111,
112
],
"text": "synthetic data",
"tokens": [
"synthetic",
"data"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
113,
114,
115
],
"text": "produced by promda",
"tokens": [
"produced",
"by",
"promda"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
117
],
"text": "boost",
"tokens": [
"boost"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
128,
129,
130
],
"text": "competitive baseline models",
"tokens": [
"competitive",
"baseline",
"models"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
126
],
"text": "outperform",
"tokens": [
"outperform"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
125
],
"text": "consistently",
"tokens": [
"consistently"
]
},
{
"argument_type": "Arg1",
"nugget_type": "DST",
"offsets": [
111,
112
],
"text": "synthetic data",
"tokens": [
"synthetic",
"data"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
126
],
"text": "outperform",
"tokens": [
"outperform"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "DST",
"offsets": [
153,
154
],
"text": "synthetic data",
"tokens": [
"synthetic",
"data"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
159
],
"text": "complementary",
"tokens": [
"complementary"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
174,
153,
154,
160,
161,
162,
163,
164,
165,
176,
177,
178,
179
],
"text": "when they are combined for training",
"tokens": [
"when",
"synthetic",
"data",
"with",
"unlabeled",
"in",
"-",
"domain",
"data",
"are",
"combined",
"for",
"training"
]
},
{
"argument_type": "Object",
"nugget_type": "APP",
"offsets": [
11,
12,
13,
169
],
"text": "nlu models",
"tokens": [
"natural",
"language",
"understanding",
"models"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
173
],
"text": "improved",
"tokens": [
"improved"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
117
],
"text": "boost",
"tokens": [
"boost"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
126
],
"text": "outperform",
"tokens": [
"outperform"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
109
],
"text": "show",
"tokens": [
"show"
]
}
}
] |
[
"this",
"paper",
"focuses",
"on",
"the",
"data",
"augmentation",
"for",
"low",
"-",
"resource",
"natural",
"language",
"understanding",
"(",
"nlu",
")",
"tasks",
".",
"we",
"propose",
"prompt",
"-",
"based",
"data",
"augmentation",
"model",
"(",
"promda",
")",
"which",
"only",
"trains",
"small",
"-",
"scale",
"soft",
"prompt",
"(",
"i",
".",
"e",
".",
",",
"a",
"set",
"of",
"trainable",
"vectors",
")",
"in",
"the",
"frozen",
"pre",
"-",
"trained",
"language",
"models",
"(",
"plms",
")",
".",
"this",
"avoids",
"human",
"effort",
"in",
"collecting",
"unlabeled",
"in",
"-",
"domain",
"data",
"and",
"maintains",
"the",
"quality",
"of",
"generated",
"synthetic",
"data",
".",
"in",
"addition",
",",
"promda",
"generates",
"synthetic",
"data",
"via",
"two",
"different",
"views",
"and",
"filters",
"out",
"the",
"low",
"-",
"quality",
"data",
"using",
"nlu",
"models",
".",
"experiments",
"on",
"four",
"benchmarks",
"show",
"that",
"synthetic",
"data",
"produced",
"by",
"promda",
"successfully",
"boost",
"up",
"the",
"performance",
"of",
"nlu",
"models",
"which",
"consistently",
"outperform",
"several",
"competitive",
"baseline",
"models",
",",
"including",
"a",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"semi",
"-",
"supervised",
"model",
"using",
"unlabeled",
"in",
"-",
"domain",
"data",
".",
"the",
"synthetic",
"data",
"from",
"promda",
"are",
"also",
"complementary",
"with",
"unlabeled",
"in",
"-",
"domain",
"data",
".",
"the",
"nlu",
"models",
"can",
"be",
"further",
"improved",
"when",
"they",
"are",
"combined",
"for",
"training",
"."
] |
ACL
|
Analyzing the Limitations of Cross-lingual Word Embedding Mappings
|
Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations. While several authors have questioned the underlying isomorphism assumption, which states that word embeddings in different languages have approximately the same structure, it is not clear whether this is an inherent limitation of mapping approaches or a more general issue when learning cross-lingual embeddings. So as to answer this question, we experiment with parallel corpora, which allows us to compare offline mapping to an extension of skip-gram that jointly learns both embedding spaces. We observe that, under these ideal conditions, joint learning yields to more isomorphic embeddings, is less sensitive to hubness, and obtains stronger results in bilingual lexicon induction. We thus conclude that current mapping methods do have strong limitations, calling for further research to jointly learn cross-lingual embeddings with a weaker cross-lingual signal.
|
c84823d8450a619b600d844943a96c1e
| 2,019
|
[
"recent research in cross - lingual word embeddings has almost exclusively focused on offline methods , which independently train word embeddings in different languages and map them to a shared space through linear transformations .",
"while several authors have questioned the underlying isomorphism assumption , which states that word embeddings in different languages have approximately the same structure , it is not clear whether this is an inherent limitation of mapping approaches or a more general issue when learning cross - lingual embeddings .",
"so as to answer this question , we experiment with parallel corpora , which allows us to compare offline mapping to an extension of skip - gram that jointly learns both embedding spaces .",
"we observe that , under these ideal conditions , joint learning yields to more isomorphic embeddings , is less sensitive to hubness , and obtains stronger results in bilingual lexicon induction .",
"we thus conclude that current mapping methods do have strong limitations , calling for further research to jointly learn cross - lingual embeddings with a weaker cross - lingual signal ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5,
6,
7
],
"text": "cross - lingual word embeddings",
"tokens": [
"cross",
"-",
"lingual",
"word",
"embeddings"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
11
],
"text": "focused",
"tokens": [
"focused"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
102,
103
],
"text": "offline mapping",
"tokens": [
"offline",
"mapping"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
106,
107,
108,
109,
110
],
"text": "extension of skip - gram",
"tokens": [
"extension",
"of",
"skip",
"-",
"gram"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
101
],
"text": "compare",
"tokens": [
"compare"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
155,
156
],
"text": "mapping methods",
"tokens": [
"mapping",
"methods"
]
},
{
"argument_type": "Object",
"nugget_type": "WEA",
"offsets": [
159,
160
],
"text": "strong limitations",
"tokens": [
"strong",
"limitations"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
158
],
"text": "have",
"tokens": [
"have"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
91
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
94,
95
],
"text": "parallel corpora",
"tokens": [
"parallel",
"corpora"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
92
],
"text": "experiment",
"tokens": [
"experiment"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
118
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
129
],
"text": "yields",
"tokens": [
"yields"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
119
],
"text": "observe",
"tokens": [
"observe"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
127,
128
],
"text": "joint learning",
"tokens": [
"joint",
"learning"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
131
],
"text": "more",
"tokens": [
"more"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
132,
133
],
"text": "isomorphic embeddings",
"tokens": [
"isomorphic",
"embeddings"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
129
],
"text": "yields",
"tokens": [
"yields"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
139
],
"text": "hubness",
"tokens": [
"hubness"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
127,
128
],
"text": "joint learning",
"tokens": [
"joint",
"learning"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
136
],
"text": "less",
"tokens": [
"less"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
136
],
"text": "less",
"tokens": [
"less"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
127,
128
],
"text": "joint learning",
"tokens": [
"joint",
"learning"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
143,
144
],
"text": "stronger results",
"tokens": [
"stronger",
"results"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
146,
147,
148
],
"text": "bilingual lexicon induction",
"tokens": [
"bilingual",
"lexicon",
"induction"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
142
],
"text": "obtains",
"tokens": [
"obtains"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
150
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
158
],
"text": "have",
"tokens": [
"have"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
152
],
"text": "conclude",
"tokens": [
"conclude"
]
}
}
] |
[
"recent",
"research",
"in",
"cross",
"-",
"lingual",
"word",
"embeddings",
"has",
"almost",
"exclusively",
"focused",
"on",
"offline",
"methods",
",",
"which",
"independently",
"train",
"word",
"embeddings",
"in",
"different",
"languages",
"and",
"map",
"them",
"to",
"a",
"shared",
"space",
"through",
"linear",
"transformations",
".",
"while",
"several",
"authors",
"have",
"questioned",
"the",
"underlying",
"isomorphism",
"assumption",
",",
"which",
"states",
"that",
"word",
"embeddings",
"in",
"different",
"languages",
"have",
"approximately",
"the",
"same",
"structure",
",",
"it",
"is",
"not",
"clear",
"whether",
"this",
"is",
"an",
"inherent",
"limitation",
"of",
"mapping",
"approaches",
"or",
"a",
"more",
"general",
"issue",
"when",
"learning",
"cross",
"-",
"lingual",
"embeddings",
".",
"so",
"as",
"to",
"answer",
"this",
"question",
",",
"we",
"experiment",
"with",
"parallel",
"corpora",
",",
"which",
"allows",
"us",
"to",
"compare",
"offline",
"mapping",
"to",
"an",
"extension",
"of",
"skip",
"-",
"gram",
"that",
"jointly",
"learns",
"both",
"embedding",
"spaces",
".",
"we",
"observe",
"that",
",",
"under",
"these",
"ideal",
"conditions",
",",
"joint",
"learning",
"yields",
"to",
"more",
"isomorphic",
"embeddings",
",",
"is",
"less",
"sensitive",
"to",
"hubness",
",",
"and",
"obtains",
"stronger",
"results",
"in",
"bilingual",
"lexicon",
"induction",
".",
"we",
"thus",
"conclude",
"that",
"current",
"mapping",
"methods",
"do",
"have",
"strong",
"limitations",
",",
"calling",
"for",
"further",
"research",
"to",
"jointly",
"learn",
"cross",
"-",
"lingual",
"embeddings",
"with",
"a",
"weaker",
"cross",
"-",
"lingual",
"signal",
"."
] |
ACL
|
Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks
|
Syntactic information, especially dependency trees, has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities. However, most existing studies suffer from the noise in the dependency trees, especially when they are automatically generated, so that intensively leveraging dependency information may introduce confusions to relation classification and necessary pruning is of great importance in this task. In this paper, we propose a dependency-driven approach for relation extraction with attentive graph convolutional networks (A-GCN). In this approach, an attention mechanism upon graph convolutional networks is applied to different contextual words in the dependency tree obtained from an off-the-shelf dependency parser, to distinguish the importance of different word dependencies. Consider that dependency types among words also contain important contextual guidance, which is potentially helpful for relation extraction, we also include the type information in A-GCN modeling. Experimental results on two English benchmark datasets demonstrate the effectiveness of our A-GCN, which outperforms previous studies and achieves state-of-the-art performance on both datasets.
|
09e8a58fe50453a2401747d5e9c40e18
| 2,021
|
[
"syntactic information , especially dependency trees , has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities .",
"however , most existing studies suffer from the noise in the dependency trees , especially when they are automatically generated , so that intensively leveraging dependency information may introduce confusions to relation classification and necessary pruning is of great importance in this task .",
"in this paper , we propose a dependency - driven approach for relation extraction with attentive graph convolutional networks ( a - gcn ) .",
"in this approach , an attention mechanism upon graph convolutional networks is applied to different contextual words in the dependency tree obtained from an off - the - shelf dependency parser , to distinguish the importance of different word dependencies .",
"consider that dependency types among words also contain important contextual guidance , which is potentially helpful for relation extraction , we also include the type information in a - gcn modeling .",
"experimental results on two english benchmark datasets demonstrate the effectiveness of our a - gcn , which outperforms previous studies and achieves state - of - the - art performance on both datasets ."
] |
[
{
"arguments": [],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "used",
"tokens": [
"used"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
36,
37
],
"text": "existing studies",
"tokens": [
"existing",
"studies"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
41
],
"text": "noise",
"tokens": [
"noise"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
42,
43,
44,
45
],
"text": "in the dependency trees",
"tokens": [
"in",
"the",
"dependency",
"trees"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
38
],
"text": "suffer",
"tokens": [
"suffer"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
81
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
84,
85,
86,
87
],
"text": "dependency - driven approach",
"tokens": [
"dependency",
"-",
"driven",
"approach"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
89,
90
],
"text": "relation extraction",
"tokens": [
"relation",
"extraction"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
82
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
192
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
182
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
187,
188,
189
],
"text": "attentive graph convolutional networks",
"tokens": [
"a",
"-",
"gcn"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
193,
194
],
"text": "previous studies",
"tokens": [
"previous",
"studies"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
192
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
192
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
187,
188,
189
],
"text": "attentive graph convolutional networks",
"tokens": [
"a",
"-",
"gcn"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
197,
198,
199,
200,
201,
202,
203,
204
],
"text": "state - of - the - art performance",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
177,
178,
179,
180,
181
],
"text": "on two english benchmark datasets",
"tokens": [
"on",
"two",
"english",
"benchmark",
"datasets"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
196
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"text": "different contextual words in the dependency tree obtained from an off - the - shelf dependency parser",
"tokens": [
"different",
"contextual",
"words",
"in",
"the",
"dependency",
"tree",
"obtained",
"from",
"an",
"off",
"-",
"the",
"-",
"shelf",
"dependency",
"parser"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
107,
108
],
"text": "attention mechanism",
"tokens": [
"attention",
"mechanism"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
110,
111,
112
],
"text": "graph convolutional networks",
"tokens": [
"graph",
"convolutional",
"networks"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
135
],
"text": "distinguish",
"tokens": [
"distinguish"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
114
],
"text": "applied",
"tokens": [
"applied"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
137,
138,
139,
140,
141
],
"text": "importance of different word dependencies",
"tokens": [
"importance",
"of",
"different",
"word",
"dependencies"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
135
],
"text": "distinguish",
"tokens": [
"distinguish"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
167,
168
],
"text": "type information",
"tokens": [
"type",
"information"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
169,
170,
171,
172,
173
],
"text": "in a - gcn modeling",
"tokens": [
"in",
"a",
"-",
"gcn",
"modeling"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
165
],
"text": "include",
"tokens": [
"include"
]
}
}
] |
[
"syntactic",
"information",
",",
"especially",
"dependency",
"trees",
",",
"has",
"been",
"widely",
"used",
"by",
"existing",
"studies",
"to",
"improve",
"relation",
"extraction",
"with",
"better",
"semantic",
"guidance",
"for",
"analyzing",
"the",
"context",
"information",
"associated",
"with",
"the",
"given",
"entities",
".",
"however",
",",
"most",
"existing",
"studies",
"suffer",
"from",
"the",
"noise",
"in",
"the",
"dependency",
"trees",
",",
"especially",
"when",
"they",
"are",
"automatically",
"generated",
",",
"so",
"that",
"intensively",
"leveraging",
"dependency",
"information",
"may",
"introduce",
"confusions",
"to",
"relation",
"classification",
"and",
"necessary",
"pruning",
"is",
"of",
"great",
"importance",
"in",
"this",
"task",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"dependency",
"-",
"driven",
"approach",
"for",
"relation",
"extraction",
"with",
"attentive",
"graph",
"convolutional",
"networks",
"(",
"a",
"-",
"gcn",
")",
".",
"in",
"this",
"approach",
",",
"an",
"attention",
"mechanism",
"upon",
"graph",
"convolutional",
"networks",
"is",
"applied",
"to",
"different",
"contextual",
"words",
"in",
"the",
"dependency",
"tree",
"obtained",
"from",
"an",
"off",
"-",
"the",
"-",
"shelf",
"dependency",
"parser",
",",
"to",
"distinguish",
"the",
"importance",
"of",
"different",
"word",
"dependencies",
".",
"consider",
"that",
"dependency",
"types",
"among",
"words",
"also",
"contain",
"important",
"contextual",
"guidance",
",",
"which",
"is",
"potentially",
"helpful",
"for",
"relation",
"extraction",
",",
"we",
"also",
"include",
"the",
"type",
"information",
"in",
"a",
"-",
"gcn",
"modeling",
".",
"experimental",
"results",
"on",
"two",
"english",
"benchmark",
"datasets",
"demonstrate",
"the",
"effectiveness",
"of",
"our",
"a",
"-",
"gcn",
",",
"which",
"outperforms",
"previous",
"studies",
"and",
"achieves",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"on",
"both",
"datasets",
"."
] |
ACL
|
Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge
|
Chinese word segmentation (CWS) and part-of-speech (POS) tagging are important fundamental tasks for Chinese language processing, where joint learning of them is an effective one-step solution for both tasks. Previous studies for joint CWS and POS tagging mainly follow the character-based tagging paradigm with introducing contextual information such as n-gram features or sentential representations from recurrent neural models. However, for many cases, the joint tagging needs not only modeling from context features but also knowledge attached to them (e.g., syntactic relations among words); limited efforts have been made by existing research to meet such needs. In this paper, we propose a neural model named TwASP for joint CWS and POS tagging following the character-based sequence labeling paradigm, where a two-way attention mechanism is used to incorporate both context feature and their corresponding syntactic knowledge for each input character. Particularly, we use existing language processing toolkits to obtain the auto-analyzed syntactic knowledge for the context, and the proposed attention module can learn and benefit from them although their quality may not be perfect. Our experiments illustrate the effectiveness of the two-way attentions for joint CWS and POS tagging, where state-of-the-art performance is achieved on five benchmark datasets.
|
9609778ad9e5f0ef4d2c7c494df6a6dc
| 2,020
|
[
"chinese word segmentation ( cws ) and part - of - speech ( pos ) tagging are important fundamental tasks for chinese language processing , where joint learning of them is an effective one - step solution for both tasks .",
"previous studies for joint cws and pos tagging mainly follow the character - based tagging paradigm with introducing contextual information such as n - gram features or sentential representations from recurrent neural models .",
"however , for many cases , the joint tagging needs not only modeling from context features but also knowledge attached to them ( e . g . , syntactic relations among words ) ; limited efforts have been made by existing research to meet such needs .",
"in this paper , we propose a neural model named twasp for joint cws and pos tagging following the character - based sequence labeling paradigm , where a two - way attention mechanism is used to incorporate both context feature and their corresponding syntactic knowledge for each input character .",
"particularly , we use existing language processing toolkits to obtain the auto - analyzed syntactic knowledge for the context , and the proposed attention module can learn and benefit from them although their quality may not be perfect .",
"our experiments illustrate the effectiveness of the two - way attentions for joint cws and pos tagging , where state - of - the - art performance is achieved on five benchmark datasets ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
21,
22,
23
],
"text": "chinese language processing",
"tokens": [
"chinese",
"language",
"processing"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
19
],
"text": "tasks",
"tokens": [
"tasks"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
109,
110
],
"text": "limited efforts",
"tokens": [
"limited",
"efforts"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
115,
116
],
"text": "existing research",
"tokens": [
"existing",
"research"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
118
],
"text": "meet",
"tokens": [
"meet"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
113
],
"text": "made",
"tokens": [
"made"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
126
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
132
],
"text": "twasp",
"tokens": [
"twasp"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
134,
135,
136,
137,
138
],
"text": "joint cws and pos tagging",
"tokens": [
"joint",
"cws",
"and",
"pos",
"tagging"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
127
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
174
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
181
],
"text": "obtain",
"tokens": [
"obtain"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
176,
177,
178,
179
],
"text": "existing language processing toolkits",
"tokens": [
"existing",
"language",
"processing",
"toolkits"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
175
],
"text": "use",
"tokens": [
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
183,
184,
185,
186,
187,
188,
189,
190
],
"text": "auto - analyzed syntactic knowledge for the context",
"tokens": [
"auto",
"-",
"analyzed",
"syntactic",
"knowledge",
"for",
"the",
"context"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
181
],
"text": "obtain",
"tokens": [
"obtain"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227
],
"text": "effectiveness of the two - way attentions for joint cws and pos tagging",
"tokens": [
"effectiveness",
"of",
"the",
"two",
"-",
"way",
"attentions",
"for",
"joint",
"cws",
"and",
"pos",
"tagging"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
213
],
"text": "illustrate",
"tokens": [
"illustrate"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
241,
242,
243
],
"text": "five benchmark datasets",
"tokens": [
"five",
"benchmark",
"datasets"
]
},
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
230,
231,
232,
233,
234,
235,
236,
237
],
"text": "state - of - the - art performance",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
239
],
"text": "achieved",
"tokens": [
"achieved"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
59,
60
],
"text": "contextual information",
"tokens": [
"contextual",
"information"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
71,
72,
73
],
"text": "recurrent neural models",
"tokens": [
"recurrent",
"neural",
"models"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
41,
42
],
"text": "previous studies",
"tokens": [
"previous",
"studies"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
58
],
"text": "introducing",
"tokens": [
"introducing"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96
],
"text": "such needs",
"tokens": [
"needs",
"not",
"only",
"modeling",
"from",
"context",
"features",
"but",
"also",
"knowledge",
"attached",
"to",
"them"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
118
],
"text": "meet",
"tokens": [
"meet"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
150,
151,
152,
153,
154
],
"text": "two - way attention mechanism",
"tokens": [
"two",
"-",
"way",
"attention",
"mechanism"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
160,
161,
167,
168,
169,
170
],
"text": "context feature for each input character",
"tokens": [
"context",
"feature",
"for",
"each",
"input",
"character"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
160,
161,
164,
165,
166,
167,
168,
169,
170
],
"text": "their corresponding syntactic knowledge for each input character",
"tokens": [
"context",
"feature",
"corresponding",
"syntactic",
"knowledge",
"for",
"each",
"input",
"character"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
158
],
"text": "incorporate",
"tokens": [
"incorporate"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
194,
195,
196
],
"text": "proposed attention module",
"tokens": [
"proposed",
"attention",
"module"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
198,
199,
200
],
"text": "learn and benefit",
"tokens": [
"learn",
"and",
"benefit"
]
}
}
] |
[
"chinese",
"word",
"segmentation",
"(",
"cws",
")",
"and",
"part",
"-",
"of",
"-",
"speech",
"(",
"pos",
")",
"tagging",
"are",
"important",
"fundamental",
"tasks",
"for",
"chinese",
"language",
"processing",
",",
"where",
"joint",
"learning",
"of",
"them",
"is",
"an",
"effective",
"one",
"-",
"step",
"solution",
"for",
"both",
"tasks",
".",
"previous",
"studies",
"for",
"joint",
"cws",
"and",
"pos",
"tagging",
"mainly",
"follow",
"the",
"character",
"-",
"based",
"tagging",
"paradigm",
"with",
"introducing",
"contextual",
"information",
"such",
"as",
"n",
"-",
"gram",
"features",
"or",
"sentential",
"representations",
"from",
"recurrent",
"neural",
"models",
".",
"however",
",",
"for",
"many",
"cases",
",",
"the",
"joint",
"tagging",
"needs",
"not",
"only",
"modeling",
"from",
"context",
"features",
"but",
"also",
"knowledge",
"attached",
"to",
"them",
"(",
"e",
".",
"g",
".",
",",
"syntactic",
"relations",
"among",
"words",
")",
";",
"limited",
"efforts",
"have",
"been",
"made",
"by",
"existing",
"research",
"to",
"meet",
"such",
"needs",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"neural",
"model",
"named",
"twasp",
"for",
"joint",
"cws",
"and",
"pos",
"tagging",
"following",
"the",
"character",
"-",
"based",
"sequence",
"labeling",
"paradigm",
",",
"where",
"a",
"two",
"-",
"way",
"attention",
"mechanism",
"is",
"used",
"to",
"incorporate",
"both",
"context",
"feature",
"and",
"their",
"corresponding",
"syntactic",
"knowledge",
"for",
"each",
"input",
"character",
".",
"particularly",
",",
"we",
"use",
"existing",
"language",
"processing",
"toolkits",
"to",
"obtain",
"the",
"auto",
"-",
"analyzed",
"syntactic",
"knowledge",
"for",
"the",
"context",
",",
"and",
"the",
"proposed",
"attention",
"module",
"can",
"learn",
"and",
"benefit",
"from",
"them",
"although",
"their",
"quality",
"may",
"not",
"be",
"perfect",
".",
"our",
"experiments",
"illustrate",
"the",
"effectiveness",
"of",
"the",
"two",
"-",
"way",
"attentions",
"for",
"joint",
"cws",
"and",
"pos",
"tagging",
",",
"where",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"is",
"achieved",
"on",
"five",
"benchmark",
"datasets",
"."
] |
ACL
|
RankQA: Neural Question Answering with Answer Re-Ranking
|
The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process: first, relevant text passages are retrieved and, subsequently, a neural network for machine comprehension extracts the likeliest answer. However, both stages are largely isolated in the status quo and, hence, information from the two phases is never properly fused. In contrast, this work proposes RankQA: RankQA extends the conventional two-stage process in neural QA with a third stage that performs an additional answer re-ranking. The re-ranking leverages different features that are directly extracted from the QA pipeline, i.e., a combination of retrieval and comprehension features. While our intentionally simple design allows for an efficient, data-sparse estimation, it nevertheless outperforms more complex QA systems by a significant margin: in fact, RankQA achieves state-of-the-art performance on 3 out of 4 benchmark datasets. Furthermore, its performance is especially superior in settings where the size of the corpus is dynamic. Here the answer re-ranking provides an effective remedy against the underlying noise-information trade-off due to a variable corpus size. As a consequence, RankQA represents a novel, powerful, and thus challenging baseline for future research in content-based QA.
|
864e901c9c8268c1e32b4e85b4cdda05
| 2,019
|
[
"the conventional paradigm in neural question answering ( qa ) for narrative content is limited to a two - stage process : first , relevant text passages are retrieved and , subsequently , a neural network for machine comprehension extracts the likeliest answer .",
"however , both stages are largely isolated in the status quo and , hence , information from the two phases is never properly fused .",
"in contrast , this work proposes rankqa : rankqa extends the conventional two - stage process in neural qa with a third stage that performs an additional answer re - ranking .",
"the re - ranking leverages different features that are directly extracted from the qa pipeline , i . e . , a combination of retrieval and comprehension features .",
"while our intentionally simple design allows for an efficient , data - sparse estimation , it nevertheless outperforms more complex qa systems by a significant margin : in fact , rankqa achieves state - of - the - art performance on 3 out of 4 benchmark datasets .",
"furthermore , its performance is especially superior in settings where the size of the corpus is dynamic .",
"here the answer re - ranking provides an effective remedy against the underlying noise - information trade - off due to a variable corpus size .",
"as a consequence , rankqa represents a novel , powerful , and thus challenging baseline for future research in content - based qa ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5,
6
],
"text": "neural question answering",
"tokens": [
"neural",
"question",
"answering"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
14
],
"text": "limited",
"tokens": [
"limited"
]
}
},
{
"arguments": [],
"event_type": "RWF",
"trigger": {
"offsets": [
50
],
"text": "isolated",
"tokens": [
"isolated"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
75
],
"text": "rankqa",
"tokens": [
"rankqa"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
74
],
"text": "proposes",
"tokens": [
"proposes"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
138,
139,
140,
141,
142,
143
],
"text": "efficient , data - sparse estimation",
"tokens": [
"efficient",
",",
"data",
"-",
"sparse",
"estimation"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
135
],
"text": "allows",
"tokens": [
"allows"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
75
],
"text": "rankqa",
"tokens": [
"rankqa"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
147
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
154,
155
],
"text": "significant margin",
"tokens": [
"significant",
"margin"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
147
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
160
],
"text": "rankqa",
"tokens": [
"rankqa"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
162,
163,
164,
165,
166,
167,
168
],
"text": "state - of - the - art",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
169
],
"text": "performance",
"tokens": [
"performance"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
170,
171,
172,
173,
174,
175,
176
],
"text": "on 3 out of 4 benchmark datasets",
"tokens": [
"on",
"3",
"out",
"of",
"4",
"benchmark",
"datasets"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
161
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
183
],
"text": "especially",
"tokens": [
"especially"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
184
],
"text": "superior",
"tokens": [
"superior"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
181
],
"text": "performance",
"tokens": [
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
187,
188,
189,
190,
191,
192,
193,
194
],
"text": "where the size of the corpus is dynamic",
"tokens": [
"where",
"the",
"size",
"of",
"the",
"corpus",
"is",
"dynamic"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
184
],
"text": "superior",
"tokens": [
"superior"
]
}
}
] |
[
"the",
"conventional",
"paradigm",
"in",
"neural",
"question",
"answering",
"(",
"qa",
")",
"for",
"narrative",
"content",
"is",
"limited",
"to",
"a",
"two",
"-",
"stage",
"process",
":",
"first",
",",
"relevant",
"text",
"passages",
"are",
"retrieved",
"and",
",",
"subsequently",
",",
"a",
"neural",
"network",
"for",
"machine",
"comprehension",
"extracts",
"the",
"likeliest",
"answer",
".",
"however",
",",
"both",
"stages",
"are",
"largely",
"isolated",
"in",
"the",
"status",
"quo",
"and",
",",
"hence",
",",
"information",
"from",
"the",
"two",
"phases",
"is",
"never",
"properly",
"fused",
".",
"in",
"contrast",
",",
"this",
"work",
"proposes",
"rankqa",
":",
"rankqa",
"extends",
"the",
"conventional",
"two",
"-",
"stage",
"process",
"in",
"neural",
"qa",
"with",
"a",
"third",
"stage",
"that",
"performs",
"an",
"additional",
"answer",
"re",
"-",
"ranking",
".",
"the",
"re",
"-",
"ranking",
"leverages",
"different",
"features",
"that",
"are",
"directly",
"extracted",
"from",
"the",
"qa",
"pipeline",
",",
"i",
".",
"e",
".",
",",
"a",
"combination",
"of",
"retrieval",
"and",
"comprehension",
"features",
".",
"while",
"our",
"intentionally",
"simple",
"design",
"allows",
"for",
"an",
"efficient",
",",
"data",
"-",
"sparse",
"estimation",
",",
"it",
"nevertheless",
"outperforms",
"more",
"complex",
"qa",
"systems",
"by",
"a",
"significant",
"margin",
":",
"in",
"fact",
",",
"rankqa",
"achieves",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"on",
"3",
"out",
"of",
"4",
"benchmark",
"datasets",
".",
"furthermore",
",",
"its",
"performance",
"is",
"especially",
"superior",
"in",
"settings",
"where",
"the",
"size",
"of",
"the",
"corpus",
"is",
"dynamic",
".",
"here",
"the",
"answer",
"re",
"-",
"ranking",
"provides",
"an",
"effective",
"remedy",
"against",
"the",
"underlying",
"noise",
"-",
"information",
"trade",
"-",
"off",
"due",
"to",
"a",
"variable",
"corpus",
"size",
".",
"as",
"a",
"consequence",
",",
"rankqa",
"represents",
"a",
"novel",
",",
"powerful",
",",
"and",
"thus",
"challenging",
"baseline",
"for",
"future",
"research",
"in",
"content",
"-",
"based",
"qa",
"."
] |
ACL
|
Domain Adaptation in Multilingual and Multi-Domain Monolingual Settings for Complex Word Identification
|
Complex word identification (CWI) is a cornerstone process towards proper text simplification. CWI is highly dependent on context, whereas its difficulty is augmented by the scarcity of available datasets which vary greatly in terms of domains and languages. As such, it becomes increasingly more difficult to develop a robust model that generalizes across a wide array of input examples. In this paper, we propose a novel training technique for the CWI task based on domain adaptation to improve the target character and context representations. This technique addresses the problem of working with multiple domains, inasmuch as it creates a way of smoothing the differences between the explored datasets. Moreover, we also propose a similar auxiliary task, namely text simplification, that can be used to complement lexical complexity prediction. Our model obtains a boost of up to 2.42% in terms of Pearson Correlation Coefficients in contrast to vanilla training techniques, when considering the CompLex from the Lexical Complexity Prediction 2021 dataset. At the same time, we obtain an increase of 3% in Pearson scores, while considering a cross-lingual setup relying on the Complex Word Identification 2018 dataset. In addition, our model yields state-of-the-art results in terms of Mean Absolute Error.
|
0e0d6cc75f98e0e32960341f2f384171
| 2,022
|
[
"complex word identification ( cwi ) is a cornerstone process towards proper text simplification .",
"cwi is highly dependent on context , whereas its difficulty is augmented by the scarcity of available datasets which vary greatly in terms of domains and languages .",
"as such , it becomes increasingly more difficult to develop a robust model that generalizes across a wide array of input examples .",
"in this paper , we propose a novel training technique for the cwi task based on domain adaptation to improve the target character and context representations .",
"this technique addresses the problem of working with multiple domains , inasmuch as it creates a way of smoothing the differences between the explored datasets .",
"moreover , we also propose a similar auxiliary task , namely text simplification , that can be used to complement lexical complexity prediction .",
"our model obtains a boost of up to 2 . 42 % in terms of pearson correlation coefficients in contrast to vanilla training techniques , when considering the complex from the lexical complexity prediction 2021 dataset .",
"at the same time , we obtain an increase of 3 % in pearson scores , while considering a cross - lingual setup relying on the complex word identification 2018 dataset .",
"in addition , our model yields state - of - the - art results in terms of mean absolute error ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "complex word identification",
"tokens": [
"complex",
"word",
"identification"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8,
9
],
"text": "cornerstone process",
"tokens": [
"cornerstone",
"process"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "complex word identification",
"tokens": [
"complex",
"word",
"identification"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
17,
18
],
"text": "highly dependent",
"tokens": [
"highly",
"dependent"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
24
],
"text": "its difficulty",
"tokens": [
"complex",
"word",
"identification",
"difficulty"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
29,
30,
31,
32
],
"text": "scarcity of available datasets",
"tokens": [
"scarcity",
"of",
"available",
"datasets"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
26
],
"text": "augmented",
"tokens": [
"augmented"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "DST",
"offsets": [
31,
32
],
"text": "available datasets",
"tokens": [
"available",
"datasets"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
34,
35
],
"text": "vary greatly",
"tokens": [
"vary",
"greatly"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
34,
35
],
"text": "vary greatly",
"tokens": [
"vary",
"greatly"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
49,
50
],
"text": "more difficult",
"tokens": [
"more",
"difficult"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
52
],
"text": "develop",
"tokens": [
"develop"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
47,
48
],
"text": "becomes increasingly",
"tokens": [
"becomes",
"increasingly"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64
],
"text": "robust model that generalizes across a wide array of input examples",
"tokens": [
"robust",
"model",
"that",
"generalizes",
"across",
"a",
"wide",
"array",
"of",
"input",
"examples"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
52
],
"text": "develop",
"tokens": [
"develop"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
70
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
74,
75
],
"text": "training technique",
"tokens": [
"training",
"technique"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
79,
80,
81,
82,
83
],
"text": "cwi task based on domain adaptation",
"tokens": [
"complex",
"word",
"identification",
"task",
"based",
"on",
"domain",
"adaptation"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
85
],
"text": "improve",
"tokens": [
"improve"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
71
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
87,
88
],
"text": "target character",
"tokens": [
"target",
"character"
]
},
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
90,
91
],
"text": "context representations",
"tokens": [
"context",
"representations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
85
],
"text": "improve",
"tokens": [
"improve"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
121
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
125,
126,
127
],
"text": "similar auxiliary task",
"tokens": [
"similar",
"auxiliary",
"task"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
138
],
"text": "complement",
"tokens": [
"complement"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
123
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
139,
140,
141
],
"text": "lexical complexity prediction",
"tokens": [
"lexical",
"complexity",
"prediction"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
138
],
"text": "complement",
"tokens": [
"complement"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
143,
144
],
"text": "our model",
"tokens": [
"our",
"model"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
147
],
"text": "boost",
"tokens": [
"boost"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
149,
150,
151,
152,
153,
154
],
"text": "up to 2 . 42 %",
"tokens": [
"up",
"to",
"2",
".",
"42",
"%"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
155,
156,
157,
158,
159,
160
],
"text": "in terms of pearson correlation coefficients",
"tokens": [
"in",
"terms",
"of",
"pearson",
"correlation",
"coefficients"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
164,
165,
166
],
"text": "vanilla training techniques",
"tokens": [
"vanilla",
"training",
"techniques"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"text": "considering the complex from the lexical complexity prediction 2021 dataset",
"tokens": [
"considering",
"the",
"complex",
"from",
"the",
"lexical",
"complexity",
"prediction",
"2021",
"dataset"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
145
],
"text": "obtains",
"tokens": [
"obtains"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
143,
144
],
"text": "our model",
"tokens": [
"our",
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
193,
194
],
"text": "pearson scores",
"tokens": [
"pearson",
"scores"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
190,
191
],
"text": "3 %",
"tokens": [
"3",
"%"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
188
],
"text": "increase",
"tokens": [
"increase"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210
],
"text": "considering a cross - lingual setup relying on the complex word identification 2018 dataset",
"tokens": [
"considering",
"a",
"cross",
"-",
"lingual",
"setup",
"relying",
"on",
"the",
"complex",
"word",
"identification",
"2018",
"dataset"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
186
],
"text": "obtain",
"tokens": [
"obtain"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
215,
216
],
"text": "our model",
"tokens": [
"our",
"model"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
218,
219,
220,
221,
222,
223,
224
],
"text": "state - of - the - art",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
225
],
"text": "results",
"tokens": [
"results"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
226,
227,
228,
229,
230,
231
],
"text": "in terms of mean absolute error",
"tokens": [
"in",
"terms",
"of",
"mean",
"absolute",
"error"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
217
],
"text": "yields",
"tokens": [
"yields"
]
}
}
] |
[
"complex",
"word",
"identification",
"(",
"cwi",
")",
"is",
"a",
"cornerstone",
"process",
"towards",
"proper",
"text",
"simplification",
".",
"cwi",
"is",
"highly",
"dependent",
"on",
"context",
",",
"whereas",
"its",
"difficulty",
"is",
"augmented",
"by",
"the",
"scarcity",
"of",
"available",
"datasets",
"which",
"vary",
"greatly",
"in",
"terms",
"of",
"domains",
"and",
"languages",
".",
"as",
"such",
",",
"it",
"becomes",
"increasingly",
"more",
"difficult",
"to",
"develop",
"a",
"robust",
"model",
"that",
"generalizes",
"across",
"a",
"wide",
"array",
"of",
"input",
"examples",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"novel",
"training",
"technique",
"for",
"the",
"cwi",
"task",
"based",
"on",
"domain",
"adaptation",
"to",
"improve",
"the",
"target",
"character",
"and",
"context",
"representations",
".",
"this",
"technique",
"addresses",
"the",
"problem",
"of",
"working",
"with",
"multiple",
"domains",
",",
"inasmuch",
"as",
"it",
"creates",
"a",
"way",
"of",
"smoothing",
"the",
"differences",
"between",
"the",
"explored",
"datasets",
".",
"moreover",
",",
"we",
"also",
"propose",
"a",
"similar",
"auxiliary",
"task",
",",
"namely",
"text",
"simplification",
",",
"that",
"can",
"be",
"used",
"to",
"complement",
"lexical",
"complexity",
"prediction",
".",
"our",
"model",
"obtains",
"a",
"boost",
"of",
"up",
"to",
"2",
".",
"42",
"%",
"in",
"terms",
"of",
"pearson",
"correlation",
"coefficients",
"in",
"contrast",
"to",
"vanilla",
"training",
"techniques",
",",
"when",
"considering",
"the",
"complex",
"from",
"the",
"lexical",
"complexity",
"prediction",
"2021",
"dataset",
".",
"at",
"the",
"same",
"time",
",",
"we",
"obtain",
"an",
"increase",
"of",
"3",
"%",
"in",
"pearson",
"scores",
",",
"while",
"considering",
"a",
"cross",
"-",
"lingual",
"setup",
"relying",
"on",
"the",
"complex",
"word",
"identification",
"2018",
"dataset",
".",
"in",
"addition",
",",
"our",
"model",
"yields",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"in",
"terms",
"of",
"mean",
"absolute",
"error",
"."
] |
ACL
|
Substructure Distribution Projection for Zero-Shot Cross-Lingual Dependency Parsing
|
We present substructure distribution projection (SubDP), a technique that projects a distribution over structures in one domain to another, by projecting substructure distributions separately. Models for the target domain can then be trained, using the projected distributions as soft silver labels. We evaluate SubDP on zero shot cross-lingual dependency parsing, taking dependency arcs as substructures: we project the predicted dependency arc distributions in the source language(s) to target language(s), and train a target language parser on the resulting distributions. Given an English tree bank as the only source of human supervision, SubDP achieves better unlabeled attachment score than all prior work on the Universal Dependencies v2.2 (Nivre et al., 2020) test set across eight diverse target languages, as well as the best labeled attachment score on six languages. In addition, SubDP improves zero shot cross-lingual dependency parsing with very few (e.g., 50) supervised bitext pairs, across a broader range of target languages.
|
a11d23df083ec881504fcaf35594405c
| 2,022
|
[
"we present substructure distribution projection ( subdp ) , a technique that projects a distribution over structures in one domain to another , by projecting substructure distributions separately .",
"models for the target domain can then be trained , using the projected distributions as soft silver labels .",
"we evaluate subdp on zero shot cross - lingual dependency parsing , taking dependency arcs as substructures : we project the predicted dependency arc distributions in the source language ( s ) to target language ( s ) , and train a target language parser on the resulting distributions .",
"given an english tree bank as the only source of human supervision , subdp achieves better unlabeled attachment score than all prior work on the universal dependencies v2 . 2 ( nivre et al . , 2020 ) test set across eight diverse target languages , as well as the best labeled attachment score on six languages .",
"in addition , subdp improves zero shot cross - lingual dependency parsing with very few ( e . g . , 50 ) supervised bitext pairs , across a broader range of target languages ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
2,
3,
4
],
"text": "substructure distribution projection",
"tokens": [
"substructure",
"distribution",
"projection"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
1
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
25,
26
],
"text": "substructure distributions",
"tokens": [
"substructure",
"distributions"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
12
],
"text": "projects",
"tokens": [
"projects"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
27
],
"text": "separately",
"tokens": [
"separately"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
24
],
"text": "projecting",
"tokens": [
"projecting"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
14,
15,
16
],
"text": "distribution over structures",
"tokens": [
"distribution",
"over",
"structures"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
17,
18,
19,
20,
21
],
"text": "in one domain to another",
"tokens": [
"in",
"one",
"domain",
"to",
"another"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
12
],
"text": "projects",
"tokens": [
"projects"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
69,
70,
71,
72
],
"text": "predicted dependency arc distributions",
"tokens": [
"predicted",
"dependency",
"arc",
"distributions"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
81,
82
],
"text": "target language",
"tokens": [
"target",
"language"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
67
],
"text": "project",
"tokens": [
"project"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
2,
3,
4
],
"text": "substructure distribution projection",
"tokens": [
"substructure",
"distribution",
"projection"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
113,
114,
115,
116
],
"text": "better unlabeled attachment score",
"tokens": [
"better",
"unlabeled",
"attachment",
"score"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
118,
119,
120
],
"text": "all prior work",
"tokens": [
"all",
"prior",
"work"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
138,
139,
140,
141,
142
],
"text": "across eight diverse target languages",
"tokens": [
"across",
"eight",
"diverse",
"target",
"languages"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
123,
124,
125,
126,
127,
136,
137
],
"text": "universal dependencies v2 . 2 ( nivre et al . , 2020 ) test set",
"tokens": [
"universal",
"dependencies",
"v2",
".",
"2",
"test",
"set"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
112
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
2,
3,
4
],
"text": "substructure distribution projection",
"tokens": [
"substructure",
"distribution",
"projection"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
168,
169,
170,
179,
180,
181
],
"text": "with very few ( e . g . , 50 ) supervised bitext pairs",
"tokens": [
"with",
"very",
"few",
"supervised",
"bitext",
"pairs"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
183,
184,
185,
186,
187,
188,
189
],
"text": "across a broader range of target languages",
"tokens": [
"across",
"a",
"broader",
"range",
"of",
"target",
"languages"
]
},
{
"argument_type": "Object",
"nugget_type": "APP",
"offsets": [
161,
162,
163,
164,
165,
166,
167
],
"text": "zero shot cross - lingual dependency parsing",
"tokens": [
"zero",
"shot",
"cross",
"-",
"lingual",
"dependency",
"parsing"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
160
],
"text": "improves",
"tokens": [
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
29,
30,
31,
32,
33
],
"text": "models for the target domain",
"tokens": [
"models",
"for",
"the",
"target",
"domain"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
41,
42
],
"text": "projected distributions",
"tokens": [
"projected",
"distributions"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
43,
44,
45,
46
],
"text": "as soft silver labels",
"tokens": [
"as",
"soft",
"silver",
"labels"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
37
],
"text": "trained",
"tokens": [
"trained"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
49
],
"text": "evaluate",
"tokens": [
"evaluate"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
51,
52,
53,
54,
55,
56,
57,
58
],
"text": "on zero shot cross - lingual dependency parsing",
"tokens": [
"on",
"zero",
"shot",
"cross",
"-",
"lingual",
"dependency",
"parsing"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
61,
62
],
"text": "dependency arcs",
"tokens": [
"dependency",
"arcs"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
64
],
"text": "substructures",
"tokens": [
"substructures"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
60
],
"text": "taking",
"tokens": [
"taking"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
2,
3,
4
],
"text": "substructure distribution projection",
"tokens": [
"substructure",
"distribution",
"projection"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
49
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
95,
96
],
"text": "resulting distributions",
"tokens": [
"resulting",
"distributions"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
90,
91,
92
],
"text": "target language parser",
"tokens": [
"target",
"language",
"parser"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
88
],
"text": "train",
"tokens": [
"train"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
148,
149,
150,
151
],
"text": "best labeled attachment score",
"tokens": [
"best",
"labeled",
"attachment",
"score"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
2,
3,
4
],
"text": "substructure distribution projection",
"tokens": [
"substructure",
"distribution",
"projection"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
112
],
"text": "achieves",
"tokens": [
"achieves"
]
}
}
] |
[
"we",
"present",
"substructure",
"distribution",
"projection",
"(",
"subdp",
")",
",",
"a",
"technique",
"that",
"projects",
"a",
"distribution",
"over",
"structures",
"in",
"one",
"domain",
"to",
"another",
",",
"by",
"projecting",
"substructure",
"distributions",
"separately",
".",
"models",
"for",
"the",
"target",
"domain",
"can",
"then",
"be",
"trained",
",",
"using",
"the",
"projected",
"distributions",
"as",
"soft",
"silver",
"labels",
".",
"we",
"evaluate",
"subdp",
"on",
"zero",
"shot",
"cross",
"-",
"lingual",
"dependency",
"parsing",
",",
"taking",
"dependency",
"arcs",
"as",
"substructures",
":",
"we",
"project",
"the",
"predicted",
"dependency",
"arc",
"distributions",
"in",
"the",
"source",
"language",
"(",
"s",
")",
"to",
"target",
"language",
"(",
"s",
")",
",",
"and",
"train",
"a",
"target",
"language",
"parser",
"on",
"the",
"resulting",
"distributions",
".",
"given",
"an",
"english",
"tree",
"bank",
"as",
"the",
"only",
"source",
"of",
"human",
"supervision",
",",
"subdp",
"achieves",
"better",
"unlabeled",
"attachment",
"score",
"than",
"all",
"prior",
"work",
"on",
"the",
"universal",
"dependencies",
"v2",
".",
"2",
"(",
"nivre",
"et",
"al",
".",
",",
"2020",
")",
"test",
"set",
"across",
"eight",
"diverse",
"target",
"languages",
",",
"as",
"well",
"as",
"the",
"best",
"labeled",
"attachment",
"score",
"on",
"six",
"languages",
".",
"in",
"addition",
",",
"subdp",
"improves",
"zero",
"shot",
"cross",
"-",
"lingual",
"dependency",
"parsing",
"with",
"very",
"few",
"(",
"e",
".",
"g",
".",
",",
"50",
")",
"supervised",
"bitext",
"pairs",
",",
"across",
"a",
"broader",
"range",
"of",
"target",
"languages",
"."
] |
ACL
|
How can NLP Help Revitalize Endangered Languages? A Case Study and Roadmap for the Cherokee Language
|
More than 43% of the languages spoken in the world are endangered, and language loss currently occurs at an accelerated rate because of globalization and neocolonialism. Saving and revitalizing endangered languages has become very important for maintaining the cultural diversity on our planet. In this work, we focus on discussing how NLP can help revitalize endangered languages. We first suggest three principles that may help NLP practitioners to foster mutual understanding and collaboration with language communities, and we discuss three ways in which NLP can potentially assist in language education. We then take Cherokee, a severely-endangered Native American language, as a case study. After reviewing the language’s history, linguistic features, and existing resources, we (in collaboration with Cherokee community members) arrive at a few meaningful ways NLP practitioners can collaborate with community partners. We suggest two approaches to enrich the Cherokee language’s resources with machine-in-the-loop processing, and discuss several NLP tools that people from the Cherokee community have shown interest in. We hope that our work serves not only to inform the NLP community about Cherokee, but also to provide inspiration for future work on endangered languages in general.
|
6043d6347900bef1481b47f957a5bdf4
| 2,022
|
[
"more than 43 % of the languages spoken in the world are endangered , and language loss currently occurs at an accelerated rate because of globalization and neocolonialism .",
"saving and revitalizing endangered languages has become very important for maintaining the cultural diversity on our planet .",
"in this work , we focus on discussing how nlp can help revitalize endangered languages .",
"we first suggest three principles that may help nlp practitioners to foster mutual understanding and collaboration with language communities , and we discuss three ways in which nlp can potentially assist in language education .",
"we then take cherokee , a severely - endangered native american language , as a case study .",
"after reviewing the language ’ s history , linguistic features , and existing resources , we ( in collaboration with cherokee community members ) arrive at a few meaningful ways nlp practitioners can collaborate with community partners .",
"we suggest two approaches to enrich the cherokee language ’ s resources with machine - in - the - loop processing , and discuss several nlp tools that people from the cherokee community have shown interest in .",
"we hope that our work serves not only to inform the nlp community about cherokee , but also to provide inspiration for future work on endangered languages in general ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
41,
42
],
"text": "cultural diversity",
"tokens": [
"cultural",
"diversity"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
39
],
"text": "maintaining",
"tokens": [
"maintaining"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
51
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
56
],
"text": "nlp",
"tokens": [
"nlp"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
59
],
"text": "revitalize",
"tokens": [
"revitalize"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
54
],
"text": "discussing",
"tokens": [
"discussing"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
63
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
66,
67
],
"text": "three principles",
"tokens": [
"three",
"principles"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
74
],
"text": "foster",
"tokens": [
"foster"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
65
],
"text": "suggest",
"tokens": [
"suggest"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
84
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
86,
87
],
"text": "three ways",
"tokens": [
"three",
"ways"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
95,
96
],
"text": "language education",
"tokens": [
"language",
"education"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
85
],
"text": "discuss",
"tokens": [
"discuss"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
98
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
101
],
"text": "cherokee",
"tokens": [
"cherokee"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
111,
112,
113,
114
],
"text": "as a case study",
"tokens": [
"as",
"a",
"case",
"study"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
100
],
"text": "take",
"tokens": [
"take"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
131
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
119,
120,
121,
122
],
"text": "language ’ s history",
"tokens": [
"language",
"’",
"s",
"history"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
124,
125
],
"text": "linguistic features",
"tokens": [
"linguistic",
"features"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
128,
129
],
"text": "existing resources",
"tokens": [
"existing",
"resources"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
140
],
"text": "arrive",
"tokens": [
"arrive"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
117
],
"text": "reviewing",
"tokens": [
"reviewing"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
154
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
159
],
"text": "enrich",
"tokens": [
"enrich"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
167,
168,
169,
170,
171,
172,
173,
174
],
"text": "machine - in - the - loop processing",
"tokens": [
"machine",
"-",
"in",
"-",
"the",
"-",
"loop",
"processing"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
155
],
"text": "suggest",
"tokens": [
"suggest"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
161,
162,
163,
164,
165
],
"text": "cherokee language ’ s resources",
"tokens": [
"cherokee",
"language",
"’",
"s",
"resources"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
159
],
"text": "enrich",
"tokens": [
"enrich"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
154
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
178,
179,
180
],
"text": "several nlp tools",
"tokens": [
"several",
"nlp",
"tools"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
177
],
"text": "discuss",
"tokens": [
"discuss"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
60,
61
],
"text": "endangered languages",
"tokens": [
"endangered",
"languages"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
59
],
"text": "revitalize",
"tokens": [
"revitalize"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
75,
76
],
"text": "mutual understanding",
"tokens": [
"mutual",
"understanding"
]
},
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
75,
78
],
"text": "mutual collaboration",
"tokens": [
"mutual",
"collaboration"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
74
],
"text": "foster",
"tokens": [
"foster"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
143,
144,
145
],
"text": "few meaningful ways",
"tokens": [
"few",
"meaningful",
"ways"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
140
],
"text": "arrive",
"tokens": [
"arrive"
]
}
}
] |
[
"more",
"than",
"43",
"%",
"of",
"the",
"languages",
"spoken",
"in",
"the",
"world",
"are",
"endangered",
",",
"and",
"language",
"loss",
"currently",
"occurs",
"at",
"an",
"accelerated",
"rate",
"because",
"of",
"globalization",
"and",
"neocolonialism",
".",
"saving",
"and",
"revitalizing",
"endangered",
"languages",
"has",
"become",
"very",
"important",
"for",
"maintaining",
"the",
"cultural",
"diversity",
"on",
"our",
"planet",
".",
"in",
"this",
"work",
",",
"we",
"focus",
"on",
"discussing",
"how",
"nlp",
"can",
"help",
"revitalize",
"endangered",
"languages",
".",
"we",
"first",
"suggest",
"three",
"principles",
"that",
"may",
"help",
"nlp",
"practitioners",
"to",
"foster",
"mutual",
"understanding",
"and",
"collaboration",
"with",
"language",
"communities",
",",
"and",
"we",
"discuss",
"three",
"ways",
"in",
"which",
"nlp",
"can",
"potentially",
"assist",
"in",
"language",
"education",
".",
"we",
"then",
"take",
"cherokee",
",",
"a",
"severely",
"-",
"endangered",
"native",
"american",
"language",
",",
"as",
"a",
"case",
"study",
".",
"after",
"reviewing",
"the",
"language",
"’",
"s",
"history",
",",
"linguistic",
"features",
",",
"and",
"existing",
"resources",
",",
"we",
"(",
"in",
"collaboration",
"with",
"cherokee",
"community",
"members",
")",
"arrive",
"at",
"a",
"few",
"meaningful",
"ways",
"nlp",
"practitioners",
"can",
"collaborate",
"with",
"community",
"partners",
".",
"we",
"suggest",
"two",
"approaches",
"to",
"enrich",
"the",
"cherokee",
"language",
"’",
"s",
"resources",
"with",
"machine",
"-",
"in",
"-",
"the",
"-",
"loop",
"processing",
",",
"and",
"discuss",
"several",
"nlp",
"tools",
"that",
"people",
"from",
"the",
"cherokee",
"community",
"have",
"shown",
"interest",
"in",
".",
"we",
"hope",
"that",
"our",
"work",
"serves",
"not",
"only",
"to",
"inform",
"the",
"nlp",
"community",
"about",
"cherokee",
",",
"but",
"also",
"to",
"provide",
"inspiration",
"for",
"future",
"work",
"on",
"endangered",
"languages",
"in",
"general",
"."
] |
ACL
|
A Girl Has A Name: Detecting Authorship Obfuscation
|
Authorship attribution aims to identify the author of a text based on the stylometric analysis. Authorship obfuscation, on the other hand, aims to protect against authorship attribution by modifying a text’s style. In this paper, we evaluate the stealthiness of state-of-the-art authorship obfuscation methods under an adversarial threat model. An obfuscator is stealthy to the extent an adversary finds it challenging to detect whether or not a text modified by the obfuscator is obfuscated – a decision that is key to the adversary interested in authorship attribution. We show that the existing authorship obfuscation methods are not stealthy as their obfuscated texts can be identified with an average F1 score of 0.87. The reason for the lack of stealthiness is that these obfuscators degrade text smoothness, as ascertained by neural language models, in a detectable manner. Our results highlight the need to develop stealthy authorship obfuscation methods that can better protect the identity of an author seeking anonymity.
|
8b3ecf2416971047a43e36034187cb14
| 2,020
|
[
"authorship attribution aims to identify the author of a text based on the stylometric analysis .",
"authorship obfuscation , on the other hand , aims to protect against authorship attribution by modifying a text ’ s style .",
"in this paper , we evaluate the stealthiness of state - of - the - art authorship obfuscation methods under an adversarial threat model .",
"an obfuscator is stealthy to the extent an adversary finds it challenging to detect whether or not a text modified by the obfuscator is obfuscated – a decision that is key to the adversary interested in authorship attribution .",
"we show that the existing authorship obfuscation methods are not stealthy as their obfuscated texts can be identified with an average f1 score of 0 . 87 .",
"the reason for the lack of stealthiness is that these obfuscators degrade text smoothness , as ascertained by neural language models , in a detectable manner .",
"our results highlight the need to develop stealthy authorship obfuscation methods that can better protect the identity of an author seeking anonymity ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
16,
17
],
"text": "authorship obfuscation",
"tokens": [
"authorship",
"obfuscation"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
24
],
"text": "aims",
"tokens": [
"aims"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
42
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56
],
"text": "stealthiness of state - of - the - art authorship obfuscation methods",
"tokens": [
"stealthiness",
"of",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"authorship",
"obfuscation",
"methods"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
43
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
102
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
111,
112
],
"text": "not stealthy",
"tokens": [
"not",
"stealthy"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
103
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
106,
107,
108,
109
],
"text": "existing authorship obfuscation methods",
"tokens": [
"existing",
"authorship",
"obfuscation",
"methods"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
123,
124
],
"text": "f1 score",
"tokens": [
"f1",
"score"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
126,
127,
128
],
"text": "0 . 87",
"tokens": [
"0",
".",
"87"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
111,
112
],
"text": "not stealthy",
"tokens": [
"not",
"stealthy"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "FEA",
"offsets": [
114,
115,
116
],
"text": "their obfuscated texts",
"tokens": [
"their",
"obfuscated",
"texts"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
119
],
"text": "identified",
"tokens": [
"identified"
]
}
}
] |
[
"authorship",
"attribution",
"aims",
"to",
"identify",
"the",
"author",
"of",
"a",
"text",
"based",
"on",
"the",
"stylometric",
"analysis",
".",
"authorship",
"obfuscation",
",",
"on",
"the",
"other",
"hand",
",",
"aims",
"to",
"protect",
"against",
"authorship",
"attribution",
"by",
"modifying",
"a",
"text",
"’",
"s",
"style",
".",
"in",
"this",
"paper",
",",
"we",
"evaluate",
"the",
"stealthiness",
"of",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"authorship",
"obfuscation",
"methods",
"under",
"an",
"adversarial",
"threat",
"model",
".",
"an",
"obfuscator",
"is",
"stealthy",
"to",
"the",
"extent",
"an",
"adversary",
"finds",
"it",
"challenging",
"to",
"detect",
"whether",
"or",
"not",
"a",
"text",
"modified",
"by",
"the",
"obfuscator",
"is",
"obfuscated",
"–",
"a",
"decision",
"that",
"is",
"key",
"to",
"the",
"adversary",
"interested",
"in",
"authorship",
"attribution",
".",
"we",
"show",
"that",
"the",
"existing",
"authorship",
"obfuscation",
"methods",
"are",
"not",
"stealthy",
"as",
"their",
"obfuscated",
"texts",
"can",
"be",
"identified",
"with",
"an",
"average",
"f1",
"score",
"of",
"0",
".",
"87",
".",
"the",
"reason",
"for",
"the",
"lack",
"of",
"stealthiness",
"is",
"that",
"these",
"obfuscators",
"degrade",
"text",
"smoothness",
",",
"as",
"ascertained",
"by",
"neural",
"language",
"models",
",",
"in",
"a",
"detectable",
"manner",
".",
"our",
"results",
"highlight",
"the",
"need",
"to",
"develop",
"stealthy",
"authorship",
"obfuscation",
"methods",
"that",
"can",
"better",
"protect",
"the",
"identity",
"of",
"an",
"author",
"seeking",
"anonymity",
"."
] |
ACL
|
Modeling Bilingual Conversational Characteristics for Neural Chat Translation
|
Neural chat translation aims to translate bilingual conversational text, which has a broad application in international exchanges and cooperation. Despite the impressive performance of sentence-level and context-aware Neural Machine Translation (NMT), there still remain challenges to translate bilingual conversational text due to its inherent characteristics such as role preference, dialogue coherence, and translation consistency. In this paper, we aim to promote the translation quality of conversational text by modeling the above properties. Specifically, we design three latent variational modules to learn the distributions of bilingual conversational characteristics. Through sampling from these learned distributions, the latent variables, tailored for role preference, dialogue coherence, and translation consistency, are incorporated into the NMT model for better translation. We evaluate our approach on the benchmark dataset BConTrasT (English<->German) and a self-collected bilingual dialogue corpus, named BMELD (English<->Chinese). Extensive experiments show that our approach notably boosts the performance over strong baselines by a large margin and significantly surpasses some state-of-the-art context-aware NMT models in terms of BLEU and TER. Additionally, we make the BMELD dataset publicly available for the research community.
|
fc8c9608ce581c909cddbde7331f7951
| 2,021
|
[
"neural chat translation aims to translate bilingual conversational text , which has a broad application in international exchanges and cooperation .",
"despite the impressive performance of sentence - level and context - aware neural machine translation ( nmt ) , there still remain challenges to translate bilingual conversational text due to its inherent characteristics such as role preference , dialogue coherence , and translation consistency .",
"in this paper , we aim to promote the translation quality of conversational text by modeling the above properties .",
"specifically , we design three latent variational modules to learn the distributions of bilingual conversational characteristics .",
"through sampling from these learned distributions , the latent variables , tailored for role preference , dialogue coherence , and translation consistency , are incorporated into the nmt model for better translation .",
"we evaluate our approach on the benchmark dataset bcontrast ( english < - > german ) and a self - collected bilingual dialogue corpus , named bmeld ( english < - > chinese ) .",
"extensive experiments show that our approach notably boosts the performance over strong baselines by a large margin and significantly surpasses some state - of - the - art context - aware nmt models in terms of bleu and ter .",
"additionally , we make the bmeld dataset publicly available for the research community ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2
],
"text": "neural chat translation",
"tokens": [
"neural",
"chat",
"translation"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
5
],
"text": "translate",
"tokens": [
"translate"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
45
],
"text": "translate",
"tokens": [
"translate"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
43
],
"text": "challenges",
"tokens": [
"challenges"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
41,
42
],
"text": "still remain",
"tokens": [
"still",
"remain"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
46,
47,
48
],
"text": "bilingual conversational text",
"tokens": [
"bilingual",
"conversational",
"text"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
45
],
"text": "translate",
"tokens": [
"translate"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
83,
84
],
"text": "above properties",
"tokens": [
"above",
"properties"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
73
],
"text": "promote",
"tokens": [
"promote"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
81
],
"text": "modeling",
"tokens": [
"modeling"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
75,
76,
77,
78,
79
],
"text": "translation quality of conversational text",
"tokens": [
"translation",
"quality",
"of",
"conversational",
"text"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
73
],
"text": "promote",
"tokens": [
"promote"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
88
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
90,
91,
92,
93
],
"text": "three latent variational modules",
"tokens": [
"three",
"latent",
"variational",
"modules"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
95
],
"text": "learn",
"tokens": [
"learn"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
89
],
"text": "design",
"tokens": [
"design"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
97,
98,
99,
100,
101
],
"text": "distributions of bilingual conversational characteristics",
"tokens": [
"distributions",
"of",
"bilingual",
"conversational",
"characteristics"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
95
],
"text": "learn",
"tokens": [
"learn"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
104,
105,
106,
107,
108
],
"text": "sampling from these learned distributions",
"tokens": [
"sampling",
"from",
"these",
"learned",
"distributions"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
111,
112
],
"text": "latent variables",
"tokens": [
"latent",
"variables"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
33,
34,
35,
131
],
"text": "nmt model",
"tokens": [
"neural",
"machine",
"translation",
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
133,
134
],
"text": "better translation",
"tokens": [
"better",
"translation"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
127
],
"text": "incorporated",
"tokens": [
"incorporated"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
136
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
138,
139
],
"text": "our approach",
"tokens": [
"our",
"approach"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
142,
143,
144
],
"text": "benchmark dataset bcontrast",
"tokens": [
"benchmark",
"dataset",
"bcontrast"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
154,
155,
156,
157,
158,
159
],
"text": "self - collected bilingual dialogue corpus",
"tokens": [
"self",
"-",
"collected",
"bilingual",
"dialogue",
"corpus"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
137
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
175,
176
],
"text": "our approach",
"tokens": [
"our",
"approach"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
177,
178
],
"text": "notably boosts",
"tokens": [
"notably",
"boosts"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
180
],
"text": "performance",
"tokens": [
"performance"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
177,
178
],
"text": "notably boosts",
"tokens": [
"notably",
"boosts"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
175,
176
],
"text": "our approach",
"tokens": [
"our",
"approach"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
189,
190
],
"text": "significantly surpasses",
"tokens": [
"significantly",
"surpasses"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
33,
34,
35,
203
],
"text": "state - of - the - art context - aware nmt models",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"context",
"-",
"aware",
"neural",
"machine",
"translation",
"models"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
189,
190
],
"text": "significantly surpasses",
"tokens": [
"significantly",
"surpasses"
]
}
}
] |
[
"neural",
"chat",
"translation",
"aims",
"to",
"translate",
"bilingual",
"conversational",
"text",
",",
"which",
"has",
"a",
"broad",
"application",
"in",
"international",
"exchanges",
"and",
"cooperation",
".",
"despite",
"the",
"impressive",
"performance",
"of",
"sentence",
"-",
"level",
"and",
"context",
"-",
"aware",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
",",
"there",
"still",
"remain",
"challenges",
"to",
"translate",
"bilingual",
"conversational",
"text",
"due",
"to",
"its",
"inherent",
"characteristics",
"such",
"as",
"role",
"preference",
",",
"dialogue",
"coherence",
",",
"and",
"translation",
"consistency",
".",
"in",
"this",
"paper",
",",
"we",
"aim",
"to",
"promote",
"the",
"translation",
"quality",
"of",
"conversational",
"text",
"by",
"modeling",
"the",
"above",
"properties",
".",
"specifically",
",",
"we",
"design",
"three",
"latent",
"variational",
"modules",
"to",
"learn",
"the",
"distributions",
"of",
"bilingual",
"conversational",
"characteristics",
".",
"through",
"sampling",
"from",
"these",
"learned",
"distributions",
",",
"the",
"latent",
"variables",
",",
"tailored",
"for",
"role",
"preference",
",",
"dialogue",
"coherence",
",",
"and",
"translation",
"consistency",
",",
"are",
"incorporated",
"into",
"the",
"nmt",
"model",
"for",
"better",
"translation",
".",
"we",
"evaluate",
"our",
"approach",
"on",
"the",
"benchmark",
"dataset",
"bcontrast",
"(",
"english",
"<",
"-",
">",
"german",
")",
"and",
"a",
"self",
"-",
"collected",
"bilingual",
"dialogue",
"corpus",
",",
"named",
"bmeld",
"(",
"english",
"<",
"-",
">",
"chinese",
")",
".",
"extensive",
"experiments",
"show",
"that",
"our",
"approach",
"notably",
"boosts",
"the",
"performance",
"over",
"strong",
"baselines",
"by",
"a",
"large",
"margin",
"and",
"significantly",
"surpasses",
"some",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"context",
"-",
"aware",
"nmt",
"models",
"in",
"terms",
"of",
"bleu",
"and",
"ter",
".",
"additionally",
",",
"we",
"make",
"the",
"bmeld",
"dataset",
"publicly",
"available",
"for",
"the",
"research",
"community",
"."
] |
ACL
|
Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction
|
Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event. Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks. In this paper, we propose Text2Event, a sequence-to-structure generation paradigm that can directly extract events from the text in an end-to-end manner. Specifically, we design a sequence-to-structure network for unified event extraction, a constrained decoding algorithm for event knowledge injection during inference, and a curriculum learning algorithm for efficient model learning. Experimental results show that, by uniformly modeling all tasks in a single model and universally predicting different labels, our method can achieve competitive performance using only record-level annotations in both supervised learning and transfer learning settings.
|
c90cef1f516e8badf22a128c561106f9
| 2,021
|
[
"event extraction is challenging due to the complex structure of event records and the semantic gap between text and event .",
"traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks .",
"in this paper , we propose text2event , a sequence - to - structure generation paradigm that can directly extract events from the text in an end - to - end manner .",
"specifically , we design a sequence - to - structure network for unified event extraction , a constrained decoding algorithm for event knowledge injection during inference , and a curriculum learning algorithm for efficient model learning .",
"experimental results show that , by uniformly modeling all tasks in a single model and universally predicting different labels , our method can achieve competitive performance using only record - level annotations in both supervised learning and transfer learning settings ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "event extraction",
"tokens": [
"event",
"extraction"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
3
],
"text": "challenging",
"tokens": [
"challenging"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
42
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
47,
48,
49,
50,
51,
52,
53
],
"text": "sequence - to - structure generation paradigm",
"tokens": [
"sequence",
"-",
"to",
"-",
"structure",
"generation",
"paradigm"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
56,
57
],
"text": "directly extract",
"tokens": [
"directly",
"extract"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
43
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
73
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
76,
77,
78,
79,
80,
81
],
"text": "sequence - to - structure network",
"tokens": [
"sequence",
"-",
"to",
"-",
"structure",
"network"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
83,
84,
85
],
"text": "unified event extraction",
"tokens": [
"unified",
"event",
"extraction"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
74
],
"text": "design",
"tokens": [
"design"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
73
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
88,
89,
90
],
"text": "constrained decoding algorithm",
"tokens": [
"constrained",
"decoding",
"algorithm"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
92,
93,
94
],
"text": "event knowledge injection",
"tokens": [
"event",
"knowledge",
"injection"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
74
],
"text": "design",
"tokens": [
"design"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
73
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
100,
101,
102
],
"text": "curriculum learning algorithm",
"tokens": [
"curriculum",
"learning",
"algorithm"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
104,
105,
106
],
"text": "efficient model learning",
"tokens": [
"efficient",
"model",
"learning"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
74
],
"text": "design",
"tokens": [
"design"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
131
],
"text": "achieve",
"tokens": [
"achieve"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
110
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
129
],
"text": "method",
"tokens": [
"method"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
132,
133
],
"text": "competitive performance",
"tokens": [
"competitive",
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
134,
135,
136,
137,
138,
139
],
"text": "using only record - level annotations",
"tokens": [
"using",
"only",
"record",
"-",
"level",
"annotations"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
140,
141,
142,
143,
144,
145,
146,
147
],
"text": "in both supervised learning and transfer learning settings",
"tokens": [
"in",
"both",
"supervised",
"learning",
"and",
"transfer",
"learning",
"settings"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
131
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
21,
22
],
"text": "traditional methods",
"tokens": [
"traditional",
"methods"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
30,
31,
32,
33
],
"text": "complex structure prediction task",
"tokens": [
"complex",
"structure",
"prediction",
"task"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
35,
36
],
"text": "multiple subtasks",
"tokens": [
"multiple",
"subtasks"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
24
],
"text": "extract",
"tokens": [
"extract"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
28
],
"text": "decomposing",
"tokens": [
"decomposing"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
25,
26
],
"text": "event records",
"tokens": [
"event",
"records"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
24
],
"text": "extract",
"tokens": [
"extract"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
58,
59,
60,
61
],
"text": "events from the text",
"tokens": [
"events",
"from",
"the",
"text"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
62,
63,
64,
65,
66,
67,
68,
69
],
"text": "in an end - to - end manner",
"tokens": [
"in",
"an",
"end",
"-",
"to",
"-",
"end",
"manner"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
56,
57
],
"text": "directly extract",
"tokens": [
"directly",
"extract"
]
}
}
] |
[
"event",
"extraction",
"is",
"challenging",
"due",
"to",
"the",
"complex",
"structure",
"of",
"event",
"records",
"and",
"the",
"semantic",
"gap",
"between",
"text",
"and",
"event",
".",
"traditional",
"methods",
"usually",
"extract",
"event",
"records",
"by",
"decomposing",
"the",
"complex",
"structure",
"prediction",
"task",
"into",
"multiple",
"subtasks",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"text2event",
",",
"a",
"sequence",
"-",
"to",
"-",
"structure",
"generation",
"paradigm",
"that",
"can",
"directly",
"extract",
"events",
"from",
"the",
"text",
"in",
"an",
"end",
"-",
"to",
"-",
"end",
"manner",
".",
"specifically",
",",
"we",
"design",
"a",
"sequence",
"-",
"to",
"-",
"structure",
"network",
"for",
"unified",
"event",
"extraction",
",",
"a",
"constrained",
"decoding",
"algorithm",
"for",
"event",
"knowledge",
"injection",
"during",
"inference",
",",
"and",
"a",
"curriculum",
"learning",
"algorithm",
"for",
"efficient",
"model",
"learning",
".",
"experimental",
"results",
"show",
"that",
",",
"by",
"uniformly",
"modeling",
"all",
"tasks",
"in",
"a",
"single",
"model",
"and",
"universally",
"predicting",
"different",
"labels",
",",
"our",
"method",
"can",
"achieve",
"competitive",
"performance",
"using",
"only",
"record",
"-",
"level",
"annotations",
"in",
"both",
"supervised",
"learning",
"and",
"transfer",
"learning",
"settings",
"."
] |
ACL
|
Dynamic Online Conversation Recommendation
|
Trending topics in social media content evolve over time, and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner. Here we study dynamic online conversation recommendation, to help users engage in conversations that satisfy their evolving interests. While most prior work assumes static user interests, our model is able to capture the temporal aspects of user interests, and further handle future conversations that are unseen during training time. Concretely, we propose a neural architecture to exploit changes of user interactions and interests over time, to predict which discussions they are likely to enter. We conduct experiments on large-scale collections of Reddit conversations, and results on three subreddits show that our model significantly outperforms state-of-the-art models that make a static assumption of user interests. We further evaluate on handling “cold start”, and observe consistently better performance by our model when considering various degrees of sparsity of user’s chatting history and conversation contexts. Lastly, analyses on our model outputs indicate user interest change, explaining the advantage and efficacy of our approach.
|
ecbf30317158098be707681aa76603c6
| 2,020
|
[
"trending topics in social media content evolve over time , and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner .",
"here we study dynamic online conversation recommendation , to help users engage in conversations that satisfy their evolving interests .",
"while most prior work assumes static user interests , our model is able to capture the temporal aspects of user interests , and further handle future conversations that are unseen during training time .",
"concretely , we propose a neural architecture to exploit changes of user interactions and interests over time , to predict which discussions they are likely to enter .",
"we conduct experiments on large - scale collections of reddit conversations , and results on three subreddits show that our model significantly outperforms state - of - the - art models that make a static assumption of user interests .",
"we further evaluate on handling “ cold start ” , and observe consistently better performance by our model when considering various degrees of sparsity of user ’ s chatting history and conversation contexts .",
"lastly , analyses on our model outputs indicate user interest change , explaining the advantage and efficacy of our approach ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "trending topics in social media content",
"tokens": [
"trending",
"topics",
"in",
"social",
"media",
"content"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
7,
8
],
"text": "over time",
"tokens": [
"over",
"time"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
],
"text": "evolve",
"tokens": [
"evolve"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
30
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
32,
33,
34,
35
],
"text": "dynamic online conversation recommendation",
"tokens": [
"dynamic",
"online",
"conversation",
"recommendation"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
40
],
"text": "engage",
"tokens": [
"engage"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
31
],
"text": "study",
"tokens": [
"study"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
42,
43,
44,
39,
46,
47
],
"text": "conversations that satisfy their evolving interests",
"tokens": [
"conversations",
"that",
"satisfy",
"users",
"evolving",
"interests"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
40
],
"text": "engage",
"tokens": [
"engage"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
85
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
88,
89
],
"text": "neural architecture",
"tokens": [
"neural",
"architecture"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
91
],
"text": "exploit",
"tokens": [
"exploit"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
102
],
"text": "predict",
"tokens": [
"predict"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
86
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
92,
93,
94,
95,
96,
97
],
"text": "changes of user interactions and interests",
"tokens": [
"changes",
"of",
"user",
"interactions",
"and",
"interests"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
98,
99
],
"text": "over time",
"tokens": [
"over",
"time"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
91
],
"text": "exploit",
"tokens": [
"exploit"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
103,
104,
105,
106,
107,
108,
109
],
"text": "which discussions they are likely to enter",
"tokens": [
"which",
"discussions",
"they",
"are",
"likely",
"to",
"enter"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
102
],
"text": "predict",
"tokens": [
"predict"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
111
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
113
],
"text": "experiments",
"tokens": [
"experiments"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
114,
115,
116,
117,
118,
119,
120,
121
],
"text": "on large - scale collections of reddit conversations",
"tokens": [
"on",
"large",
"-",
"scale",
"collections",
"of",
"reddit",
"conversations"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
112
],
"text": "conduct",
"tokens": [
"conduct"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
130,
131
],
"text": "our model",
"tokens": [
"our",
"model"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
132,
133
],
"text": "significantly outperforms",
"tokens": [
"significantly",
"outperforms"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
132,
133
],
"text": "significantly outperforms",
"tokens": [
"significantly",
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
151
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
155,
156,
157,
158,
159
],
"text": "handling “ cold start ”",
"tokens": [
"handling",
"“",
"cold",
"start",
"”"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
152,
153
],
"text": "further evaluate",
"tokens": [
"further",
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
163,
164
],
"text": "consistently better",
"tokens": [
"consistently",
"better"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
165
],
"text": "performance",
"tokens": [
"performance"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
167,
168
],
"text": "our model",
"tokens": [
"our",
"model"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183
],
"text": "when considering various degrees of sparsity of user ’ s chatting history and conversation contexts",
"tokens": [
"when",
"considering",
"various",
"degrees",
"of",
"sparsity",
"of",
"user",
"’",
"s",
"chatting",
"history",
"and",
"conversation",
"contexts"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
162
],
"text": "observe",
"tokens": [
"observe"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
193,
194,
195
],
"text": "user interest change",
"tokens": [
"user",
"interest",
"change"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
192
],
"text": "indicate",
"tokens": [
"indicate"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
203,
204
],
"text": "our approach",
"tokens": [
"our",
"approach"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
199
],
"text": "advantage",
"tokens": [
"advantage"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
201
],
"text": "efficacy",
"tokens": [
"efficacy"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
197
],
"text": "explaining",
"tokens": [
"explaining"
]
}
}
] |
[
"trending",
"topics",
"in",
"social",
"media",
"content",
"evolve",
"over",
"time",
",",
"and",
"it",
"is",
"therefore",
"crucial",
"to",
"understand",
"social",
"media",
"users",
"and",
"their",
"interpersonal",
"communications",
"in",
"a",
"dynamic",
"manner",
".",
"here",
"we",
"study",
"dynamic",
"online",
"conversation",
"recommendation",
",",
"to",
"help",
"users",
"engage",
"in",
"conversations",
"that",
"satisfy",
"their",
"evolving",
"interests",
".",
"while",
"most",
"prior",
"work",
"assumes",
"static",
"user",
"interests",
",",
"our",
"model",
"is",
"able",
"to",
"capture",
"the",
"temporal",
"aspects",
"of",
"user",
"interests",
",",
"and",
"further",
"handle",
"future",
"conversations",
"that",
"are",
"unseen",
"during",
"training",
"time",
".",
"concretely",
",",
"we",
"propose",
"a",
"neural",
"architecture",
"to",
"exploit",
"changes",
"of",
"user",
"interactions",
"and",
"interests",
"over",
"time",
",",
"to",
"predict",
"which",
"discussions",
"they",
"are",
"likely",
"to",
"enter",
".",
"we",
"conduct",
"experiments",
"on",
"large",
"-",
"scale",
"collections",
"of",
"reddit",
"conversations",
",",
"and",
"results",
"on",
"three",
"subreddits",
"show",
"that",
"our",
"model",
"significantly",
"outperforms",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"models",
"that",
"make",
"a",
"static",
"assumption",
"of",
"user",
"interests",
".",
"we",
"further",
"evaluate",
"on",
"handling",
"“",
"cold",
"start",
"”",
",",
"and",
"observe",
"consistently",
"better",
"performance",
"by",
"our",
"model",
"when",
"considering",
"various",
"degrees",
"of",
"sparsity",
"of",
"user",
"’",
"s",
"chatting",
"history",
"and",
"conversation",
"contexts",
".",
"lastly",
",",
"analyses",
"on",
"our",
"model",
"outputs",
"indicate",
"user",
"interest",
"change",
",",
"explaining",
"the",
"advantage",
"and",
"efficacy",
"of",
"our",
"approach",
"."
] |
ACL
|
CogTaskonomy: Cognitively Inspired Task Taxonomy Is Beneficial to Transfer Learning in NLP
|
Is there a principle to guide transfer learning across tasks in natural language processing (NLP)? Taxonomy (Zamir et al., 2018) finds that a structure exists among visual tasks, as a principle underlying transfer learning for them. In this paper, we propose a cognitively inspired framework, CogTaskonomy, to learn taxonomy for NLP tasks. The framework consists of Cognitive Representation Analytics (CRA) and Cognitive-Neural Mapping (CNM). The former employs Representational Similarity Analysis, which is commonly used in computational neuroscience to find a correlation between brain-activity measurement and computational modeling, to estimate task similarity with task-specific sentence representations. The latter learns to detect task relations by projecting neural representations from NLP models to cognitive signals (i.e., fMRI voxels). Experiments on 12 NLP tasks, where BERT/TinyBERT are used as the underlying models for transfer learning, demonstrate that the proposed CogTaxonomy is able to guide transfer learning, achieving performance competitive to the Analytic Hierarchy Process (Saaty, 1987) used in visual Taskonomy (Zamir et al., 2018) but without requiring exhaustive pairwise O(m2) task transferring. Analyses further discover that CNM is capable of learning model-agnostic task taxonomy.
|
ec4c7ed76f51a240c6af18bb1f3ac226
| 2,022
|
[
"is there a principle to guide transfer learning across tasks in natural language processing ( nlp ) ?",
"taxonomy ( zamir et al . , 2018 ) finds that a structure exists among visual tasks , as a principle underlying transfer learning for them .",
"in this paper , we propose a cognitively inspired framework , cogtaskonomy , to learn taxonomy for nlp tasks .",
"the framework consists of cognitive representation analytics ( cra ) and cognitive - neural mapping ( cnm ) .",
"the former employs representational similarity analysis , which is commonly used in computational neuroscience to find a correlation between brain - activity measurement and computational modeling , to estimate task similarity with task - specific sentence representations .",
"the latter learns to detect task relations by projecting neural representations from nlp models to cognitive signals ( i . e . , fmri voxels ) .",
"experiments on 12 nlp tasks , where bert / tinybert are used as the underlying models for transfer learning , demonstrate that the proposed cogtaxonomy is able to guide transfer learning , achieving performance competitive to the analytic hierarchy process ( saaty , 1987 ) used in visual taskonomy ( zamir et al . , 2018 ) but without requiring exhaustive pairwise o ( m2 ) task transferring .",
"analyses further discover that cnm is capable of learning model - agnostic task taxonomy ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
6,
7,
8,
9
],
"text": "transfer learning across tasks",
"tokens": [
"transfer",
"learning",
"across",
"tasks"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
10,
11,
12,
13
],
"text": "in natural language processing",
"tokens": [
"in",
"natural",
"language",
"processing"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
5
],
"text": "guide",
"tokens": [
"guide"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
49
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
52,
53,
54
],
"text": "cognitively inspired framework",
"tokens": [
"cognitively",
"inspired",
"framework"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
59
],
"text": "learn",
"tokens": [
"learn"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
50
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
60,
61,
11,
12,
13,
63
],
"text": "taxonomy for nlp tasks",
"tokens": [
"taxonomy",
"for",
"natural",
"language",
"processing",
"tasks"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
59
],
"text": "learn",
"tokens": [
"learn"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
87,
88,
89
],
"text": "representational similarity analysis",
"tokens": [
"representational",
"similarity",
"analysis"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
86
],
"text": "employs",
"tokens": [
"employs"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
116,
117,
118,
119,
120
],
"text": "task - specific sentence representations",
"tokens": [
"task",
"-",
"specific",
"sentence",
"representations"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
113,
114
],
"text": "task similarity",
"tokens": [
"task",
"similarity"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
112
],
"text": "estimate",
"tokens": [
"estimate"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
124,
125,
126
],
"text": "learns to detect",
"tokens": [
"learns",
"to",
"detect"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
131,
132
],
"text": "neural representations",
"tokens": [
"neural",
"representations"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
130
],
"text": "projecting",
"tokens": [
"projecting"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
127,
128
],
"text": "task relations",
"tokens": [
"task",
"relations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
124,
125,
126
],
"text": "learns to detect",
"tokens": [
"learns",
"to",
"detect"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
173
],
"text": "cogtaxonomy",
"tokens": [
"cogtaxonomy"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
178,
179
],
"text": "transfer learning",
"tokens": [
"transfer",
"learning"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
177
],
"text": "guide",
"tokens": [
"guide"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
173
],
"text": "cogtaxonomy",
"tokens": [
"cogtaxonomy"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197
],
"text": "performance competitive to the analytic hierarchy process ( saaty , 1987 ) used in visual taskonomy",
"tokens": [
"performance",
"competitive",
"to",
"the",
"analytic",
"hierarchy",
"process",
"(",
"saaty",
",",
"1987",
")",
"used",
"in",
"visual",
"taskonomy"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
207,
208,
209,
210,
211,
212,
213,
214,
215,
216
],
"text": "without requiring exhaustive pairwise o ( m2 ) task transferring",
"tokens": [
"without",
"requiring",
"exhaustive",
"pairwise",
"o",
"(",
"m2",
")",
"task",
"transferring"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
181
],
"text": "achieving",
"tokens": [
"achieving"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
76,
77,
78,
79
],
"text": "cognitive - neural mapping",
"tokens": [
"cognitive",
"-",
"neural",
"mapping"
]
},
{
"argument_type": "Object",
"nugget_type": "APP",
"offsets": [
227,
228,
229,
230,
231
],
"text": "model - agnostic task taxonomy",
"tokens": [
"model",
"-",
"agnostic",
"task",
"taxonomy"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
224,
225,
226
],
"text": "capable of learning",
"tokens": [
"capable",
"of",
"learning"
]
}
}
] |
[
"is",
"there",
"a",
"principle",
"to",
"guide",
"transfer",
"learning",
"across",
"tasks",
"in",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"?",
"taxonomy",
"(",
"zamir",
"et",
"al",
".",
",",
"2018",
")",
"finds",
"that",
"a",
"structure",
"exists",
"among",
"visual",
"tasks",
",",
"as",
"a",
"principle",
"underlying",
"transfer",
"learning",
"for",
"them",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"cognitively",
"inspired",
"framework",
",",
"cogtaskonomy",
",",
"to",
"learn",
"taxonomy",
"for",
"nlp",
"tasks",
".",
"the",
"framework",
"consists",
"of",
"cognitive",
"representation",
"analytics",
"(",
"cra",
")",
"and",
"cognitive",
"-",
"neural",
"mapping",
"(",
"cnm",
")",
".",
"the",
"former",
"employs",
"representational",
"similarity",
"analysis",
",",
"which",
"is",
"commonly",
"used",
"in",
"computational",
"neuroscience",
"to",
"find",
"a",
"correlation",
"between",
"brain",
"-",
"activity",
"measurement",
"and",
"computational",
"modeling",
",",
"to",
"estimate",
"task",
"similarity",
"with",
"task",
"-",
"specific",
"sentence",
"representations",
".",
"the",
"latter",
"learns",
"to",
"detect",
"task",
"relations",
"by",
"projecting",
"neural",
"representations",
"from",
"nlp",
"models",
"to",
"cognitive",
"signals",
"(",
"i",
".",
"e",
".",
",",
"fmri",
"voxels",
")",
".",
"experiments",
"on",
"12",
"nlp",
"tasks",
",",
"where",
"bert",
"/",
"tinybert",
"are",
"used",
"as",
"the",
"underlying",
"models",
"for",
"transfer",
"learning",
",",
"demonstrate",
"that",
"the",
"proposed",
"cogtaxonomy",
"is",
"able",
"to",
"guide",
"transfer",
"learning",
",",
"achieving",
"performance",
"competitive",
"to",
"the",
"analytic",
"hierarchy",
"process",
"(",
"saaty",
",",
"1987",
")",
"used",
"in",
"visual",
"taskonomy",
"(",
"zamir",
"et",
"al",
".",
",",
"2018",
")",
"but",
"without",
"requiring",
"exhaustive",
"pairwise",
"o",
"(",
"m2",
")",
"task",
"transferring",
".",
"analyses",
"further",
"discover",
"that",
"cnm",
"is",
"capable",
"of",
"learning",
"model",
"-",
"agnostic",
"task",
"taxonomy",
"."
] |
ACL
|
A Compact and Language-Sensitive Multilingual Translation Method
|
Multilingual neural machine translation (Multi-NMT) with one encoder-decoder model has made remarkable progress due to its simple deployment. However, this multilingual translation paradigm does not make full use of language commonality and parameter sharing between encoder and decoder. Furthermore, this kind of paradigm cannot outperform the individual models trained on bilingual corpus in most cases. In this paper, we propose a compact and language-sensitive method for multilingual translation. To maximize parameter sharing, we first present a universal representor to replace both encoder and decoder models. To make the representor sensitive for specific languages, we further introduce language-sensitive embedding, attention, and discriminator with the ability to enhance model performance. We verify our methods on various translation scenarios, including one-to-many, many-to-many and zero-shot. Extensive experiments demonstrate that our proposed methods remarkably outperform strong standard multilingual translation systems on WMT and IWSLT datasets. Moreover, we find that our model is especially helpful in low-resource and zero-shot translation scenarios.
|
9b4710f5b7353775b86b4c553913c4bd
| 2,019
|
[
"multilingual neural machine translation ( multi - nmt ) with one encoder - decoder model has made remarkable progress due to its simple deployment .",
"however , this multilingual translation paradigm does not make full use of language commonality and parameter sharing between encoder and decoder .",
"furthermore , this kind of paradigm cannot outperform the individual models trained on bilingual corpus in most cases .",
"in this paper , we propose a compact and language - sensitive method for multilingual translation .",
"to maximize parameter sharing , we first present a universal representor to replace both encoder and decoder models .",
"to make the representor sensitive for specific languages , we further introduce language - sensitive embedding , attention , and discriminator with the ability to enhance model performance .",
"we verify our methods on various translation scenarios , including one - to - many , many - to - many and zero - shot .",
"extensive experiments demonstrate that our proposed methods remarkably outperform strong standard multilingual translation systems on wmt and iwslt datasets .",
"moreover , we find that our model is especially helpful in low - resource and zero - shot translation scenarios ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14
],
"text": "multilingual neural machine translation ( multi - nmt ) with one encoder - decoder model",
"tokens": [
"multilingual",
"neural",
"machine",
"translation",
"(",
"multi",
"-",
"nmt",
")",
"with",
"one",
"encoder",
"-",
"decoder",
"model"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
16
],
"text": "made",
"tokens": [
"made"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
28,
29,
30
],
"text": "multilingual translation paradigm",
"tokens": [
"multilingual",
"translation",
"paradigm"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
32,
33,
34,
35
],
"text": "not make full use",
"tokens": [
"not",
"make",
"full",
"use"
]
},
{
"argument_type": "Target",
"nugget_type": "STR",
"offsets": [
37,
38
],
"text": "language commonality",
"tokens": [
"language",
"commonality"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
32,
33,
34,
35
],
"text": "not make full use",
"tokens": [
"not",
"make",
"full",
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
70
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
73,
74,
75,
76,
77,
78
],
"text": "compact and language - sensitive method",
"tokens": [
"compact",
"and",
"language",
"-",
"sensitive",
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
80,
81
],
"text": "multilingual translation",
"tokens": [
"multilingual",
"translation"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
71
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
84
],
"text": "maximize",
"tokens": [
"maximize"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
92,
93
],
"text": "universal representor",
"tokens": [
"universal",
"representor"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
95
],
"text": "replace",
"tokens": [
"replace"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
90
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
85,
86
],
"text": "parameter sharing",
"tokens": [
"parameter",
"sharing"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
84
],
"text": "maximize",
"tokens": [
"maximize"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
103
],
"text": "make",
"tokens": [
"make"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
114,
115,
116,
117,
118,
119,
120,
121,
122
],
"text": "language - sensitive embedding , attention , and discriminator",
"tokens": [
"language",
"-",
"sensitive",
"embedding",
",",
"attention",
",",
"and",
"discriminator"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
127
],
"text": "enhance",
"tokens": [
"enhance"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
113
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
105,
106,
107,
108,
109
],
"text": "representor sensitive for specific languages",
"tokens": [
"representor",
"sensitive",
"for",
"specific",
"languages"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
103
],
"text": "make",
"tokens": [
"make"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
97,
98,
99,
100
],
"text": "encoder and decoder models",
"tokens": [
"encoder",
"and",
"decoder",
"models"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
95
],
"text": "replace",
"tokens": [
"replace"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
128,
129
],
"text": "model performance",
"tokens": [
"model",
"performance"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
127
],
"text": "enhance",
"tokens": [
"enhance"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
165
],
"text": "outperform",
"tokens": [
"outperform"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
159
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
165
],
"text": "outperform",
"tokens": [
"outperform"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
164
],
"text": "remarkably",
"tokens": [
"remarkably"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
167,
168,
169,
170
],
"text": "standard multilingual translation systems",
"tokens": [
"standard",
"multilingual",
"translation",
"systems"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
166
],
"text": "strong",
"tokens": [
"strong"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
73,
74,
75,
76,
77,
78
],
"text": "compact and language - sensitive method",
"tokens": [
"compact",
"and",
"language",
"-",
"sensitive",
"method"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
172
],
"text": "wmt",
"tokens": [
"wmt"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
165
],
"text": "outperform",
"tokens": [
"outperform"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
186
],
"text": "helpful",
"tokens": [
"helpful"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
180
],
"text": "find",
"tokens": [
"find"
]
}
},
{
"arguments": [
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
185
],
"text": "especially",
"tokens": [
"especially"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
73,
74,
75,
76,
77,
78
],
"text": "compact and language - sensitive method",
"tokens": [
"compact",
"and",
"language",
"-",
"sensitive",
"method"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
187,
188,
189,
190,
191,
192,
193,
194,
195,
196
],
"text": "in low - resource and zero - shot translation scenarios",
"tokens": [
"in",
"low",
"-",
"resource",
"and",
"zero",
"-",
"shot",
"translation",
"scenarios"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
186
],
"text": "helpful",
"tokens": [
"helpful"
]
}
}
] |
[
"multilingual",
"neural",
"machine",
"translation",
"(",
"multi",
"-",
"nmt",
")",
"with",
"one",
"encoder",
"-",
"decoder",
"model",
"has",
"made",
"remarkable",
"progress",
"due",
"to",
"its",
"simple",
"deployment",
".",
"however",
",",
"this",
"multilingual",
"translation",
"paradigm",
"does",
"not",
"make",
"full",
"use",
"of",
"language",
"commonality",
"and",
"parameter",
"sharing",
"between",
"encoder",
"and",
"decoder",
".",
"furthermore",
",",
"this",
"kind",
"of",
"paradigm",
"cannot",
"outperform",
"the",
"individual",
"models",
"trained",
"on",
"bilingual",
"corpus",
"in",
"most",
"cases",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"compact",
"and",
"language",
"-",
"sensitive",
"method",
"for",
"multilingual",
"translation",
".",
"to",
"maximize",
"parameter",
"sharing",
",",
"we",
"first",
"present",
"a",
"universal",
"representor",
"to",
"replace",
"both",
"encoder",
"and",
"decoder",
"models",
".",
"to",
"make",
"the",
"representor",
"sensitive",
"for",
"specific",
"languages",
",",
"we",
"further",
"introduce",
"language",
"-",
"sensitive",
"embedding",
",",
"attention",
",",
"and",
"discriminator",
"with",
"the",
"ability",
"to",
"enhance",
"model",
"performance",
".",
"we",
"verify",
"our",
"methods",
"on",
"various",
"translation",
"scenarios",
",",
"including",
"one",
"-",
"to",
"-",
"many",
",",
"many",
"-",
"to",
"-",
"many",
"and",
"zero",
"-",
"shot",
".",
"extensive",
"experiments",
"demonstrate",
"that",
"our",
"proposed",
"methods",
"remarkably",
"outperform",
"strong",
"standard",
"multilingual",
"translation",
"systems",
"on",
"wmt",
"and",
"iwslt",
"datasets",
".",
"moreover",
",",
"we",
"find",
"that",
"our",
"model",
"is",
"especially",
"helpful",
"in",
"low",
"-",
"resource",
"and",
"zero",
"-",
"shot",
"translation",
"scenarios",
"."
] |
ACL
|
Learning to Relate from Captions and Bounding Boxes
|
In this work, we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision. Our proposed approach uses a top-down attention mechanism to align entities in captions to objects in the image, and then leverage the syntactic structure of the captions to align the relations. We use these alignments to train a relation classification network, thereby obtaining both grounded captions and dense relationships. We demonstrate the effectiveness of our model on the Visual Genome dataset by achieving a recall@50 of 15% and recall@100 of 25% on the relationships present in the image. We also show that the model successfully predicts relations that are not present in the corresponding captions.
|
33686d9a1d7a3deafab74a274c9f44ac
| 2,019
|
[
"in this work , we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision .",
"our proposed approach uses a top - down attention mechanism to align entities in captions to objects in the image , and then leverage the syntactic structure of the captions to align the relations .",
"we use these alignments to train a relation classification network , thereby obtaining both grounded captions and dense relationships .",
"we demonstrate the effectiveness of our model on the visual genome dataset by achieving a recall @ 50 of 15 % and recall @ 100 of 25 % on the relationships present in the image .",
"we also show that the model successfully predicts relations that are not present in the corresponding captions ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
8
],
"text": "approach",
"tokens": [
"approach"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
10
],
"text": "predicts",
"tokens": [
"predicts"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
5
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
27,
28
],
"text": "image captions",
"tokens": [
"image",
"captions"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
30,
31,
32,
33
],
"text": "object bounding box annotations",
"tokens": [
"object",
"bounding",
"box",
"annotations"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
10
],
"text": "predicts",
"tokens": [
"predicts"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
25
],
"text": "relying",
"tokens": [
"relying"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
12,
13,
14,
15,
16,
17,
18
],
"text": "relationships between various entities in an image",
"tokens": [
"relationships",
"between",
"various",
"entities",
"in",
"an",
"image"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
19,
20,
21,
22,
23
],
"text": "in a weakly supervised manner",
"tokens": [
"in",
"a",
"weakly",
"supervised",
"manner"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
10
],
"text": "predicts",
"tokens": [
"predicts"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
46,
47,
48,
49,
50
],
"text": "top - down attention mechanism",
"tokens": [
"top",
"-",
"down",
"attention",
"mechanism"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
53,
54,
55
],
"text": "entities in captions",
"tokens": [
"entities",
"in",
"captions"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
57,
58,
59,
60
],
"text": "objects in the image",
"tokens": [
"objects",
"in",
"the",
"image"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
44
],
"text": "uses",
"tokens": [
"uses"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
66,
67,
68,
69,
70
],
"text": "syntactic structure of the captions",
"tokens": [
"syntactic",
"structure",
"of",
"the",
"captions"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
74
],
"text": "relations",
"tokens": [
"relations"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
64
],
"text": "leverage",
"tokens": [
"leverage"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
74
],
"text": "relations",
"tokens": [
"relations"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
83,
84,
85
],
"text": "relation classification network",
"tokens": [
"relation",
"classification",
"network"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
88
],
"text": "obtaining",
"tokens": [
"obtaining"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
77
],
"text": "use",
"tokens": [
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
90,
91
],
"text": "grounded captions",
"tokens": [
"grounded",
"captions"
]
},
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
93,
94
],
"text": "dense relationships",
"tokens": [
"dense",
"relationships"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
88
],
"text": "obtaining",
"tokens": [
"obtaining"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
99
],
"text": "effectiveness",
"tokens": [
"effectiveness"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
8
],
"text": "approach",
"tokens": [
"approach"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
105,
106,
107
],
"text": "visual genome dataset",
"tokens": [
"visual",
"genome",
"dataset"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
97
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
111,
112,
113
],
"text": "recall @ 50",
"tokens": [
"recall",
"@",
"50"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
115,
116
],
"text": "15 %",
"tokens": [
"15",
"%"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
118,
119,
120
],
"text": "recall @ 100",
"tokens": [
"recall",
"@",
"100"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
122,
123
],
"text": "25 %",
"tokens": [
"25",
"%"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
124,
125,
126,
127,
128,
129,
130
],
"text": "on the relationships present in the image",
"tokens": [
"on",
"the",
"relationships",
"present",
"in",
"the",
"image"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
109
],
"text": "achieving",
"tokens": [
"achieving"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
132
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
138,
139
],
"text": "successfully predicts",
"tokens": [
"successfully",
"predicts"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
134
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
8
],
"text": "approach",
"tokens": [
"approach"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
140,
141,
142,
143,
144,
145,
146,
147,
148
],
"text": "relations that are not present in the corresponding captions",
"tokens": [
"relations",
"that",
"are",
"not",
"present",
"in",
"the",
"corresponding",
"captions"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
138,
139
],
"text": "successfully predicts",
"tokens": [
"successfully",
"predicts"
]
}
}
] |
[
"in",
"this",
"work",
",",
"we",
"propose",
"a",
"novel",
"approach",
"that",
"predicts",
"the",
"relationships",
"between",
"various",
"entities",
"in",
"an",
"image",
"in",
"a",
"weakly",
"supervised",
"manner",
"by",
"relying",
"on",
"image",
"captions",
"and",
"object",
"bounding",
"box",
"annotations",
"as",
"the",
"sole",
"source",
"of",
"supervision",
".",
"our",
"proposed",
"approach",
"uses",
"a",
"top",
"-",
"down",
"attention",
"mechanism",
"to",
"align",
"entities",
"in",
"captions",
"to",
"objects",
"in",
"the",
"image",
",",
"and",
"then",
"leverage",
"the",
"syntactic",
"structure",
"of",
"the",
"captions",
"to",
"align",
"the",
"relations",
".",
"we",
"use",
"these",
"alignments",
"to",
"train",
"a",
"relation",
"classification",
"network",
",",
"thereby",
"obtaining",
"both",
"grounded",
"captions",
"and",
"dense",
"relationships",
".",
"we",
"demonstrate",
"the",
"effectiveness",
"of",
"our",
"model",
"on",
"the",
"visual",
"genome",
"dataset",
"by",
"achieving",
"a",
"recall",
"@",
"50",
"of",
"15",
"%",
"and",
"recall",
"@",
"100",
"of",
"25",
"%",
"on",
"the",
"relationships",
"present",
"in",
"the",
"image",
".",
"we",
"also",
"show",
"that",
"the",
"model",
"successfully",
"predicts",
"relations",
"that",
"are",
"not",
"present",
"in",
"the",
"corresponding",
"captions",
"."
] |
ACL
|
DialogVED: A Pre-trained Latent Variable Encoder-Decoder Model for Dialog Response Generation
|
Dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses. In this paper, we propose a new dialog pre-training framework called DialogVED, which introduces continuous latent variables into the enhanced encoder-decoder pre-training framework to increase the relevance and diversity of responses. With the help of a large dialog corpus (Reddit), we pre-train the model using the following 4 tasks, used in training language models (LMs) and Variational Autoencoders (VAEs) literature: 1) masked language model; 2) response generation; 3) bag-of-words prediction; and 4) KL divergence reduction. We also add additional parameters to model the turn structure in dialogs to improve the performance of the pre-trained model. We conduct experiments on PersonaChat, DailyDialog, and DSTC7-AVSD benchmarks for response generation. Experimental results show that our model achieves the new state-of-the-art results on all these datasets.
|
b8ec0641db84e87cab1967e91a0b4bf7
| 2,022
|
[
"dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses .",
"in this paper , we propose a new dialog pre - training framework called dialogved , which introduces continuous latent variables into the enhanced encoder - decoder pre - training framework to increase the relevance and diversity of responses .",
"with the help of a large dialog corpus ( reddit ) , we pre - train the model using the following 4 tasks , used in training language models ( lms ) and variational autoencoders ( vaes ) literature : 1 ) masked language model ; 2 ) response generation ; 3 ) bag - of - words prediction ; and 4 ) kl divergence reduction .",
"we also add additional parameters to model the turn structure in dialogs to improve the performance of the pre - trained model .",
"we conduct experiments on personachat , dailydialog , and dstc7 - avsd benchmarks for response generation .",
"experimental results show that our model achieves the new state - of - the - art results on all these datasets ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "dialog response generation",
"tokens": [
"dialog",
"response",
"generation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
3,
4,
5
],
"text": "in open domain",
"tokens": [
"in",
"open",
"domain"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8,
9,
10
],
"text": "important research topic",
"tokens": [
"important",
"research",
"topic"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
27
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
31,
32,
33,
34,
35
],
"text": "dialog pre - training framework",
"tokens": [
"dialog",
"pre",
"-",
"training",
"framework"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
28
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
41,
42,
43
],
"text": "continuous latent variables",
"tokens": [
"continuous",
"latent",
"variables"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
46,
47,
48,
49,
50,
51,
52,
53
],
"text": "enhanced encoder - decoder pre - training framework",
"tokens": [
"enhanced",
"encoder",
"-",
"decoder",
"pre",
"-",
"training",
"framework"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
55
],
"text": "increase",
"tokens": [
"increase"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
40
],
"text": "introduces",
"tokens": [
"introduces"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
57,
60,
61
],
"text": "relevance of responses",
"tokens": [
"relevance",
"of",
"responses"
]
},
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
59,
60,
61
],
"text": "diversity of responses",
"tokens": [
"diversity",
"of",
"responses"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
55
],
"text": "increase",
"tokens": [
"increase"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
75
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
76,
77,
78
],
"text": "pre - train",
"tokens": [
"pre",
"-",
"train"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
87,
88,
89,
90,
91,
95,
96,
97,
101
],
"text": "used in training language models ( lms ) and variational autoencoders ( vaes ) literature",
"tokens": [
"used",
"in",
"training",
"language",
"models",
"and",
"variational",
"autoencoders",
"literature"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
105,
106,
107
],
"text": "masked language model",
"tokens": [
"masked",
"language",
"model"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
81
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
37
],
"text": "dialogved",
"tokens": [
"dialogved"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
76,
77,
78
],
"text": "pre - train",
"tokens": [
"pre",
"-",
"train"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
75
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
111,
112
],
"text": "response generation",
"tokens": [
"response",
"generation"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
76,
77,
78
],
"text": "pre - train",
"tokens": [
"pre",
"-",
"train"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
87,
88,
89,
90,
91,
95,
96,
97,
101
],
"text": "used in training language models ( lms ) and variational autoencoders ( vaes ) literature",
"tokens": [
"used",
"in",
"training",
"language",
"models",
"and",
"variational",
"autoencoders",
"literature"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
81
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
75
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
116,
117,
118,
119,
120,
121
],
"text": "bag - of - words prediction",
"tokens": [
"bag",
"-",
"of",
"-",
"words",
"prediction"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
76,
77,
78
],
"text": "pre - train",
"tokens": [
"pre",
"-",
"train"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
87,
88,
89,
90,
91,
95,
96,
97,
101
],
"text": "used in training language models ( lms ) and variational autoencoders ( vaes ) literature",
"tokens": [
"used",
"in",
"training",
"language",
"models",
"and",
"variational",
"autoencoders",
"literature"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
81
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
75
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
126,
127,
128
],
"text": "kl divergence reduction",
"tokens": [
"kl",
"divergence",
"reduction"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
76,
77,
78
],
"text": "pre - train",
"tokens": [
"pre",
"-",
"train"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
87,
88,
89,
90,
91,
95,
96,
97,
101,
102,
103,
104,
105,
106,
107
],
"text": "used in training language models ( lms ) and variational autoencoders ( vaes ) literature : 1 ) masked language model",
"tokens": [
"used",
"in",
"training",
"language",
"models",
"and",
"variational",
"autoencoders",
"literature",
":",
"1",
")",
"masked",
"language",
"model"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
81
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
130
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
133,
134
],
"text": "additional parameters",
"tokens": [
"additional",
"parameters"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
132
],
"text": "add",
"tokens": [
"add"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
138,
139
],
"text": "turn structure",
"tokens": [
"turn",
"structure"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
140,
141
],
"text": "in dialogs",
"tokens": [
"in",
"dialogs"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
143
],
"text": "improve",
"tokens": [
"improve"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
136
],
"text": "model",
"tokens": [
"model"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
145,
146,
147,
148,
149,
150,
151
],
"text": "performance of the pre - trained model",
"tokens": [
"performance",
"of",
"the",
"pre",
"-",
"trained",
"model"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
143
],
"text": "improve",
"tokens": [
"improve"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
153
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
155
],
"text": "experiments",
"tokens": [
"experiments"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
167,
168
],
"text": "response generation",
"tokens": [
"response",
"generation"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
157
],
"text": "personachat",
"tokens": [
"personachat"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
159
],
"text": "dailydialog",
"tokens": [
"dailydialog"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
162,
163,
164,
165
],
"text": "dstc7 - avsd benchmarks",
"tokens": [
"dstc7",
"-",
"avsd",
"benchmarks"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
154
],
"text": "conduct",
"tokens": [
"conduct"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
176
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
172
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
179,
180,
181,
182,
183,
184,
185,
186
],
"text": "state - of - the - art results",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
157
],
"text": "personachat",
"tokens": [
"personachat"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
159
],
"text": "dailydialog",
"tokens": [
"dailydialog"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
162,
163,
164,
165
],
"text": "dstc7 - avsd benchmarks",
"tokens": [
"dstc7",
"-",
"avsd",
"benchmarks"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
37
],
"text": "dialogved",
"tokens": [
"dialogved"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
176
],
"text": "achieves",
"tokens": [
"achieves"
]
}
}
] |
[
"dialog",
"response",
"generation",
"in",
"open",
"domain",
"is",
"an",
"important",
"research",
"topic",
"where",
"the",
"main",
"challenge",
"is",
"to",
"generate",
"relevant",
"and",
"diverse",
"responses",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"new",
"dialog",
"pre",
"-",
"training",
"framework",
"called",
"dialogved",
",",
"which",
"introduces",
"continuous",
"latent",
"variables",
"into",
"the",
"enhanced",
"encoder",
"-",
"decoder",
"pre",
"-",
"training",
"framework",
"to",
"increase",
"the",
"relevance",
"and",
"diversity",
"of",
"responses",
".",
"with",
"the",
"help",
"of",
"a",
"large",
"dialog",
"corpus",
"(",
"reddit",
")",
",",
"we",
"pre",
"-",
"train",
"the",
"model",
"using",
"the",
"following",
"4",
"tasks",
",",
"used",
"in",
"training",
"language",
"models",
"(",
"lms",
")",
"and",
"variational",
"autoencoders",
"(",
"vaes",
")",
"literature",
":",
"1",
")",
"masked",
"language",
"model",
";",
"2",
")",
"response",
"generation",
";",
"3",
")",
"bag",
"-",
"of",
"-",
"words",
"prediction",
";",
"and",
"4",
")",
"kl",
"divergence",
"reduction",
".",
"we",
"also",
"add",
"additional",
"parameters",
"to",
"model",
"the",
"turn",
"structure",
"in",
"dialogs",
"to",
"improve",
"the",
"performance",
"of",
"the",
"pre",
"-",
"trained",
"model",
".",
"we",
"conduct",
"experiments",
"on",
"personachat",
",",
"dailydialog",
",",
"and",
"dstc7",
"-",
"avsd",
"benchmarks",
"for",
"response",
"generation",
".",
"experimental",
"results",
"show",
"that",
"our",
"model",
"achieves",
"the",
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"on",
"all",
"these",
"datasets",
"."
] |
ACL
|
Explicit Memory Tracker with Coarse-to-Fine Reasoning for Conversational Machine Reading
|
The goal of conversational machine reading is to answer user questions given a knowledge base text which may require asking clarification questions. Existing approaches are limited in their decision making due to struggles in extracting question-related rules and reasoning about them. In this paper, we present a new framework of conversational machine reading that comprises a novel Explicit Memory Tracker (EMT) to track whether conditions listed in the rule text have already been satisfied to make a decision. Moreover, our framework generates clarification questions by adopting a coarse-to-fine reasoning strategy, utilizing sentence-level entailment scores to weight token-level distributions. On the ShARC benchmark (blind, held-out) testset, EMT achieves new state-of-the-art results of 74.6% micro-averaged decision accuracy and 49.5 BLEU4. We also show that EMT is more interpretable by visualizing the entailment-oriented reasoning process as the conversation flows. Code and models are released at https://github.com/Yifan-Gao/explicit_memory_tracker.
|
2de06858f2afe9d7e0d3ea59f76846d9
| 2,020
|
[
"the goal of conversational machine reading is to answer user questions given a knowledge base text which may require asking clarification questions .",
"existing approaches are limited in their decision making due to struggles in extracting question - related rules and reasoning about them .",
"in this paper , we present a new framework of conversational machine reading that comprises a novel explicit memory tracker ( emt ) to track whether conditions listed in the rule text have already been satisfied to make a decision .",
"moreover , our framework generates clarification questions by adopting a coarse - to - fine reasoning strategy , utilizing sentence - level entailment scores to weight token - level distributions .",
"on the sharc benchmark ( blind , held - out ) testset , emt achieves new state - of - the - art results of 74 . 6 % micro - averaged decision accuracy and 49 . 5 bleu4 .",
"we also show that emt is more interpretable by visualizing the entailment - oriented reasoning process as the conversation flows .",
"code and models are released at https : / / github . com / yifan - gao / explicit _ memory _ tracker ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5
],
"text": "conversational machine reading",
"tokens": [
"conversational",
"machine",
"reading"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8
],
"text": "answer",
"tokens": [
"answer"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
23,
24
],
"text": "existing approaches",
"tokens": [
"existing",
"approaches"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
26
],
"text": "limited",
"tokens": [
"limited"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
33,
34,
35,
36,
37,
38,
39
],
"text": "struggles in extracting question - related rules",
"tokens": [
"struggles",
"in",
"extracting",
"question",
"-",
"related",
"rules"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
26
],
"text": "limited",
"tokens": [
"limited"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
49
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
53,
54,
55,
56,
57
],
"text": "framework of conversational machine reading",
"tokens": [
"framework",
"of",
"conversational",
"machine",
"reading"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
50
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
62,
63,
64
],
"text": "explicit memory tracker",
"tokens": [
"explicit",
"memory",
"tracker"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
69
],
"text": "track",
"tokens": [
"track"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
90
],
"text": "generates",
"tokens": [
"generates"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
96,
97,
98,
99,
100,
101,
102
],
"text": "coarse - to - fine reasoning strategy",
"tokens": [
"coarse",
"-",
"to",
"-",
"fine",
"reasoning",
"strategy"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
94
],
"text": "adopting",
"tokens": [
"adopting"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
91,
92
],
"text": "clarification questions",
"tokens": [
"clarification",
"questions"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
90
],
"text": "generates",
"tokens": [
"generates"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
105,
106,
107,
108,
109
],
"text": "sentence - level entailment scores",
"tokens": [
"sentence",
"-",
"level",
"entailment",
"scores"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
111,
112,
113,
114,
115
],
"text": "weight token - level distributions",
"tokens": [
"weight",
"token",
"-",
"level",
"distributions"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
104
],
"text": "utilizing",
"tokens": [
"utilizing"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
62,
63,
64
],
"text": "explicit memory tracker",
"tokens": [
"explicit",
"memory",
"tracker"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
117,
118,
119,
120,
128
],
"text": "on the sharc benchmark ( blind , held - out ) testset",
"tokens": [
"on",
"the",
"sharc",
"benchmark",
"testset"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
132,
133,
134,
135,
136,
137,
138,
139,
140
],
"text": "new state - of - the - art results",
"tokens": [
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
142,
143,
144,
145
],
"text": "74 . 6 %",
"tokens": [
"74",
".",
"6",
"%"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
146,
147,
148,
149,
150
],
"text": "micro - averaged decision accuracy",
"tokens": [
"micro",
"-",
"averaged",
"decision",
"accuracy"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
131
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
117,
118,
119,
120,
128
],
"text": "on the sharc benchmark ( blind , held - out ) testset",
"tokens": [
"on",
"the",
"sharc",
"benchmark",
"testset"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
62,
63,
64
],
"text": "explicit memory tracker",
"tokens": [
"explicit",
"memory",
"tracker"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
133,
134,
135,
136,
137,
138,
139,
140
],
"text": "state - of - the - art results",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
152,
153,
154
],
"text": "49 . 5",
"tokens": [
"49",
".",
"5"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
155
],
"text": "bleu4",
"tokens": [
"bleu4"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
131
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
157
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
163,
164
],
"text": "more interpretable",
"tokens": [
"more",
"interpretable"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
159
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
62,
63,
64
],
"text": "explicit memory tracker",
"tokens": [
"explicit",
"memory",
"tracker"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
163,
164
],
"text": "more interpretable",
"tokens": [
"more",
"interpretable"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
168,
169,
170,
171,
172
],
"text": "entailment - oriented reasoning process",
"tokens": [
"entailment",
"-",
"oriented",
"reasoning",
"process"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
175,
176
],
"text": "conversation flows",
"tokens": [
"conversation",
"flows"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
166
],
"text": "visualizing",
"tokens": [
"visualizing"
]
}
}
] |
[
"the",
"goal",
"of",
"conversational",
"machine",
"reading",
"is",
"to",
"answer",
"user",
"questions",
"given",
"a",
"knowledge",
"base",
"text",
"which",
"may",
"require",
"asking",
"clarification",
"questions",
".",
"existing",
"approaches",
"are",
"limited",
"in",
"their",
"decision",
"making",
"due",
"to",
"struggles",
"in",
"extracting",
"question",
"-",
"related",
"rules",
"and",
"reasoning",
"about",
"them",
".",
"in",
"this",
"paper",
",",
"we",
"present",
"a",
"new",
"framework",
"of",
"conversational",
"machine",
"reading",
"that",
"comprises",
"a",
"novel",
"explicit",
"memory",
"tracker",
"(",
"emt",
")",
"to",
"track",
"whether",
"conditions",
"listed",
"in",
"the",
"rule",
"text",
"have",
"already",
"been",
"satisfied",
"to",
"make",
"a",
"decision",
".",
"moreover",
",",
"our",
"framework",
"generates",
"clarification",
"questions",
"by",
"adopting",
"a",
"coarse",
"-",
"to",
"-",
"fine",
"reasoning",
"strategy",
",",
"utilizing",
"sentence",
"-",
"level",
"entailment",
"scores",
"to",
"weight",
"token",
"-",
"level",
"distributions",
".",
"on",
"the",
"sharc",
"benchmark",
"(",
"blind",
",",
"held",
"-",
"out",
")",
"testset",
",",
"emt",
"achieves",
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"of",
"74",
".",
"6",
"%",
"micro",
"-",
"averaged",
"decision",
"accuracy",
"and",
"49",
".",
"5",
"bleu4",
".",
"we",
"also",
"show",
"that",
"emt",
"is",
"more",
"interpretable",
"by",
"visualizing",
"the",
"entailment",
"-",
"oriented",
"reasoning",
"process",
"as",
"the",
"conversation",
"flows",
".",
"code",
"and",
"models",
"are",
"released",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"yifan",
"-",
"gao",
"/",
"explicit",
"_",
"memory",
"_",
"tracker",
"."
] |
ACL
|
Direct Speech-to-Speech Translation With Discrete Units
|
We present a direct speech-to-speech translation (S2ST) model that translates speech from one language to speech in another language without relying on intermediate text generation. We tackle the problem by first applying a self-supervised discrete speech encoder on the target speech and then training a sequence-to-sequence speech-to-unit translation (S2UT) model to predict the discrete representations of the target speech. When target text transcripts are available, we design a joint speech and text training framework that enables the model to generate dual modality output (speech and text) simultaneously in the same inference pass. Experiments on the Fisher Spanish-English dataset show that the proposed framework yields improvement of 6.7 BLEU compared with a baseline direct S2ST model that predicts spectrogram features. When trained without any text transcripts, our model performance is comparable to models that predict spectrograms and are trained with text supervision, showing the potential of our system for translation between unwritten languages.
|
544c154625d31ed4ad58811790dfa509
| 2,022
|
[
"we present a direct speech - to - speech translation ( s2st ) model that translates speech from one language to speech in another language without relying on intermediate text generation .",
"we tackle the problem by first applying a self - supervised discrete speech encoder on the target speech and then training a sequence - to - sequence speech - to - unit translation ( s2ut ) model to predict the discrete representations of the target speech .",
"when target text transcripts are available , we design a joint speech and text training framework that enables the model to generate dual modality output ( speech and text ) simultaneously in the same inference pass .",
"experiments on the fisher spanish - english dataset show that the proposed framework yields improvement of 6 . 7 bleu compared with a baseline direct s2st model that predicts spectrogram features .",
"when trained without any text transcripts , our model performance is comparable to models that predict spectrograms and are trained with text supervision , showing the potential of our system for translation between unwritten languages ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
3,
4,
5,
6,
7,
8,
9,
13
],
"text": "direct speech - to - speech translation ( s2st ) model",
"tokens": [
"direct",
"speech",
"-",
"to",
"-",
"speech",
"translation",
"model"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
1
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
48,
49
],
"text": "target speech",
"tokens": [
"target",
"speech"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
40,
41,
42,
43,
44,
45
],
"text": "self - supervised discrete speech encoder",
"tokens": [
"self",
"-",
"supervised",
"discrete",
"speech",
"encoder"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
38
],
"text": "applying",
"tokens": [
"applying"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
68
],
"text": "sequence - to - sequence speech - to - unit translation ( s2ut ) model",
"tokens": [
"sequence",
"-",
"to",
"-",
"sequence",
"speech",
"-",
"to",
"-",
"unit",
"translation",
"model"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
72,
73,
74,
75,
76,
77
],
"text": "discrete representations of the target speech",
"tokens": [
"discrete",
"representations",
"of",
"the",
"target",
"speech"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
70
],
"text": "predict",
"tokens": [
"predict"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
86
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
89,
90,
91,
92,
93,
94
],
"text": "joint speech and text training framework",
"tokens": [
"joint",
"speech",
"and",
"text",
"training",
"framework"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
79,
80,
81,
82,
83,
84
],
"text": "when target text transcripts are available",
"tokens": [
"when",
"target",
"text",
"transcripts",
"are",
"available"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
100
],
"text": "generate",
"tokens": [
"generate"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
87
],
"text": "design",
"tokens": [
"design"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
101,
102,
103
],
"text": "dual modality output",
"tokens": [
"dual",
"modality",
"output"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
110,
111,
112,
113,
114
],
"text": "in the same inference pass",
"tokens": [
"in",
"the",
"same",
"inference",
"pass"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
100
],
"text": "generate",
"tokens": [
"generate"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
120,
121,
122,
123
],
"text": "spanish - english dataset",
"tokens": [
"spanish",
"-",
"english",
"dataset"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
130
],
"text": "improvement",
"tokens": [
"improvement"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
132,
133,
134
],
"text": "6 . 7",
"tokens": [
"6",
".",
"7"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
135
],
"text": "bleu",
"tokens": [
"bleu"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
139,
140,
141,
142
],
"text": "baseline direct s2st model",
"tokens": [
"baseline",
"direct",
"s2st",
"model"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
89,
90,
91,
92,
93,
94
],
"text": "joint speech and text training framework",
"tokens": [
"joint",
"speech",
"and",
"text",
"training",
"framework"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
143,
144,
145,
146
],
"text": "that predicts spectrogram features",
"tokens": [
"that",
"predicts",
"spectrogram",
"features"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
129
],
"text": "yields",
"tokens": [
"yields"
]
}
},
{
"arguments": [
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
157
],
"text": "performance",
"tokens": [
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
148,
149,
150,
151,
152,
153
],
"text": "when trained without any text transcripts",
"tokens": [
"when",
"trained",
"without",
"any",
"text",
"transcripts"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
3,
4,
5,
6,
7,
8,
9,
13
],
"text": "direct speech - to - speech translation ( s2st ) model",
"tokens": [
"direct",
"speech",
"-",
"to",
"-",
"speech",
"translation",
"model"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
159
],
"text": "comparable",
"tokens": [
"comparable"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
161
],
"text": "models",
"tokens": [
"models"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
159
],
"text": "comparable",
"tokens": [
"comparable"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
179
],
"text": "translation",
"tokens": [
"translation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
180,
181,
182
],
"text": "between unwritten languages",
"tokens": [
"between",
"unwritten",
"languages"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
174,
175,
176,
177
],
"text": "potential of our system",
"tokens": [
"potential",
"of",
"our",
"system"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
172
],
"text": "showing",
"tokens": [
"showing"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
16
],
"text": "speech",
"tokens": [
"speech"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
23,
24
],
"text": "another language",
"tokens": [
"another",
"language"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
25,
26,
27,
28,
29,
30
],
"text": "without relying on intermediate text generation",
"tokens": [
"without",
"relying",
"on",
"intermediate",
"text",
"generation"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
15
],
"text": "translates",
"tokens": [
"translates"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
129
],
"text": "yields",
"tokens": [
"yields"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
124
],
"text": "show",
"tokens": [
"show"
]
}
}
] |
[
"we",
"present",
"a",
"direct",
"speech",
"-",
"to",
"-",
"speech",
"translation",
"(",
"s2st",
")",
"model",
"that",
"translates",
"speech",
"from",
"one",
"language",
"to",
"speech",
"in",
"another",
"language",
"without",
"relying",
"on",
"intermediate",
"text",
"generation",
".",
"we",
"tackle",
"the",
"problem",
"by",
"first",
"applying",
"a",
"self",
"-",
"supervised",
"discrete",
"speech",
"encoder",
"on",
"the",
"target",
"speech",
"and",
"then",
"training",
"a",
"sequence",
"-",
"to",
"-",
"sequence",
"speech",
"-",
"to",
"-",
"unit",
"translation",
"(",
"s2ut",
")",
"model",
"to",
"predict",
"the",
"discrete",
"representations",
"of",
"the",
"target",
"speech",
".",
"when",
"target",
"text",
"transcripts",
"are",
"available",
",",
"we",
"design",
"a",
"joint",
"speech",
"and",
"text",
"training",
"framework",
"that",
"enables",
"the",
"model",
"to",
"generate",
"dual",
"modality",
"output",
"(",
"speech",
"and",
"text",
")",
"simultaneously",
"in",
"the",
"same",
"inference",
"pass",
".",
"experiments",
"on",
"the",
"fisher",
"spanish",
"-",
"english",
"dataset",
"show",
"that",
"the",
"proposed",
"framework",
"yields",
"improvement",
"of",
"6",
".",
"7",
"bleu",
"compared",
"with",
"a",
"baseline",
"direct",
"s2st",
"model",
"that",
"predicts",
"spectrogram",
"features",
".",
"when",
"trained",
"without",
"any",
"text",
"transcripts",
",",
"our",
"model",
"performance",
"is",
"comparable",
"to",
"models",
"that",
"predict",
"spectrograms",
"and",
"are",
"trained",
"with",
"text",
"supervision",
",",
"showing",
"the",
"potential",
"of",
"our",
"system",
"for",
"translation",
"between",
"unwritten",
"languages",
"."
] |
ACL
|
Including Signed Languages in Natural Language Processing
|
Signed languages are the primary means of communication for many deaf and hard of hearing individuals. Since signed languages exhibit all the fundamental linguistic properties of natural language, we believe that tools and theories of Natural Language Processing (NLP) are crucial towards its modeling. However, existing research in Sign Language Processing (SLP) seldom attempt to explore and leverage the linguistic organization of signed languages. This position paper calls on the NLP community to include signed languages as a research area with high social and scientific impact. We first discuss the linguistic properties of signed languages to consider during their modeling. Then, we review the limitations of current SLP models and identify the open challenges to extend NLP to signed languages. Finally, we urge (1) the adoption of an efficient tokenization method; (2) the development of linguistically-informed models; (3) the collection of real-world signed language data; (4) the inclusion of local signed language communities as an active and leading voice in the direction of research.
|
433a3b42c31aed5e6f6cc4cbfcff19ef
| 2,021
|
[
"signed languages are the primary means of communication for many deaf and hard of hearing individuals .",
"since signed languages exhibit all the fundamental linguistic properties of natural language , we believe that tools and theories of natural language processing ( nlp ) are crucial towards its modeling .",
"however , existing research in sign language processing ( slp ) seldom attempt to explore and leverage the linguistic organization of signed languages .",
"this position paper calls on the nlp community to include signed languages as a research area with high social and scientific impact .",
"we first discuss the linguistic properties of signed languages to consider during their modeling .",
"then , we review the limitations of current slp models and identify the open challenges to extend nlp to signed languages .",
"finally , we urge ( 1 ) the adoption of an efficient tokenization method ; ( 2 ) the development of linguistically - informed models ; ( 3 ) the collection of real - world signed language data ; ( 4 ) the inclusion of local signed language communities as an active and leading voice in the direction of research ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
0,
1
],
"text": "signed languages",
"tokens": [
"signed",
"languages"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
5
],
"text": "means",
"tokens": [
"means"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
96
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
100,
101,
102,
103,
104
],
"text": "linguistic properties of signed languages",
"tokens": [
"linguistic",
"properties",
"of",
"signed",
"languages"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
98
],
"text": "discuss",
"tokens": [
"discuss"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
113
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
116,
117,
118,
119,
120
],
"text": "limitations of current slp models",
"tokens": [
"limitations",
"of",
"current",
"slp",
"models"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
114
],
"text": "review",
"tokens": [
"review"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
113
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
124,
125
],
"text": "open challenges",
"tokens": [
"open",
"challenges"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
122
],
"text": "identify",
"tokens": [
"identify"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
135
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
144,
145,
146
],
"text": "efficient tokenization method",
"tokens": [
"efficient",
"tokenization",
"method"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
141
],
"text": "adoption",
"tokens": [
"adoption"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
135
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
152,
153,
154,
155,
156,
157
],
"text": "development of linguistically - informed models",
"tokens": [
"development",
"of",
"linguistically",
"-",
"informed",
"models"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
136
],
"text": "urge",
"tokens": [
"urge"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
135
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
163,
164,
165,
166,
167,
168,
169,
170
],
"text": "collection of real - world signed language data",
"tokens": [
"collection",
"of",
"real",
"-",
"world",
"signed",
"language",
"data"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
136
],
"text": "urge",
"tokens": [
"urge"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
135
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
176,
177,
178,
179,
180,
181
],
"text": "inclusion of local signed language communities",
"tokens": [
"inclusion",
"of",
"local",
"signed",
"language",
"communities"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
136
],
"text": "urge",
"tokens": [
"urge"
]
}
}
] |
[
"signed",
"languages",
"are",
"the",
"primary",
"means",
"of",
"communication",
"for",
"many",
"deaf",
"and",
"hard",
"of",
"hearing",
"individuals",
".",
"since",
"signed",
"languages",
"exhibit",
"all",
"the",
"fundamental",
"linguistic",
"properties",
"of",
"natural",
"language",
",",
"we",
"believe",
"that",
"tools",
"and",
"theories",
"of",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"are",
"crucial",
"towards",
"its",
"modeling",
".",
"however",
",",
"existing",
"research",
"in",
"sign",
"language",
"processing",
"(",
"slp",
")",
"seldom",
"attempt",
"to",
"explore",
"and",
"leverage",
"the",
"linguistic",
"organization",
"of",
"signed",
"languages",
".",
"this",
"position",
"paper",
"calls",
"on",
"the",
"nlp",
"community",
"to",
"include",
"signed",
"languages",
"as",
"a",
"research",
"area",
"with",
"high",
"social",
"and",
"scientific",
"impact",
".",
"we",
"first",
"discuss",
"the",
"linguistic",
"properties",
"of",
"signed",
"languages",
"to",
"consider",
"during",
"their",
"modeling",
".",
"then",
",",
"we",
"review",
"the",
"limitations",
"of",
"current",
"slp",
"models",
"and",
"identify",
"the",
"open",
"challenges",
"to",
"extend",
"nlp",
"to",
"signed",
"languages",
".",
"finally",
",",
"we",
"urge",
"(",
"1",
")",
"the",
"adoption",
"of",
"an",
"efficient",
"tokenization",
"method",
";",
"(",
"2",
")",
"the",
"development",
"of",
"linguistically",
"-",
"informed",
"models",
";",
"(",
"3",
")",
"the",
"collection",
"of",
"real",
"-",
"world",
"signed",
"language",
"data",
";",
"(",
"4",
")",
"the",
"inclusion",
"of",
"local",
"signed",
"language",
"communities",
"as",
"an",
"active",
"and",
"leading",
"voice",
"in",
"the",
"direction",
"of",
"research",
"."
] |
ACL
|
Efficient Pairwise Annotation of Argument Quality
|
We present an efficient annotation framework for argument quality, a feature difficult to be measured reliably as per previous work. A stochastic transitivity model is combined with an effective sampling strategy to infer high-quality labels with low effort from crowdsourced pairwise judgments. The model’s capabilities are showcased by compiling Webis-ArgQuality-20, an argument quality corpus that comprises scores for rhetorical, logical, dialectical, and overall quality inferred from a total of 41,859 pairwise judgments among 1,271 arguments. With up to 93% cost savings, our approach significantly outperforms existing annotation procedures. Furthermore, novel insight into argument quality is provided through statistical analysis, and a new aggregation method to infer overall quality from individual quality dimensions is proposed.
|
5f248c856b0953dd91c496ad6fd0dba4
| 2,020
|
[
"we present an efficient annotation framework for argument quality , a feature difficult to be measured reliably as per previous work .",
"a stochastic transitivity model is combined with an effective sampling strategy to infer high - quality labels with low effort from crowdsourced pairwise judgments .",
"the model ’ s capabilities are showcased by compiling webis - argquality - 20 , an argument quality corpus that comprises scores for rhetorical , logical , dialectical , and overall quality inferred from a total of 41 , 859 pairwise judgments among 1 , 271 arguments .",
"with up to 93 % cost savings , our approach significantly outperforms existing annotation procedures .",
"furthermore , novel insight into argument quality is provided through statistical analysis , and a new aggregation method to infer overall quality from individual quality dimensions is proposed ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
7,
8
],
"text": "argument quality",
"tokens": [
"argument",
"quality"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
3,
4,
5
],
"text": "efficient annotation framework",
"tokens": [
"efficient",
"annotation",
"framework"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
1
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
23,
24,
25
],
"text": "stochastic transitivity model",
"tokens": [
"stochastic",
"transitivity",
"model"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
31,
32
],
"text": "sampling strategy",
"tokens": [
"sampling",
"strategy"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
34
],
"text": "infer",
"tokens": [
"infer"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
27
],
"text": "combined",
"tokens": [
"combined"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
35,
36,
37,
38
],
"text": "high - quality labels",
"tokens": [
"high",
"-",
"quality",
"labels"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
39,
40,
41
],
"text": "with low effort",
"tokens": [
"with",
"low",
"effort"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
34
],
"text": "infer",
"tokens": [
"infer"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
107,
108,
109
],
"text": "existing annotation procedures",
"tokens": [
"existing",
"annotation",
"procedures"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
105
],
"text": "significantly",
"tokens": [
"significantly"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
100,
101
],
"text": "cost savings",
"tokens": [
"cost",
"savings"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
3,
4,
5
],
"text": "efficient annotation framework",
"tokens": [
"efficient",
"annotation",
"framework"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
98,
99
],
"text": "93 %",
"tokens": [
"93",
"%"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
106
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
126,
127,
128
],
"text": "new aggregation method",
"tokens": [
"new",
"aggregation",
"method"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
138
],
"text": "proposed",
"tokens": [
"proposed"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
131,
132
],
"text": "overall quality",
"tokens": [
"overall",
"quality"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
134,
135,
136
],
"text": "individual quality dimensions",
"tokens": [
"individual",
"quality",
"dimensions"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
130
],
"text": "infer",
"tokens": [
"infer"
]
}
}
] |
[
"we",
"present",
"an",
"efficient",
"annotation",
"framework",
"for",
"argument",
"quality",
",",
"a",
"feature",
"difficult",
"to",
"be",
"measured",
"reliably",
"as",
"per",
"previous",
"work",
".",
"a",
"stochastic",
"transitivity",
"model",
"is",
"combined",
"with",
"an",
"effective",
"sampling",
"strategy",
"to",
"infer",
"high",
"-",
"quality",
"labels",
"with",
"low",
"effort",
"from",
"crowdsourced",
"pairwise",
"judgments",
".",
"the",
"model",
"’",
"s",
"capabilities",
"are",
"showcased",
"by",
"compiling",
"webis",
"-",
"argquality",
"-",
"20",
",",
"an",
"argument",
"quality",
"corpus",
"that",
"comprises",
"scores",
"for",
"rhetorical",
",",
"logical",
",",
"dialectical",
",",
"and",
"overall",
"quality",
"inferred",
"from",
"a",
"total",
"of",
"41",
",",
"859",
"pairwise",
"judgments",
"among",
"1",
",",
"271",
"arguments",
".",
"with",
"up",
"to",
"93",
"%",
"cost",
"savings",
",",
"our",
"approach",
"significantly",
"outperforms",
"existing",
"annotation",
"procedures",
".",
"furthermore",
",",
"novel",
"insight",
"into",
"argument",
"quality",
"is",
"provided",
"through",
"statistical",
"analysis",
",",
"and",
"a",
"new",
"aggregation",
"method",
"to",
"infer",
"overall",
"quality",
"from",
"individual",
"quality",
"dimensions",
"is",
"proposed",
"."
] |
ACL
|
Continual Quality Estimation with Online Bayesian Meta-Learning
|
Most current quality estimation (QE) models for machine translation are trained and evaluated in a static setting where training and test data are assumed to be from a fixed distribution. However, in real-life settings, the test data that a deployed QE model would be exposed to may differ from its training data. In particular, training samples are often labelled by one or a small set of annotators, whose perceptions of translation quality and needs may differ substantially from those of end-users, who will employ predictions in practice. To address this challenge, we propose an online Bayesian meta-learning framework for the continuous training of QE models that is able to adapt them to the needs of different users, while being robust to distributional shifts in training and test data. Experiments on data with varying number of users and language characteristics validate the effectiveness of the proposed approach.
|
56ed5b589528f051fca41634021c6cfa
| 2,021
|
[
"most current quality estimation ( qe ) models for machine translation are trained and evaluated in a static setting where training and test data are assumed to be from a fixed distribution .",
"however , in real - life settings , the test data that a deployed qe model would be exposed to may differ from its training data .",
"in particular , training samples are often labelled by one or a small set of annotators , whose perceptions of translation quality and needs may differ substantially from those of end - users , who will employ predictions in practice .",
"to address this challenge , we propose an online bayesian meta - learning framework for the continuous training of qe models that is able to adapt them to the needs of different users , while being robust to distributional shifts in training and test data .",
"experiments on data with varying number of users and language characteristics validate the effectiveness of the proposed approach ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
9,
10
],
"text": "machine translation",
"tokens": [
"machine",
"translation"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
2,
3,
7
],
"text": "quality estimation ( qe ) models",
"tokens": [
"quality",
"estimation",
"models"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31
],
"text": "in a static setting where training and test data are assumed to be from a fixed distribution",
"tokens": [
"in",
"a",
"static",
"setting",
"where",
"training",
"and",
"test",
"data",
"are",
"assumed",
"to",
"be",
"from",
"a",
"fixed",
"distribution"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
12,
13,
14
],
"text": "trained and evaluated",
"tokens": [
"trained",
"and",
"evaluated"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "DST",
"offsets": [
42,
43
],
"text": "test data",
"tokens": [
"test",
"data"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
35,
36,
37,
38,
39
],
"text": "in real - life settings",
"tokens": [
"in",
"real",
"-",
"life",
"settings"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
54
],
"text": "differ",
"tokens": [
"differ"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
54
],
"text": "differ",
"tokens": [
"differ"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
85,
86
],
"text": "differ substantially",
"tokens": [
"differ",
"substantially"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
78,
79,
80
],
"text": "perceptions of translation",
"tokens": [
"perceptions",
"of",
"translation"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
80,
81
],
"text": "translation quality",
"tokens": [
"translation",
"quality"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
80,
83
],
"text": "translation needs",
"tokens": [
"translation",
"needs"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
84
],
"text": "may",
"tokens": [
"may"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
106
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
109,
110,
111,
112,
113,
114
],
"text": "online bayesian meta - learning framework",
"tokens": [
"online",
"bayesian",
"meta",
"-",
"learning",
"framework"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
102
],
"text": "address",
"tokens": [
"address"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
107
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
103,
104
],
"text": "this challenge",
"tokens": [
"this",
"challenge"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
102
],
"text": "address",
"tokens": [
"address"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
106
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
109,
110,
111,
112,
113,
114
],
"text": "online bayesian meta - learning framework",
"tokens": [
"online",
"bayesian",
"meta",
"-",
"learning",
"framework"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
117,
118
],
"text": "continuous training",
"tokens": [
"continuous",
"training"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
107
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
149,
150,
151,
152,
153,
154,
155,
156,
157
],
"text": "data with varying number of users and language characteristics",
"tokens": [
"data",
"with",
"varying",
"number",
"of",
"users",
"and",
"language",
"characteristics"
]
},
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
160,
161,
162,
109,
110,
111,
112,
113,
114
],
"text": "effectiveness of the proposed approach",
"tokens": [
"effectiveness",
"of",
"the",
"online",
"bayesian",
"meta",
"-",
"learning",
"framework"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
158
],
"text": "validate",
"tokens": [
"validate"
]
}
}
] |
[
"most",
"current",
"quality",
"estimation",
"(",
"qe",
")",
"models",
"for",
"machine",
"translation",
"are",
"trained",
"and",
"evaluated",
"in",
"a",
"static",
"setting",
"where",
"training",
"and",
"test",
"data",
"are",
"assumed",
"to",
"be",
"from",
"a",
"fixed",
"distribution",
".",
"however",
",",
"in",
"real",
"-",
"life",
"settings",
",",
"the",
"test",
"data",
"that",
"a",
"deployed",
"qe",
"model",
"would",
"be",
"exposed",
"to",
"may",
"differ",
"from",
"its",
"training",
"data",
".",
"in",
"particular",
",",
"training",
"samples",
"are",
"often",
"labelled",
"by",
"one",
"or",
"a",
"small",
"set",
"of",
"annotators",
",",
"whose",
"perceptions",
"of",
"translation",
"quality",
"and",
"needs",
"may",
"differ",
"substantially",
"from",
"those",
"of",
"end",
"-",
"users",
",",
"who",
"will",
"employ",
"predictions",
"in",
"practice",
".",
"to",
"address",
"this",
"challenge",
",",
"we",
"propose",
"an",
"online",
"bayesian",
"meta",
"-",
"learning",
"framework",
"for",
"the",
"continuous",
"training",
"of",
"qe",
"models",
"that",
"is",
"able",
"to",
"adapt",
"them",
"to",
"the",
"needs",
"of",
"different",
"users",
",",
"while",
"being",
"robust",
"to",
"distributional",
"shifts",
"in",
"training",
"and",
"test",
"data",
".",
"experiments",
"on",
"data",
"with",
"varying",
"number",
"of",
"users",
"and",
"language",
"characteristics",
"validate",
"the",
"effectiveness",
"of",
"the",
"proposed",
"approach",
"."
] |
ACL
|
ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation
|
We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts with the popular approach of training a non-autoregressive model on a distilled corpus consisting of the beam-searched outputs of such a teacher model. Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the performance of autoregressive models.
|
730bcbe874c7361af52dc1484da356ef
| 2,020
|
[
"we propose to train a non - autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model .",
"in particular , we view our non - autoregressive translation system as an inference network ( tu and gimpel , 2018 ) trained to minimize the autoregressive teacher energy .",
"this contrasts with the popular approach of training a non - autoregressive model on a distilled corpus consisting of the beam - searched outputs of such a teacher model .",
"our approach , which we call engine ( energy - based inference networks ) , achieves state - of - the - art non - autoregressive results on the iwslt 2014 de - en and wmt 2016 ro - en datasets , approaching the performance of autoregressive models ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
5,
6,
7,
8,
9,
10
],
"text": "non - autoregressive machine translation model",
"tokens": [
"non",
"-",
"autoregressive",
"machine",
"translation",
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
12
],
"text": "minimize",
"tokens": [
"minimize"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
3
],
"text": "train",
"tokens": [
"train"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
14
],
"text": "energy",
"tokens": [
"energy"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
12
],
"text": "minimize",
"tokens": [
"minimize"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
25
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
28,
29,
30,
31,
32,
33,
34,
35,
36
],
"text": "non - autoregressive translation system as an inference network",
"tokens": [
"non",
"-",
"autoregressive",
"translation",
"system",
"as",
"an",
"inference",
"network"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
26
],
"text": "view",
"tokens": [
"view"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
83
],
"text": "approach",
"tokens": [
"approach"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108
],
"text": "state - of - the - art non - autoregressive results",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"non",
"-",
"autoregressive",
"results"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
109,
110,
111,
112,
113,
114,
115,
122
],
"text": "on the iwslt 2014 de - en datasets",
"tokens": [
"on",
"the",
"iwslt",
"2014",
"de",
"-",
"en",
"datasets"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
117,
118,
119,
120,
121,
122
],
"text": "wmt 2016 ro - en datasets",
"tokens": [
"wmt",
"2016",
"ro",
"-",
"en",
"datasets"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
97
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
128,
129
],
"text": "autoregressive models",
"tokens": [
"autoregressive",
"models"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
126
],
"text": "performance",
"tokens": [
"performance"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
83
],
"text": "approach",
"tokens": [
"approach"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
124
],
"text": "approaching",
"tokens": [
"approaching"
]
}
}
] |
[
"we",
"propose",
"to",
"train",
"a",
"non",
"-",
"autoregressive",
"machine",
"translation",
"model",
"to",
"minimize",
"the",
"energy",
"defined",
"by",
"a",
"pretrained",
"autoregressive",
"model",
".",
"in",
"particular",
",",
"we",
"view",
"our",
"non",
"-",
"autoregressive",
"translation",
"system",
"as",
"an",
"inference",
"network",
"(",
"tu",
"and",
"gimpel",
",",
"2018",
")",
"trained",
"to",
"minimize",
"the",
"autoregressive",
"teacher",
"energy",
".",
"this",
"contrasts",
"with",
"the",
"popular",
"approach",
"of",
"training",
"a",
"non",
"-",
"autoregressive",
"model",
"on",
"a",
"distilled",
"corpus",
"consisting",
"of",
"the",
"beam",
"-",
"searched",
"outputs",
"of",
"such",
"a",
"teacher",
"model",
".",
"our",
"approach",
",",
"which",
"we",
"call",
"engine",
"(",
"energy",
"-",
"based",
"inference",
"networks",
")",
",",
"achieves",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"non",
"-",
"autoregressive",
"results",
"on",
"the",
"iwslt",
"2014",
"de",
"-",
"en",
"and",
"wmt",
"2016",
"ro",
"-",
"en",
"datasets",
",",
"approaching",
"the",
"performance",
"of",
"autoregressive",
"models",
"."
] |
ACL
|
Addressing Semantic Drift in Generative Question Answering with Auxiliary Extraction
|
Recently, question answering (QA) based on machine reading comprehension has become popular. This work focuses on generative QA which aims to generate an abstractive answer to a given question instead of extracting an answer span from a provided passage. Generative QA often suffers from two critical problems: (1) summarizing content irrelevant to a given question, (2) drifting away from a correct answer during generation. In this paper, we address these problems by a novel Rationale-Enriched Answer Generator (REAG), which incorporates an extractive mechanism into a generative model. Specifically, we add an extraction task on the encoder to obtain the rationale for an answer, which is the most relevant piece of text in an input document to a given question. Based on the extracted rationale and original input, the decoder is expected to generate an answer with high confidence. We jointly train REAG on the MS MARCO QA+NLG task and the experimental results show that REAG improves the quality and semantic accuracy of answers over baseline models.
|
c74f7ba287b1c1241b617c8ba1c5d951
| 2,021
|
[
"recently , question answering ( qa ) based on machine reading comprehension has become popular .",
"this work focuses on generative qa which aims to generate an abstractive answer to a given question instead of extracting an answer span from a provided passage .",
"generative qa often suffers from two critical problems : ( 1 ) summarizing content irrelevant to a given question , ( 2 ) drifting away from a correct answer during generation .",
"in this paper , we address these problems by a novel rationale - enriched answer generator ( reag ) , which incorporates an extractive mechanism into a generative model .",
"specifically , we add an extraction task on the encoder to obtain the rationale for an answer , which is the most relevant piece of text in an input document to a given question .",
"based on the extracted rationale and original input , the decoder is expected to generate an answer with high confidence .",
"we jointly train reag on the ms marco qa + nlg task and the experimental results show that reag improves the quality and semantic accuracy of answers over baseline models ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3,
7,
8,
9,
10,
11
],
"text": "question answering ( qa ) based on machine reading comprehension",
"tokens": [
"question",
"answering",
"based",
"on",
"machine",
"reading",
"comprehension"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
13
],
"text": "become",
"tokens": [
"become"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
103,
104
],
"text": "generative model",
"tokens": [
"generative",
"model"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
99,
100
],
"text": "extractive mechanism",
"tokens": [
"extractive",
"mechanism"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
97
],
"text": "incorporates",
"tokens": [
"incorporates"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
111,
112
],
"text": "extraction task",
"tokens": [
"extraction",
"task"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
115
],
"text": "encoder",
"tokens": [
"encoder"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
117
],
"text": "obtain",
"tokens": [
"obtain"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
109
],
"text": "add",
"tokens": [
"add"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
119,
120,
121,
122
],
"text": "rationale for an answer",
"tokens": [
"rationale",
"for",
"an",
"answer"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
117
],
"text": "obtain",
"tokens": [
"obtain"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
151
],
"text": "decoder",
"tokens": [
"decoder"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
157,
158,
159,
160
],
"text": "answer with high confidence",
"tokens": [
"answer",
"with",
"high",
"confidence"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
141,
142,
143,
144,
145,
146,
147,
148
],
"text": "based on the extracted rationale and original input",
"tokens": [
"based",
"on",
"the",
"extracted",
"rationale",
"and",
"original",
"input"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
155
],
"text": "generate",
"tokens": [
"generate"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
162
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
87,
88,
89,
90,
91
],
"text": "rationale - enriched answer generator",
"tokens": [
"rationale",
"-",
"enriched",
"answer",
"generator"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
168,
169,
170,
171,
172,
173
],
"text": "ms marco qa + nlg task",
"tokens": [
"ms",
"marco",
"qa",
"+",
"nlg",
"task"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
164
],
"text": "train",
"tokens": [
"train"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "MOD",
"offsets": [
87,
88,
89,
90,
91
],
"text": "rationale - enriched answer generator",
"tokens": [
"rationale",
"-",
"enriched",
"answer",
"generator"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
190,
191
],
"text": "baseline models",
"tokens": [
"baseline",
"models"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
181
],
"text": "improves",
"tokens": [
"improves"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
183,
187,
188
],
"text": "quality of answers",
"tokens": [
"quality",
"of",
"answers"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
181
],
"text": "improves",
"tokens": [
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
44,
45
],
"text": "generative qa",
"tokens": [
"generative",
"qa"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
46
],
"text": "often",
"tokens": [
"often"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
57,
58
],
"text": "content irrelevant",
"tokens": [
"content",
"irrelevant"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
59,
60,
61,
62
],
"text": "to a given question",
"tokens": [
"to",
"a",
"given",
"question"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
56
],
"text": "summarizing",
"tokens": [
"summarizing"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
44,
45
],
"text": "generative qa",
"tokens": [
"generative",
"qa"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
46
],
"text": "often",
"tokens": [
"often"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
67,
68
],
"text": "drifting away",
"tokens": [
"drifting",
"away"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
71,
72
],
"text": "correct answer",
"tokens": [
"correct",
"answer"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
67,
68
],
"text": "drifting away",
"tokens": [
"drifting",
"away"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
181
],
"text": "improves",
"tokens": [
"improves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
178
],
"text": "show",
"tokens": [
"show"
]
}
}
] |
[
"recently",
",",
"question",
"answering",
"(",
"qa",
")",
"based",
"on",
"machine",
"reading",
"comprehension",
"has",
"become",
"popular",
".",
"this",
"work",
"focuses",
"on",
"generative",
"qa",
"which",
"aims",
"to",
"generate",
"an",
"abstractive",
"answer",
"to",
"a",
"given",
"question",
"instead",
"of",
"extracting",
"an",
"answer",
"span",
"from",
"a",
"provided",
"passage",
".",
"generative",
"qa",
"often",
"suffers",
"from",
"two",
"critical",
"problems",
":",
"(",
"1",
")",
"summarizing",
"content",
"irrelevant",
"to",
"a",
"given",
"question",
",",
"(",
"2",
")",
"drifting",
"away",
"from",
"a",
"correct",
"answer",
"during",
"generation",
".",
"in",
"this",
"paper",
",",
"we",
"address",
"these",
"problems",
"by",
"a",
"novel",
"rationale",
"-",
"enriched",
"answer",
"generator",
"(",
"reag",
")",
",",
"which",
"incorporates",
"an",
"extractive",
"mechanism",
"into",
"a",
"generative",
"model",
".",
"specifically",
",",
"we",
"add",
"an",
"extraction",
"task",
"on",
"the",
"encoder",
"to",
"obtain",
"the",
"rationale",
"for",
"an",
"answer",
",",
"which",
"is",
"the",
"most",
"relevant",
"piece",
"of",
"text",
"in",
"an",
"input",
"document",
"to",
"a",
"given",
"question",
".",
"based",
"on",
"the",
"extracted",
"rationale",
"and",
"original",
"input",
",",
"the",
"decoder",
"is",
"expected",
"to",
"generate",
"an",
"answer",
"with",
"high",
"confidence",
".",
"we",
"jointly",
"train",
"reag",
"on",
"the",
"ms",
"marco",
"qa",
"+",
"nlg",
"task",
"and",
"the",
"experimental",
"results",
"show",
"that",
"reag",
"improves",
"the",
"quality",
"and",
"semantic",
"accuracy",
"of",
"answers",
"over",
"baseline",
"models",
"."
] |
ACL
|
Know What You Don’t Know: Modeling a Pragmatic Speaker that Refers to Objects of Unknown Categories
|
Zero-shot learning in Language & Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&V aims at pragmatically informative rather than “correct” object descriptions, e.g. in reference games. We combine these lines of research and model zero-shot reference games, where a speaker needs to successfully refer to a novel object in an image. Inspired by models of “rational speech acts”, we extend a neural generator to become a pragmatic speaker reasoning about uncertain object categories. As a result of this reasoning, the generator produces fewer nouns and names of distractor categories as compared to a literal speaker. We show that this conversational strategy for dealing with novel objects often improves communicative success, in terms of resolution accuracy of an automatic listener.
|
f1ac64d5e4f2dd8a0196b27e6acb2406
| 2,019
|
[
"zero - shot learning in language & vision is the task of correctly labelling ( or naming ) objects of novel categories .",
"another strand of work in l & v aims at pragmatically informative rather than “ correct ” object descriptions , e . g . in reference games .",
"we combine these lines of research and model zero - shot reference games , where a speaker needs to successfully refer to a novel object in an image .",
"inspired by models of “ rational speech acts ” , we extend a neural generator to become a pragmatic speaker reasoning about uncertain object categories .",
"as a result of this reasoning , the generator produces fewer nouns and names of distractor categories as compared to a literal speaker .",
"we show that this conversational strategy for dealing with novel objects often improves communicative success , in terms of resolution accuracy of an automatic listener ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7
],
"text": "zero - shot learning in language & vision",
"tokens": [
"zero",
"-",
"shot",
"learning",
"in",
"language",
"&",
"vision"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "task",
"tokens": [
"task"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
51
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
56
],
"text": "research",
"tokens": [
"research"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
52
],
"text": "combine",
"tokens": [
"combine"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
51
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
59,
60,
61,
62,
63
],
"text": "zero - shot reference games",
"tokens": [
"zero",
"-",
"shot",
"reference",
"games"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
58
],
"text": "model",
"tokens": [
"model"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
100
],
"text": "reasoning",
"tokens": [
"reasoning"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
93,
94
],
"text": "neural generator",
"tokens": [
"neural",
"generator"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
98,
99
],
"text": "pragmatic speaker",
"tokens": [
"pragmatic",
"speaker"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
91
],
"text": "extend",
"tokens": [
"extend"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
102,
103,
104
],
"text": "uncertain object categories",
"tokens": [
"uncertain",
"object",
"categories"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
100
],
"text": "reasoning",
"tokens": [
"reasoning"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "MOD",
"offsets": [
114
],
"text": "generator",
"tokens": [
"generator"
]
},
{
"argument_type": "Arg2",
"nugget_type": "MOD",
"offsets": [
127,
128
],
"text": "literal speaker",
"tokens": [
"literal",
"speaker"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
116
],
"text": "fewer",
"tokens": [
"fewer"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
117,
120,
121,
122
],
"text": "nouns of distractor categories",
"tokens": [
"nouns",
"of",
"distractor",
"categories"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
119,
120,
121,
122
],
"text": "names of distractor categories",
"tokens": [
"names",
"of",
"distractor",
"categories"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
115
],
"text": "produces",
"tokens": [
"produces"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
130
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
142
],
"text": "improves",
"tokens": [
"improves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
131
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
134,
135
],
"text": "conversational strategy",
"tokens": [
"conversational",
"strategy"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
146,
147,
148,
149,
150,
151,
152,
153,
154
],
"text": "in terms of resolution accuracy of an automatic listener",
"tokens": [
"in",
"terms",
"of",
"resolution",
"accuracy",
"of",
"an",
"automatic",
"listener"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
143,
144
],
"text": "communicative success",
"tokens": [
"communicative",
"success"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
142
],
"text": "improves",
"tokens": [
"improves"
]
}
}
] |
[
"zero",
"-",
"shot",
"learning",
"in",
"language",
"&",
"vision",
"is",
"the",
"task",
"of",
"correctly",
"labelling",
"(",
"or",
"naming",
")",
"objects",
"of",
"novel",
"categories",
".",
"another",
"strand",
"of",
"work",
"in",
"l",
"&",
"v",
"aims",
"at",
"pragmatically",
"informative",
"rather",
"than",
"“",
"correct",
"”",
"object",
"descriptions",
",",
"e",
".",
"g",
".",
"in",
"reference",
"games",
".",
"we",
"combine",
"these",
"lines",
"of",
"research",
"and",
"model",
"zero",
"-",
"shot",
"reference",
"games",
",",
"where",
"a",
"speaker",
"needs",
"to",
"successfully",
"refer",
"to",
"a",
"novel",
"object",
"in",
"an",
"image",
".",
"inspired",
"by",
"models",
"of",
"“",
"rational",
"speech",
"acts",
"”",
",",
"we",
"extend",
"a",
"neural",
"generator",
"to",
"become",
"a",
"pragmatic",
"speaker",
"reasoning",
"about",
"uncertain",
"object",
"categories",
".",
"as",
"a",
"result",
"of",
"this",
"reasoning",
",",
"the",
"generator",
"produces",
"fewer",
"nouns",
"and",
"names",
"of",
"distractor",
"categories",
"as",
"compared",
"to",
"a",
"literal",
"speaker",
".",
"we",
"show",
"that",
"this",
"conversational",
"strategy",
"for",
"dealing",
"with",
"novel",
"objects",
"often",
"improves",
"communicative",
"success",
",",
"in",
"terms",
"of",
"resolution",
"accuracy",
"of",
"an",
"automatic",
"listener",
"."
] |
ACL
|
Tailor: Generating and Perturbing Text with Semantic Controls
|
Controlled text perturbation is useful for evaluating and improving model generalizability. However, current techniques rely on training a model for every target perturbation, which is expensive and hard to generalize. We present Tailor, a semantically-controlled text generation system. Tailor builds on a pretrained seq2seq model and produces textual outputs conditioned on control codes derived from semantic representations. We craft a set of operations to modify the control codes, which in turn steer generation towards targeted attributes. These operations can be further composed into higher-level ones, allowing for flexible perturbation strategies. We demonstrate the effectiveness of these perturbations in multiple applications. First, we use Tailor to automatically create high-quality contrast sets for four distinct natural language processing (NLP) tasks. These contrast sets contain fewer spurious artifacts and are complementary to manually annotated ones in their lexical diversity. Second, we show that Tailor perturbations can improve model generalization through data augmentation. Perturbing just ∼2% of training data leads to a 5.8-point gain on an NLI challenge set measuring reliance on syntactic heuristics.
|
22fa010c9faa07e916eb31b141e9ef24
| 2,022
|
[
"controlled text perturbation is useful for evaluating and improving model generalizability .",
"however , current techniques rely on training a model for every target perturbation , which is expensive and hard to generalize .",
"we present tailor , a semantically - controlled text generation system .",
"tailor builds on a pretrained seq2seq model and produces textual outputs conditioned on control codes derived from semantic representations .",
"we craft a set of operations to modify the control codes , which in turn steer generation towards targeted attributes .",
"these operations can be further composed into higher - level ones , allowing for flexible perturbation strategies .",
"we demonstrate the effectiveness of these perturbations in multiple applications .",
"first , we use tailor to automatically create high - quality contrast sets for four distinct natural language processing ( nlp ) tasks .",
"these contrast sets contain fewer spurious artifacts and are complementary to manually annotated ones in their lexical diversity .",
"second , we show that tailor perturbations can improve model generalization through data augmentation .",
"perturbing just [UNK] % of training data leads to a 5 . 8 - point gain on an nli challenge set measuring reliance on syntactic heuristics ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "controlled text perturbation",
"tokens": [
"controlled",
"text",
"perturbation"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
4
],
"text": "useful",
"tokens": [
"useful"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
28
],
"text": "expensive",
"tokens": [
"expensive"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
30,
31,
32
],
"text": "hard to generalize",
"tokens": [
"hard",
"to",
"generalize"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
18
],
"text": "training",
"tokens": [
"training"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
28
],
"text": "expensive",
"tokens": [
"expensive"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
34
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
39,
40,
41,
42,
43,
44
],
"text": "semantically - controlled text generation system",
"tokens": [
"semantically",
"-",
"controlled",
"text",
"generation",
"system"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
35
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
54
],
"text": "produces",
"tokens": [
"produces"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
63,
64
],
"text": "semantic representations",
"tokens": [
"semantic",
"representations"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
59,
60
],
"text": "control codes",
"tokens": [
"control",
"codes"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
61
],
"text": "derived",
"tokens": [
"derived"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
55,
56
],
"text": "textual outputs",
"tokens": [
"textual",
"outputs"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
54
],
"text": "produces",
"tokens": [
"produces"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
66
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
73
],
"text": "modify",
"tokens": [
"modify"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
69,
70,
71
],
"text": "set of operations",
"tokens": [
"set",
"of",
"operations"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
81
],
"text": "steer",
"tokens": [
"steer"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
67
],
"text": "craft",
"tokens": [
"craft"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
75,
76
],
"text": "control codes",
"tokens": [
"control",
"codes"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
73
],
"text": "modify",
"tokens": [
"modify"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
82,
83,
84,
85
],
"text": "generation towards targeted attributes",
"tokens": [
"generation",
"towards",
"targeted",
"attributes"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
81
],
"text": "steer",
"tokens": [
"steer"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
108,
109,
110,
111
],
"text": "effectiveness of these perturbations",
"tokens": [
"effectiveness",
"of",
"these",
"perturbations"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
112,
113,
114
],
"text": "in multiple applications",
"tokens": [
"in",
"multiple",
"applications"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
106
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
161
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
167
],
"text": "improve",
"tokens": [
"improve"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
162
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
164,
165
],
"text": "tailor perturbations",
"tokens": [
"tailor",
"perturbations"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
168,
169
],
"text": "model generalization",
"tokens": [
"model",
"generalization"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
170,
171,
172
],
"text": "through data augmentation",
"tokens": [
"through",
"data",
"augmentation"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
167
],
"text": "improve",
"tokens": [
"improve"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
184,
185,
186,
187,
188,
189
],
"text": "5 . 8 - point gain",
"tokens": [
"5",
".",
"8",
"-",
"point",
"gain"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
174,
175,
176,
177,
178,
179,
180
],
"text": "perturbing just [UNK] % of training data",
"tokens": [
"perturbing",
"just",
"[UNK]",
"%",
"of",
"training",
"data"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
195
],
"text": "measuring",
"tokens": [
"measuring"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
192,
193,
194
],
"text": "nli challenge set",
"tokens": [
"nli",
"challenge",
"set"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
181
],
"text": "leads",
"tokens": [
"leads"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
20
],
"text": "model",
"tokens": [
"model"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
18
],
"text": "training",
"tokens": [
"training"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
50,
51,
52
],
"text": "pretrained seq2seq model",
"tokens": [
"pretrained",
"seq2seq",
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
46
],
"text": "tailor",
"tokens": [
"tailor"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
47
],
"text": "builds",
"tokens": [
"builds"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
87,
88
],
"text": "these operations",
"tokens": [
"these",
"operations"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
94,
95,
96,
97
],
"text": "higher - level ones",
"tokens": [
"higher",
"-",
"level",
"ones"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
92
],
"text": "composed",
"tokens": [
"composed"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
118
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
120
],
"text": "tailor",
"tokens": [
"tailor"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
122,
123
],
"text": "automatically create",
"tokens": [
"automatically",
"create"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
119
],
"text": "use",
"tokens": [
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "DST",
"offsets": [
124,
125,
126,
127,
128
],
"text": "high - quality contrast sets",
"tokens": [
"high",
"-",
"quality",
"contrast",
"sets"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
129,
130,
131,
132,
133,
134,
138
],
"text": "for four distinct natural language processing ( nlp ) tasks",
"tokens": [
"for",
"four",
"distinct",
"natural",
"language",
"processing",
"tasks"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
122,
123
],
"text": "automatically create",
"tokens": [
"automatically",
"create"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
196,
197,
198,
199
],
"text": "reliance on syntactic heuristics",
"tokens": [
"reliance",
"on",
"syntactic",
"heuristics"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
195
],
"text": "measuring",
"tokens": [
"measuring"
]
}
}
] |
[
"controlled",
"text",
"perturbation",
"is",
"useful",
"for",
"evaluating",
"and",
"improving",
"model",
"generalizability",
".",
"however",
",",
"current",
"techniques",
"rely",
"on",
"training",
"a",
"model",
"for",
"every",
"target",
"perturbation",
",",
"which",
"is",
"expensive",
"and",
"hard",
"to",
"generalize",
".",
"we",
"present",
"tailor",
",",
"a",
"semantically",
"-",
"controlled",
"text",
"generation",
"system",
".",
"tailor",
"builds",
"on",
"a",
"pretrained",
"seq2seq",
"model",
"and",
"produces",
"textual",
"outputs",
"conditioned",
"on",
"control",
"codes",
"derived",
"from",
"semantic",
"representations",
".",
"we",
"craft",
"a",
"set",
"of",
"operations",
"to",
"modify",
"the",
"control",
"codes",
",",
"which",
"in",
"turn",
"steer",
"generation",
"towards",
"targeted",
"attributes",
".",
"these",
"operations",
"can",
"be",
"further",
"composed",
"into",
"higher",
"-",
"level",
"ones",
",",
"allowing",
"for",
"flexible",
"perturbation",
"strategies",
".",
"we",
"demonstrate",
"the",
"effectiveness",
"of",
"these",
"perturbations",
"in",
"multiple",
"applications",
".",
"first",
",",
"we",
"use",
"tailor",
"to",
"automatically",
"create",
"high",
"-",
"quality",
"contrast",
"sets",
"for",
"four",
"distinct",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"tasks",
".",
"these",
"contrast",
"sets",
"contain",
"fewer",
"spurious",
"artifacts",
"and",
"are",
"complementary",
"to",
"manually",
"annotated",
"ones",
"in",
"their",
"lexical",
"diversity",
".",
"second",
",",
"we",
"show",
"that",
"tailor",
"perturbations",
"can",
"improve",
"model",
"generalization",
"through",
"data",
"augmentation",
".",
"perturbing",
"just",
"[UNK]",
"%",
"of",
"training",
"data",
"leads",
"to",
"a",
"5",
".",
"8",
"-",
"point",
"gain",
"on",
"an",
"nli",
"challenge",
"set",
"measuring",
"reliance",
"on",
"syntactic",
"heuristics",
"."
] |
ACL
|
Consistency Regularization for Cross-Lingual Fine-Tuning
|
Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others. In this work, we propose to improve cross-lingual fine-tuning with consistency regularization. Specifically, we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations, i.e., subword sampling, Gaussian noise, code-switch substitution, and machine translation. In addition, we employ model consistency to regularize the models trained with two augmented versions of the same training set. Experimental results on the XTREME benchmark show that our method significantly improves cross-lingual fine-tuning across various tasks, including text classification, question answering, and sequence labeling.
|
647148450d21774b0bcb735b3d85dbe0
| 2,021
|
[
"fine - tuning pre - trained cross - lingual language models can transfer task - specific supervision from one language to the others .",
"in this work , we propose to improve cross - lingual fine - tuning with consistency regularization .",
"specifically , we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations , i . e . , subword sampling , gaussian noise , code - switch substitution , and machine translation .",
"in addition , we employ model consistency to regularize the models trained with two augmented versions of the same training set .",
"experimental results on the xtreme benchmark show that our method significantly improves cross - lingual fine - tuning across various tasks , including text classification , question answering , and sequence labeling ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
28
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
39,
40
],
"text": "consistency regularization",
"tokens": [
"consistency",
"regularization"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
31
],
"text": "improve",
"tokens": [
"improve"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
29
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
32,
33,
34,
35,
36,
37
],
"text": "cross - lingual fine - tuning",
"tokens": [
"cross",
"-",
"lingual",
"fine",
"-",
"tuning"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
31
],
"text": "improve",
"tokens": [
"improve"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
44
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
46,
47,
48
],
"text": "example consistency regularization",
"tokens": [
"example",
"consistency",
"regularization"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
50
],
"text": "penalize",
"tokens": [
"penalize"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
45
],
"text": "use",
"tokens": [
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
52,
53
],
"text": "prediction sensitivity",
"tokens": [
"prediction",
"sensitivity"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
54,
55,
56,
57,
58,
59
],
"text": "to four types of data augmentations",
"tokens": [
"to",
"four",
"types",
"of",
"data",
"augmentations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
50
],
"text": "penalize",
"tokens": [
"penalize"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
113,
114
],
"text": "significantly improves",
"tokens": [
"significantly",
"improves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
109
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
39,
40
],
"text": "consistency regularization",
"tokens": [
"consistency",
"regularization"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
115,
116,
117,
118,
119,
120
],
"text": "cross - lingual fine - tuning",
"tokens": [
"cross",
"-",
"lingual",
"fine",
"-",
"tuning"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
121,
122,
123
],
"text": "across various tasks",
"tokens": [
"across",
"various",
"tasks"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
113,
114
],
"text": "significantly improves",
"tokens": [
"significantly",
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
3,
4,
5,
6,
7,
8,
9,
10
],
"text": "pre - trained cross - lingual language models",
"tokens": [
"pre",
"-",
"trained",
"cross",
"-",
"lingual",
"language",
"models"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
12
],
"text": "transfer",
"tokens": [
"transfer"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
84
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
86,
87
],
"text": "model consistency",
"tokens": [
"model",
"consistency"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
89
],
"text": "regularize",
"tokens": [
"regularize"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
85
],
"text": "employ",
"tokens": [
"employ"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101
],
"text": "models trained with two augmented versions of the same training set",
"tokens": [
"models",
"trained",
"with",
"two",
"augmented",
"versions",
"of",
"the",
"same",
"training",
"set"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
89
],
"text": "regularize",
"tokens": [
"regularize"
]
}
}
] |
[
"fine",
"-",
"tuning",
"pre",
"-",
"trained",
"cross",
"-",
"lingual",
"language",
"models",
"can",
"transfer",
"task",
"-",
"specific",
"supervision",
"from",
"one",
"language",
"to",
"the",
"others",
".",
"in",
"this",
"work",
",",
"we",
"propose",
"to",
"improve",
"cross",
"-",
"lingual",
"fine",
"-",
"tuning",
"with",
"consistency",
"regularization",
".",
"specifically",
",",
"we",
"use",
"example",
"consistency",
"regularization",
"to",
"penalize",
"the",
"prediction",
"sensitivity",
"to",
"four",
"types",
"of",
"data",
"augmentations",
",",
"i",
".",
"e",
".",
",",
"subword",
"sampling",
",",
"gaussian",
"noise",
",",
"code",
"-",
"switch",
"substitution",
",",
"and",
"machine",
"translation",
".",
"in",
"addition",
",",
"we",
"employ",
"model",
"consistency",
"to",
"regularize",
"the",
"models",
"trained",
"with",
"two",
"augmented",
"versions",
"of",
"the",
"same",
"training",
"set",
".",
"experimental",
"results",
"on",
"the",
"xtreme",
"benchmark",
"show",
"that",
"our",
"method",
"significantly",
"improves",
"cross",
"-",
"lingual",
"fine",
"-",
"tuning",
"across",
"various",
"tasks",
",",
"including",
"text",
"classification",
",",
"question",
"answering",
",",
"and",
"sequence",
"labeling",
"."
] |
ACL
|
Topic-Aware Neural Keyphrase Generation for Social Media Language
|
A huge volume of user-generated content is daily produced on social media. To facilitate automatic language understanding, we study keyphrase prediction, distilling salient information from massive posts. While most existing methods extract words from source posts to form keyphrases, we propose a sequence-to-sequence (seq2seq) based neural keyphrase generation framework, enabling absent keyphrases to be created. Moreover, our model, being topic-aware, allows joint modeling of corpus-level latent topic representations, which helps alleviate data sparsity widely exhibited in social media language. Experiments on three datasets collected from English and Chinese social media platforms show that our model significantly outperforms both extraction and generation models without exploiting latent topics. Further discussions show that our model learns meaningful topics, which interprets its superiority in social media keyphrase generation.
|
37aa646905c5702ad317a48e369db8c9
| 2,019
|
[
"a huge volume of user - generated content is daily produced on social media .",
"to facilitate automatic language understanding , we study keyphrase prediction , distilling salient information from massive posts .",
"while most existing methods extract words from source posts to form keyphrases , we propose a sequence - to - sequence ( seq2seq ) based neural keyphrase generation framework , enabling absent keyphrases to be created .",
"moreover , our model , being topic - aware , allows joint modeling of corpus - level latent topic representations , which helps alleviate data sparsity widely exhibited in social media language .",
"experiments on three datasets collected from english and chinese social media platforms show that our model significantly outperforms both extraction and generation models without exploiting latent topics .",
"further discussions show that our model learns meaningful topics , which interprets its superiority in social media keyphrase generation ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
16
],
"text": "facilitate",
"tokens": [
"facilitate"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
21
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
23,
24
],
"text": "keyphrase prediction",
"tokens": [
"keyphrase",
"prediction"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
22
],
"text": "study",
"tokens": [
"study"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
46
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
58,
59,
60,
61
],
"text": "neural keyphrase generation framework",
"tokens": [
"neural",
"keyphrase",
"generation",
"framework"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
47
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
34,
35,
36
],
"text": "most existing methods",
"tokens": [
"most",
"existing",
"methods"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
38
],
"text": "words",
"tokens": [
"words"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
43
],
"text": "form",
"tokens": [
"form"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
40,
41
],
"text": "source posts",
"tokens": [
"source",
"posts"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
37
],
"text": "extract",
"tokens": [
"extract"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
44
],
"text": "keyphrases",
"tokens": [
"keyphrases"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
43
],
"text": "form",
"tokens": [
"form"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
81,
82,
83,
84,
85,
86,
87,
88,
89
],
"text": "joint modeling of corpus - level latent topic representations",
"tokens": [
"joint",
"modeling",
"of",
"corpus",
"-",
"level",
"latent",
"topic",
"representations"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
80
],
"text": "allows",
"tokens": [
"allows"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
17,
18,
19
],
"text": "automatic language understanding",
"tokens": [
"automatic",
"language",
"understanding"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
16
],
"text": "facilitate",
"tokens": [
"facilitate"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
58,
59,
60,
61
],
"text": "neural keyphrase generation framework",
"tokens": [
"neural",
"keyphrase",
"generation",
"framework"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
119
],
"text": "significantly",
"tokens": [
"significantly"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
120
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
126,
127,
128,
129
],
"text": "without exploiting latent topics",
"tokens": [
"without",
"exploiting",
"latent",
"topics"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
122,
125
],
"text": "extraction models",
"tokens": [
"extraction",
"models"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
120
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
137
],
"text": "learns",
"tokens": [
"learns"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
133
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
58,
59,
60,
61
],
"text": "neural keyphrase generation framework",
"tokens": [
"neural",
"keyphrase",
"generation",
"framework"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
138,
139
],
"text": "meaningful topics",
"tokens": [
"meaningful",
"topics"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
137
],
"text": "learns",
"tokens": [
"learns"
]
}
}
] |
[
"a",
"huge",
"volume",
"of",
"user",
"-",
"generated",
"content",
"is",
"daily",
"produced",
"on",
"social",
"media",
".",
"to",
"facilitate",
"automatic",
"language",
"understanding",
",",
"we",
"study",
"keyphrase",
"prediction",
",",
"distilling",
"salient",
"information",
"from",
"massive",
"posts",
".",
"while",
"most",
"existing",
"methods",
"extract",
"words",
"from",
"source",
"posts",
"to",
"form",
"keyphrases",
",",
"we",
"propose",
"a",
"sequence",
"-",
"to",
"-",
"sequence",
"(",
"seq2seq",
")",
"based",
"neural",
"keyphrase",
"generation",
"framework",
",",
"enabling",
"absent",
"keyphrases",
"to",
"be",
"created",
".",
"moreover",
",",
"our",
"model",
",",
"being",
"topic",
"-",
"aware",
",",
"allows",
"joint",
"modeling",
"of",
"corpus",
"-",
"level",
"latent",
"topic",
"representations",
",",
"which",
"helps",
"alleviate",
"data",
"sparsity",
"widely",
"exhibited",
"in",
"social",
"media",
"language",
".",
"experiments",
"on",
"three",
"datasets",
"collected",
"from",
"english",
"and",
"chinese",
"social",
"media",
"platforms",
"show",
"that",
"our",
"model",
"significantly",
"outperforms",
"both",
"extraction",
"and",
"generation",
"models",
"without",
"exploiting",
"latent",
"topics",
".",
"further",
"discussions",
"show",
"that",
"our",
"model",
"learns",
"meaningful",
"topics",
",",
"which",
"interprets",
"its",
"superiority",
"in",
"social",
"media",
"keyphrase",
"generation",
"."
] |
ACL
|
Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
|
Algorithmic approaches to interpreting machine learning models have proliferated in recent years. We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experimental factors. A model is simulatable when a person can predict its behavior on new inputs. Through two kinds of simulation tests involving text and tabular data, we evaluate five explanations methods: (1) LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a Composite approach that combines explanations from each method. Clear evidence of method effectiveness is found in very few cases: LIME improves simulatability in tabular classification, and our Prototype method is effective in counterfactual simulation tests. We also collect subjective ratings of explanations, but we do not find that ratings are predictive of how helpful explanations are. Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains. We show that (1) we need to be careful about the metrics we use to evaluate explanation methods, and (2) there is significant room for improvement in current methods.
|
ad87f47dec93949d2daef7453217258c
| 2,020
|
[
"algorithmic approaches to interpreting machine learning models have proliferated in recent years .",
"we carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability , simulatability , while avoiding important confounding experimental factors .",
"a model is simulatable when a person can predict its behavior on new inputs .",
"through two kinds of simulation tests involving text and tabular data , we evaluate five explanations methods : ( 1 ) lime , ( 2 ) anchor , ( 3 ) decision boundary , ( 4 ) a prototype model , and ( 5 ) a composite approach that combines explanations from each method .",
"clear evidence of method effectiveness is found in very few cases : lime improves simulatability in tabular classification , and our prototype method is effective in counterfactual simulation tests .",
"we also collect subjective ratings of explanations , but we do not find that ratings are predictive of how helpful explanations are .",
"our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains .",
"we show that ( 1 ) we need to be careful about the metrics we use to evaluate explanation methods , and ( 2 ) there is significant room for improvement in current methods ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
4,
5,
6
],
"text": "machine learning models",
"tokens": [
"machine",
"learning",
"models"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8
],
"text": "proliferated",
"tokens": [
"proliferated"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
13
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
16,
17,
18
],
"text": "human subject tests",
"tokens": [
"human",
"subject",
"tests"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
27
],
"text": "isolate",
"tokens": [
"isolate"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
44
],
"text": "avoiding",
"tokens": [
"avoiding"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
14,
15
],
"text": "carry out",
"tokens": [
"carry",
"out"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75
],
"text": "through two kinds of simulation tests involving text and tabular data",
"tokens": [
"through",
"two",
"kinds",
"of",
"simulation",
"tests",
"involving",
"text",
"and",
"tabular",
"data"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
77
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
86
],
"text": "lime",
"tokens": [
"lime"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
91
],
"text": "anchor",
"tokens": [
"anchor"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
96,
97
],
"text": "decision boundary",
"tokens": [
"decision",
"boundary"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
103,
104
],
"text": "prototype model",
"tokens": [
"prototype",
"model"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
111,
112
],
"text": "composite approach",
"tokens": [
"composite",
"approach"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
78
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
127,
128,
129,
130
],
"text": "in very few cases",
"tokens": [
"in",
"very",
"few",
"cases"
]
},
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
120,
121,
122,
123,
124
],
"text": "clear evidence of method effectiveness",
"tokens": [
"clear",
"evidence",
"of",
"method",
"effectiveness"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
126
],
"text": "found",
"tokens": [
"found"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
132
],
"text": "lime",
"tokens": [
"lime"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
134,
135,
136,
137
],
"text": "simulatability in tabular classification",
"tokens": [
"simulatability",
"in",
"tabular",
"classification"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
133
],
"text": "improves",
"tokens": [
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
141,
142
],
"text": "prototype method",
"tokens": [
"prototype",
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
146,
147,
148
],
"text": "counterfactual simulation tests",
"tokens": [
"counterfactual",
"simulation",
"tests"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
144
],
"text": "effective",
"tokens": [
"effective"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
150
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
153,
154,
155,
156
],
"text": "subjective ratings of explanations",
"tokens": [
"subjective",
"ratings",
"of",
"explanations"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
152
],
"text": "collect",
"tokens": [
"collect"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
184
],
"text": "explanations",
"tokens": [
"explanations"
]
},
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
178,
181
],
"text": "reliable estimates",
"tokens": [
"reliable",
"estimates"
]
},
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
180,
181
],
"text": "comprehensive estimates",
"tokens": [
"comprehensive",
"estimates"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
175
],
"text": "provide",
"tokens": [
"provide"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
197
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
224,
225
],
"text": "significant room",
"tokens": [
"significant",
"room"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
198
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
227,
228,
229,
230
],
"text": "improvement in current methods",
"tokens": [
"improvement",
"in",
"current",
"methods"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
224,
225
],
"text": "significant room",
"tokens": [
"significant",
"room"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
29,
30,
31,
32
],
"text": "effect of algorithmic explanations",
"tokens": [
"effect",
"of",
"algorithmic",
"explanations"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
33,
34,
35,
36,
37,
38,
39
],
"text": "on a key aspect of model interpretability",
"tokens": [
"on",
"a",
"key",
"aspect",
"of",
"model",
"interpretability"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
27
],
"text": "isolate",
"tokens": [
"isolate"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
45,
46,
47,
48
],
"text": "important confounding experimental factors",
"tokens": [
"important",
"confounding",
"experimental",
"factors"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
44
],
"text": "avoiding",
"tokens": [
"avoiding"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
185,
186
],
"text": "influence simulatability",
"tokens": [
"influence",
"simulatability"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
187,
188,
189,
190,
191,
192,
193,
194,
195
],
"text": "across a variety of explanation methods and data domains",
"tokens": [
"across",
"a",
"variety",
"of",
"explanation",
"methods",
"and",
"data",
"domains"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
184
],
"text": "explanations",
"tokens": [
"explanations"
]
}
}
] |
[
"algorithmic",
"approaches",
"to",
"interpreting",
"machine",
"learning",
"models",
"have",
"proliferated",
"in",
"recent",
"years",
".",
"we",
"carry",
"out",
"human",
"subject",
"tests",
"that",
"are",
"the",
"first",
"of",
"their",
"kind",
"to",
"isolate",
"the",
"effect",
"of",
"algorithmic",
"explanations",
"on",
"a",
"key",
"aspect",
"of",
"model",
"interpretability",
",",
"simulatability",
",",
"while",
"avoiding",
"important",
"confounding",
"experimental",
"factors",
".",
"a",
"model",
"is",
"simulatable",
"when",
"a",
"person",
"can",
"predict",
"its",
"behavior",
"on",
"new",
"inputs",
".",
"through",
"two",
"kinds",
"of",
"simulation",
"tests",
"involving",
"text",
"and",
"tabular",
"data",
",",
"we",
"evaluate",
"five",
"explanations",
"methods",
":",
"(",
"1",
")",
"lime",
",",
"(",
"2",
")",
"anchor",
",",
"(",
"3",
")",
"decision",
"boundary",
",",
"(",
"4",
")",
"a",
"prototype",
"model",
",",
"and",
"(",
"5",
")",
"a",
"composite",
"approach",
"that",
"combines",
"explanations",
"from",
"each",
"method",
".",
"clear",
"evidence",
"of",
"method",
"effectiveness",
"is",
"found",
"in",
"very",
"few",
"cases",
":",
"lime",
"improves",
"simulatability",
"in",
"tabular",
"classification",
",",
"and",
"our",
"prototype",
"method",
"is",
"effective",
"in",
"counterfactual",
"simulation",
"tests",
".",
"we",
"also",
"collect",
"subjective",
"ratings",
"of",
"explanations",
",",
"but",
"we",
"do",
"not",
"find",
"that",
"ratings",
"are",
"predictive",
"of",
"how",
"helpful",
"explanations",
"are",
".",
"our",
"results",
"provide",
"the",
"first",
"reliable",
"and",
"comprehensive",
"estimates",
"of",
"how",
"explanations",
"influence",
"simulatability",
"across",
"a",
"variety",
"of",
"explanation",
"methods",
"and",
"data",
"domains",
".",
"we",
"show",
"that",
"(",
"1",
")",
"we",
"need",
"to",
"be",
"careful",
"about",
"the",
"metrics",
"we",
"use",
"to",
"evaluate",
"explanation",
"methods",
",",
"and",
"(",
"2",
")",
"there",
"is",
"significant",
"room",
"for",
"improvement",
"in",
"current",
"methods",
"."
] |
ACL
|
OpenHands: Making Sign Language Recognition Accessible with Pose-based Pretrained Models across Languages
|
AI technologies for Natural Languages have made tremendous progress recently. However, commensurate progress has not been made on Sign Languages, in particular, in recognizing signs as individual words or as complete sentences. We introduce OpenHands, a library where we take four key ideas from the NLP community for low-resource languages and apply them to sign languages for word-level recognition. First, we propose using pose extracted through pretrained models as the standard modality of data in this work to reduce training time and enable efficient inference, and we release standardized pose datasets for different existing sign language datasets. Second, we train and release checkpoints of 4 pose-based isolated sign language recognition models across 6 languages (American, Argentinian, Chinese, Greek, Indian, and Turkish), providing baselines and ready checkpoints for deployment. Third, to address the lack of labelled data, we propose self-supervised pretraining on unlabelled data. We curate and release the largest pose-based pretraining dataset on Indian Sign Language (Indian-SL). Fourth, we compare different pretraining strategies and for the first time establish that pretraining is effective for sign language recognition by demonstrating (a) improved fine-tuning performance especially in low-resource settings, and (b) high crosslingual transfer from Indian-SL to few other sign languages. We open-source all models and datasets in OpenHands with a hope that it makes research in sign languages reproducible and more accessible.
|
a851cd507cecec30fe7af3c5e81e11f2
| 2,022
|
[
"ai technologies for natural languages have made tremendous progress recently .",
"however , commensurate progress has not been made on sign languages , in particular , in recognizing signs as individual words or as complete sentences .",
"we introduce openhands , a library where we take four key ideas from the nlp community for low - resource languages and apply them to sign languages for word - level recognition .",
"first , we propose using pose extracted through pretrained models as the standard modality of data in this work to reduce training time and enable efficient inference , and we release standardized pose datasets for different existing sign language datasets .",
"second , we train and release checkpoints of 4 pose - based isolated sign language recognition models across 6 languages ( american , argentinian , chinese , greek , indian , and turkish ) , providing baselines and ready checkpoints for deployment .",
"third , to address the lack of labelled data , we propose self - supervised pretraining on unlabelled data .",
"we curate and release the largest pose - based pretraining dataset on indian sign language ( indian - sl ) .",
"fourth , we compare different pretraining strategies and for the first time establish that pretraining is effective for sign language recognition by demonstrating ( a ) improved fine - tuning performance especially in low - resource settings , and ( b ) high crosslingual transfer from indian - sl to few other sign languages .",
"we open - source all models and datasets in openhands with a hope that it makes research in sign languages reproducible and more accessible ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4
],
"text": "natural languages",
"tokens": [
"natural",
"languages"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
],
"text": "made",
"tokens": [
"made"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
20,
21
],
"text": "sign languages",
"tokens": [
"sign",
"languages"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
13,
14
],
"text": "commensurate progress",
"tokens": [
"commensurate",
"progress"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
16,
17,
18
],
"text": "not been made",
"tokens": [
"not",
"been",
"made"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
37
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
39
],
"text": "openhands",
"tokens": [
"openhands"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
38
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
44
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
46,
47,
48
],
"text": "four key ideas",
"tokens": [
"four",
"key",
"ideas"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
49,
50,
51,
52
],
"text": "from the nlp community",
"tokens": [
"from",
"the",
"nlp",
"community"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
54,
55,
56,
57
],
"text": "low - resource languages",
"tokens": [
"low",
"-",
"resource",
"languages"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
45
],
"text": "take",
"tokens": [
"take"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
44
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
46,
47,
48
],
"text": "four key ideas",
"tokens": [
"four",
"key",
"ideas"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
62,
63,
64,
65,
66,
67,
68
],
"text": "sign languages for word - level recognition",
"tokens": [
"sign",
"languages",
"for",
"word",
"-",
"level",
"recognition"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
59
],
"text": "apply",
"tokens": [
"apply"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
82,
83,
84,
85
],
"text": "standard modality of data",
"tokens": [
"standard",
"modality",
"of",
"data"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
90
],
"text": "reduce",
"tokens": [
"reduce"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
94
],
"text": "enable",
"tokens": [
"enable"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
75,
76,
77,
78,
79
],
"text": "pose extracted through pretrained models",
"tokens": [
"pose",
"extracted",
"through",
"pretrained",
"models"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
74
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
91,
92
],
"text": "training time",
"tokens": [
"training",
"time"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
90
],
"text": "reduce",
"tokens": [
"reduce"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
95,
96
],
"text": "efficient inference",
"tokens": [
"efficient",
"inference"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
94
],
"text": "enable",
"tokens": [
"enable"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
99
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
101,
102,
103
],
"text": "standardized pose datasets",
"tokens": [
"standardized",
"pose",
"datasets"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
100
],
"text": "release",
"tokens": [
"release"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
113
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
117
],
"text": "checkpoints",
"tokens": [
"checkpoints"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
118,
119,
120,
121,
122,
123,
124,
125,
126,
127
],
"text": "of 4 pose - based isolated sign language recognition models",
"tokens": [
"of",
"4",
"pose",
"-",
"based",
"isolated",
"sign",
"language",
"recognition",
"models"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
128,
129,
130
],
"text": "across 6 languages",
"tokens": [
"across",
"6",
"languages"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
146
],
"text": "providing",
"tokens": [
"providing"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
114,
115,
116
],
"text": "train and release",
"tokens": [
"train",
"and",
"release"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
164
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
166,
167,
168,
169,
170,
171,
172
],
"text": "self - supervised pretraining on unlabelled data",
"tokens": [
"self",
"-",
"supervised",
"pretraining",
"on",
"unlabelled",
"data"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
157
],
"text": "address",
"tokens": [
"address"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
165
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "WEA",
"offsets": [
159,
160,
161,
162
],
"text": "lack of labelled data",
"tokens": [
"lack",
"of",
"labelled",
"data"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
157
],
"text": "address",
"tokens": [
"address"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
174
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"text": "largest pose - based pretraining dataset on indian sign language",
"tokens": [
"largest",
"pose",
"-",
"based",
"pretraining",
"dataset",
"on",
"indian",
"sign",
"language"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
175,
176,
177
],
"text": "curate and release",
"tokens": [
"curate",
"and",
"release"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
197
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
199,
200,
201
],
"text": "different pretraining strategies",
"tokens": [
"different",
"pretraining",
"strategies"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
198
],
"text": "compare",
"tokens": [
"compare"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
197
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
211
],
"text": "effective",
"tokens": [
"effective"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
207
],
"text": "establish",
"tokens": [
"establish"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
209
],
"text": "pretraining",
"tokens": [
"pretraining"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
213,
214,
215
],
"text": "sign language recognition",
"tokens": [
"sign",
"language",
"recognition"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
211
],
"text": "effective",
"tokens": [
"effective"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
209
],
"text": "pretraining",
"tokens": [
"pretraining"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
222,
223,
224,
225
],
"text": "fine - tuning performance",
"tokens": [
"fine",
"-",
"tuning",
"performance"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
221
],
"text": "improved",
"tokens": [
"improved"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
209
],
"text": "pretraining",
"tokens": [
"pretraining"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
237,
238,
239
],
"text": "high crosslingual transfer",
"tokens": [
"high",
"crosslingual",
"transfer"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
240,
186,
187,
188,
244,
245,
246,
247,
248
],
"text": "from indian - sl to few other sign languages",
"tokens": [
"from",
"indian",
"sign",
"language",
"to",
"few",
"other",
"sign",
"languages"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
217
],
"text": "demonstrating",
"tokens": [
"demonstrating"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
151,
152
],
"text": "for deployment",
"tokens": [
"for",
"deployment"
]
},
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
147
],
"text": "baselines",
"tokens": [
"baselines"
]
},
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
149,
150
],
"text": "ready checkpoints",
"tokens": [
"ready",
"checkpoints"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
146
],
"text": "providing",
"tokens": [
"providing"
]
}
}
] |
[
"ai",
"technologies",
"for",
"natural",
"languages",
"have",
"made",
"tremendous",
"progress",
"recently",
".",
"however",
",",
"commensurate",
"progress",
"has",
"not",
"been",
"made",
"on",
"sign",
"languages",
",",
"in",
"particular",
",",
"in",
"recognizing",
"signs",
"as",
"individual",
"words",
"or",
"as",
"complete",
"sentences",
".",
"we",
"introduce",
"openhands",
",",
"a",
"library",
"where",
"we",
"take",
"four",
"key",
"ideas",
"from",
"the",
"nlp",
"community",
"for",
"low",
"-",
"resource",
"languages",
"and",
"apply",
"them",
"to",
"sign",
"languages",
"for",
"word",
"-",
"level",
"recognition",
".",
"first",
",",
"we",
"propose",
"using",
"pose",
"extracted",
"through",
"pretrained",
"models",
"as",
"the",
"standard",
"modality",
"of",
"data",
"in",
"this",
"work",
"to",
"reduce",
"training",
"time",
"and",
"enable",
"efficient",
"inference",
",",
"and",
"we",
"release",
"standardized",
"pose",
"datasets",
"for",
"different",
"existing",
"sign",
"language",
"datasets",
".",
"second",
",",
"we",
"train",
"and",
"release",
"checkpoints",
"of",
"4",
"pose",
"-",
"based",
"isolated",
"sign",
"language",
"recognition",
"models",
"across",
"6",
"languages",
"(",
"american",
",",
"argentinian",
",",
"chinese",
",",
"greek",
",",
"indian",
",",
"and",
"turkish",
")",
",",
"providing",
"baselines",
"and",
"ready",
"checkpoints",
"for",
"deployment",
".",
"third",
",",
"to",
"address",
"the",
"lack",
"of",
"labelled",
"data",
",",
"we",
"propose",
"self",
"-",
"supervised",
"pretraining",
"on",
"unlabelled",
"data",
".",
"we",
"curate",
"and",
"release",
"the",
"largest",
"pose",
"-",
"based",
"pretraining",
"dataset",
"on",
"indian",
"sign",
"language",
"(",
"indian",
"-",
"sl",
")",
".",
"fourth",
",",
"we",
"compare",
"different",
"pretraining",
"strategies",
"and",
"for",
"the",
"first",
"time",
"establish",
"that",
"pretraining",
"is",
"effective",
"for",
"sign",
"language",
"recognition",
"by",
"demonstrating",
"(",
"a",
")",
"improved",
"fine",
"-",
"tuning",
"performance",
"especially",
"in",
"low",
"-",
"resource",
"settings",
",",
"and",
"(",
"b",
")",
"high",
"crosslingual",
"transfer",
"from",
"indian",
"-",
"sl",
"to",
"few",
"other",
"sign",
"languages",
".",
"we",
"open",
"-",
"source",
"all",
"models",
"and",
"datasets",
"in",
"openhands",
"with",
"a",
"hope",
"that",
"it",
"makes",
"research",
"in",
"sign",
"languages",
"reproducible",
"and",
"more",
"accessible",
"."
] |
ACL
|
CraftAssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant
|
We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, and analyze their successes and failures.
|
47b0133ca77dec062449ad0be546d66b
| 2,020
|
[
"we propose a semantic parsing dataset focused on instruction - driven communication with an agent in the game minecraft .",
"the dataset consists of 7k human utterances and their corresponding parses .",
"given proper world state , the parses can be interpreted and executed in game .",
"we report the performance of baseline models , and analyze their successes and failures ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
3,
4,
5
],
"text": "semantic parsing dataset",
"tokens": [
"semantic",
"parsing",
"dataset"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
1
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
24,
25,
26
],
"text": "7k human utterances",
"tokens": [
"7k",
"human",
"utterances"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
29,
30
],
"text": "corresponding parses",
"tokens": [
"corresponding",
"parses"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
22
],
"text": "consists",
"tokens": [
"consists"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
47
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
50,
51,
52,
53
],
"text": "performance of baseline models",
"tokens": [
"performance",
"of",
"baseline",
"models"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
48
],
"text": "report",
"tokens": [
"report"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
47
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
58
],
"text": "successes",
"tokens": [
"successes"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
60
],
"text": "failures",
"tokens": [
"failures"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
56
],
"text": "analyze",
"tokens": [
"analyze"
]
}
}
] |
[
"we",
"propose",
"a",
"semantic",
"parsing",
"dataset",
"focused",
"on",
"instruction",
"-",
"driven",
"communication",
"with",
"an",
"agent",
"in",
"the",
"game",
"minecraft",
".",
"the",
"dataset",
"consists",
"of",
"7k",
"human",
"utterances",
"and",
"their",
"corresponding",
"parses",
".",
"given",
"proper",
"world",
"state",
",",
"the",
"parses",
"can",
"be",
"interpreted",
"and",
"executed",
"in",
"game",
".",
"we",
"report",
"the",
"performance",
"of",
"baseline",
"models",
",",
"and",
"analyze",
"their",
"successes",
"and",
"failures",
"."
] |
ACL
|
Variational Graph Autoencoding as Cheap Supervision for AMR Coreference Resolution
|
Coreference resolution over semantic graphs like AMRs aims to group the graph nodes that represent the same entity. This is a crucial step for making document-level formal semantic representations. With annotated data on AMR coreference resolution, deep learning approaches have recently shown great potential for this task, yet they are usually data hunger and annotations are costly. We propose a general pretraining method using variational graph autoencoder (VGAE) for AMR coreference resolution, which can leverage any general AMR corpus and even automatically parsed AMR data. Experiments on benchmarks show that the pretraining approach achieves performance gains of up to 6% absolute F1 points. Moreover, our model significantly improves on the previous state-of-the-art model by up to 11% F1.
|
71322ca60463d3a522b2b71ef00d18b3
| 2,022
|
[
"coreference resolution over semantic graphs like amrs aims to group the graph nodes that represent the same entity .",
"this is a crucial step for making document - level formal semantic representations .",
"with annotated data on amr coreference resolution , deep learning approaches have recently shown great potential for this task , yet they are usually data hunger and annotations are costly .",
"we propose a general pretraining method using variational graph autoencoder ( vgae ) for amr coreference resolution , which can leverage any general amr corpus and even automatically parsed amr data .",
"experiments on benchmarks show that the pretraining approach achieves performance gains of up to 6 % absolute f1 points .",
"moreover , our model significantly improves on the previous state - of - the - art model by up to 11 % f1 ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
26,
27,
28,
29,
30,
31
],
"text": "document - level formal semantic representations",
"tokens": [
"document",
"-",
"level",
"formal",
"semantic",
"representations"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
25
],
"text": "making",
"tokens": [
"making"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
41,
42,
43
],
"text": "deep learning approaches",
"tokens": [
"deep",
"learning",
"approaches"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
57,
58
],
"text": "data hunger",
"tokens": [
"data",
"hunger"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
62
],
"text": "costly",
"tokens": [
"costly"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
62
],
"text": "costly",
"tokens": [
"costly"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
64
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
67,
68,
69
],
"text": "general pretraining method",
"tokens": [
"general",
"pretraining",
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
78,
79,
80
],
"text": "amr coreference resolution",
"tokens": [
"amr",
"coreference",
"resolution"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
65
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
78,
79,
80
],
"text": "amr coreference resolution",
"tokens": [
"amr",
"coreference",
"resolution"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
71,
72,
73
],
"text": "variational graph autoencoder",
"tokens": [
"variational",
"graph",
"autoencoder"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "DST",
"offsets": [
92,
93,
94
],
"text": "parsed amr data",
"tokens": [
"parsed",
"amr",
"data"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "DST",
"offsets": [
86,
87,
88
],
"text": "general amr corpus",
"tokens": [
"general",
"amr",
"corpus"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
84
],
"text": "leverage",
"tokens": [
"leverage"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
104
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
99
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
67,
68,
69
],
"text": "general pretraining method",
"tokens": [
"general",
"pretraining",
"method"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
108,
109,
110,
111
],
"text": "up to 6 %",
"tokens": [
"up",
"to",
"6",
"%"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
105,
106
],
"text": "performance gains",
"tokens": [
"performance",
"gains"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
112,
113,
114
],
"text": "absolute f1 points",
"tokens": [
"absolute",
"f1",
"points"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
104
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
67,
68,
69
],
"text": "general pretraining method",
"tokens": [
"general",
"pretraining",
"method"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"text": "previous state - of - the - art model",
"tokens": [
"previous",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"model"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
134,
135,
136,
137
],
"text": "up to 11 %",
"tokens": [
"up",
"to",
"11",
"%"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
138
],
"text": "f1",
"tokens": [
"f1"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
120,
121
],
"text": "significantly improves",
"tokens": [
"significantly",
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
0,
1
],
"text": "coreference resolution",
"tokens": [
"coreference",
"resolution"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
11,
12
],
"text": "graph nodes",
"tokens": [
"graph",
"nodes"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
9
],
"text": "group",
"tokens": [
"group"
]
}
}
] |
[
"coreference",
"resolution",
"over",
"semantic",
"graphs",
"like",
"amrs",
"aims",
"to",
"group",
"the",
"graph",
"nodes",
"that",
"represent",
"the",
"same",
"entity",
".",
"this",
"is",
"a",
"crucial",
"step",
"for",
"making",
"document",
"-",
"level",
"formal",
"semantic",
"representations",
".",
"with",
"annotated",
"data",
"on",
"amr",
"coreference",
"resolution",
",",
"deep",
"learning",
"approaches",
"have",
"recently",
"shown",
"great",
"potential",
"for",
"this",
"task",
",",
"yet",
"they",
"are",
"usually",
"data",
"hunger",
"and",
"annotations",
"are",
"costly",
".",
"we",
"propose",
"a",
"general",
"pretraining",
"method",
"using",
"variational",
"graph",
"autoencoder",
"(",
"vgae",
")",
"for",
"amr",
"coreference",
"resolution",
",",
"which",
"can",
"leverage",
"any",
"general",
"amr",
"corpus",
"and",
"even",
"automatically",
"parsed",
"amr",
"data",
".",
"experiments",
"on",
"benchmarks",
"show",
"that",
"the",
"pretraining",
"approach",
"achieves",
"performance",
"gains",
"of",
"up",
"to",
"6",
"%",
"absolute",
"f1",
"points",
".",
"moreover",
",",
"our",
"model",
"significantly",
"improves",
"on",
"the",
"previous",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"model",
"by",
"up",
"to",
"11",
"%",
"f1",
"."
] |
ACL
|
QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining
|
Slot-filling is an essential component for building task-oriented dialog systems. In this work, we focus on the zero-shot slot-filling problem, where the model needs to predict slots and their values, given utterances from new domains without training on the target domain. Prior methods directly encode slot descriptions to generalize to unseen slot types. However, raw slot descriptions are often ambiguous and do not encode enough semantic information, limiting the models’ zero-shot capability. To address this problem, we introduce QA-driven slot filling (QASF), which extracts slot-filler spans from utterances with a span-based QA model. We use a linguistically motivated questioning strategy to turn descriptions into questions, allowing the model to generalize to unseen slot types. Moreover, our QASF model can benefit from weak supervision signals from QA pairs synthetically generated from unlabeled conversations. Our full system substantially outperforms baselines by over 5% on the SNIPS benchmark.
|
d234a4421483a33cbc549d1fd60397d6
| 2,021
|
[
"slot - filling is an essential component for building task - oriented dialog systems .",
"in this work , we focus on the zero - shot slot - filling problem , where the model needs to predict slots and their values , given utterances from new domains without training on the target domain .",
"prior methods directly encode slot descriptions to generalize to unseen slot types .",
"however , raw slot descriptions are often ambiguous and do not encode enough semantic information , limiting the models ’ zero - shot capability .",
"to address this problem , we introduce qa - driven slot filling ( qasf ) , which extracts slot - filler spans from utterances with a span - based qa model .",
"we use a linguistically motivated questioning strategy to turn descriptions into questions , allowing the model to generalize to unseen slot types .",
"moreover , our qasf model can benefit from weak supervision signals from qa pairs synthetically generated from unlabeled conversations .",
"our full system substantially outperforms baselines by over 5 % on the snips benchmark ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
23,
24,
25,
26,
27,
28,
29
],
"text": "zero - shot slot - filling problem",
"tokens": [
"zero",
"-",
"shot",
"slot",
"-",
"filling",
"problem"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
20
],
"text": "focus",
"tokens": [
"focus"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
33
],
"text": "model",
"tokens": [
"model"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
37
],
"text": "slots",
"tokens": [
"slots"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
37,
40
],
"text": "their values",
"tokens": [
"slots",
"values"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
36
],
"text": "predict",
"tokens": [
"predict"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
58,
59
],
"text": "slot descriptions",
"tokens": [
"slot",
"descriptions"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
61
],
"text": "generalize",
"tokens": [
"generalize"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
57
],
"text": "encode",
"tokens": [
"encode"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
63,
64,
65
],
"text": "unseen slot types",
"tokens": [
"unseen",
"slot",
"types"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
61
],
"text": "generalize",
"tokens": [
"generalize"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
69,
70,
71
],
"text": "raw slot descriptions",
"tokens": [
"raw",
"slot",
"descriptions"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
73
],
"text": "often",
"tokens": [
"often"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
74
],
"text": "ambiguous",
"tokens": [
"ambiguous"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
97
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
99,
100,
101,
102,
103
],
"text": "qa - driven slot filling",
"tokens": [
"qa",
"-",
"driven",
"slot",
"filling"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
98
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
110,
111,
112,
113
],
"text": "slot - filler spans",
"tokens": [
"slot",
"-",
"filler",
"spans"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
115
],
"text": "utterances",
"tokens": [
"utterances"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
116,
117,
118,
119,
120,
121,
122
],
"text": "with a span - based qa model",
"tokens": [
"with",
"a",
"span",
"-",
"based",
"qa",
"model"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
109
],
"text": "extracts",
"tokens": [
"extracts"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
135
],
"text": "questions",
"tokens": [
"questions"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
141,
142
],
"text": "generalize to",
"tokens": [
"generalize",
"to"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
133
],
"text": "descriptions",
"tokens": [
"descriptions"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
132
],
"text": "turn",
"tokens": [
"turn"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
143,
144,
145
],
"text": "unseen slot types",
"tokens": [
"unseen",
"slot",
"types"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
141,
142
],
"text": "generalize to",
"tokens": [
"generalize",
"to"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
99,
100,
101,
102,
103,
151
],
"text": "qasf model",
"tokens": [
"qa",
"-",
"driven",
"slot",
"filling",
"model"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
155,
156,
157
],
"text": "weak supervision signals",
"tokens": [
"weak",
"supervision",
"signals"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
153
],
"text": "benefit",
"tokens": [
"benefit"
]
}
},
{
"arguments": [
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
174,
175,
176
],
"text": "over 5 %",
"tokens": [
"over",
"5",
"%"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
177,
178,
179,
180
],
"text": "on the snips benchmark",
"tokens": [
"on",
"the",
"snips",
"benchmark"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
172
],
"text": "baselines",
"tokens": [
"baselines"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
168,
169
],
"text": "full system",
"tokens": [
"full",
"system"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
171
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
43
],
"text": "utterances",
"tokens": [
"utterances"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
45,
46
],
"text": "new domains",
"tokens": [
"new",
"domains"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
47,
48,
49,
50,
51,
52
],
"text": "without training on the target domain",
"tokens": [
"without",
"training",
"on",
"the",
"target",
"domain"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
42
],
"text": "given",
"tokens": [
"given"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
69,
70,
71
],
"text": "raw slot descriptions",
"tokens": [
"raw",
"slot",
"descriptions"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
79,
80,
81
],
"text": "enough semantic information",
"tokens": [
"enough",
"semantic",
"information"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
77,
78
],
"text": "not encode",
"tokens": [
"not",
"encode"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
69,
70,
71
],
"text": "raw slot descriptions",
"tokens": [
"raw",
"slot",
"descriptions"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
83
],
"text": "limiting",
"tokens": [
"limiting"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
83
],
"text": "limiting",
"tokens": [
"limiting"
]
}
}
] |
[
"slot",
"-",
"filling",
"is",
"an",
"essential",
"component",
"for",
"building",
"task",
"-",
"oriented",
"dialog",
"systems",
".",
"in",
"this",
"work",
",",
"we",
"focus",
"on",
"the",
"zero",
"-",
"shot",
"slot",
"-",
"filling",
"problem",
",",
"where",
"the",
"model",
"needs",
"to",
"predict",
"slots",
"and",
"their",
"values",
",",
"given",
"utterances",
"from",
"new",
"domains",
"without",
"training",
"on",
"the",
"target",
"domain",
".",
"prior",
"methods",
"directly",
"encode",
"slot",
"descriptions",
"to",
"generalize",
"to",
"unseen",
"slot",
"types",
".",
"however",
",",
"raw",
"slot",
"descriptions",
"are",
"often",
"ambiguous",
"and",
"do",
"not",
"encode",
"enough",
"semantic",
"information",
",",
"limiting",
"the",
"models",
"’",
"zero",
"-",
"shot",
"capability",
".",
"to",
"address",
"this",
"problem",
",",
"we",
"introduce",
"qa",
"-",
"driven",
"slot",
"filling",
"(",
"qasf",
")",
",",
"which",
"extracts",
"slot",
"-",
"filler",
"spans",
"from",
"utterances",
"with",
"a",
"span",
"-",
"based",
"qa",
"model",
".",
"we",
"use",
"a",
"linguistically",
"motivated",
"questioning",
"strategy",
"to",
"turn",
"descriptions",
"into",
"questions",
",",
"allowing",
"the",
"model",
"to",
"generalize",
"to",
"unseen",
"slot",
"types",
".",
"moreover",
",",
"our",
"qasf",
"model",
"can",
"benefit",
"from",
"weak",
"supervision",
"signals",
"from",
"qa",
"pairs",
"synthetically",
"generated",
"from",
"unlabeled",
"conversations",
".",
"our",
"full",
"system",
"substantially",
"outperforms",
"baselines",
"by",
"over",
"5",
"%",
"on",
"the",
"snips",
"benchmark",
"."
] |
ACL
|
Document-level Event Extraction via Parallel Prediction Networks
|
Document-level event extraction (DEE) is indispensable when events are described throughout a document. We argue that sentence-level extractors are ill-suited to the DEE task where event arguments always scatter across sentences and multiple events may co-exist in a document. It is a challenging task because it requires a holistic understanding of the document and an aggregated ability to assemble arguments across multiple sentences. In this paper, we propose an end-to-end model, which can extract structured events from a document in a parallel manner. Specifically, we first introduce a document-level encoder to obtain the document-aware representations. Then, a multi-granularity non-autoregressive decoder is used to generate events in parallel. Finally, to train the entire model, a matching loss function is proposed, which can bootstrap a global optimization. The empirical results on the widely used DEE dataset show that our approach significantly outperforms current state-of-the-art methods in the challenging DEE task. Code will be available at https://github.com/HangYang-NLP/DE-PPN.
|
57ab98c6c06a45ffb3219eb340a469f4
| 2,021
|
[
"document - level event extraction ( dee ) is indispensable when events are described throughout a document .",
"we argue that sentence - level extractors are ill - suited to the dee task where event arguments always scatter across sentences and multiple events may co - exist in a document .",
"it is a challenging task because it requires a holistic understanding of the document and an aggregated ability to assemble arguments across multiple sentences .",
"in this paper , we propose an end - to - end model , which can extract structured events from a document in a parallel manner .",
"specifically , we first introduce a document - level encoder to obtain the document - aware representations .",
"then , a multi - granularity non - autoregressive decoder is used to generate events in parallel .",
"finally , to train the entire model , a matching loss function is proposed , which can bootstrap a global optimization .",
"the empirical results on the widely used dee dataset show that our approach significantly outperforms current state - of - the - art methods in the challenging dee task .",
"code will be available at https : / / github . com / hangyang - nlp / de - ppn ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4
],
"text": "document - level event extraction",
"tokens": [
"document",
"-",
"level",
"event",
"extraction"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
9
],
"text": "indispensable",
"tokens": [
"indispensable"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
80
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
83,
84,
85,
86,
87,
88
],
"text": "end - to - end model",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"model"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
81
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
93,
94
],
"text": "structured events",
"tokens": [
"structured",
"events"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
98,
99,
100,
101
],
"text": "in a parallel manner",
"tokens": [
"in",
"a",
"parallel",
"manner"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
97
],
"text": "document",
"tokens": [
"document"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
92
],
"text": "extract",
"tokens": [
"extract"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
105
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
109,
110,
111,
112
],
"text": "document - level encoder",
"tokens": [
"document",
"-",
"level",
"encoder"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
114
],
"text": "obtain",
"tokens": [
"obtain"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
107
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
136,
137
],
"text": "in parallel",
"tokens": [
"in",
"parallel"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
134
],
"text": "generate",
"tokens": [
"generate"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
124,
125,
126,
127,
128,
129,
130
],
"text": "multi - granularity non - autoregressive decoder",
"tokens": [
"multi",
"-",
"granularity",
"non",
"-",
"autoregressive",
"decoder"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
132
],
"text": "used",
"tokens": [
"used"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
142
],
"text": "train",
"tokens": [
"train"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
148,
149,
150
],
"text": "matching loss function",
"tokens": [
"matching",
"loss",
"function"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
152
],
"text": "proposed",
"tokens": [
"proposed"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
83,
84,
85,
86,
87,
88
],
"text": "end - to - end model",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"model"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
142
],
"text": "train",
"tokens": [
"train"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
148,
149,
150
],
"text": "matching loss function",
"tokens": [
"matching",
"loss",
"function"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
158,
159
],
"text": "global optimization",
"tokens": [
"global",
"optimization"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
156
],
"text": "bootstrap",
"tokens": [
"bootstrap"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
168,
169
],
"text": "dee dataset",
"tokens": [
"dee",
"dataset"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
83,
84,
85,
86,
87,
88
],
"text": "end - to - end model",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"model"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
175
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
177,
178,
179,
180,
181,
182,
183,
184
],
"text": "state - of - the - art methods",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"methods"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
175
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
191
],
"text": "code",
"tokens": [
"code"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210
],
"text": "at https : / / github . com / hangyang - nlp / de - ppn",
"tokens": [
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"hangyang",
"-",
"nlp",
"/",
"de",
"-",
"ppn"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
194
],
"text": "available",
"tokens": [
"available"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
116,
117,
118,
119
],
"text": "document - aware representations",
"tokens": [
"document",
"-",
"aware",
"representations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
114
],
"text": "obtain",
"tokens": [
"obtain"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
135
],
"text": "events",
"tokens": [
"events"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
134
],
"text": "generate",
"tokens": [
"generate"
]
}
}
] |
[
"document",
"-",
"level",
"event",
"extraction",
"(",
"dee",
")",
"is",
"indispensable",
"when",
"events",
"are",
"described",
"throughout",
"a",
"document",
".",
"we",
"argue",
"that",
"sentence",
"-",
"level",
"extractors",
"are",
"ill",
"-",
"suited",
"to",
"the",
"dee",
"task",
"where",
"event",
"arguments",
"always",
"scatter",
"across",
"sentences",
"and",
"multiple",
"events",
"may",
"co",
"-",
"exist",
"in",
"a",
"document",
".",
"it",
"is",
"a",
"challenging",
"task",
"because",
"it",
"requires",
"a",
"holistic",
"understanding",
"of",
"the",
"document",
"and",
"an",
"aggregated",
"ability",
"to",
"assemble",
"arguments",
"across",
"multiple",
"sentences",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"an",
"end",
"-",
"to",
"-",
"end",
"model",
",",
"which",
"can",
"extract",
"structured",
"events",
"from",
"a",
"document",
"in",
"a",
"parallel",
"manner",
".",
"specifically",
",",
"we",
"first",
"introduce",
"a",
"document",
"-",
"level",
"encoder",
"to",
"obtain",
"the",
"document",
"-",
"aware",
"representations",
".",
"then",
",",
"a",
"multi",
"-",
"granularity",
"non",
"-",
"autoregressive",
"decoder",
"is",
"used",
"to",
"generate",
"events",
"in",
"parallel",
".",
"finally",
",",
"to",
"train",
"the",
"entire",
"model",
",",
"a",
"matching",
"loss",
"function",
"is",
"proposed",
",",
"which",
"can",
"bootstrap",
"a",
"global",
"optimization",
".",
"the",
"empirical",
"results",
"on",
"the",
"widely",
"used",
"dee",
"dataset",
"show",
"that",
"our",
"approach",
"significantly",
"outperforms",
"current",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"methods",
"in",
"the",
"challenging",
"dee",
"task",
".",
"code",
"will",
"be",
"available",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"hangyang",
"-",
"nlp",
"/",
"de",
"-",
"ppn",
"."
] |
ACL
|
Probing Linguistic Systematicity
|
Recently, there has been much interest in the question of whether deep natural language understanding (NLU) models exhibit systematicity, generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear. There is accumulating evidence that neural models do not learn systematically. We examine the notion of systematicity from a linguistic perspective, defining a set of probing tasks and a set of metrics to measure systematic behaviour. We also identify ways in which network architectures can generalize non-systematically, and discuss why such forms of generalization may be unsatisfying. As a case study, we perform a series of experiments in the setting of natural language inference (NLI). We provide evidence that current state-of-the-art NLU systems do not generalize systematically, despite overall high performance.
|
ae9ac4c1d62ff768a8db745e76b05f67
| 2,020
|
[
"recently , there has been much interest in the question of whether deep natural language understanding ( nlu ) models exhibit systematicity , generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear .",
"there is accumulating evidence that neural models do not learn systematically .",
"we examine the notion of systematicity from a linguistic perspective , defining a set of probing tasks and a set of metrics to measure systematic behaviour .",
"we also identify ways in which network architectures can generalize non - systematically , and discuss why such forms of generalization may be unsatisfying .",
"as a case study , we perform a series of experiments in the setting of natural language inference ( nli ) .",
"we provide evidence that current state - of - the - art nlu systems do not generalize systematically , despite overall high performance ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
12,
13,
14,
15,
16,
17,
18,
19
],
"text": "deep natural language understanding ( nlu ) models",
"tokens": [
"deep",
"natural",
"language",
"understanding",
"(",
"nlu",
")",
"models"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
20
],
"text": "exhibit",
"tokens": [
"exhibit"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
55
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
68,
69,
70,
71
],
"text": "set of probing tasks",
"tokens": [
"set",
"of",
"probing",
"tasks"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
74,
75,
76
],
"text": "set of metrics",
"tokens": [
"set",
"of",
"metrics"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
78
],
"text": "measure",
"tokens": [
"measure"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
66
],
"text": "defining",
"tokens": [
"defining"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
79,
80
],
"text": "systematic behaviour",
"tokens": [
"systematic",
"behaviour"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
78
],
"text": "measure",
"tokens": [
"measure"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
55
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
58,
59,
60
],
"text": "notion of systematicity",
"tokens": [
"notion",
"of",
"systematicity"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
61,
62,
63,
64
],
"text": "from a linguistic perspective",
"tokens": [
"from",
"a",
"linguistic",
"perspective"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
56
],
"text": "examine",
"tokens": [
"examine"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
112
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
115,
116,
117
],
"text": "series of experiments",
"tokens": [
"series",
"of",
"experiments"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
118,
119,
120,
121,
122,
123,
124
],
"text": "in the setting of natural language inference",
"tokens": [
"in",
"the",
"setting",
"of",
"natural",
"language",
"inference"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
113
],
"text": "perform",
"tokens": [
"perform"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
82
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
85,
86,
87,
88,
89,
90,
91,
92,
93,
94
],
"text": "ways in which network architectures can generalize non - systematically",
"tokens": [
"ways",
"in",
"which",
"network",
"architectures",
"can",
"generalize",
"non",
"-",
"systematically"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
84
],
"text": "identify",
"tokens": [
"identify"
]
}
}
] |
[
"recently",
",",
"there",
"has",
"been",
"much",
"interest",
"in",
"the",
"question",
"of",
"whether",
"deep",
"natural",
"language",
"understanding",
"(",
"nlu",
")",
"models",
"exhibit",
"systematicity",
",",
"generalizing",
"such",
"that",
"units",
"like",
"words",
"make",
"consistent",
"contributions",
"to",
"the",
"meaning",
"of",
"the",
"sentences",
"in",
"which",
"they",
"appear",
".",
"there",
"is",
"accumulating",
"evidence",
"that",
"neural",
"models",
"do",
"not",
"learn",
"systematically",
".",
"we",
"examine",
"the",
"notion",
"of",
"systematicity",
"from",
"a",
"linguistic",
"perspective",
",",
"defining",
"a",
"set",
"of",
"probing",
"tasks",
"and",
"a",
"set",
"of",
"metrics",
"to",
"measure",
"systematic",
"behaviour",
".",
"we",
"also",
"identify",
"ways",
"in",
"which",
"network",
"architectures",
"can",
"generalize",
"non",
"-",
"systematically",
",",
"and",
"discuss",
"why",
"such",
"forms",
"of",
"generalization",
"may",
"be",
"unsatisfying",
".",
"as",
"a",
"case",
"study",
",",
"we",
"perform",
"a",
"series",
"of",
"experiments",
"in",
"the",
"setting",
"of",
"natural",
"language",
"inference",
"(",
"nli",
")",
".",
"we",
"provide",
"evidence",
"that",
"current",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"nlu",
"systems",
"do",
"not",
"generalize",
"systematically",
",",
"despite",
"overall",
"high",
"performance",
"."
] |
ACL
|
The Cascade Transformer: an Application for Efficient Answer Sentence Selection
|
Large transformer-based language models have been shown to be very effective in many classification tasks. However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates. While previous works have investigated approaches to reduce model size, relatively little attention has been paid to techniques to improve batch throughput during inference. In this paper, we introduce the Cascade Transformer, a simple yet effective technique to adapt transformer-based models into a cascade of rankers. Each ranker is used to prune a subset of candidates in a batch, thus dramatically increasing throughput at inference time. Partial encodings from the transformer model are shared among rerankers, providing further speed-up. When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy, as measured on two English Question Answering datasets.
|
b514da85a490f843ba2aec68d5b5d879
| 2,020
|
[
"large transformer - based language models have been shown to be very effective in many classification tasks .",
"however , their computational complexity prevents their use in applications requiring the classification of a large set of candidates .",
"while previous works have investigated approaches to reduce model size , relatively little attention has been paid to techniques to improve batch throughput during inference .",
"in this paper , we introduce the cascade transformer , a simple yet effective technique to adapt transformer - based models into a cascade of rankers .",
"each ranker is used to prune a subset of candidates in a batch , thus dramatically increasing throughput at inference time .",
"partial encodings from the transformer model are shared among rerankers , providing further speed - up .",
"when compared to a state - of - the - art transformer model , our approach reduces computation by 37 % with almost no impact on accuracy , as measured on two english question answering datasets ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "large transformer - based language models",
"tokens": [
"large",
"transformer",
"-",
"based",
"language",
"models"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8
],
"text": "shown",
"tokens": [
"shown"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
21,
22
],
"text": "computational complexity",
"tokens": [
"computational",
"complexity"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "large transformer - based language models",
"tokens": [
"large",
"transformer",
"-",
"based",
"language",
"models"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
23
],
"text": "prevents",
"tokens": [
"prevents"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
71,
72
],
"text": "cascade transformer",
"tokens": [
"cascade",
"transformer"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
80
],
"text": "adapt",
"tokens": [
"adapt"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
87,
88,
89
],
"text": "cascade of rankers",
"tokens": [
"cascade",
"of",
"rankers"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
69
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
81,
82,
83,
84
],
"text": "transformer - based models",
"tokens": [
"transformer",
"-",
"based",
"models"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
80
],
"text": "adapt",
"tokens": [
"adapt"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
98,
99,
100
],
"text": "subset of candidates",
"tokens": [
"subset",
"of",
"candidates"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
101,
102,
103
],
"text": "in a batch",
"tokens": [
"in",
"a",
"batch"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
107
],
"text": "increasing",
"tokens": [
"increasing"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
96
],
"text": "prune",
"tokens": [
"prune"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
108
],
"text": "throughput",
"tokens": [
"throughput"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
109,
110,
111
],
"text": "at inference time",
"tokens": [
"at",
"inference",
"time"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
107
],
"text": "increasing",
"tokens": [
"increasing"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
113,
114
],
"text": "partial encodings",
"tokens": [
"partial",
"encodings"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
117,
118
],
"text": "transformer model",
"tokens": [
"transformer",
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
124
],
"text": "providing",
"tokens": [
"providing"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
121,
122
],
"text": "among rerankers",
"tokens": [
"among",
"rerankers"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
120
],
"text": "shared",
"tokens": [
"shared"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
125,
126,
127,
128
],
"text": "further speed - up",
"tokens": [
"further",
"speed",
"-",
"up"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
124
],
"text": "providing",
"tokens": [
"providing"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
134,
135,
136,
137,
138,
139,
140,
141,
142
],
"text": "state - of - the - art transformer model",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"transformer",
"model"
]
},
{
"argument_type": "Arg1",
"nugget_type": "MOD",
"offsets": [
71,
72
],
"text": "cascade transformer",
"tokens": [
"cascade",
"transformer"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
147
],
"text": "computation",
"tokens": [
"computation"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
149,
150
],
"text": "37 %",
"tokens": [
"37",
"%"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
161,
162,
163,
164,
165
],
"text": "two english question answering datasets",
"tokens": [
"two",
"english",
"question",
"answering",
"datasets"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
151,
152,
153,
154,
155,
156
],
"text": "with almost no impact on accuracy",
"tokens": [
"with",
"almost",
"no",
"impact",
"on",
"accuracy"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
146
],
"text": "reduces",
"tokens": [
"reduces"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
81,
82,
83,
84
],
"text": "transformer - based models",
"tokens": [
"transformer",
"-",
"based",
"models"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
80
],
"text": "adapt",
"tokens": [
"adapt"
]
}
}
] |
[
"large",
"transformer",
"-",
"based",
"language",
"models",
"have",
"been",
"shown",
"to",
"be",
"very",
"effective",
"in",
"many",
"classification",
"tasks",
".",
"however",
",",
"their",
"computational",
"complexity",
"prevents",
"their",
"use",
"in",
"applications",
"requiring",
"the",
"classification",
"of",
"a",
"large",
"set",
"of",
"candidates",
".",
"while",
"previous",
"works",
"have",
"investigated",
"approaches",
"to",
"reduce",
"model",
"size",
",",
"relatively",
"little",
"attention",
"has",
"been",
"paid",
"to",
"techniques",
"to",
"improve",
"batch",
"throughput",
"during",
"inference",
".",
"in",
"this",
"paper",
",",
"we",
"introduce",
"the",
"cascade",
"transformer",
",",
"a",
"simple",
"yet",
"effective",
"technique",
"to",
"adapt",
"transformer",
"-",
"based",
"models",
"into",
"a",
"cascade",
"of",
"rankers",
".",
"each",
"ranker",
"is",
"used",
"to",
"prune",
"a",
"subset",
"of",
"candidates",
"in",
"a",
"batch",
",",
"thus",
"dramatically",
"increasing",
"throughput",
"at",
"inference",
"time",
".",
"partial",
"encodings",
"from",
"the",
"transformer",
"model",
"are",
"shared",
"among",
"rerankers",
",",
"providing",
"further",
"speed",
"-",
"up",
".",
"when",
"compared",
"to",
"a",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"transformer",
"model",
",",
"our",
"approach",
"reduces",
"computation",
"by",
"37",
"%",
"with",
"almost",
"no",
"impact",
"on",
"accuracy",
",",
"as",
"measured",
"on",
"two",
"english",
"question",
"answering",
"datasets",
"."
] |
ACL
|
Conditional Augmentation for Aspect Term Extraction via Masked Sequence-to-Sequence Generation
|
Aspect term extraction aims to extract aspect terms from review texts as opinion targets for sentiment analysis. One of the big challenges with this task is the lack of sufficient annotated data. While data augmentation is potentially an effective technique to address the above issue, it is uncontrollable as it may change aspect words and aspect labels unexpectedly. In this paper, we formulate the data augmentation as a conditional generation task: generating a new sentence while preserving the original opinion targets and labels. We propose a masked sequence-to-sequence method for conditional augmentation of aspect term extraction. Unlike existing augmentation approaches, ours is controllable and allows to generate more diversified sentences. Experimental results confirm that our method alleviates the data scarcity problem significantly. It also effectively boosts the performances of several current models for aspect term extraction.
|
6516e1f4845e3d00240687acc970ac94
| 2,020
|
[
"aspect term extraction aims to extract aspect terms from review texts as opinion targets for sentiment analysis .",
"one of the big challenges with this task is the lack of sufficient annotated data .",
"while data augmentation is potentially an effective technique to address the above issue , it is uncontrollable as it may change aspect words and aspect labels unexpectedly .",
"in this paper , we formulate the data augmentation as a conditional generation task : generating a new sentence while preserving the original opinion targets and labels .",
"we propose a masked sequence - to - sequence method for conditional augmentation of aspect term extraction .",
"unlike existing augmentation approaches , ours is controllable and allows to generate more diversified sentences .",
"experimental results confirm that our method alleviates the data scarcity problem significantly .",
"it also effectively boosts the performances of several current models for aspect term extraction ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "aspect term extraction",
"tokens": [
"aspect",
"term",
"extraction"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
5
],
"text": "extract",
"tokens": [
"extract"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
28
],
"text": "lack",
"tokens": [
"lack"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
28
],
"text": "lack",
"tokens": [
"lack"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
50
],
"text": "uncontrollable",
"tokens": [
"uncontrollable"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
50
],
"text": "uncontrollable",
"tokens": [
"uncontrollable"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
66
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
69,
70
],
"text": "data augmentation",
"tokens": [
"data",
"augmentation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
71,
72,
73,
74,
75
],
"text": "as a conditional generation task",
"tokens": [
"as",
"a",
"conditional",
"generation",
"task"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
67
],
"text": "formulate",
"tokens": [
"formulate"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
90
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
93,
94,
95,
96,
97,
98,
99
],
"text": "masked sequence - to - sequence method",
"tokens": [
"masked",
"sequence",
"-",
"to",
"-",
"sequence",
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
101,
102,
103,
104,
105,
106
],
"text": "conditional augmentation of aspect term extraction",
"tokens": [
"conditional",
"augmentation",
"of",
"aspect",
"term",
"extraction"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
91
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
130
],
"text": "alleviates",
"tokens": [
"alleviates"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
126
],
"text": "confirm",
"tokens": [
"confirm"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
93,
94,
95,
96,
97,
98,
99
],
"text": "masked sequence - to - sequence method",
"tokens": [
"masked",
"sequence",
"-",
"to",
"-",
"sequence",
"method"
]
},
{
"argument_type": "Object",
"nugget_type": "WEA",
"offsets": [
132,
133,
134
],
"text": "data scarcity problem",
"tokens": [
"data",
"scarcity",
"problem"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
130
],
"text": "alleviates",
"tokens": [
"alleviates"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
93,
94,
95,
96,
97,
98,
99
],
"text": "masked sequence - to - sequence method",
"tokens": [
"masked",
"sequence",
"-",
"to",
"-",
"sequence",
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
148,
149,
150
],
"text": "aspect term extraction",
"tokens": [
"aspect",
"term",
"extraction"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
142,
143,
144,
145,
146
],
"text": "performances of several current models",
"tokens": [
"performances",
"of",
"several",
"current",
"models"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
139,
140
],
"text": "effectively boosts",
"tokens": [
"effectively",
"boosts"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
35,
36
],
"text": "data augmentation",
"tokens": [
"data",
"augmentation"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
54
],
"text": "change",
"tokens": [
"change"
]
},
{
"argument_type": "Fault",
"nugget_type": "FEA",
"offsets": [
55,
56
],
"text": "aspect words",
"tokens": [
"aspect",
"words"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
54
],
"text": "change",
"tokens": [
"change"
]
}
}
] |
[
"aspect",
"term",
"extraction",
"aims",
"to",
"extract",
"aspect",
"terms",
"from",
"review",
"texts",
"as",
"opinion",
"targets",
"for",
"sentiment",
"analysis",
".",
"one",
"of",
"the",
"big",
"challenges",
"with",
"this",
"task",
"is",
"the",
"lack",
"of",
"sufficient",
"annotated",
"data",
".",
"while",
"data",
"augmentation",
"is",
"potentially",
"an",
"effective",
"technique",
"to",
"address",
"the",
"above",
"issue",
",",
"it",
"is",
"uncontrollable",
"as",
"it",
"may",
"change",
"aspect",
"words",
"and",
"aspect",
"labels",
"unexpectedly",
".",
"in",
"this",
"paper",
",",
"we",
"formulate",
"the",
"data",
"augmentation",
"as",
"a",
"conditional",
"generation",
"task",
":",
"generating",
"a",
"new",
"sentence",
"while",
"preserving",
"the",
"original",
"opinion",
"targets",
"and",
"labels",
".",
"we",
"propose",
"a",
"masked",
"sequence",
"-",
"to",
"-",
"sequence",
"method",
"for",
"conditional",
"augmentation",
"of",
"aspect",
"term",
"extraction",
".",
"unlike",
"existing",
"augmentation",
"approaches",
",",
"ours",
"is",
"controllable",
"and",
"allows",
"to",
"generate",
"more",
"diversified",
"sentences",
".",
"experimental",
"results",
"confirm",
"that",
"our",
"method",
"alleviates",
"the",
"data",
"scarcity",
"problem",
"significantly",
".",
"it",
"also",
"effectively",
"boosts",
"the",
"performances",
"of",
"several",
"current",
"models",
"for",
"aspect",
"term",
"extraction",
"."
] |
ACL
|
GCDT: A Global Context Enhanced Deep Transition Architecture for Sequence Labeling
|
Current state-of-the-art systems for sequence labeling are typically based on the family of Recurrent Neural Networks (RNNs). However, the shallow connections between consecutive hidden states of RNNs and insufficient modeling of global information restrict the potential performance of those models. In this paper, we try to address these issues, and thus propose a Global Context enhanced Deep Transition architecture for sequence labeling named GCDT. We deepen the state transition path at each position in a sentence, and further assign every token with a global representation learned from the entire sentence. Experiments on two standard sequence labeling tasks show that, given only training data and the ubiquitous word embeddings (Glove), our GCDT achieves 91.96 F1 on the CoNLL03 NER task and 95.43 F1 on the CoNLL2000 Chunking task, which outperforms the best reported results under the same settings. Furthermore, by leveraging BERT as an additional resource, we establish new state-of-the-art results with 93.47 F1 on NER and 97.30 F1 on Chunking.
|
8f9e42e15b0058618aeeab31defafa81
| 2,019
|
[
"current state - of - the - art systems for sequence labeling are typically based on the family of recurrent neural networks ( rnns ) .",
"however , the shallow connections between consecutive hidden states of rnns and insufficient modeling of global information restrict the potential performance of those models .",
"in this paper , we try to address these issues , and thus propose a global context enhanced deep transition architecture for sequence labeling named gcdt .",
"we deepen the state transition path at each position in a sentence , and further assign every token with a global representation learned from the entire sentence .",
"experiments on two standard sequence labeling tasks show that , given only training data and the ubiquitous word embeddings ( glove ) , our gcdt achieves 91 . 96 f1 on the conll03 ner task and 95 . 43 f1 on the conll2000 chunking task , which outperforms the best reported results under the same settings .",
"furthermore , by leveraging bert as an additional resource , we establish new state - of - the - art results with 93 . 47 f1 on ner and 97 . 30 f1 on chunking ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
19,
20,
21
],
"text": "recurrent neural networks",
"tokens": [
"recurrent",
"neural",
"networks"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
14
],
"text": "based",
"tokens": [
"based"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
45,
46,
47,
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11
],
"text": "potential performance of those models",
"tokens": [
"potential",
"performance",
"of",
"current",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"systems",
"for",
"sequence",
"labeling"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
38,
39,
40,
41,
42
],
"text": "insufficient modeling of global information",
"tokens": [
"insufficient",
"modeling",
"of",
"global",
"information"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
29,
30,
31,
32,
33,
34,
35,
36
],
"text": "shallow connections between consecutive hidden states of rnns",
"tokens": [
"shallow",
"connections",
"between",
"consecutive",
"hidden",
"states",
"of",
"rnns"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
43
],
"text": "restrict",
"tokens": [
"restrict"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
55
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
66,
67,
68,
69,
70,
71
],
"text": "global context enhanced deep transition architecture",
"tokens": [
"global",
"context",
"enhanced",
"deep",
"transition",
"architecture"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
73,
74
],
"text": "sequence labeling",
"tokens": [
"sequence",
"labeling"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
64
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
81,
82,
83
],
"text": "state transition path",
"tokens": [
"state",
"transition",
"path"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
79
],
"text": "deepen",
"tokens": [
"deepen"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
94,
95
],
"text": "every token",
"tokens": [
"every",
"token"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
98,
99
],
"text": "global representation",
"tokens": [
"global",
"representation"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
103,
104
],
"text": "entire sentence",
"tokens": [
"entire",
"sentence"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
93
],
"text": "assign",
"tokens": [
"assign"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
66,
67,
68,
69,
70,
71
],
"text": "global context enhanced deep transition architecture",
"tokens": [
"global",
"context",
"enhanced",
"deep",
"transition",
"architecture"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
135
],
"text": "f1",
"tokens": [
"f1"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
132,
133,
134
],
"text": "91 . 96",
"tokens": [
"91",
".",
"96"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
136,
137,
138,
139,
140
],
"text": "on the conll03 ner task",
"tokens": [
"on",
"the",
"conll03",
"ner",
"task"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
116,
117,
118,
119,
120,
121,
122,
123,
124
],
"text": "given only training data and the ubiquitous word embeddings",
"tokens": [
"given",
"only",
"training",
"data",
"and",
"the",
"ubiquitous",
"word",
"embeddings"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
131
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
142,
143,
144
],
"text": "95 . 43",
"tokens": [
"95",
".",
"43"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
145
],
"text": "f1",
"tokens": [
"f1"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
66,
67,
68,
69,
70,
71
],
"text": "global context enhanced deep transition architecture",
"tokens": [
"global",
"context",
"enhanced",
"deep",
"transition",
"architecture"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
116,
117,
118,
119,
120,
121,
122,
123,
124
],
"text": "given only training data and the ubiquitous word embeddings",
"tokens": [
"given",
"only",
"training",
"data",
"and",
"the",
"ubiquitous",
"word",
"embeddings"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
146,
147,
148,
149,
150
],
"text": "on the conll2000 chunking task",
"tokens": [
"on",
"the",
"conll2000",
"chunking",
"task"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
131
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
153
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
66,
67,
68,
69,
70,
71
],
"text": "global context enhanced deep transition architecture",
"tokens": [
"global",
"context",
"enhanced",
"deep",
"transition",
"architecture"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
158,
159,
160,
161
],
"text": "under the same settings",
"tokens": [
"under",
"the",
"same",
"settings"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
153
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
175,
176,
177,
178,
179,
180,
181,
182,
183
],
"text": "new state - of - the - art results",
"tokens": [
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
185,
186,
187
],
"text": "93 . 47",
"tokens": [
"93",
".",
"47"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
188
],
"text": "f1",
"tokens": [
"f1"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
165,
166,
167,
168,
169,
170,
171
],
"text": "by leveraging bert as an additional resource",
"tokens": [
"by",
"leveraging",
"bert",
"as",
"an",
"additional",
"resource"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
174
],
"text": "establish",
"tokens": [
"establish"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
175,
176,
177,
178,
179,
180,
181,
182,
183
],
"text": "new state - of - the - art results",
"tokens": [
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
192,
193,
194
],
"text": "97 . 30",
"tokens": [
"97",
".",
"30"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
195
],
"text": "f1",
"tokens": [
"f1"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
165,
166,
167,
168,
169,
170,
171
],
"text": "by leveraging bert as an additional resource",
"tokens": [
"by",
"leveraging",
"bert",
"as",
"an",
"additional",
"resource"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
174
],
"text": "establish",
"tokens": [
"establish"
]
}
}
] |
[
"current",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"systems",
"for",
"sequence",
"labeling",
"are",
"typically",
"based",
"on",
"the",
"family",
"of",
"recurrent",
"neural",
"networks",
"(",
"rnns",
")",
".",
"however",
",",
"the",
"shallow",
"connections",
"between",
"consecutive",
"hidden",
"states",
"of",
"rnns",
"and",
"insufficient",
"modeling",
"of",
"global",
"information",
"restrict",
"the",
"potential",
"performance",
"of",
"those",
"models",
".",
"in",
"this",
"paper",
",",
"we",
"try",
"to",
"address",
"these",
"issues",
",",
"and",
"thus",
"propose",
"a",
"global",
"context",
"enhanced",
"deep",
"transition",
"architecture",
"for",
"sequence",
"labeling",
"named",
"gcdt",
".",
"we",
"deepen",
"the",
"state",
"transition",
"path",
"at",
"each",
"position",
"in",
"a",
"sentence",
",",
"and",
"further",
"assign",
"every",
"token",
"with",
"a",
"global",
"representation",
"learned",
"from",
"the",
"entire",
"sentence",
".",
"experiments",
"on",
"two",
"standard",
"sequence",
"labeling",
"tasks",
"show",
"that",
",",
"given",
"only",
"training",
"data",
"and",
"the",
"ubiquitous",
"word",
"embeddings",
"(",
"glove",
")",
",",
"our",
"gcdt",
"achieves",
"91",
".",
"96",
"f1",
"on",
"the",
"conll03",
"ner",
"task",
"and",
"95",
".",
"43",
"f1",
"on",
"the",
"conll2000",
"chunking",
"task",
",",
"which",
"outperforms",
"the",
"best",
"reported",
"results",
"under",
"the",
"same",
"settings",
".",
"furthermore",
",",
"by",
"leveraging",
"bert",
"as",
"an",
"additional",
"resource",
",",
"we",
"establish",
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"with",
"93",
".",
"47",
"f1",
"on",
"ner",
"and",
"97",
".",
"30",
"f1",
"on",
"chunking",
"."
] |
ACL
|
Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network
|
In this paper, we explore the slot tagging with only a few labeled support sentences (a.k.a. few-shot). Few-shot slot tagging faces a unique challenge compared to the other fewshot classification problems as it calls for modeling the dependencies between labels. But it is hard to apply previously learned label dependencies to an unseen domain, due to the discrepancy of label sets. To tackle this, we introduce a collapsed dependency transfer mechanism into the conditional random field (CRF) to transfer abstract label dependency patterns as transition scores. In the few-shot setting, the emission score of CRF can be calculated as a word’s similarity to the representation of each label. To calculate such similarity, we propose a Label-enhanced Task-Adaptive Projection Network (L-TapNet) based on the state-of-the-art few-shot classification model – TapNet, by leveraging label name semantics in representing labels. Experimental results show that our model significantly outperforms the strongest few-shot learning baseline by 14.64 F1 scores in the one-shot setting.
|
85aa7c48f88e5ca6ec89eaafbdaf31dc
| 2,020
|
[
"in this paper , we explore the slot tagging with only a few labeled support sentences ( a . k . a . few - shot ) .",
"few - shot slot tagging faces a unique challenge compared to the other fewshot classification problems as it calls for modeling the dependencies between labels .",
"but it is hard to apply previously learned label dependencies to an unseen domain , due to the discrepancy of label sets .",
"to tackle this , we introduce a collapsed dependency transfer mechanism into the conditional random field ( crf ) to transfer abstract label dependency patterns as transition scores .",
"in the few - shot setting , the emission score of crf can be calculated as a word ’ s similarity to the representation of each label .",
"to calculate such similarity , we propose a label - enhanced task - adaptive projection network ( l - tapnet ) based on the state - of - the - art few - shot classification model – tapnet , by leveraging label name semantics in representing labels .",
"experimental results show that our model significantly outperforms the strongest few - shot learning baseline by 14 . 64 f1 scores in the one - shot setting ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
7,
8
],
"text": "slot tagging",
"tokens": [
"slot",
"tagging"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
9,
10,
11,
12,
13,
14,
15
],
"text": "with only a few labeled support sentences",
"tokens": [
"with",
"only",
"a",
"few",
"labeled",
"support",
"sentences"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
5
],
"text": "explore",
"tokens": [
"explore"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
60,
61,
62,
63
],
"text": "previously learned label dependencies",
"tokens": [
"previously",
"learned",
"label",
"dependencies"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
66,
67
],
"text": "unseen domain",
"tokens": [
"unseen",
"domain"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
57
],
"text": "hard",
"tokens": [
"hard"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
59
],
"text": "apply",
"tokens": [
"apply"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
81
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
84,
85,
86,
87
],
"text": "collapsed dependency transfer mechanism",
"tokens": [
"collapsed",
"dependency",
"transfer",
"mechanism"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
59
],
"text": "apply",
"tokens": [
"apply"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
82
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
106,
107,
108,
109,
110,
111
],
"text": "in the few - shot setting",
"tokens": [
"in",
"the",
"few",
"-",
"shot",
"setting"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
114,
115,
116,
90,
91,
92
],
"text": "emission score of crf",
"tokens": [
"emission",
"score",
"of",
"conditional",
"random",
"field"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
120
],
"text": "calculated",
"tokens": [
"calculated"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
139
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
142,
143,
144,
145,
146,
147,
148,
149
],
"text": "label - enhanced task - adaptive projection network",
"tokens": [
"label",
"-",
"enhanced",
"task",
"-",
"adaptive",
"projection",
"network"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
135
],
"text": "calculate",
"tokens": [
"calculate"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
140
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
123,
124,
125,
126,
127,
128,
129,
130,
131,
132
],
"text": "word ’ s similarity to the representation of each label",
"tokens": [
"word",
"’",
"s",
"similarity",
"to",
"the",
"representation",
"of",
"each",
"label"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
135
],
"text": "calculate",
"tokens": [
"calculate"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
175,
176,
177
],
"text": "label name semantics",
"tokens": [
"label",
"name",
"semantics"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
180
],
"text": "labels",
"tokens": [
"labels"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
174
],
"text": "leveraging",
"tokens": [
"leveraging"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
142,
143,
144,
145,
146,
147,
148,
149
],
"text": "label - enhanced task - adaptive projection network",
"tokens": [
"label",
"-",
"enhanced",
"task",
"-",
"adaptive",
"projection",
"network"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
188,
189
],
"text": "significantly outperforms",
"tokens": [
"significantly",
"outperforms"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
191,
192,
193,
194,
195,
196
],
"text": "strongest few - shot learning baseline",
"tokens": [
"strongest",
"few",
"-",
"shot",
"learning",
"baseline"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
203,
204,
205,
206,
207,
208
],
"text": "in the one - shot setting",
"tokens": [
"in",
"the",
"one",
"-",
"shot",
"setting"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
201,
202
],
"text": "f1 scores",
"tokens": [
"f1",
"scores"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
188,
189
],
"text": "significantly outperforms",
"tokens": [
"significantly",
"outperforms"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
60,
61,
62,
63
],
"text": "previously learned label dependencies",
"tokens": [
"previously",
"learned",
"label",
"dependencies"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
66,
67
],
"text": "unseen domain",
"tokens": [
"unseen",
"domain"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
59
],
"text": "apply",
"tokens": [
"apply"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
98,
99,
100,
101
],
"text": "abstract label dependency patterns",
"tokens": [
"abstract",
"label",
"dependency",
"patterns"
]
},
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
103,
104
],
"text": "transition scores",
"tokens": [
"transition",
"scores"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
97
],
"text": "transfer",
"tokens": [
"transfer"
]
}
}
] |
[
"in",
"this",
"paper",
",",
"we",
"explore",
"the",
"slot",
"tagging",
"with",
"only",
"a",
"few",
"labeled",
"support",
"sentences",
"(",
"a",
".",
"k",
".",
"a",
".",
"few",
"-",
"shot",
")",
".",
"few",
"-",
"shot",
"slot",
"tagging",
"faces",
"a",
"unique",
"challenge",
"compared",
"to",
"the",
"other",
"fewshot",
"classification",
"problems",
"as",
"it",
"calls",
"for",
"modeling",
"the",
"dependencies",
"between",
"labels",
".",
"but",
"it",
"is",
"hard",
"to",
"apply",
"previously",
"learned",
"label",
"dependencies",
"to",
"an",
"unseen",
"domain",
",",
"due",
"to",
"the",
"discrepancy",
"of",
"label",
"sets",
".",
"to",
"tackle",
"this",
",",
"we",
"introduce",
"a",
"collapsed",
"dependency",
"transfer",
"mechanism",
"into",
"the",
"conditional",
"random",
"field",
"(",
"crf",
")",
"to",
"transfer",
"abstract",
"label",
"dependency",
"patterns",
"as",
"transition",
"scores",
".",
"in",
"the",
"few",
"-",
"shot",
"setting",
",",
"the",
"emission",
"score",
"of",
"crf",
"can",
"be",
"calculated",
"as",
"a",
"word",
"’",
"s",
"similarity",
"to",
"the",
"representation",
"of",
"each",
"label",
".",
"to",
"calculate",
"such",
"similarity",
",",
"we",
"propose",
"a",
"label",
"-",
"enhanced",
"task",
"-",
"adaptive",
"projection",
"network",
"(",
"l",
"-",
"tapnet",
")",
"based",
"on",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"few",
"-",
"shot",
"classification",
"model",
"–",
"tapnet",
",",
"by",
"leveraging",
"label",
"name",
"semantics",
"in",
"representing",
"labels",
".",
"experimental",
"results",
"show",
"that",
"our",
"model",
"significantly",
"outperforms",
"the",
"strongest",
"few",
"-",
"shot",
"learning",
"baseline",
"by",
"14",
".",
"64",
"f1",
"scores",
"in",
"the",
"one",
"-",
"shot",
"setting",
"."
] |
ACL
|
Bridging Anaphora Resolution as Question Answering
|
Most previous studies on bridging anaphora resolution (Poesio et al., 2004; Hou et al., 2013b; Hou, 2018a) use the pairwise model to tackle the problem and assume that the gold mention information is given. In this paper, we cast bridging anaphora resolution as question answering based on context. This allows us to find the antecedent for a given anaphor without knowing any gold mention information (except the anaphor itself). We present a question answering framework (BARQA) for this task, which leverages the power of transfer learning. Furthermore, we propose a novel method to generate a large amount of “quasi-bridging” training data. We show that our model pre-trained on this dataset and fine-tuned on a small amount of in-domain dataset achieves new state-of-the-art results for bridging anaphora resolution on two bridging corpora (ISNotes (Markert et al., 2012) and BASHI (Rösiger, 2018)).
|
db046c7edbe27333a8e99f97c863c6ae
| 2,020
|
[
"most previous studies on bridging anaphora resolution ( poesio et al . , 2004 ; hou et al . , 2013b ; hou , 2018a ) use the pairwise model to tackle the problem and assume that the gold mention information is given .",
"in this paper , we cast bridging anaphora resolution as question answering based on context .",
"this allows us to find the antecedent for a given anaphor without knowing any gold mention information ( except the anaphor itself ) .",
"we present a question answering framework ( barqa ) for this task , which leverages the power of transfer learning .",
"furthermore , we propose a novel method to generate a large amount of “ quasi - bridging ” training data .",
"we show that our model pre - trained on this dataset and fine - tuned on a small amount of in - domain dataset achieves new state - of - the - art results for bridging anaphora resolution on two bridging corpora ( isnotes ( markert et al . , 2012 ) and bashi ( ro siger , 2018 ) ) ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5,
6
],
"text": "bridging anaphora resolution",
"tokens": [
"bridging",
"anaphora",
"resolution"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
26
],
"text": "use",
"tokens": [
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
48
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
50,
51,
52
],
"text": "bridging anaphora resolution",
"tokens": [
"bridging",
"anaphora",
"resolution"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
54,
55,
56,
57,
58
],
"text": "question answering based on context",
"tokens": [
"question",
"answering",
"based",
"on",
"context"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
49
],
"text": "cast",
"tokens": [
"cast"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
66,
67,
68,
69,
70
],
"text": "antecedent for a given anaphor",
"tokens": [
"antecedent",
"for",
"a",
"given",
"anaphor"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
71,
72,
73,
74,
75,
76
],
"text": "without knowing any gold mention information",
"tokens": [
"without",
"knowing",
"any",
"gold",
"mention",
"information"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
64
],
"text": "find",
"tokens": [
"find"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
84
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
87,
88,
89
],
"text": "question answering framework",
"tokens": [
"question",
"answering",
"framework"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
85
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
107
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
111
],
"text": "method",
"tokens": [
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
113
],
"text": "generate",
"tokens": [
"generate"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
108
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "DST",
"offsets": [
115,
116,
117,
118,
119,
120,
121,
122,
123,
124
],
"text": "large amount of “ quasi - bridging ” training data",
"tokens": [
"large",
"amount",
"of",
"“",
"quasi",
"-",
"bridging",
"”",
"training",
"data"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
113
],
"text": "generate",
"tokens": [
"generate"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
126
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
150
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
127
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
161,
162,
163
],
"text": "bridging anaphora resolution",
"tokens": [
"bridging",
"anaphora",
"resolution"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
165,
166,
167
],
"text": "two bridging corpora",
"tokens": [
"two",
"bridging",
"corpora"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
87,
88,
89
],
"text": "question answering framework",
"tokens": [
"question",
"answering",
"framework"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"text": "new state - of - the - art results",
"tokens": [
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
150
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
87,
88,
89
],
"text": "question answering framework",
"tokens": [
"question",
"answering",
"framework"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
134,
135,
136
],
"text": "on this dataset",
"tokens": [
"on",
"this",
"dataset"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
131,
132,
133
],
"text": "pre - trained",
"tokens": [
"pre",
"-",
"trained"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
87,
88,
89
],
"text": "question answering framework",
"tokens": [
"question",
"answering",
"framework"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
141,
142,
143,
144,
145,
146,
147,
148,
149
],
"text": "on a small amount of in - domain dataset",
"tokens": [
"on",
"a",
"small",
"amount",
"of",
"in",
"-",
"domain",
"dataset"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
138,
139,
140
],
"text": "fine - tuned",
"tokens": [
"fine",
"-",
"tuned"
]
}
}
] |
[
"most",
"previous",
"studies",
"on",
"bridging",
"anaphora",
"resolution",
"(",
"poesio",
"et",
"al",
".",
",",
"2004",
";",
"hou",
"et",
"al",
".",
",",
"2013b",
";",
"hou",
",",
"2018a",
")",
"use",
"the",
"pairwise",
"model",
"to",
"tackle",
"the",
"problem",
"and",
"assume",
"that",
"the",
"gold",
"mention",
"information",
"is",
"given",
".",
"in",
"this",
"paper",
",",
"we",
"cast",
"bridging",
"anaphora",
"resolution",
"as",
"question",
"answering",
"based",
"on",
"context",
".",
"this",
"allows",
"us",
"to",
"find",
"the",
"antecedent",
"for",
"a",
"given",
"anaphor",
"without",
"knowing",
"any",
"gold",
"mention",
"information",
"(",
"except",
"the",
"anaphor",
"itself",
")",
".",
"we",
"present",
"a",
"question",
"answering",
"framework",
"(",
"barqa",
")",
"for",
"this",
"task",
",",
"which",
"leverages",
"the",
"power",
"of",
"transfer",
"learning",
".",
"furthermore",
",",
"we",
"propose",
"a",
"novel",
"method",
"to",
"generate",
"a",
"large",
"amount",
"of",
"“",
"quasi",
"-",
"bridging",
"”",
"training",
"data",
".",
"we",
"show",
"that",
"our",
"model",
"pre",
"-",
"trained",
"on",
"this",
"dataset",
"and",
"fine",
"-",
"tuned",
"on",
"a",
"small",
"amount",
"of",
"in",
"-",
"domain",
"dataset",
"achieves",
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"for",
"bridging",
"anaphora",
"resolution",
"on",
"two",
"bridging",
"corpora",
"(",
"isnotes",
"(",
"markert",
"et",
"al",
".",
",",
"2012",
")",
"and",
"bashi",
"(",
"ro",
"siger",
",",
"2018",
")",
")",
"."
] |
ACL
|
Improving Word Translation via Two-Stage Contrastive Learning
|
Word translation or bilingual lexicon induction (BLI) is a key cross-lingual task, aiming to bridge the lexical gap between different languages. In this work, we propose a robust and effective two-stage contrastive learning framework for the BLI task. At Stage C1, we propose to refine standard cross-lingual linear maps between static word embeddings (WEs) via a contrastive learning objective; we also show how to integrate it into the self-learning procedure for even more refined cross-lingual maps. In Stage C2, we conduct BLI-oriented contrastive fine-tuning of mBERT, unlocking its word translation capability. We also show that static WEs induced from the ‘C2-tuned’ mBERT complement static WEs from Stage C1. Comprehensive experiments on standard BLI datasets for diverse languages and different experimental setups demonstrate substantial gains achieved by our framework. While the BLI method from Stage C1 already yields substantial gains over all state-of-the-art BLI methods in our comparison, even stronger improvements are met with the full two-stage framework: e.g., we report gains for 112/112 BLI setups, spanning 28 language pairs.
|
f6893a1f6f0bc2a847bccb017525c3df
| 2,022
|
[
"word translation or bilingual lexicon induction ( bli ) is a key cross - lingual task , aiming to bridge the lexical gap between different languages .",
"in this work , we propose a robust and effective two - stage contrastive learning framework for the bli task .",
"at stage c1 , we propose to refine standard cross - lingual linear maps between static word embeddings ( wes ) via a contrastive learning objective ; we also show how to integrate it into the self - learning procedure for even more refined cross - lingual maps .",
"in stage c2 , we conduct bli - oriented contrastive fine - tuning of mbert , unlocking its word translation capability .",
"we also show that static wes induced from the ‘ c2 - tuned ’ mbert complement static wes from stage c1 .",
"comprehensive experiments on standard bli datasets for diverse languages and different experimental setups demonstrate substantial gains achieved by our framework .",
"while the bli method from stage c1 already yields substantial gains over all state - of - the - art bli methods in our comparison , even stronger improvements are met with the full two - stage framework : e . g . , we report gains for 112 / 112 bli setups , spanning 28 language pairs ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "word translation",
"tokens": [
"word",
"translation"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5
],
"text": "bilingual lexicon induction",
"tokens": [
"bilingual",
"lexicon",
"induction"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
11,
12,
13,
14,
15
],
"text": "key cross - lingual task",
"tokens": [
"key",
"cross",
"-",
"lingual",
"task"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
31
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
37,
38,
39,
40,
41,
42
],
"text": "two - stage contrastive learning framework",
"tokens": [
"two",
"-",
"stage",
"contrastive",
"learning",
"framework"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5,
46
],
"text": "bli task",
"tokens": [
"bilingual",
"lexicon",
"induction",
"task"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
32
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
71,
72,
73
],
"text": "contrastive learning objective",
"tokens": [
"contrastive",
"learning",
"objective"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
56,
57,
58,
59,
60,
61
],
"text": "standard cross - lingual linear maps",
"tokens": [
"standard",
"cross",
"-",
"lingual",
"linear",
"maps"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
62,
63,
64,
65
],
"text": "between static word embeddings",
"tokens": [
"between",
"static",
"word",
"embeddings"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
55
],
"text": "refine",
"tokens": [
"refine"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
84,
85,
86,
87
],
"text": "self - learning procedure",
"tokens": [
"self",
"-",
"learning",
"procedure"
]
},
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
90,
91,
92,
93,
94,
95
],
"text": "more refined cross - lingual maps",
"tokens": [
"more",
"refined",
"cross",
"-",
"lingual",
"maps"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
56,
57,
58,
59,
60,
61
],
"text": "standard cross - lingual linear maps",
"tokens": [
"standard",
"cross",
"-",
"lingual",
"linear",
"maps"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
80
],
"text": "integrate",
"tokens": [
"integrate"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
101
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
113
],
"text": "unlocking",
"tokens": [
"unlocking"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
103,
104,
105,
106,
107,
108,
109,
110,
111
],
"text": "bli - oriented contrastive fine - tuning of mbert",
"tokens": [
"bli",
"-",
"oriented",
"contrastive",
"fine",
"-",
"tuning",
"of",
"mbert"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
102
],
"text": "conduct",
"tokens": [
"conduct"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
115,
116,
117
],
"text": "word translation capability",
"tokens": [
"word",
"translation",
"capability"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
113
],
"text": "unlocking",
"tokens": [
"unlocking"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
144,
145,
146
],
"text": "standard bli datasets",
"tokens": [
"standard",
"bli",
"datasets"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
148,
149
],
"text": "diverse languages",
"tokens": [
"diverse",
"languages"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
151,
152,
153
],
"text": "different experimental setups",
"tokens": [
"different",
"experimental",
"setups"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
155
],
"text": "substantial",
"tokens": [
"substantial"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
156
],
"text": "gains",
"tokens": [
"gains"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
157,
158,
37,
38,
39,
40,
41,
42
],
"text": "achieved by our framework",
"tokens": [
"achieved",
"by",
"two",
"-",
"stage",
"contrastive",
"learning",
"framework"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
154
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
193,
194,
195,
196,
197,
198,
199
],
"text": "with the full two - stage framework",
"tokens": [
"with",
"the",
"full",
"two",
"-",
"stage",
"framework"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
189,
190
],
"text": "stronger improvements",
"tokens": [
"stronger",
"improvements"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
192
],
"text": "met",
"tokens": [
"met"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
119
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
134
],
"text": "complement",
"tokens": [
"complement"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
121
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
125,
126,
127,
128,
129,
130,
131,
132,
133
],
"text": "induced from the ‘ c2 - tuned ’ mbert",
"tokens": [
"induced",
"from",
"the",
"‘",
"c2",
"-",
"tuned",
"’",
"mbert"
]
},
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
123,
64,
65
],
"text": "static wes",
"tokens": [
"static",
"word",
"embeddings"
]
},
{
"argument_type": "Object",
"nugget_type": "MOD",
"offsets": [
135,
64,
65
],
"text": "static wes",
"tokens": [
"static",
"word",
"embeddings"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
137,
138,
139
],
"text": "from stage c1",
"tokens": [
"from",
"stage",
"c1"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
134
],
"text": "complement",
"tokens": [
"complement"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
3,
4,
5,
165
],
"text": "bli method",
"tokens": [
"bilingual",
"lexicon",
"induction",
"method"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
166,
167,
168
],
"text": "from stage c1",
"tokens": [
"from",
"stage",
"c1"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
171
],
"text": "substantial",
"tokens": [
"substantial"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
172
],
"text": "gains",
"tokens": [
"gains"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
174,
175,
176,
177,
178,
179,
180,
181,
3,
4,
5,
183
],
"text": "all state - of - the - art bli methods",
"tokens": [
"all",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"bilingual",
"lexicon",
"induction",
"methods"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
170
],
"text": "yields",
"tokens": [
"yields"
]
}
}
] |
[
"word",
"translation",
"or",
"bilingual",
"lexicon",
"induction",
"(",
"bli",
")",
"is",
"a",
"key",
"cross",
"-",
"lingual",
"task",
",",
"aiming",
"to",
"bridge",
"the",
"lexical",
"gap",
"between",
"different",
"languages",
".",
"in",
"this",
"work",
",",
"we",
"propose",
"a",
"robust",
"and",
"effective",
"two",
"-",
"stage",
"contrastive",
"learning",
"framework",
"for",
"the",
"bli",
"task",
".",
"at",
"stage",
"c1",
",",
"we",
"propose",
"to",
"refine",
"standard",
"cross",
"-",
"lingual",
"linear",
"maps",
"between",
"static",
"word",
"embeddings",
"(",
"wes",
")",
"via",
"a",
"contrastive",
"learning",
"objective",
";",
"we",
"also",
"show",
"how",
"to",
"integrate",
"it",
"into",
"the",
"self",
"-",
"learning",
"procedure",
"for",
"even",
"more",
"refined",
"cross",
"-",
"lingual",
"maps",
".",
"in",
"stage",
"c2",
",",
"we",
"conduct",
"bli",
"-",
"oriented",
"contrastive",
"fine",
"-",
"tuning",
"of",
"mbert",
",",
"unlocking",
"its",
"word",
"translation",
"capability",
".",
"we",
"also",
"show",
"that",
"static",
"wes",
"induced",
"from",
"the",
"‘",
"c2",
"-",
"tuned",
"’",
"mbert",
"complement",
"static",
"wes",
"from",
"stage",
"c1",
".",
"comprehensive",
"experiments",
"on",
"standard",
"bli",
"datasets",
"for",
"diverse",
"languages",
"and",
"different",
"experimental",
"setups",
"demonstrate",
"substantial",
"gains",
"achieved",
"by",
"our",
"framework",
".",
"while",
"the",
"bli",
"method",
"from",
"stage",
"c1",
"already",
"yields",
"substantial",
"gains",
"over",
"all",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"bli",
"methods",
"in",
"our",
"comparison",
",",
"even",
"stronger",
"improvements",
"are",
"met",
"with",
"the",
"full",
"two",
"-",
"stage",
"framework",
":",
"e",
".",
"g",
".",
",",
"we",
"report",
"gains",
"for",
"112",
"/",
"112",
"bli",
"setups",
",",
"spanning",
"28",
"language",
"pairs",
"."
] |
ACL
|
Explicitly Capturing Relations between Entity Mentions via Graph Neural Networks for Domain-specific Named Entity Recognition
|
Named entity recognition (NER) is well studied for the general domain, and recent systems have achieved human-level performance for identifying common entity types. However, the NER performance is still moderate for specialized domains that tend to feature complicated contexts and jargonistic entity types. To address these challenges, we propose explicitly connecting entity mentions based on both global coreference relations and local dependency relations for building better entity mention representations. In our experiments, we incorporate entity mention relations by Graph Neural Networks and show that our system noticeably improves the NER performance on two datasets from different domains. We further show that the proposed lightweight system can effectively elevate the NER performance to a higher level even when only a tiny amount of labeled data is available, which is desirable for domain-specific NER.
|
424d96f5896d0e963fa261c9db0e537d
| 2,021
|
[
"named entity recognition ( ner ) is well studied for the general domain , and recent systems have achieved human - level performance for identifying common entity types .",
"however , the ner performance is still moderate for specialized domains that tend to feature complicated contexts and jargonistic entity types .",
"to address these challenges , we propose explicitly connecting entity mentions based on both global coreference relations and local dependency relations for building better entity mention representations .",
"in our experiments , we incorporate entity mention relations by graph neural networks and show that our system noticeably improves the ner performance on two datasets from different domains .",
"we further show that the proposed lightweight system can effectively elevate the ner performance to a higher level even when only a tiny amount of labeled data is available , which is desirable for domain - specific ner ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "named entity recognition",
"tokens": [
"named",
"entity",
"recognition"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8
],
"text": "studied",
"tokens": [
"studied"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
36
],
"text": "moderate",
"tokens": [
"moderate"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
0,
1,
2,
33
],
"text": "ner performance",
"tokens": [
"named",
"entity",
"recognition",
"performance"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
36
],
"text": "moderate",
"tokens": [
"moderate"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
56
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
60,
61
],
"text": "entity mentions",
"tokens": [
"entity",
"mentions"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
73
],
"text": "building",
"tokens": [
"building"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
59
],
"text": "connecting",
"tokens": [
"connecting"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
74,
75,
76,
77
],
"text": "better entity mention representations",
"tokens": [
"better",
"entity",
"mention",
"representations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
73
],
"text": "building",
"tokens": [
"building"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
89,
90,
91
],
"text": "graph neural networks",
"tokens": [
"graph",
"neural",
"networks"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
85,
86,
87
],
"text": "entity mention relations",
"tokens": [
"entity",
"mention",
"relations"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
84
],
"text": "incorporate",
"tokens": [
"incorporate"
]
}
},
{
"arguments": [
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
97
],
"text": "noticeably",
"tokens": [
"noticeably"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
100,
101
],
"text": "ner performance",
"tokens": [
"ner",
"performance"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
103,
104
],
"text": "two datasets",
"tokens": [
"two",
"datasets"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
98
],
"text": "improves",
"tokens": [
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
118
],
"text": "effectively",
"tokens": [
"effectively"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
114,
115,
116
],
"text": "proposed lightweight system",
"tokens": [
"proposed",
"lightweight",
"system"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
121,
122
],
"text": "ner performance",
"tokens": [
"ner",
"performance"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
125,
126
],
"text": "higher level",
"tokens": [
"higher",
"level"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
119
],
"text": "elevate",
"tokens": [
"elevate"
]
}
}
] |
[
"named",
"entity",
"recognition",
"(",
"ner",
")",
"is",
"well",
"studied",
"for",
"the",
"general",
"domain",
",",
"and",
"recent",
"systems",
"have",
"achieved",
"human",
"-",
"level",
"performance",
"for",
"identifying",
"common",
"entity",
"types",
".",
"however",
",",
"the",
"ner",
"performance",
"is",
"still",
"moderate",
"for",
"specialized",
"domains",
"that",
"tend",
"to",
"feature",
"complicated",
"contexts",
"and",
"jargonistic",
"entity",
"types",
".",
"to",
"address",
"these",
"challenges",
",",
"we",
"propose",
"explicitly",
"connecting",
"entity",
"mentions",
"based",
"on",
"both",
"global",
"coreference",
"relations",
"and",
"local",
"dependency",
"relations",
"for",
"building",
"better",
"entity",
"mention",
"representations",
".",
"in",
"our",
"experiments",
",",
"we",
"incorporate",
"entity",
"mention",
"relations",
"by",
"graph",
"neural",
"networks",
"and",
"show",
"that",
"our",
"system",
"noticeably",
"improves",
"the",
"ner",
"performance",
"on",
"two",
"datasets",
"from",
"different",
"domains",
".",
"we",
"further",
"show",
"that",
"the",
"proposed",
"lightweight",
"system",
"can",
"effectively",
"elevate",
"the",
"ner",
"performance",
"to",
"a",
"higher",
"level",
"even",
"when",
"only",
"a",
"tiny",
"amount",
"of",
"labeled",
"data",
"is",
"available",
",",
"which",
"is",
"desirable",
"for",
"domain",
"-",
"specific",
"ner",
"."
] |
ACL
|
Controllable Paraphrase Generation with a Syntactic Exemplar
|
Prior work on controllable text generation usually assumes that the controlled attribute can take on one of a small set of values known a priori. In this work, we propose a novel task, where the syntax of a generated sentence is controlled rather by a sentential exemplar. To evaluate quantitatively with standard metrics, we create a novel dataset with human annotations. We also develop a variational model with a neural module specifically designed for capturing syntactic knowledge and several multitask training objectives to promote disentangled representation learning. Empirically, the proposed model is observed to achieve improvements over baselines and learn to capture desirable characteristics.
|
10ba7e613a19c9f4a0a1f0a21af0ae76
| 2,019
|
[
"prior work on controllable text generation usually assumes that the controlled attribute can take on one of a small set of values known a priori .",
"in this work , we propose a novel task , where the syntax of a generated sentence is controlled rather by a sentential exemplar .",
"to evaluate quantitatively with standard metrics , we create a novel dataset with human annotations .",
"we also develop a variational model with a neural module specifically designed for capturing syntactic knowledge and several multitask training objectives to promote disentangled representation learning .",
"empirically , the proposed model is observed to achieve improvements over baselines and learn to capture desirable characteristics ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5
],
"text": "controllable text generation",
"tokens": [
"controllable",
"text",
"generation"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6,
7
],
"text": "usually assumes",
"tokens": [
"usually",
"assumes"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
30
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
34
],
"text": "task",
"tokens": [
"task"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
31
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
58
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
62,
63,
64,
65
],
"text": "dataset with human annotations",
"tokens": [
"dataset",
"with",
"human",
"annotations"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
52,
53
],
"text": "evaluate quantitatively",
"tokens": [
"evaluate",
"quantitatively"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
59
],
"text": "create",
"tokens": [
"create"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
55,
56
],
"text": "standard metrics",
"tokens": [
"standard",
"metrics"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
52,
53
],
"text": "evaluate quantitatively",
"tokens": [
"evaluate",
"quantitatively"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
67
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
89
],
"text": "promote",
"tokens": [
"promote"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
71,
72
],
"text": "variational model",
"tokens": [
"variational",
"model"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
75,
76
],
"text": "neural module",
"tokens": [
"neural",
"module"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
84,
85,
86,
87
],
"text": "several multitask training objectives",
"tokens": [
"several",
"multitask",
"training",
"objectives"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
69
],
"text": "develop",
"tokens": [
"develop"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
90,
91,
92
],
"text": "disentangled representation learning",
"tokens": [
"disentangled",
"representation",
"learning"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
89
],
"text": "promote",
"tokens": [
"promote"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
103,
104,
105
],
"text": "improvements over baselines",
"tokens": [
"improvements",
"over",
"baselines"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
71,
72
],
"text": "variational model",
"tokens": [
"variational",
"model"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
102
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
110,
111
],
"text": "desirable characteristics",
"tokens": [
"desirable",
"characteristics"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
71,
72
],
"text": "variational model",
"tokens": [
"variational",
"model"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
107,
108,
109
],
"text": "learn to capture",
"tokens": [
"learn",
"to",
"capture"
]
}
}
] |
[
"prior",
"work",
"on",
"controllable",
"text",
"generation",
"usually",
"assumes",
"that",
"the",
"controlled",
"attribute",
"can",
"take",
"on",
"one",
"of",
"a",
"small",
"set",
"of",
"values",
"known",
"a",
"priori",
".",
"in",
"this",
"work",
",",
"we",
"propose",
"a",
"novel",
"task",
",",
"where",
"the",
"syntax",
"of",
"a",
"generated",
"sentence",
"is",
"controlled",
"rather",
"by",
"a",
"sentential",
"exemplar",
".",
"to",
"evaluate",
"quantitatively",
"with",
"standard",
"metrics",
",",
"we",
"create",
"a",
"novel",
"dataset",
"with",
"human",
"annotations",
".",
"we",
"also",
"develop",
"a",
"variational",
"model",
"with",
"a",
"neural",
"module",
"specifically",
"designed",
"for",
"capturing",
"syntactic",
"knowledge",
"and",
"several",
"multitask",
"training",
"objectives",
"to",
"promote",
"disentangled",
"representation",
"learning",
".",
"empirically",
",",
"the",
"proposed",
"model",
"is",
"observed",
"to",
"achieve",
"improvements",
"over",
"baselines",
"and",
"learn",
"to",
"capture",
"desirable",
"characteristics",
"."
] |
ACL
|
Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs
|
Multi-hop reading comprehension (RC) across documents poses new challenge over single-document RC because it requires reasoning over multiple documents to reach the final answer. In this paper, we propose a new model to tackle the multi-hop RC problem. We introduce a heterogeneous graph with different types of nodes and edges, which is named as Heterogeneous Document-Entity (HDE) graph. The advantage of HDE graph is that it contains different granularity levels of information including candidates, documents and entities in specific document contexts. Our proposed model can do reasoning over the HDE graph with nodes representation initialized with co-attention and self-attention based context encoders. We employ Graph Neural Networks (GNN) based message passing algorithms to accumulate evidences on the proposed HDE graph. Evaluated on the blind test set of the Qangaroo WikiHop data set, our HDE graph based single model delivers competitive result, and the ensemble model achieves the state-of-the-art performance.
|
049972ca3baba47fc25a786c86cabed9
| 2,019
|
[
"multi - hop reading comprehension ( rc ) across documents poses new challenge over single - document rc because it requires reasoning over multiple documents to reach the final answer .",
"in this paper , we propose a new model to tackle the multi - hop rc problem .",
"we introduce a heterogeneous graph with different types of nodes and edges , which is named as heterogeneous document - entity ( hde ) graph .",
"the advantage of hde graph is that it contains different granularity levels of information including candidates , documents and entities in specific document contexts .",
"our proposed model can do reasoning over the hde graph with nodes representation initialized with co - attention and self - attention based context encoders .",
"we employ graph neural networks ( gnn ) based message passing algorithms to accumulate evidences on the proposed hde graph .",
"evaluated on the blind test set of the qangaroo wikihop data set , our hde graph based single model delivers competitive result , and the ensemble model achieves the state - of - the - art performance ."
] |
[
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4
],
"text": "multi - hop reading comprehension",
"tokens": [
"multi",
"-",
"hop",
"reading",
"comprehension"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
21,
22,
23,
24
],
"text": "reasoning over multiple documents",
"tokens": [
"reasoning",
"over",
"multiple",
"documents"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
26
],
"text": "reach",
"tokens": [
"reach"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
20
],
"text": "requires",
"tokens": [
"requires"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
28,
29
],
"text": "final answer",
"tokens": [
"final",
"answer"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
26
],
"text": "reach",
"tokens": [
"reach"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
35
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
39
],
"text": "model",
"tokens": [
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
41
],
"text": "tackle",
"tokens": [
"tackle"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
36
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
43,
44,
45,
3,
4,
47
],
"text": "multi - hop rc problem",
"tokens": [
"multi",
"-",
"hop",
"reading",
"comprehension",
"problem"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
41
],
"text": "tackle",
"tokens": [
"tackle"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
49
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
66,
67,
68,
69,
70,
71,
72,
73
],
"text": "heterogeneous document - entity ( hde ) graph",
"tokens": [
"heterogeneous",
"document",
"-",
"entity",
"(",
"hde",
")",
"graph"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
50
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
128,
129,
130
],
"text": "graph neural networks",
"tokens": [
"graph",
"neural",
"networks"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
139
],
"text": "accumulate",
"tokens": [
"accumulate"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
127
],
"text": "employ",
"tokens": [
"employ"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
140
],
"text": "evidences",
"tokens": [
"evidences"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
141,
142,
143,
144,
145
],
"text": "on the proposed hde graph",
"tokens": [
"on",
"the",
"proposed",
"hde",
"graph"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
139
],
"text": "accumulate",
"tokens": [
"accumulate"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
66,
67,
68,
69,
162,
163,
164,
165
],
"text": "hde graph based single model",
"tokens": [
"heterogeneous",
"document",
"-",
"entity",
"graph",
"based",
"single",
"model"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
167,
168
],
"text": "competitive result",
"tokens": [
"competitive",
"result"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
166
],
"text": "delivers",
"tokens": [
"delivers"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
172,
173
],
"text": "ensemble model",
"tokens": [
"ensemble",
"model"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
176,
177,
178,
179,
180,
181,
182,
183
],
"text": "state - of - the - art performance",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
174
],
"text": "achieves",
"tokens": [
"achieves"
]
}
}
] |
[
"multi",
"-",
"hop",
"reading",
"comprehension",
"(",
"rc",
")",
"across",
"documents",
"poses",
"new",
"challenge",
"over",
"single",
"-",
"document",
"rc",
"because",
"it",
"requires",
"reasoning",
"over",
"multiple",
"documents",
"to",
"reach",
"the",
"final",
"answer",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"new",
"model",
"to",
"tackle",
"the",
"multi",
"-",
"hop",
"rc",
"problem",
".",
"we",
"introduce",
"a",
"heterogeneous",
"graph",
"with",
"different",
"types",
"of",
"nodes",
"and",
"edges",
",",
"which",
"is",
"named",
"as",
"heterogeneous",
"document",
"-",
"entity",
"(",
"hde",
")",
"graph",
".",
"the",
"advantage",
"of",
"hde",
"graph",
"is",
"that",
"it",
"contains",
"different",
"granularity",
"levels",
"of",
"information",
"including",
"candidates",
",",
"documents",
"and",
"entities",
"in",
"specific",
"document",
"contexts",
".",
"our",
"proposed",
"model",
"can",
"do",
"reasoning",
"over",
"the",
"hde",
"graph",
"with",
"nodes",
"representation",
"initialized",
"with",
"co",
"-",
"attention",
"and",
"self",
"-",
"attention",
"based",
"context",
"encoders",
".",
"we",
"employ",
"graph",
"neural",
"networks",
"(",
"gnn",
")",
"based",
"message",
"passing",
"algorithms",
"to",
"accumulate",
"evidences",
"on",
"the",
"proposed",
"hde",
"graph",
".",
"evaluated",
"on",
"the",
"blind",
"test",
"set",
"of",
"the",
"qangaroo",
"wikihop",
"data",
"set",
",",
"our",
"hde",
"graph",
"based",
"single",
"model",
"delivers",
"competitive",
"result",
",",
"and",
"the",
"ensemble",
"model",
"achieves",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"."
] |
ACL
|
MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs
|
The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics. Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource. Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research. The data repository is now available at http://moocdata.cn/data/MOOCCube.
|
b8ad4769d291e70991b41a4f3a7e25ad
| 2,020
|
[
"the prosperity of massive open online courses ( moocs ) provides fodder for many nlp and ai research for education applications , e . g . , course concept extraction , prerequisite relation discovery , etc .",
"however , the publicly available datasets of mooc are limited in size with few types of data , which hinders advanced models and novel attempts in related topics .",
"therefore , we present mooccube , a large - scale data repository of over 700 mooc courses , 100k concepts , 8 million student behaviors with an external resource .",
"moreover , we conduct a prerequisite discovery task as an example application to show the potential of mooccube in facilitating relevant research .",
"the data repository is now available at http : / / moocdata . cn / data / mooccube ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
19,
20
],
"text": "education applications",
"tokens": [
"education",
"applications"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "provides",
"tokens": [
"provides"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
40,
41,
42
],
"text": "publicly available datasets",
"tokens": [
"publicly",
"available",
"datasets"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
46
],
"text": "limited",
"tokens": [
"limited"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
46
],
"text": "limited",
"tokens": [
"limited"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
68
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
70
],
"text": "mooccube",
"tokens": [
"mooccube"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
69
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
98
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
101,
102,
103
],
"text": "prerequisite discovery task",
"tokens": [
"prerequisite",
"discovery",
"task"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
109
],
"text": "show",
"tokens": [
"show"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
104,
105,
106,
107
],
"text": "as an example application",
"tokens": [
"as",
"an",
"example",
"application"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
99
],
"text": "conduct",
"tokens": [
"conduct"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
111,
112,
113
],
"text": "potential of mooccube",
"tokens": [
"potential",
"of",
"mooccube"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
114,
115,
116,
117
],
"text": "in facilitating relevant research",
"tokens": [
"in",
"facilitating",
"relevant",
"research"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
109
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
56
],
"text": "hinders",
"tokens": [
"hinders"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
57,
58
],
"text": "advanced models",
"tokens": [
"advanced",
"models"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
56
],
"text": "hinders",
"tokens": [
"hinders"
]
}
}
] |
[
"the",
"prosperity",
"of",
"massive",
"open",
"online",
"courses",
"(",
"moocs",
")",
"provides",
"fodder",
"for",
"many",
"nlp",
"and",
"ai",
"research",
"for",
"education",
"applications",
",",
"e",
".",
"g",
".",
",",
"course",
"concept",
"extraction",
",",
"prerequisite",
"relation",
"discovery",
",",
"etc",
".",
"however",
",",
"the",
"publicly",
"available",
"datasets",
"of",
"mooc",
"are",
"limited",
"in",
"size",
"with",
"few",
"types",
"of",
"data",
",",
"which",
"hinders",
"advanced",
"models",
"and",
"novel",
"attempts",
"in",
"related",
"topics",
".",
"therefore",
",",
"we",
"present",
"mooccube",
",",
"a",
"large",
"-",
"scale",
"data",
"repository",
"of",
"over",
"700",
"mooc",
"courses",
",",
"100k",
"concepts",
",",
"8",
"million",
"student",
"behaviors",
"with",
"an",
"external",
"resource",
".",
"moreover",
",",
"we",
"conduct",
"a",
"prerequisite",
"discovery",
"task",
"as",
"an",
"example",
"application",
"to",
"show",
"the",
"potential",
"of",
"mooccube",
"in",
"facilitating",
"relevant",
"research",
".",
"the",
"data",
"repository",
"is",
"now",
"available",
"at",
"http",
":",
"/",
"/",
"moocdata",
".",
"cn",
"/",
"data",
"/",
"mooccube",
"."
] |
ACL
|
DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference
|
Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications. However, they are also notorious for being slow in inference, which makes them difficult to deploy in real-time applications. We propose a simple but effective method, DeeBERT, to accelerate BERT inference. Our approach allows samples to exit earlier without passing through the entire model. Experiments show that DeeBERT is able to save up to ~40% inference time with minimal degradation in model quality. Further analyses show different behaviors in the BERT transformer layers and also reveal their redundancy. Our work provides new ideas to efficiently apply deep transformer-based models to downstream tasks. Code is available at https://github.com/castorini/DeeBERT.
|
4b6fe7b5ad1860a72ffa9af1eced3b84
| 2,020
|
[
"large - scale pre - trained language models such as bert have brought significant improvements to nlp applications .",
"however , they are also notorious for being slow in inference , which makes them difficult to deploy in real - time applications .",
"we propose a simple but effective method , deebert , to accelerate bert inference .",
"our approach allows samples to exit earlier without passing through the entire model .",
"experiments show that deebert is able to save up to ~ 40 % inference time with minimal degradation in model quality .",
"further analyses show different behaviors in the bert transformer layers and also reveal their redundancy .",
"our work provides new ideas to efficiently apply deep transformer - based models to downstream tasks .",
"code is available at https : / / github . com / castorini / deebert ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7
],
"text": "large - scale pre - trained language models",
"tokens": [
"large",
"-",
"scale",
"pre",
"-",
"trained",
"language",
"models"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
14
],
"text": "improvements",
"tokens": [
"improvements"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7
],
"text": "large - scale pre - trained language models",
"tokens": [
"large",
"-",
"scale",
"pre",
"-",
"trained",
"language",
"models"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
28,
29
],
"text": "in inference",
"tokens": [
"in",
"inference"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
27
],
"text": "slow",
"tokens": [
"slow"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
43
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
51
],
"text": "deebert",
"tokens": [
"deebert"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
54
],
"text": "accelerate",
"tokens": [
"accelerate"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
44
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
55,
56
],
"text": "bert inference",
"tokens": [
"bert",
"inference"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
54
],
"text": "accelerate",
"tokens": [
"accelerate"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
65,
66,
67,
68,
69,
70
],
"text": "without passing through the entire model",
"tokens": [
"without",
"passing",
"through",
"the",
"entire",
"model"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
61
],
"text": "samples",
"tokens": [
"samples"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
63,
64
],
"text": "exit earlier",
"tokens": [
"exit",
"earlier"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
60
],
"text": "allows",
"tokens": [
"allows"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
79
],
"text": "save",
"tokens": [
"save"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
73
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
75
],
"text": "deebert",
"tokens": [
"deebert"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
83,
84
],
"text": "40 %",
"tokens": [
"40",
"%"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
85,
86
],
"text": "inference time",
"tokens": [
"inference",
"time"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
87,
88,
89,
90,
91,
92
],
"text": "with minimal degradation in model quality",
"tokens": [
"with",
"minimal",
"degradation",
"in",
"model",
"quality"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
79
],
"text": "save",
"tokens": [
"save"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
97,
98,
99,
100,
101,
102,
103
],
"text": "different behaviors in the bert transformer layers",
"tokens": [
"different",
"behaviors",
"in",
"the",
"bert",
"transformer",
"layers"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
96
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
101,
102,
103,
108
],
"text": "their redundancy",
"tokens": [
"bert",
"transformer",
"layers",
"redundancy"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
106
],
"text": "reveal",
"tokens": [
"reveal"
]
}
}
] |
[
"large",
"-",
"scale",
"pre",
"-",
"trained",
"language",
"models",
"such",
"as",
"bert",
"have",
"brought",
"significant",
"improvements",
"to",
"nlp",
"applications",
".",
"however",
",",
"they",
"are",
"also",
"notorious",
"for",
"being",
"slow",
"in",
"inference",
",",
"which",
"makes",
"them",
"difficult",
"to",
"deploy",
"in",
"real",
"-",
"time",
"applications",
".",
"we",
"propose",
"a",
"simple",
"but",
"effective",
"method",
",",
"deebert",
",",
"to",
"accelerate",
"bert",
"inference",
".",
"our",
"approach",
"allows",
"samples",
"to",
"exit",
"earlier",
"without",
"passing",
"through",
"the",
"entire",
"model",
".",
"experiments",
"show",
"that",
"deebert",
"is",
"able",
"to",
"save",
"up",
"to",
"~",
"40",
"%",
"inference",
"time",
"with",
"minimal",
"degradation",
"in",
"model",
"quality",
".",
"further",
"analyses",
"show",
"different",
"behaviors",
"in",
"the",
"bert",
"transformer",
"layers",
"and",
"also",
"reveal",
"their",
"redundancy",
".",
"our",
"work",
"provides",
"new",
"ideas",
"to",
"efficiently",
"apply",
"deep",
"transformer",
"-",
"based",
"models",
"to",
"downstream",
"tasks",
".",
"code",
"is",
"available",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"castorini",
"/",
"deebert",
"."
] |
ACL
|
Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer
|
Multilingual representations embed words from many languages into a single semantic space such that words with similar meanings are close to each other regardless of the language. These embeddings have been widely used in various settings, such as cross-lingual transfer, where a natural language processing (NLP) model trained on one language is deployed to another language. While the cross-lingual transfer techniques are powerful, they carry gender bias from the source to target languages. In this paper, we study gender bias in multilingual embeddings and how it affects transfer learning for NLP applications. We create a multilingual dataset for bias analysis and propose several ways for quantifying bias in multilingual representations from both the intrinsic and extrinsic perspectives. Experimental results show that the magnitude of bias in the multilingual representations changes differently when we align the embeddings to different target spaces and that the alignment direction can also have an influence on the bias in transfer learning. We further provide recommendations for using the multilingual word representations for downstream tasks.
|
097a7600508cd9212fb670a91130fa8d
| 2,020
|
[
"multilingual representations embed words from many languages into a single semantic space such that words with similar meanings are close to each other regardless of the language .",
"these embeddings have been widely used in various settings , such as cross - lingual transfer , where a natural language processing ( nlp ) model trained on one language is deployed to another language .",
"while the cross - lingual transfer techniques are powerful , they carry gender bias from the source to target languages .",
"in this paper , we study gender bias in multilingual embeddings and how it affects transfer learning for nlp applications .",
"we create a multilingual dataset for bias analysis and propose several ways for quantifying bias in multilingual representations from both the intrinsic and extrinsic perspectives .",
"experimental results show that the magnitude of bias in the multilingual representations changes differently when we align the embeddings to different target spaces and that the alignment direction can also have an influence on the bias in transfer learning .",
"we further provide recommendations for using the multilingual word representations for downstream tasks ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
40,
41,
42,
43
],
"text": "cross - lingual transfer",
"tokens": [
"cross",
"-",
"lingual",
"transfer"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
33
],
"text": "used",
"tokens": [
"used"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
66,
67,
68,
69,
70
],
"text": "cross - lingual transfer techniques",
"tokens": [
"cross",
"-",
"lingual",
"transfer",
"techniques"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
76,
77
],
"text": "gender bias",
"tokens": [
"gender",
"bias"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
75
],
"text": "carry",
"tokens": [
"carry"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
89
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
91,
92,
93,
94,
95
],
"text": "gender bias in multilingual embeddings",
"tokens": [
"gender",
"bias",
"in",
"multilingual",
"embeddings"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
100,
101
],
"text": "transfer learning",
"tokens": [
"transfer",
"learning"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
90
],
"text": "study",
"tokens": [
"study"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
106
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
109,
110,
111,
112,
113
],
"text": "multilingual dataset for bias analysis",
"tokens": [
"multilingual",
"dataset",
"for",
"bias",
"analysis"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
107
],
"text": "create",
"tokens": [
"create"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
106
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
119
],
"text": "quantifying",
"tokens": [
"quantifying"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
116,
117
],
"text": "several ways",
"tokens": [
"several",
"ways"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
115
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
120,
121,
122,
123
],
"text": "bias in multilingual representations",
"tokens": [
"bias",
"in",
"multilingual",
"representations"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
124,
125,
126,
127,
128,
129,
130
],
"text": "from both the intrinsic and extrinsic perspectives",
"tokens": [
"from",
"both",
"the",
"intrinsic",
"and",
"extrinsic",
"perspectives"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
119
],
"text": "quantifying",
"tokens": [
"quantifying"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
144,
145
],
"text": "changes differently",
"tokens": [
"changes",
"differently"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
162,
163,
164
],
"text": "have an influence",
"tokens": [
"have",
"an",
"influence"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
134
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "FEA",
"offsets": [
137,
138,
139
],
"text": "magnitude of bias",
"tokens": [
"magnitude",
"of",
"bias"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
140,
141,
142,
143
],
"text": "in the multilingual representations",
"tokens": [
"in",
"the",
"multilingual",
"representations"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
146,
147,
148,
149,
150,
151,
152,
153,
154
],
"text": "when we align the embeddings to different target spaces",
"tokens": [
"when",
"we",
"align",
"the",
"embeddings",
"to",
"different",
"target",
"spaces"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
144,
145
],
"text": "changes differently",
"tokens": [
"changes",
"differently"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "FEA",
"offsets": [
158,
159
],
"text": "alignment direction",
"tokens": [
"alignment",
"direction"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
167,
168,
169,
170
],
"text": "bias in transfer learning",
"tokens": [
"bias",
"in",
"transfer",
"learning"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
162,
163,
164
],
"text": "have an influence",
"tokens": [
"have",
"an",
"influence"
]
}
}
] |
[
"multilingual",
"representations",
"embed",
"words",
"from",
"many",
"languages",
"into",
"a",
"single",
"semantic",
"space",
"such",
"that",
"words",
"with",
"similar",
"meanings",
"are",
"close",
"to",
"each",
"other",
"regardless",
"of",
"the",
"language",
".",
"these",
"embeddings",
"have",
"been",
"widely",
"used",
"in",
"various",
"settings",
",",
"such",
"as",
"cross",
"-",
"lingual",
"transfer",
",",
"where",
"a",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"model",
"trained",
"on",
"one",
"language",
"is",
"deployed",
"to",
"another",
"language",
".",
"while",
"the",
"cross",
"-",
"lingual",
"transfer",
"techniques",
"are",
"powerful",
",",
"they",
"carry",
"gender",
"bias",
"from",
"the",
"source",
"to",
"target",
"languages",
".",
"in",
"this",
"paper",
",",
"we",
"study",
"gender",
"bias",
"in",
"multilingual",
"embeddings",
"and",
"how",
"it",
"affects",
"transfer",
"learning",
"for",
"nlp",
"applications",
".",
"we",
"create",
"a",
"multilingual",
"dataset",
"for",
"bias",
"analysis",
"and",
"propose",
"several",
"ways",
"for",
"quantifying",
"bias",
"in",
"multilingual",
"representations",
"from",
"both",
"the",
"intrinsic",
"and",
"extrinsic",
"perspectives",
".",
"experimental",
"results",
"show",
"that",
"the",
"magnitude",
"of",
"bias",
"in",
"the",
"multilingual",
"representations",
"changes",
"differently",
"when",
"we",
"align",
"the",
"embeddings",
"to",
"different",
"target",
"spaces",
"and",
"that",
"the",
"alignment",
"direction",
"can",
"also",
"have",
"an",
"influence",
"on",
"the",
"bias",
"in",
"transfer",
"learning",
".",
"we",
"further",
"provide",
"recommendations",
"for",
"using",
"the",
"multilingual",
"word",
"representations",
"for",
"downstream",
"tasks",
"."
] |
ACL
|
Aspect Sentiment Classification with Document-level Sentiment Preference Modeling
|
In the literature, existing studies always consider Aspect Sentiment Classification (ASC) as an independent sentence-level classification problem aspect by aspect, which largely ignore the document-level sentiment preference information, though obviously such information is crucial for alleviating the information deficiency problem in ASC. In this paper, we explore two kinds of sentiment preference information inside a document, i.e., contextual sentiment consistency w.r.t. the same aspect (namely intra-aspect sentiment consistency) and contextual sentiment tendency w.r.t. all the related aspects (namely inter-aspect sentiment tendency). On the basis, we propose a Cooperative Graph Attention Networks (CoGAN) approach for cooperatively learning the aspect-related sentence representation. Specifically, two graph attention networks are leveraged to model above two kinds of document-level sentiment preference information respectively, followed by an interactive mechanism to integrate the two-fold preference. Detailed evaluation demonstrates the great advantage of the proposed approach to ASC over the state-of-the-art baselines. This justifies the importance of the document-level sentiment preference information to ASC and the effectiveness of our approach capturing such information.
|
ddfa2a00128b08b7605ebc24925da970
| 2,020
|
[
"in the literature , existing studies always consider aspect sentiment classification ( asc ) as an independent sentence - level classification problem aspect by aspect , which largely ignore the document - level sentiment preference information , though obviously such information is crucial for alleviating the information deficiency problem in asc .",
"in this paper , we explore two kinds of sentiment preference information inside a document , i . e . , contextual sentiment consistency w . r . t .",
"the same aspect ( namely intra - aspect sentiment consistency ) and contextual sentiment tendency w . r . t .",
"all the related aspects ( namely inter - aspect sentiment tendency ) .",
"on the basis , we propose a cooperative graph attention networks ( cogan ) approach for cooperatively learning the aspect - related sentence representation .",
"specifically , two graph attention networks are leveraged to model above two kinds of document - level sentiment preference information respectively , followed by an interactive mechanism to integrate the two - fold preference .",
"detailed evaluation demonstrates the great advantage of the proposed approach to asc over the state - of - the - art baselines .",
"this justifies the importance of the document - level sentiment preference information to asc and the effectiveness of our approach capturing such information ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
187
],
"text": "aspect sentiment classification",
"tokens": [
"asc"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
7
],
"text": "consider",
"tokens": [
"consider"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
56
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
73,
74,
75,
76,
77,
78,
79,
80
],
"text": "contextual sentiment consistency w . r . t",
"tokens": [
"contextual",
"sentiment",
"consistency",
"w",
".",
"r",
".",
"t"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
57
],
"text": "explore",
"tokens": [
"explore"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
120
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
132,
133
],
"text": "cooperatively learning",
"tokens": [
"cooperatively",
"learning"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
123,
124,
125,
126,
130
],
"text": "cooperative graph attention networks ( cogan ) approach",
"tokens": [
"cooperative",
"graph",
"attention",
"networks",
"approach"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
121
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
135,
136,
137,
138,
139
],
"text": "aspect - related sentence representation",
"tokens": [
"aspect",
"-",
"related",
"sentence",
"representation"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
132,
133
],
"text": "cooperatively learning",
"tokens": [
"cooperatively",
"learning"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
152,
153,
154,
155,
156,
157,
158,
159,
160
],
"text": "two kinds of document - level sentiment preference information",
"tokens": [
"two",
"kinds",
"of",
"document",
"-",
"level",
"sentiment",
"preference",
"information"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
143,
144,
145,
146
],
"text": "two graph attention networks",
"tokens": [
"two",
"graph",
"attention",
"networks"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
150
],
"text": "model",
"tokens": [
"model"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
166,
167
],
"text": "interactive mechanism",
"tokens": [
"interactive",
"mechanism"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
171,
172,
173,
174
],
"text": "two - fold preference",
"tokens": [
"two",
"-",
"fold",
"preference"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
169
],
"text": "integrate",
"tokens": [
"integrate"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
187
],
"text": "aspect sentiment classification",
"tokens": [
"asc"
]
},
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
202,
203,
204,
205,
206,
207,
208,
209,
210
],
"text": "importance of the document - level sentiment preference information",
"tokens": [
"importance",
"of",
"the",
"document",
"-",
"level",
"sentiment",
"preference",
"information"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
187
],
"text": "asc",
"tokens": [
"asc"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
200
],
"text": "justifies",
"tokens": [
"justifies"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
4,
5
],
"text": "existing studies",
"tokens": [
"existing",
"studies"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
27,
28
],
"text": "largely ignore",
"tokens": [
"largely",
"ignore"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
27,
28
],
"text": "largely ignore",
"tokens": [
"largely",
"ignore"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
180,
181,
182,
183,
184,
185
],
"text": "great advantage of the proposed approach",
"tokens": [
"great",
"advantage",
"of",
"the",
"proposed",
"approach"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
188,
189,
190,
191,
192,
193,
194,
195,
196,
197
],
"text": "over the state - of - the - art baselines",
"tokens": [
"over",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"baselines"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
187
],
"text": "asc",
"tokens": [
"asc"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
178
],
"text": "demonstrates",
"tokens": [
"demonstrates"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
215,
216,
217,
218
],
"text": "effectiveness of our approach",
"tokens": [
"effectiveness",
"of",
"our",
"approach"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
219
],
"text": "capturing",
"tokens": [
"capturing"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
200
],
"text": "justifies",
"tokens": [
"justifies"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
205,
206,
207,
208,
209,
210
],
"text": "document - level sentiment preference information",
"tokens": [
"document",
"-",
"level",
"sentiment",
"preference",
"information"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
219
],
"text": "capturing",
"tokens": [
"capturing"
]
}
}
] |
[
"in",
"the",
"literature",
",",
"existing",
"studies",
"always",
"consider",
"aspect",
"sentiment",
"classification",
"(",
"asc",
")",
"as",
"an",
"independent",
"sentence",
"-",
"level",
"classification",
"problem",
"aspect",
"by",
"aspect",
",",
"which",
"largely",
"ignore",
"the",
"document",
"-",
"level",
"sentiment",
"preference",
"information",
",",
"though",
"obviously",
"such",
"information",
"is",
"crucial",
"for",
"alleviating",
"the",
"information",
"deficiency",
"problem",
"in",
"asc",
".",
"in",
"this",
"paper",
",",
"we",
"explore",
"two",
"kinds",
"of",
"sentiment",
"preference",
"information",
"inside",
"a",
"document",
",",
"i",
".",
"e",
".",
",",
"contextual",
"sentiment",
"consistency",
"w",
".",
"r",
".",
"t",
".",
"the",
"same",
"aspect",
"(",
"namely",
"intra",
"-",
"aspect",
"sentiment",
"consistency",
")",
"and",
"contextual",
"sentiment",
"tendency",
"w",
".",
"r",
".",
"t",
".",
"all",
"the",
"related",
"aspects",
"(",
"namely",
"inter",
"-",
"aspect",
"sentiment",
"tendency",
")",
".",
"on",
"the",
"basis",
",",
"we",
"propose",
"a",
"cooperative",
"graph",
"attention",
"networks",
"(",
"cogan",
")",
"approach",
"for",
"cooperatively",
"learning",
"the",
"aspect",
"-",
"related",
"sentence",
"representation",
".",
"specifically",
",",
"two",
"graph",
"attention",
"networks",
"are",
"leveraged",
"to",
"model",
"above",
"two",
"kinds",
"of",
"document",
"-",
"level",
"sentiment",
"preference",
"information",
"respectively",
",",
"followed",
"by",
"an",
"interactive",
"mechanism",
"to",
"integrate",
"the",
"two",
"-",
"fold",
"preference",
".",
"detailed",
"evaluation",
"demonstrates",
"the",
"great",
"advantage",
"of",
"the",
"proposed",
"approach",
"to",
"asc",
"over",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"baselines",
".",
"this",
"justifies",
"the",
"importance",
"of",
"the",
"document",
"-",
"level",
"sentiment",
"preference",
"information",
"to",
"asc",
"and",
"the",
"effectiveness",
"of",
"our",
"approach",
"capturing",
"such",
"information",
"."
] |
ACL
|
What Was Written vs. Who Read It: News Media Profiling Using Text Analysis and Social Media Context
|
Predicting the political bias and the factuality of reporting of entire news outlets are critical elements of media profiling, which is an understudied but an increasingly important research direction. The present level of proliferation of fake, biased, and propagandistic content online has made it impossible to fact-check every single suspicious claim, either manually or automatically. Thus, it has been proposed to profile entire news outlets and to look for those that are likely to publish fake or biased content. This makes it possible to detect likely “fake news” the moment they are published, by simply checking the reliability of their source. From a practical perspective, political bias and factuality of reporting have a linguistic aspect but also a social context. Here, we study the impact of both, namely (i) what was written (i.e., what was published by the target medium, and how it describes itself in Twitter) vs. (ii) who reads it (i.e., analyzing the target medium’s audience on social media). We further study (iii) what was written about the target medium (in Wikipedia). The evaluation results show that what was written matters most, and we further show that putting all information sources together yields huge improvements over the current state-of-the-art.
|
f0836257dd4df8ac15ef193f4376fda5
| 2,020
|
[
"predicting the political bias and the factuality of reporting of entire news outlets are critical elements of media profiling , which is an understudied but an increasingly important research direction .",
"the present level of proliferation of fake , biased , and propagandistic content online has made it impossible to fact - check every single suspicious claim , either manually or automatically .",
"thus , it has been proposed to profile entire news outlets and to look for those that are likely to publish fake or biased content .",
"this makes it possible to detect likely “ fake news ” the moment they are published , by simply checking the reliability of their source .",
"from a practical perspective , political bias and factuality of reporting have a linguistic aspect but also a social context .",
"here , we study the impact of both , namely ( i ) what was written ( i . e . , what was published by the target medium , and how it describes itself in twitter ) vs . ( ii ) who reads it ( i . e . , analyzing the target medium ’ s audience on social media ) .",
"we further study ( iii ) what was written about the target medium ( in wikipedia ) .",
"the evaluation results show that what was written matters most , and we further show that putting all information sources together yields huge improvements over the current state - of - the - art ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
17,
18
],
"text": "media profiling",
"tokens": [
"media",
"profiling"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
15
],
"text": "elements",
"tokens": [
"elements"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
48
],
"text": "impossible",
"tokens": [
"impossible"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
50,
51,
52
],
"text": "fact - check",
"tokens": [
"fact",
"-",
"check"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
32,
33,
34,
35,
36,
37
],
"text": "present level of proliferation of fake",
"tokens": [
"present",
"level",
"of",
"proliferation",
"of",
"fake"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
32,
33,
34,
35,
36,
39
],
"text": "present level of proliferation of biased",
"tokens": [
"present",
"level",
"of",
"proliferation",
"of",
"biased"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
32,
33,
34,
35,
36,
42,
43,
44
],
"text": "present level of proliferation of propagandistic content online",
"tokens": [
"present",
"level",
"of",
"proliferation",
"of",
"propagandistic",
"content",
"online"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
46
],
"text": "made",
"tokens": [
"made"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
138
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
141,
142,
149,
150,
151
],
"text": "impact of what was written",
"tokens": [
"impact",
"of",
"what",
"was",
"written"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
141,
142,
174
],
"text": "impact of vs",
"tokens": [
"impact",
"of",
"vs"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
141,
142,
179,
180,
181
],
"text": "impact of who reads it",
"tokens": [
"impact",
"of",
"who",
"reads",
"it"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
139
],
"text": "study",
"tokens": [
"study"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
200
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
206,
207,
208,
209,
210,
211,
212
],
"text": "what was written about the target medium",
"tokens": [
"what",
"was",
"written",
"about",
"the",
"target",
"medium"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
202
],
"text": "study",
"tokens": [
"study"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
230
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
239
],
"text": "yields",
"tokens": [
"yields"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
232
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
234,
235,
236,
237,
238
],
"text": "putting all information sources together",
"tokens": [
"putting",
"all",
"information",
"sources",
"together"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
244,
245,
246,
247,
248,
249,
250,
251
],
"text": "current state - of - the - art",
"tokens": [
"current",
"state",
"-",
"of",
"-",
"the",
"-",
"art"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
240,
241
],
"text": "huge improvements",
"tokens": [
"huge",
"improvements"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
239
],
"text": "yields",
"tokens": [
"yields"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
53,
54,
55,
56
],
"text": "every single suspicious claim",
"tokens": [
"every",
"single",
"suspicious",
"claim"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
50,
51,
52
],
"text": "fact - check",
"tokens": [
"fact",
"-",
"check"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
226
],
"text": "matters",
"tokens": [
"matters"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
221
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
223,
224,
225
],
"text": "what was written",
"tokens": [
"what",
"was",
"written"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
227
],
"text": "most",
"tokens": [
"most"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
226
],
"text": "matters",
"tokens": [
"matters"
]
}
}
] |
[
"predicting",
"the",
"political",
"bias",
"and",
"the",
"factuality",
"of",
"reporting",
"of",
"entire",
"news",
"outlets",
"are",
"critical",
"elements",
"of",
"media",
"profiling",
",",
"which",
"is",
"an",
"understudied",
"but",
"an",
"increasingly",
"important",
"research",
"direction",
".",
"the",
"present",
"level",
"of",
"proliferation",
"of",
"fake",
",",
"biased",
",",
"and",
"propagandistic",
"content",
"online",
"has",
"made",
"it",
"impossible",
"to",
"fact",
"-",
"check",
"every",
"single",
"suspicious",
"claim",
",",
"either",
"manually",
"or",
"automatically",
".",
"thus",
",",
"it",
"has",
"been",
"proposed",
"to",
"profile",
"entire",
"news",
"outlets",
"and",
"to",
"look",
"for",
"those",
"that",
"are",
"likely",
"to",
"publish",
"fake",
"or",
"biased",
"content",
".",
"this",
"makes",
"it",
"possible",
"to",
"detect",
"likely",
"“",
"fake",
"news",
"”",
"the",
"moment",
"they",
"are",
"published",
",",
"by",
"simply",
"checking",
"the",
"reliability",
"of",
"their",
"source",
".",
"from",
"a",
"practical",
"perspective",
",",
"political",
"bias",
"and",
"factuality",
"of",
"reporting",
"have",
"a",
"linguistic",
"aspect",
"but",
"also",
"a",
"social",
"context",
".",
"here",
",",
"we",
"study",
"the",
"impact",
"of",
"both",
",",
"namely",
"(",
"i",
")",
"what",
"was",
"written",
"(",
"i",
".",
"e",
".",
",",
"what",
"was",
"published",
"by",
"the",
"target",
"medium",
",",
"and",
"how",
"it",
"describes",
"itself",
"in",
"twitter",
")",
"vs",
".",
"(",
"ii",
")",
"who",
"reads",
"it",
"(",
"i",
".",
"e",
".",
",",
"analyzing",
"the",
"target",
"medium",
"’",
"s",
"audience",
"on",
"social",
"media",
")",
".",
"we",
"further",
"study",
"(",
"iii",
")",
"what",
"was",
"written",
"about",
"the",
"target",
"medium",
"(",
"in",
"wikipedia",
")",
".",
"the",
"evaluation",
"results",
"show",
"that",
"what",
"was",
"written",
"matters",
"most",
",",
"and",
"we",
"further",
"show",
"that",
"putting",
"all",
"information",
"sources",
"together",
"yields",
"huge",
"improvements",
"over",
"the",
"current",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"."
] |
ACL
|
Complex Question Decomposition for Semantic Parsing
|
In this work, we focus on complex question semantic parsing and propose a novel Hierarchical Semantic Parsing (HSP) method, which utilizes the decompositionality of complex questions for semantic parsing. Our model is designed within a three-stage parsing architecture based on the idea of decomposition-integration. In the first stage, we propose a question decomposer which decomposes a complex question into a sequence of sub-questions. In the second stage, we design an information extractor to derive the type and predicate information of these questions. In the last stage, we integrate the generated information from previous stages and generate a logical form for the complex question. We conduct experiments on COMPLEXWEBQUESTIONS which is a large scale complex question semantic parsing dataset, results show that our model achieves significant improvement compared to state-of-the-art methods.
|
a0adb2b169761652c8ae379e6bfd1f19
| 2,019
|
[
"in this work , we focus on complex question semantic parsing and propose a novel hierarchical semantic parsing ( hsp ) method , which utilizes the decompositionality of complex questions for semantic parsing .",
"our model is designed within a three - stage parsing architecture based on the idea of decomposition - integration .",
"in the first stage , we propose a question decomposer which decomposes a complex question into a sequence of sub - questions .",
"in the second stage , we design an information extractor to derive the type and predicate information of these questions .",
"in the last stage , we integrate the generated information from previous stages and generate a logical form for the complex question .",
"we conduct experiments on complexwebquestions which is a large scale complex question semantic parsing dataset , results show that our model achieves significant improvement compared to state - of - the - art methods ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
15,
16,
17,
21
],
"text": "hierarchical semantic parsing ( hsp ) method",
"tokens": [
"hierarchical",
"semantic",
"parsing",
"method"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
12
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
59
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
62,
63
],
"text": "question decomposer",
"tokens": [
"question",
"decomposer"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
60
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
67,
68
],
"text": "complex question",
"tokens": [
"complex",
"question"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
71,
72,
73,
74,
75
],
"text": "sequence of sub - questions",
"tokens": [
"sequence",
"of",
"sub",
"-",
"questions"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
65
],
"text": "decomposes",
"tokens": [
"decomposes"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
109,
110
],
"text": "previous stages",
"tokens": [
"previous",
"stages"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
106,
107
],
"text": "generated information",
"tokens": [
"generated",
"information"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
104
],
"text": "integrate",
"tokens": [
"integrate"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
118,
119
],
"text": "complex question",
"tokens": [
"complex",
"question"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
114,
115
],
"text": "logical form",
"tokens": [
"logical",
"form"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
112
],
"text": "generate",
"tokens": [
"generate"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
142
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
138
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
147,
148,
149,
150,
151,
152,
153,
154
],
"text": "state - of - the - art methods",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"methods"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
15,
16,
17,
21
],
"text": "hierarchical semantic parsing ( hsp ) method",
"tokens": [
"hierarchical",
"semantic",
"parsing",
"method"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
143,
144
],
"text": "significant improvement",
"tokens": [
"significant",
"improvement"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
142
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
7,
8,
9,
10
],
"text": "complex question semantic parsing",
"tokens": [
"complex",
"question",
"semantic",
"parsing"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
5
],
"text": "focus",
"tokens": [
"focus"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
26,
27,
28,
29
],
"text": "decompositionality of complex questions",
"tokens": [
"decompositionality",
"of",
"complex",
"questions"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
31,
32
],
"text": "semantic parsing",
"tokens": [
"semantic",
"parsing"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
24
],
"text": "utilizes",
"tokens": [
"utilizes"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
35
],
"text": "model",
"tokens": [
"model"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
38,
39,
40,
41,
42,
43,
44
],
"text": "within a three - stage parsing architecture",
"tokens": [
"within",
"a",
"three",
"-",
"stage",
"parsing",
"architecture"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
37
],
"text": "designed",
"tokens": [
"designed"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
82
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
85,
86
],
"text": "information extractor",
"tokens": [
"information",
"extractor"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
83
],
"text": "design",
"tokens": [
"design"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
71,
72,
73,
74,
75
],
"text": "sequence of sub - questions",
"tokens": [
"sequence",
"of",
"sub",
"-",
"questions"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
90,
93
],
"text": "type information",
"tokens": [
"type",
"information"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
92,
93
],
"text": "predicate information",
"tokens": [
"predicate",
"information"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
88
],
"text": "derive",
"tokens": [
"derive"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
121
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
123
],
"text": "experiments",
"tokens": [
"experiments"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
125
],
"text": "complexwebquestions",
"tokens": [
"complexwebquestions"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
122
],
"text": "conduct",
"tokens": [
"conduct"
]
}
}
] |
[
"in",
"this",
"work",
",",
"we",
"focus",
"on",
"complex",
"question",
"semantic",
"parsing",
"and",
"propose",
"a",
"novel",
"hierarchical",
"semantic",
"parsing",
"(",
"hsp",
")",
"method",
",",
"which",
"utilizes",
"the",
"decompositionality",
"of",
"complex",
"questions",
"for",
"semantic",
"parsing",
".",
"our",
"model",
"is",
"designed",
"within",
"a",
"three",
"-",
"stage",
"parsing",
"architecture",
"based",
"on",
"the",
"idea",
"of",
"decomposition",
"-",
"integration",
".",
"in",
"the",
"first",
"stage",
",",
"we",
"propose",
"a",
"question",
"decomposer",
"which",
"decomposes",
"a",
"complex",
"question",
"into",
"a",
"sequence",
"of",
"sub",
"-",
"questions",
".",
"in",
"the",
"second",
"stage",
",",
"we",
"design",
"an",
"information",
"extractor",
"to",
"derive",
"the",
"type",
"and",
"predicate",
"information",
"of",
"these",
"questions",
".",
"in",
"the",
"last",
"stage",
",",
"we",
"integrate",
"the",
"generated",
"information",
"from",
"previous",
"stages",
"and",
"generate",
"a",
"logical",
"form",
"for",
"the",
"complex",
"question",
".",
"we",
"conduct",
"experiments",
"on",
"complexwebquestions",
"which",
"is",
"a",
"large",
"scale",
"complex",
"question",
"semantic",
"parsing",
"dataset",
",",
"results",
"show",
"that",
"our",
"model",
"achieves",
"significant",
"improvement",
"compared",
"to",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"methods",
"."
] |
ACL
|
Automated Evaluation of Writing – 50 Years and Counting
|
In this theme paper, we focus on Automated Writing Evaluation (AWE), using Ellis Page’s seminal 1966 paper to frame the presentation. We discuss some of the current frontiers in the field and offer some thoughts on the emergent uses of this technology.
|
15160ec78e25daa7e00b4498c36bba28
| 2,020
|
[
"in this theme paper , we focus on automated writing evaluation ( awe ) , using ellis page ’ s seminal 1966 paper to frame the presentation .",
"we discuss some of the current frontiers in the field and offer some thoughts on the emergent uses of this technology ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
5
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
8,
9,
10
],
"text": "automated writing evaluation",
"tokens": [
"automated",
"writing",
"evaluation"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
6
],
"text": "focus",
"tokens": [
"focus"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
5
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
16,
17,
18,
19,
20,
21,
22
],
"text": "ellis page ’ s seminal 1966 paper",
"tokens": [
"ellis",
"page",
"’",
"s",
"seminal",
"1966",
"paper"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
24
],
"text": "frame",
"tokens": [
"frame"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
15
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
26
],
"text": "presentation",
"tokens": [
"presentation"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
24
],
"text": "frame",
"tokens": [
"frame"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
28
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
30,
31,
32,
33,
34,
35,
36,
37
],
"text": "some of the current frontiers in the field",
"tokens": [
"some",
"of",
"the",
"current",
"frontiers",
"in",
"the",
"field"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
29
],
"text": "discuss",
"tokens": [
"discuss"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
28
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
40,
41
],
"text": "some thoughts",
"tokens": [
"some",
"thoughts"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
42,
43,
44,
45,
46,
47,
48
],
"text": "on the emergent uses of this technology",
"tokens": [
"on",
"the",
"emergent",
"uses",
"of",
"this",
"technology"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
39
],
"text": "offer",
"tokens": [
"offer"
]
}
}
] |
[
"in",
"this",
"theme",
"paper",
",",
"we",
"focus",
"on",
"automated",
"writing",
"evaluation",
"(",
"awe",
")",
",",
"using",
"ellis",
"page",
"’",
"s",
"seminal",
"1966",
"paper",
"to",
"frame",
"the",
"presentation",
".",
"we",
"discuss",
"some",
"of",
"the",
"current",
"frontiers",
"in",
"the",
"field",
"and",
"offer",
"some",
"thoughts",
"on",
"the",
"emergent",
"uses",
"of",
"this",
"technology",
"."
] |
ACL
|
Automatic Detection of Entity-Manipulated Text using Factual Knowledge
|
In this work, we focus on the problem of distinguishing a human written news article from a news article that is created by manipulating entities in a human written news article (e.g., replacing entities with factually incorrect entities). Such manipulated articles can mislead the reader by posing as a human written news article. We propose a neural network based detector that detects manipulated news articles by reasoning about the facts mentioned in the article. Our proposed detector exploits factual knowledge via graph convolutional neural network along with the textual information in the news article. We also create challenging datasets for this task by considering various strategies to generate the new replacement entity (e.g., entity generation from GPT-2). In all the settings, our proposed model either matches or outperforms the state-of-the-art detector in terms of accuracy. Our code and data are available at https://github.com/UBC-NLP/manipulated_entity_detection.
|
92e2e8c247e3215a1c4bb5f709ae4cce
| 2,022
|
[
"in this work , we focus on the problem of distinguishing a human written news article from a news article that is created by manipulating entities in a human written news article ( e . g . , replacing entities with factually incorrect entities ) .",
"such manipulated articles can mislead the reader by posing as a human written news article .",
"we propose a neural network based detector that detects manipulated news articles by reasoning about the facts mentioned in the article .",
"our proposed detector exploits factual knowledge via graph convolutional neural network along with the textual information in the news article .",
"we also create challenging datasets for this task by considering various strategies to generate the new replacement entity ( e . g . , entity generation from gpt - 2 ) .",
"in all the settings , our proposed model either matches or outperforms the state - of - the - art detector in terms of accuracy .",
"our code and data are available at https : / / github . com / ubc - nlp / manipulated _ entity _ detection ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
12,
13,
14,
15
],
"text": "human written news article",
"tokens": [
"human",
"written",
"news",
"article"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31
],
"text": "from a news article that is created by manipulating entities in a human written news article",
"tokens": [
"from",
"a",
"news",
"article",
"that",
"is",
"created",
"by",
"manipulating",
"entities",
"in",
"a",
"human",
"written",
"news",
"article"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
10
],
"text": "distinguishing",
"tokens": [
"distinguishing"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
62
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
65,
66,
67,
68
],
"text": "neural network based detector",
"tokens": [
"neural",
"network",
"based",
"detector"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
63
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
78,
79,
80,
81,
82
],
"text": "facts mentioned in the article",
"tokens": [
"facts",
"mentioned",
"in",
"the",
"article"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
70
],
"text": "detects",
"tokens": [
"detects"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
75
],
"text": "reasoning",
"tokens": [
"reasoning"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
71,
72,
73
],
"text": "manipulated news articles",
"tokens": [
"manipulated",
"news",
"articles"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
70
],
"text": "detects",
"tokens": [
"detects"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
87
],
"text": "exploits",
"tokens": [
"exploits"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
98,
99
],
"text": "textual information",
"tokens": [
"textual",
"information"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
91,
92,
93,
94
],
"text": "graph convolutional neural network",
"tokens": [
"graph",
"convolutional",
"neural",
"network"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
90
],
"text": "via",
"tokens": [
"via"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
88,
89
],
"text": "factual knowledge",
"tokens": [
"factual",
"knowledge"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
87
],
"text": "exploits",
"tokens": [
"exploits"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
105
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
108,
109
],
"text": "challenging datasets",
"tokens": [
"challenging",
"datasets"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
112
],
"text": "task",
"tokens": [
"task"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
107
],
"text": "create",
"tokens": [
"create"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
115,
116
],
"text": "various strategies",
"tokens": [
"various",
"strategies"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
118
],
"text": "generate",
"tokens": [
"generate"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
114
],
"text": "considering",
"tokens": [
"considering"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
121,
122
],
"text": "replacement entity",
"tokens": [
"replacement",
"entity"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
118
],
"text": "generate",
"tokens": [
"generate"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
65,
66,
67,
68
],
"text": "neural network based detector",
"tokens": [
"neural",
"network",
"based",
"detector"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
150,
151,
152,
153,
154,
155,
156,
157
],
"text": "state - of - the - art detector",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"detector"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
161
],
"text": "accuracy",
"tokens": [
"accuracy"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
137,
138,
139,
140
],
"text": "in all the settings",
"tokens": [
"in",
"all",
"the",
"settings"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
146,
147,
148
],
"text": "matches or outperforms",
"tokens": [
"matches",
"or",
"outperforms"
]
}
}
] |
[
"in",
"this",
"work",
",",
"we",
"focus",
"on",
"the",
"problem",
"of",
"distinguishing",
"a",
"human",
"written",
"news",
"article",
"from",
"a",
"news",
"article",
"that",
"is",
"created",
"by",
"manipulating",
"entities",
"in",
"a",
"human",
"written",
"news",
"article",
"(",
"e",
".",
"g",
".",
",",
"replacing",
"entities",
"with",
"factually",
"incorrect",
"entities",
")",
".",
"such",
"manipulated",
"articles",
"can",
"mislead",
"the",
"reader",
"by",
"posing",
"as",
"a",
"human",
"written",
"news",
"article",
".",
"we",
"propose",
"a",
"neural",
"network",
"based",
"detector",
"that",
"detects",
"manipulated",
"news",
"articles",
"by",
"reasoning",
"about",
"the",
"facts",
"mentioned",
"in",
"the",
"article",
".",
"our",
"proposed",
"detector",
"exploits",
"factual",
"knowledge",
"via",
"graph",
"convolutional",
"neural",
"network",
"along",
"with",
"the",
"textual",
"information",
"in",
"the",
"news",
"article",
".",
"we",
"also",
"create",
"challenging",
"datasets",
"for",
"this",
"task",
"by",
"considering",
"various",
"strategies",
"to",
"generate",
"the",
"new",
"replacement",
"entity",
"(",
"e",
".",
"g",
".",
",",
"entity",
"generation",
"from",
"gpt",
"-",
"2",
")",
".",
"in",
"all",
"the",
"settings",
",",
"our",
"proposed",
"model",
"either",
"matches",
"or",
"outperforms",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"detector",
"in",
"terms",
"of",
"accuracy",
".",
"our",
"code",
"and",
"data",
"are",
"available",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"ubc",
"-",
"nlp",
"/",
"manipulated",
"_",
"entity",
"_",
"detection",
"."
] |
ACL
|
Unsupervised Domain Clusters in Pretrained Language Models
|
The notion of “in-domain data” in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are many times unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domains without supervision – suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and precision and recall with respect to an oracle selection.
|
e6d492ba8477771dd66891c4e6aac0c5
| 2,020
|
[
"the notion of “ in - domain data ” in nlp is often over - simplistic and vague , as textual data varies in many nuanced linguistic aspects such as topic , style or level of formality .",
"in addition , domain labels are many times unavailable , making it challenging to build domain - specific systems .",
"we show that massive pre - trained language models implicitly learn sentence representations that cluster by domains without supervision – suggesting a simple data - driven definition of domains in textual data .",
"we harness this property and propose domain data selection methods based on such models , which require only a small set of in - domain monolingual data .",
"we evaluate our data selection methods for neural machine translation across five diverse domains , where they outperform an established approach as measured by both bleu and precision and recall with respect to an oracle selection ."
] |
[
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "DST",
"offsets": [
4,
5,
6,
7
],
"text": "in - domain data",
"tokens": [
"in",
"-",
"domain",
"data"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
13,
14,
15
],
"text": "over - simplistic",
"tokens": [
"over",
"-",
"simplistic"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
13,
14,
15
],
"text": "over - simplistic",
"tokens": [
"over",
"-",
"simplistic"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
41,
42
],
"text": "domain labels",
"tokens": [
"domain",
"labels"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
46
],
"text": "unavailable",
"tokens": [
"unavailable"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
61,
62,
63,
64,
65,
66
],
"text": "massive pre - trained language models",
"tokens": [
"massive",
"pre",
"-",
"trained",
"language",
"models"
]
},
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
69,
70,
71,
72,
73,
74
],
"text": "sentence representations that cluster by domains",
"tokens": [
"sentence",
"representations",
"that",
"cluster",
"by",
"domains"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
75,
76
],
"text": "without supervision",
"tokens": [
"without",
"supervision"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
67,
68
],
"text": "implicitly learn",
"tokens": [
"implicitly",
"learn"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
97,
98,
99,
100
],
"text": "domain data selection methods",
"tokens": [
"domain",
"data",
"selection",
"methods"
]
},
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
91
],
"text": "we",
"tokens": [
"we"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
96
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
58
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
67,
68
],
"text": "implicitly learn",
"tokens": [
"implicitly",
"learn"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
59
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
119
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
97,
98,
99,
100
],
"text": "domain data selection methods",
"tokens": [
"domain",
"data",
"selection",
"methods"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
126,
127,
128
],
"text": "neural machine translation",
"tokens": [
"neural",
"machine",
"translation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
129,
130,
131,
132
],
"text": "across five diverse domains",
"tokens": [
"across",
"five",
"diverse",
"domains"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
120
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
97,
98,
99,
100
],
"text": "domain data selection methods",
"tokens": [
"domain",
"data",
"selection",
"methods"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
136
],
"text": "outperform",
"tokens": [
"outperform"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
138,
139
],
"text": "established approach",
"tokens": [
"established",
"approach"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
136
],
"text": "outperform",
"tokens": [
"outperform"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
97,
98,
99,
100
],
"text": "domain data selection methods",
"tokens": [
"domain",
"data",
"selection",
"methods"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
149,
150,
151,
152,
153,
154
],
"text": "with respect to an oracle selection",
"tokens": [
"with",
"respect",
"to",
"an",
"oracle",
"selection"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
148
],
"text": "recall",
"tokens": [
"recall"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "DST",
"offsets": [
4,
5,
6,
7
],
"text": "in - domain data",
"tokens": [
"in",
"-",
"domain",
"data"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
17
],
"text": "vague",
"tokens": [
"vague"
]
}
}
] |
[
"the",
"notion",
"of",
"“",
"in",
"-",
"domain",
"data",
"”",
"in",
"nlp",
"is",
"often",
"over",
"-",
"simplistic",
"and",
"vague",
",",
"as",
"textual",
"data",
"varies",
"in",
"many",
"nuanced",
"linguistic",
"aspects",
"such",
"as",
"topic",
",",
"style",
"or",
"level",
"of",
"formality",
".",
"in",
"addition",
",",
"domain",
"labels",
"are",
"many",
"times",
"unavailable",
",",
"making",
"it",
"challenging",
"to",
"build",
"domain",
"-",
"specific",
"systems",
".",
"we",
"show",
"that",
"massive",
"pre",
"-",
"trained",
"language",
"models",
"implicitly",
"learn",
"sentence",
"representations",
"that",
"cluster",
"by",
"domains",
"without",
"supervision",
"–",
"suggesting",
"a",
"simple",
"data",
"-",
"driven",
"definition",
"of",
"domains",
"in",
"textual",
"data",
".",
"we",
"harness",
"this",
"property",
"and",
"propose",
"domain",
"data",
"selection",
"methods",
"based",
"on",
"such",
"models",
",",
"which",
"require",
"only",
"a",
"small",
"set",
"of",
"in",
"-",
"domain",
"monolingual",
"data",
".",
"we",
"evaluate",
"our",
"data",
"selection",
"methods",
"for",
"neural",
"machine",
"translation",
"across",
"five",
"diverse",
"domains",
",",
"where",
"they",
"outperform",
"an",
"established",
"approach",
"as",
"measured",
"by",
"both",
"bleu",
"and",
"precision",
"and",
"recall",
"with",
"respect",
"to",
"an",
"oracle",
"selection",
"."
] |
ACL
|
Identifying Visible Actions in Lifestyle Vlogs
|
We consider the task of identifying human actions visible in online videos. We focus on the widely spread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to identify if actions mentioned in the speech description of a video are visually present. We construct a dataset with crowdsourced manual annotations of visible actions, and introduce a multimodal algorithm that leverages information derived from visual and linguistic clues to automatically infer which actions are visible in a video.
|
98e2dfb868d9c42d24ffedc6e1917178
| 2,019
|
[
"we consider the task of identifying human actions visible in online videos .",
"we focus on the widely spread genre of lifestyle vlogs , which consist of videos of people performing actions while verbally describing them .",
"our goal is to identify if actions mentioned in the speech description of a video are visually present .",
"we construct a dataset with crowdsourced manual annotations of visible actions , and introduce a multimodal algorithm that leverages information derived from visual and linguistic clues to automatically infer which actions are visible in a video ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
5,
6,
7,
8
],
"text": "identifying human actions visible",
"tokens": [
"identifying",
"human",
"actions",
"visible"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
9,
10,
11
],
"text": "in online videos",
"tokens": [
"in",
"online",
"videos"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
3
],
"text": "task",
"tokens": [
"task"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
56
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
59,
60,
61,
62,
63,
64,
65,
66
],
"text": "dataset with crowdsourced manual annotations of visible actions",
"tokens": [
"dataset",
"with",
"crowdsourced",
"manual",
"annotations",
"of",
"visible",
"actions"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
57
],
"text": "construct",
"tokens": [
"construct"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
56
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
71,
72
],
"text": "multimodal algorithm",
"tokens": [
"multimodal",
"algorithm"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
69
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
83,
84
],
"text": "automatically infer",
"tokens": [
"automatically",
"infer"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
75
],
"text": "information",
"tokens": [
"information"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
78,
79
],
"text": "visual and",
"tokens": [
"visual",
"and"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
79,
80,
81
],
"text": "and linguistic clues",
"tokens": [
"and",
"linguistic",
"clues"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
74
],
"text": "leverages",
"tokens": [
"leverages"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
85,
86,
87,
88
],
"text": "which actions are visible",
"tokens": [
"which",
"actions",
"are",
"visible"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
83,
84
],
"text": "automatically infer",
"tokens": [
"automatically",
"infer"
]
}
}
] |
[
"we",
"consider",
"the",
"task",
"of",
"identifying",
"human",
"actions",
"visible",
"in",
"online",
"videos",
".",
"we",
"focus",
"on",
"the",
"widely",
"spread",
"genre",
"of",
"lifestyle",
"vlogs",
",",
"which",
"consist",
"of",
"videos",
"of",
"people",
"performing",
"actions",
"while",
"verbally",
"describing",
"them",
".",
"our",
"goal",
"is",
"to",
"identify",
"if",
"actions",
"mentioned",
"in",
"the",
"speech",
"description",
"of",
"a",
"video",
"are",
"visually",
"present",
".",
"we",
"construct",
"a",
"dataset",
"with",
"crowdsourced",
"manual",
"annotations",
"of",
"visible",
"actions",
",",
"and",
"introduce",
"a",
"multimodal",
"algorithm",
"that",
"leverages",
"information",
"derived",
"from",
"visual",
"and",
"linguistic",
"clues",
"to",
"automatically",
"infer",
"which",
"actions",
"are",
"visible",
"in",
"a",
"video",
"."
] |
ACL
|
WatClaimCheck: A new Dataset for Claim Entailment and Inference
|
We contribute a new dataset for the task of automated fact checking and an evaluation of state of the art algorithms. The dataset includes claims (from speeches, interviews, social media and news articles), review articles published by professional fact checkers and premise articles used by those professional fact checkers to support their review and verify the veracity of the claims. An important challenge in the use of premise articles is the identification of relevant passages that will help to infer the veracity of a claim. We show that transferring a dense passage retrieval model trained with review articles improves the retrieval quality of passages in premise articles. We report results for the prediction of claim veracity by inference from premise articles.
|
22d41006008f1191f4fd90e1f24d7f25
| 2,022
|
[
"we contribute a new dataset for the task of automated fact checking and an evaluation of state of the art algorithms .",
"the dataset includes claims ( from speeches , interviews , social media and news articles ) , review articles published by professional fact checkers and premise articles used by those professional fact checkers to support their review and verify the veracity of the claims .",
"an important challenge in the use of premise articles is the identification of relevant passages that will help to infer the veracity of a claim .",
"we show that transferring a dense passage retrieval model trained with review articles improves the retrieval quality of passages in premise articles .",
"we report results for the prediction of claim veracity by inference from premise articles ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
4
],
"text": "dataset",
"tokens": [
"dataset"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
7,
8,
9,
10,
11
],
"text": "task of automated fact checking",
"tokens": [
"task",
"of",
"automated",
"fact",
"checking"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
14,
15,
16,
17,
18,
19,
20
],
"text": "evaluation of state of the art algorithms",
"tokens": [
"evaluation",
"of",
"state",
"of",
"the",
"art",
"algorithms"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
1
],
"text": "contribute",
"tokens": [
"contribute"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
93
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
106
],
"text": "improves",
"tokens": [
"improves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
94
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
108,
109,
110,
111
],
"text": "retrieval quality of passages",
"tokens": [
"retrieval",
"quality",
"of",
"passages"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
98,
99,
100,
101,
102,
103,
104,
105
],
"text": "dense passage retrieval model trained with review articles",
"tokens": [
"dense",
"passage",
"retrieval",
"model",
"trained",
"with",
"review",
"articles"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
106
],
"text": "improves",
"tokens": [
"improves"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
116
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
128,
129
],
"text": "premise articles",
"tokens": [
"premise",
"articles"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
117
],
"text": "report",
"tokens": [
"report"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
126
],
"text": "inference",
"tokens": [
"inference"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
80,
81
],
"text": "relevant passages",
"tokens": [
"relevant",
"passages"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
74,
75
],
"text": "premise articles",
"tokens": [
"premise",
"articles"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
78
],
"text": "identification",
"tokens": [
"identification"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
118,
119,
120,
121,
122,
123,
124
],
"text": "results for the prediction of claim veracity",
"tokens": [
"results",
"for",
"the",
"prediction",
"of",
"claim",
"veracity"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
117
],
"text": "report",
"tokens": [
"report"
]
}
}
] |
[
"we",
"contribute",
"a",
"new",
"dataset",
"for",
"the",
"task",
"of",
"automated",
"fact",
"checking",
"and",
"an",
"evaluation",
"of",
"state",
"of",
"the",
"art",
"algorithms",
".",
"the",
"dataset",
"includes",
"claims",
"(",
"from",
"speeches",
",",
"interviews",
",",
"social",
"media",
"and",
"news",
"articles",
")",
",",
"review",
"articles",
"published",
"by",
"professional",
"fact",
"checkers",
"and",
"premise",
"articles",
"used",
"by",
"those",
"professional",
"fact",
"checkers",
"to",
"support",
"their",
"review",
"and",
"verify",
"the",
"veracity",
"of",
"the",
"claims",
".",
"an",
"important",
"challenge",
"in",
"the",
"use",
"of",
"premise",
"articles",
"is",
"the",
"identification",
"of",
"relevant",
"passages",
"that",
"will",
"help",
"to",
"infer",
"the",
"veracity",
"of",
"a",
"claim",
".",
"we",
"show",
"that",
"transferring",
"a",
"dense",
"passage",
"retrieval",
"model",
"trained",
"with",
"review",
"articles",
"improves",
"the",
"retrieval",
"quality",
"of",
"passages",
"in",
"premise",
"articles",
".",
"we",
"report",
"results",
"for",
"the",
"prediction",
"of",
"claim",
"veracity",
"by",
"inference",
"from",
"premise",
"articles",
"."
] |
ACL
|
Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning
|
Even though BERT has achieved successful performance improvements in various supervised learning tasks, BERT is still limited by repetitive inferences on unsupervised tasks for the computation of contextual language representations. To resolve this limitation, we propose a novel deep bidirectional language model called a Transformer-based Text Autoencoder (T-TA). The T-TA computes contextual language representations without repetition and displays the benefits of a deep bidirectional architecture, such as that of BERT. In computation time experiments in a CPU environment, the proposed T-TA performs over six times faster than the BERT-like model on a reranking task and twelve times faster on a semantic similarity task. Furthermore, the T-TA shows competitive or even better accuracies than those of BERT on the above tasks. Code is available at https://github.com/joongbo/tta.
|
1716f68f1eec3a5f3903ec64bc8cc949
| 2,020
|
[
"even though bert has achieved successful performance improvements in various supervised learning tasks , bert is still limited by repetitive inferences on unsupervised tasks for the computation of contextual language representations .",
"to resolve this limitation , we propose a novel deep bidirectional language model called a transformer - based text autoencoder ( t - ta ) .",
"the t - ta computes contextual language representations without repetition and displays the benefits of a deep bidirectional architecture , such as that of bert .",
"in computation time experiments in a cpu environment , the proposed t - ta performs over six times faster than the bert - like model on a reranking task and twelve times faster on a semantic similarity task .",
"furthermore , the t - ta shows competitive or even better accuracies than those of bert on the above tasks .",
"code is available at https : / / github . com / joongbo / tta ."
] |
[
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
17
],
"text": "limited",
"tokens": [
"limited"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
14
],
"text": "bert",
"tokens": [
"bert"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
19,
20
],
"text": "repetitive inferences",
"tokens": [
"repetitive",
"inferences"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
17
],
"text": "limited",
"tokens": [
"limited"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
33
],
"text": "resolve",
"tokens": [
"resolve"
]
},
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
37
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
41,
42,
43,
44
],
"text": "deep bidirectional language model",
"tokens": [
"deep",
"bidirectional",
"language",
"model"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
38
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
35
],
"text": "limitation",
"tokens": [
"limitation"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
33
],
"text": "resolve",
"tokens": [
"resolve"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
47,
48,
49,
50,
51
],
"text": "transformer - based text autoencoder",
"tokens": [
"transformer",
"-",
"based",
"text",
"autoencoder"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
100,
101
],
"text": "six times",
"tokens": [
"six",
"times"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
102
],
"text": "faster",
"tokens": [
"faster"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
105,
106,
107,
108
],
"text": "bert - like model",
"tokens": [
"bert",
"-",
"like",
"model"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
111,
112
],
"text": "reranking task",
"tokens": [
"reranking",
"task"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
98
],
"text": "performs",
"tokens": [
"performs"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
47,
48,
49,
50,
51
],
"text": "transformer - based text autoencoder",
"tokens": [
"transformer",
"-",
"based",
"text",
"autoencoder"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
114,
115
],
"text": "twelve times",
"tokens": [
"twelve",
"times"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
116
],
"text": "faster",
"tokens": [
"faster"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
105,
106,
107,
108
],
"text": "bert - like model",
"tokens": [
"bert",
"-",
"like",
"model"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
119,
120,
121
],
"text": "semantic similarity task",
"tokens": [
"semantic",
"similarity",
"task"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
98
],
"text": "performs",
"tokens": [
"performs"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
47,
48,
49,
50,
51
],
"text": "transformer - based text autoencoder",
"tokens": [
"transformer",
"-",
"based",
"text",
"autoencoder"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
134
],
"text": "accuracies",
"tokens": [
"accuracies"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
138
],
"text": "bert",
"tokens": [
"bert"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
130,
131,
132,
133
],
"text": "competitive or even better",
"tokens": [
"competitive",
"or",
"even",
"better"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
119,
120,
121
],
"text": "semantic similarity task",
"tokens": [
"semantic",
"similarity",
"task"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
111,
112
],
"text": "reranking task",
"tokens": [
"reranking",
"task"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
129
],
"text": "shows",
"tokens": [
"shows"
]
}
}
] |
[
"even",
"though",
"bert",
"has",
"achieved",
"successful",
"performance",
"improvements",
"in",
"various",
"supervised",
"learning",
"tasks",
",",
"bert",
"is",
"still",
"limited",
"by",
"repetitive",
"inferences",
"on",
"unsupervised",
"tasks",
"for",
"the",
"computation",
"of",
"contextual",
"language",
"representations",
".",
"to",
"resolve",
"this",
"limitation",
",",
"we",
"propose",
"a",
"novel",
"deep",
"bidirectional",
"language",
"model",
"called",
"a",
"transformer",
"-",
"based",
"text",
"autoencoder",
"(",
"t",
"-",
"ta",
")",
".",
"the",
"t",
"-",
"ta",
"computes",
"contextual",
"language",
"representations",
"without",
"repetition",
"and",
"displays",
"the",
"benefits",
"of",
"a",
"deep",
"bidirectional",
"architecture",
",",
"such",
"as",
"that",
"of",
"bert",
".",
"in",
"computation",
"time",
"experiments",
"in",
"a",
"cpu",
"environment",
",",
"the",
"proposed",
"t",
"-",
"ta",
"performs",
"over",
"six",
"times",
"faster",
"than",
"the",
"bert",
"-",
"like",
"model",
"on",
"a",
"reranking",
"task",
"and",
"twelve",
"times",
"faster",
"on",
"a",
"semantic",
"similarity",
"task",
".",
"furthermore",
",",
"the",
"t",
"-",
"ta",
"shows",
"competitive",
"or",
"even",
"better",
"accuracies",
"than",
"those",
"of",
"bert",
"on",
"the",
"above",
"tasks",
".",
"code",
"is",
"available",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"joongbo",
"/",
"tta",
"."
] |
ACL
|
Document Translation vs. Query Translation for Cross-Lingual Information Retrieval in the Medical Domain
|
We present a thorough comparison of two principal approaches to Cross-Lingual Information Retrieval: document translation (DT) and query translation (QT). Our experiments are conducted using the cross-lingual test collection produced within the CLEF eHealth information retrieval tasks in 2013–2015 containing English documents and queries in several European languages. We exploit the Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) paradigms and train several domain-specific and task-specific machine translation systems to translate the non-English queries into English (for the QT approach) and the English documents to all the query languages (for the DT approach). The results show that the quality of QT by SMT is sufficient enough to outperform the retrieval results of the DT approach for all the languages. NMT then further boosts translation quality and retrieval quality for both QT and DT for most languages, but still, QT provides generally better retrieval results than DT.
|
bf246fff8807bce9c1b1246cea605c18
| 2,020
|
[
"we present a thorough comparison of two principal approaches to cross - lingual information retrieval : document translation ( dt ) and query translation ( qt ) .",
"our experiments are conducted using the cross - lingual test collection produced within the clef ehealth information retrieval tasks in 2013 – 2015 containing english documents and queries in several european languages .",
"we exploit the statistical machine translation ( smt ) and neural machine translation ( nmt ) paradigms and train several domain - specific and task - specific machine translation systems to translate the non - english queries into english ( for the qt approach ) and the english documents to all the query languages ( for the dt approach ) .",
"the results show that the quality of qt by smt is sufficient enough to outperform the retrieval results of the dt approach for all the languages .",
"nmt then further boosts translation quality and retrieval quality for both qt and dt for most languages , but still , qt provides generally better retrieval results than dt ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
3,
4
],
"text": "thorough comparison",
"tokens": [
"thorough",
"comparison"
]
},
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
16,
17
],
"text": "document translation",
"tokens": [
"document",
"translation"
]
},
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
22,
23
],
"text": "query translation",
"tokens": [
"query",
"translation"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
1
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
34,
35,
36,
37,
38
],
"text": "cross - lingual test collection",
"tokens": [
"cross",
"-",
"lingual",
"test",
"collection"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
31
],
"text": "conducted",
"tokens": [
"conducted"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
32
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
29
],
"text": "experiments",
"tokens": [
"experiments"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
31
],
"text": "conducted",
"tokens": [
"conducted"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
136
],
"text": "outperform",
"tokens": [
"outperform"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
152
],
"text": "boosts",
"tokens": [
"boosts"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
171
],
"text": "provides",
"tokens": [
"provides"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
124
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
136
],
"text": "outperform",
"tokens": [
"outperform"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
144,
145,
146,
147
],
"text": "for all the languages",
"tokens": [
"for",
"all",
"the",
"languages"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
16,
17
],
"text": "document translation",
"tokens": [
"document",
"translation"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
136
],
"text": "outperform",
"tokens": [
"outperform"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
71,
72,
73
],
"text": "neural machine translation",
"tokens": [
"neural",
"machine",
"translation"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
153,
154
],
"text": "translation quality",
"tokens": [
"translation",
"quality"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
156,
157
],
"text": "retrieval quality",
"tokens": [
"retrieval",
"quality"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
158,
159,
22,
23,
161,
16,
17
],
"text": "for both qt and dt",
"tokens": [
"for",
"both",
"query",
"translation",
"and",
"document",
"translation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
163,
164,
165
],
"text": "for most languages",
"tokens": [
"for",
"most",
"languages"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
152
],
"text": "boosts",
"tokens": [
"boosts"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
22,
23
],
"text": "query translation",
"tokens": [
"query",
"translation"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
16,
17
],
"text": "document translation",
"tokens": [
"document",
"translation"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
172,
173,
174,
175
],
"text": "generally better retrieval results",
"tokens": [
"generally",
"better",
"retrieval",
"results"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
171
],
"text": "provides",
"tokens": [
"provides"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
61
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
92
],
"text": "translate",
"tokens": [
"translate"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
64,
65,
66,
77
],
"text": "statistical machine translation paradigms",
"tokens": [
"statistical",
"machine",
"translation",
"paradigms"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
71,
72,
73,
77
],
"text": "neural machine translation paradigms",
"tokens": [
"neural",
"machine",
"translation",
"paradigms"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
62
],
"text": "exploit",
"tokens": [
"exploit"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
61
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
92
],
"text": "translate",
"tokens": [
"translate"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
81,
82,
83,
88,
89,
90
],
"text": "domain - specific machine translation systems",
"tokens": [
"domain",
"-",
"specific",
"machine",
"translation",
"systems"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
85,
86,
87,
88,
89,
90
],
"text": "task - specific machine translation systems",
"tokens": [
"task",
"-",
"specific",
"machine",
"translation",
"systems"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
79
],
"text": "train",
"tokens": [
"train"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
94,
95,
96,
97,
98,
99
],
"text": "non - english queries into english",
"tokens": [
"non",
"-",
"english",
"queries",
"into",
"english"
]
},
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
108,
109,
110,
111,
112,
113,
114
],
"text": "english documents to all the query languages",
"tokens": [
"english",
"documents",
"to",
"all",
"the",
"query",
"languages"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
92
],
"text": "translate",
"tokens": [
"translate"
]
}
}
] |
[
"we",
"present",
"a",
"thorough",
"comparison",
"of",
"two",
"principal",
"approaches",
"to",
"cross",
"-",
"lingual",
"information",
"retrieval",
":",
"document",
"translation",
"(",
"dt",
")",
"and",
"query",
"translation",
"(",
"qt",
")",
".",
"our",
"experiments",
"are",
"conducted",
"using",
"the",
"cross",
"-",
"lingual",
"test",
"collection",
"produced",
"within",
"the",
"clef",
"ehealth",
"information",
"retrieval",
"tasks",
"in",
"2013",
"–",
"2015",
"containing",
"english",
"documents",
"and",
"queries",
"in",
"several",
"european",
"languages",
".",
"we",
"exploit",
"the",
"statistical",
"machine",
"translation",
"(",
"smt",
")",
"and",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"paradigms",
"and",
"train",
"several",
"domain",
"-",
"specific",
"and",
"task",
"-",
"specific",
"machine",
"translation",
"systems",
"to",
"translate",
"the",
"non",
"-",
"english",
"queries",
"into",
"english",
"(",
"for",
"the",
"qt",
"approach",
")",
"and",
"the",
"english",
"documents",
"to",
"all",
"the",
"query",
"languages",
"(",
"for",
"the",
"dt",
"approach",
")",
".",
"the",
"results",
"show",
"that",
"the",
"quality",
"of",
"qt",
"by",
"smt",
"is",
"sufficient",
"enough",
"to",
"outperform",
"the",
"retrieval",
"results",
"of",
"the",
"dt",
"approach",
"for",
"all",
"the",
"languages",
".",
"nmt",
"then",
"further",
"boosts",
"translation",
"quality",
"and",
"retrieval",
"quality",
"for",
"both",
"qt",
"and",
"dt",
"for",
"most",
"languages",
",",
"but",
"still",
",",
"qt",
"provides",
"generally",
"better",
"retrieval",
"results",
"than",
"dt",
"."
] |
ACL
|
Synthetic Question Value Estimation for Domain Adaptation of Question Answering
|
Synthesizing QA pairs with a question generator (QG) on the target domain has become a popular approach for domain adaptation of question answering (QA) models. Since synthetic questions are often noisy in practice, existing work adapts scores from a pretrained QA (or QG) model as criteria to select high-quality questions. However, these scores do not directly serve the ultimate goal of improving QA performance on the target domain. In this paper, we introduce a novel idea of training a question value estimator (QVE) that directly estimates the usefulness of synthetic questions for improving the target-domain QA performance. By conducting comprehensive experiments, we show that the synthetic questions selected by QVE can help achieve better target-domain QA performance, in comparison with existing techniques. We additionally show that by using such questions and only around 15% of the human annotations on the target domain, we can achieve comparable performance to the fully-supervised baselines.
|
0645ace4c01ac874a0def0c40c48635b
| 2,022
|
[
"synthesizing qa pairs with a question generator ( qg ) on the target domain has become a popular approach for domain adaptation of question answering ( qa ) models .",
"since synthetic questions are often noisy in practice , existing work adapts scores from a pretrained qa ( or qg ) model as criteria to select high - quality questions .",
"however , these scores do not directly serve the ultimate goal of improving qa performance on the target domain .",
"in this paper , we introduce a novel idea of training a question value estimator ( qve ) that directly estimates the usefulness of synthetic questions for improving the target - domain qa performance .",
"by conducting comprehensive experiments , we show that the synthetic questions selected by qve can help achieve better target - domain qa performance , in comparison with existing techniques .",
"we additionally show that by using such questions and only around 15 % of the human annotations on the target domain , we can achieve comparable performance to the fully - supervised baselines ."
] |
[
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
36,
37
],
"text": "in practice",
"tokens": [
"in",
"practice"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
31,
32
],
"text": "synthetic questions",
"tokens": [
"synthetic",
"questions"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
35
],
"text": "noisy",
"tokens": [
"noisy"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
42,
43,
44,
45,
23,
24,
51
],
"text": "scores from a pretrained qa ( or qg ) model",
"tokens": [
"scores",
"from",
"a",
"pretrained",
"question",
"answering",
"model"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
53
],
"text": "criteria",
"tokens": [
"criteria"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
55
],
"text": "select",
"tokens": [
"select"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
41
],
"text": "adapts",
"tokens": [
"adapts"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
56,
57,
58,
59
],
"text": "high - quality questions",
"tokens": [
"high",
"-",
"quality",
"questions"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
55
],
"text": "select",
"tokens": [
"select"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
42,
43,
44,
45,
23,
24,
51
],
"text": "scores from a pretrained qa ( or qg ) model",
"tokens": [
"scores",
"from",
"a",
"pretrained",
"question",
"answering",
"model"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
66,
67,
68
],
"text": "not directly serve",
"tokens": [
"not",
"directly",
"serve"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
66,
67,
68
],
"text": "not directly serve",
"tokens": [
"not",
"directly",
"serve"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
23,
24,
75
],
"text": "qa performance",
"tokens": [
"question",
"answering",
"performance"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
73
],
"text": "improving",
"tokens": [
"improving"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
85
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
93,
94,
95
],
"text": "question value estimator",
"tokens": [
"question",
"value",
"estimator"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
86
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
103,
104,
105,
106
],
"text": "usefulness of synthetic questions",
"tokens": [
"usefulness",
"of",
"synthetic",
"questions"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
108
],
"text": "improving",
"tokens": [
"improving"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
100,
101
],
"text": "directly estimates",
"tokens": [
"directly",
"estimates"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
110,
111,
112,
23,
24,
114
],
"text": "target - domain qa performance",
"tokens": [
"target",
"-",
"domain",
"question",
"answering",
"performance"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
108
],
"text": "improving",
"tokens": [
"improving"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
121
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
132
],
"text": "achieve",
"tokens": [
"achieve"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
122
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
93,
94,
95
],
"text": "question value estimator",
"tokens": [
"question",
"value",
"estimator"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
143,
144
],
"text": "existing techniques",
"tokens": [
"existing",
"techniques"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
133,
134,
135,
136,
137,
138
],
"text": "better target - domain qa performance",
"tokens": [
"better",
"target",
"-",
"domain",
"qa",
"performance"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
132
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
170
],
"text": "achieve",
"tokens": [
"achieve"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
148
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
175,
176,
177,
178
],
"text": "fully - supervised baselines",
"tokens": [
"fully",
"-",
"supervised",
"baselines"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
171,
172
],
"text": "comparable performance",
"tokens": [
"comparable",
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166
],
"text": "by using such questions and only around 15 % of the human annotations on the target domain",
"tokens": [
"by",
"using",
"such",
"questions",
"and",
"only",
"around",
"15",
"%",
"of",
"the",
"human",
"annotations",
"on",
"the",
"target",
"domain"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
170
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
23,
24,
28
],
"text": "question answering ( qa ) models",
"tokens": [
"question",
"answering",
"models"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
15
],
"text": "become",
"tokens": [
"become"
]
}
}
] |
[
"synthesizing",
"qa",
"pairs",
"with",
"a",
"question",
"generator",
"(",
"qg",
")",
"on",
"the",
"target",
"domain",
"has",
"become",
"a",
"popular",
"approach",
"for",
"domain",
"adaptation",
"of",
"question",
"answering",
"(",
"qa",
")",
"models",
".",
"since",
"synthetic",
"questions",
"are",
"often",
"noisy",
"in",
"practice",
",",
"existing",
"work",
"adapts",
"scores",
"from",
"a",
"pretrained",
"qa",
"(",
"or",
"qg",
")",
"model",
"as",
"criteria",
"to",
"select",
"high",
"-",
"quality",
"questions",
".",
"however",
",",
"these",
"scores",
"do",
"not",
"directly",
"serve",
"the",
"ultimate",
"goal",
"of",
"improving",
"qa",
"performance",
"on",
"the",
"target",
"domain",
".",
"in",
"this",
"paper",
",",
"we",
"introduce",
"a",
"novel",
"idea",
"of",
"training",
"a",
"question",
"value",
"estimator",
"(",
"qve",
")",
"that",
"directly",
"estimates",
"the",
"usefulness",
"of",
"synthetic",
"questions",
"for",
"improving",
"the",
"target",
"-",
"domain",
"qa",
"performance",
".",
"by",
"conducting",
"comprehensive",
"experiments",
",",
"we",
"show",
"that",
"the",
"synthetic",
"questions",
"selected",
"by",
"qve",
"can",
"help",
"achieve",
"better",
"target",
"-",
"domain",
"qa",
"performance",
",",
"in",
"comparison",
"with",
"existing",
"techniques",
".",
"we",
"additionally",
"show",
"that",
"by",
"using",
"such",
"questions",
"and",
"only",
"around",
"15",
"%",
"of",
"the",
"human",
"annotations",
"on",
"the",
"target",
"domain",
",",
"we",
"can",
"achieve",
"comparable",
"performance",
"to",
"the",
"fully",
"-",
"supervised",
"baselines",
"."
] |
ACL
|
MOROCO: The Moldavian and Romanian Dialectal Corpus
|
In this work, we introduce the MOldavian and ROmanian Dialectal COrpus (MOROCO), which is freely available for download at https://github.com/butnaruandrei/MOROCO. The corpus contains 33564 samples of text (with over 10 million tokens) collected from the news domain. The samples belong to one of the following six topics: culture, finance, politics, science, sports and tech. The data set is divided into 21719 samples for training, 5921 samples for validation and another 5924 samples for testing. For each sample, we provide corresponding dialectal and category labels. This allows us to perform empirical studies on several classification tasks such as (i) binary discrimination of Moldavian versus Romanian text samples, (ii) intra-dialect multi-class categorization by topic and (iii) cross-dialect multi-class categorization by topic. We perform experiments using a shallow approach based on string kernels, as well as a novel deep approach based on character-level convolutional neural networks containing Squeeze-and-Excitation blocks. We also present and analyze the most discriminative features of our best performing model, before and after named entity removal.
|
dcd6a027e32c99d0638e217a6bc6ba32
| 2,019
|
[
"in this work , we introduce the moldavian and romanian dialectal corpus ( moroco ) , which is freely available for download at https : / / github . com / butnaruandrei / moroco .",
"the corpus contains 33564 samples of text ( with over 10 million tokens ) collected from the news domain .",
"the samples belong to one of the following six topics : culture , finance , politics , science , sports and tech .",
"the data set is divided into 21719 samples for training , 5921 samples for validation and another 5924 samples for testing .",
"for each sample , we provide corresponding dialectal and category labels .",
"this allows us to perform empirical studies on several classification tasks such as ( i ) binary discrimination of moldavian versus romanian text samples , ( ii ) intra - dialect multi - class categorization by topic and ( iii ) cross - dialect multi - class categorization by topic .",
"we perform experiments using a shallow approach based on string kernels , as well as a novel deep approach based on character - level convolutional neural networks containing squeeze - and - excitation blocks .",
"we also present and analyze the most discriminative features of our best performing model , before and after named entity removal ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
7,
8,
9,
10,
11
],
"text": "moldavian and romanian dialectal corpus",
"tokens": [
"moldavian",
"and",
"romanian",
"dialectal",
"corpus"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
5
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
104
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
101,
102
],
"text": "each sample",
"tokens": [
"each",
"sample"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
106,
107,
110
],
"text": "corresponding dialectal labels",
"tokens": [
"corresponding",
"dialectal",
"labels"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
106,
109,
110
],
"text": "corresponding category labels",
"tokens": [
"corresponding",
"category",
"labels"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
105
],
"text": "provide",
"tokens": [
"provide"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
117,
118
],
"text": "empirical studies",
"tokens": [
"empirical",
"studies"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
119,
120,
121,
122
],
"text": "on several classification tasks",
"tokens": [
"on",
"several",
"classification",
"tasks"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
116
],
"text": "perform",
"tokens": [
"perform"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
198
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
204,
205,
206,
207,
208,
209,
210,
211
],
"text": "most discriminative features of our best performing model",
"tokens": [
"most",
"discriminative",
"features",
"of",
"our",
"best",
"performing",
"model"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
213,
214,
215,
216,
217,
218
],
"text": "before and after named entity removal",
"tokens": [
"before",
"and",
"after",
"named",
"entity",
"removal"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
200,
201,
202
],
"text": "present and analyze",
"tokens": [
"present",
"and",
"analyze"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
163
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
168,
169,
170,
171,
172,
173
],
"text": "shallow approach based on string kernels",
"tokens": [
"shallow",
"approach",
"based",
"on",
"string",
"kernels"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196
],
"text": "deep approach based on character - level convolutional neural networks containing squeeze - and - excitation blocks",
"tokens": [
"deep",
"approach",
"based",
"on",
"character",
"-",
"level",
"convolutional",
"neural",
"networks",
"containing",
"squeeze",
"-",
"and",
"-",
"excitation",
"blocks"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
164
],
"text": "perform",
"tokens": [
"perform"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
166
],
"text": "using",
"tokens": [
"using"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
165
],
"text": "experiments",
"tokens": [
"experiments"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
164
],
"text": "perform",
"tokens": [
"perform"
]
}
}
] |
[
"in",
"this",
"work",
",",
"we",
"introduce",
"the",
"moldavian",
"and",
"romanian",
"dialectal",
"corpus",
"(",
"moroco",
")",
",",
"which",
"is",
"freely",
"available",
"for",
"download",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"butnaruandrei",
"/",
"moroco",
".",
"the",
"corpus",
"contains",
"33564",
"samples",
"of",
"text",
"(",
"with",
"over",
"10",
"million",
"tokens",
")",
"collected",
"from",
"the",
"news",
"domain",
".",
"the",
"samples",
"belong",
"to",
"one",
"of",
"the",
"following",
"six",
"topics",
":",
"culture",
",",
"finance",
",",
"politics",
",",
"science",
",",
"sports",
"and",
"tech",
".",
"the",
"data",
"set",
"is",
"divided",
"into",
"21719",
"samples",
"for",
"training",
",",
"5921",
"samples",
"for",
"validation",
"and",
"another",
"5924",
"samples",
"for",
"testing",
".",
"for",
"each",
"sample",
",",
"we",
"provide",
"corresponding",
"dialectal",
"and",
"category",
"labels",
".",
"this",
"allows",
"us",
"to",
"perform",
"empirical",
"studies",
"on",
"several",
"classification",
"tasks",
"such",
"as",
"(",
"i",
")",
"binary",
"discrimination",
"of",
"moldavian",
"versus",
"romanian",
"text",
"samples",
",",
"(",
"ii",
")",
"intra",
"-",
"dialect",
"multi",
"-",
"class",
"categorization",
"by",
"topic",
"and",
"(",
"iii",
")",
"cross",
"-",
"dialect",
"multi",
"-",
"class",
"categorization",
"by",
"topic",
".",
"we",
"perform",
"experiments",
"using",
"a",
"shallow",
"approach",
"based",
"on",
"string",
"kernels",
",",
"as",
"well",
"as",
"a",
"novel",
"deep",
"approach",
"based",
"on",
"character",
"-",
"level",
"convolutional",
"neural",
"networks",
"containing",
"squeeze",
"-",
"and",
"-",
"excitation",
"blocks",
".",
"we",
"also",
"present",
"and",
"analyze",
"the",
"most",
"discriminative",
"features",
"of",
"our",
"best",
"performing",
"model",
",",
"before",
"and",
"after",
"named",
"entity",
"removal",
"."
] |
ACL
|
An Empirical Investigation of Structured Output Modeling for Graph-based Neural Dependency Parsing
|
In this paper, we investigate the aspect of structured output modeling for the state-of-the-art graph-based neural dependency parser (Dozat and Manning, 2017). With evaluations on 14 treebanks, we empirically show that global output-structured models can generally obtain better performance, especially on the metric of sentence-level Complete Match. However, probably because neural models already learn good global views of the inputs, the improvement brought by structured output modeling is modest.
|
0208939f00017ead964e2aea4ca2ae6d
| 2,019
|
[
"in this paper , we investigate the aspect of structured output modeling for the state - of - the - art graph - based neural dependency parser ( dozat and manning , 2017 ) .",
"with evaluations on 14 treebanks , we empirically show that global output - structured models can generally obtain better performance , especially on the metric of sentence - level complete match .",
"however , probably because neural models already learn good global views of the inputs , the improvement brought by structured output modeling is modest ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
7,
8,
9,
10,
11
],
"text": "aspect of structured output modeling",
"tokens": [
"aspect",
"of",
"structured",
"output",
"modeling"
]
},
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26
],
"text": "state - of - the - art graph - based neural dependency parser",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"graph",
"-",
"based",
"neural",
"dependency",
"parser"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
5
],
"text": "investigate",
"tokens": [
"investigate"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
41
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
38,
39
],
"text": "14 treebanks",
"tokens": [
"14",
"treebanks"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
36
],
"text": "evaluations",
"tokens": [
"evaluations"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
41
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
52
],
"text": "obtain",
"tokens": [
"obtain"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
43
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
45,
46,
47,
48,
49
],
"text": "global output - structured models",
"tokens": [
"global",
"output",
"-",
"structured",
"models"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
53
],
"text": "better",
"tokens": [
"better"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
54
],
"text": "performance",
"tokens": [
"performance"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
52
],
"text": "obtain",
"tokens": [
"obtain"
]
}
},
{
"arguments": [],
"event_type": "RWF",
"trigger": {
"offsets": [
90
],
"text": "modest",
"tokens": [
"modest"
]
}
}
] |
[
"in",
"this",
"paper",
",",
"we",
"investigate",
"the",
"aspect",
"of",
"structured",
"output",
"modeling",
"for",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"graph",
"-",
"based",
"neural",
"dependency",
"parser",
"(",
"dozat",
"and",
"manning",
",",
"2017",
")",
".",
"with",
"evaluations",
"on",
"14",
"treebanks",
",",
"we",
"empirically",
"show",
"that",
"global",
"output",
"-",
"structured",
"models",
"can",
"generally",
"obtain",
"better",
"performance",
",",
"especially",
"on",
"the",
"metric",
"of",
"sentence",
"-",
"level",
"complete",
"match",
".",
"however",
",",
"probably",
"because",
"neural",
"models",
"already",
"learn",
"good",
"global",
"views",
"of",
"the",
"inputs",
",",
"the",
"improvement",
"brought",
"by",
"structured",
"output",
"modeling",
"is",
"modest",
"."
] |
ACL
|
A Simple Theoretical Model of Importance for Summarization
|
Research on summarization has mainly been driven by empirical approaches, crafting systems to perform well on standard datasets with the notion of information Importance remaining latent. We argue that establishing theoretical models of Importance will advance our understanding of the task and help to further improve summarization systems. To this end, we propose simple but rigorous definitions of several concepts that were previously used only intuitively in summarization: Redundancy, Relevance, and Informativeness. Importance arises as a single quantity naturally unifying these concepts. Additionally, we provide intuitions to interpret the proposed quantities and experiments to demonstrate the potential of the framework to inform and guide subsequent works.
|
b52c87438699253d7eae3d86c46011fb
| 2,019
|
[
"research on summarization has mainly been driven by empirical approaches , crafting systems to perform well on standard datasets with the notion of information importance remaining latent .",
"we argue that establishing theoretical models of importance will advance our understanding of the task and help to further improve summarization systems .",
"to this end , we propose simple but rigorous definitions of several concepts that were previously used only intuitively in summarization : redundancy , relevance , and informativeness .",
"importance arises as a single quantity naturally unifying these concepts .",
"additionally , we provide intuitions to interpret the proposed quantities and experiments to demonstrate the potential of the framework to inform and guide subsequent works ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2
],
"text": "summarization",
"tokens": [
"summarization"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
],
"text": "driven",
"tokens": [
"driven"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
55
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
60,
61,
62,
63
],
"text": "definitions of several concepts",
"tokens": [
"definitions",
"of",
"several",
"concepts"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
56
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
84,
85
],
"text": "single quantity",
"tokens": [
"single",
"quantity"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
73,
74,
75,
76,
77,
78,
89
],
"text": "these concepts",
"tokens": [
"redundancy",
",",
"relevance",
",",
"and",
"informativeness",
"concepts"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
87
],
"text": "unifying",
"tokens": [
"unifying"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
93
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
95
],
"text": "intuitions",
"tokens": [
"intuitions"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
97
],
"text": "interpret",
"tokens": [
"interpret"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
94
],
"text": "provide",
"tokens": [
"provide"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
99,
100
],
"text": "proposed quantities",
"tokens": [
"proposed",
"quantities"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
97
],
"text": "interpret",
"tokens": [
"interpret"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
106,
107,
108,
109
],
"text": "potential of the framework",
"tokens": [
"potential",
"of",
"the",
"framework"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
104
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
114,
115
],
"text": "subsequent works",
"tokens": [
"subsequent",
"works"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
111,
112,
113
],
"text": "inform and guide",
"tokens": [
"inform",
"and",
"guide"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
93
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
102
],
"text": "experiments",
"tokens": [
"experiments"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
104
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
111,
112,
113
],
"text": "inform and guide",
"tokens": [
"inform",
"and",
"guide"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
94
],
"text": "provide",
"tokens": [
"provide"
]
}
}
] |
[
"research",
"on",
"summarization",
"has",
"mainly",
"been",
"driven",
"by",
"empirical",
"approaches",
",",
"crafting",
"systems",
"to",
"perform",
"well",
"on",
"standard",
"datasets",
"with",
"the",
"notion",
"of",
"information",
"importance",
"remaining",
"latent",
".",
"we",
"argue",
"that",
"establishing",
"theoretical",
"models",
"of",
"importance",
"will",
"advance",
"our",
"understanding",
"of",
"the",
"task",
"and",
"help",
"to",
"further",
"improve",
"summarization",
"systems",
".",
"to",
"this",
"end",
",",
"we",
"propose",
"simple",
"but",
"rigorous",
"definitions",
"of",
"several",
"concepts",
"that",
"were",
"previously",
"used",
"only",
"intuitively",
"in",
"summarization",
":",
"redundancy",
",",
"relevance",
",",
"and",
"informativeness",
".",
"importance",
"arises",
"as",
"a",
"single",
"quantity",
"naturally",
"unifying",
"these",
"concepts",
".",
"additionally",
",",
"we",
"provide",
"intuitions",
"to",
"interpret",
"the",
"proposed",
"quantities",
"and",
"experiments",
"to",
"demonstrate",
"the",
"potential",
"of",
"the",
"framework",
"to",
"inform",
"and",
"guide",
"subsequent",
"works",
"."
] |
ACL
|
Syntopical Graphs for Computational Argumentation Tasks
|
Approaches to computational argumentation tasks such as stance detection and aspect detection have largely focused on the text of independent claims, losing out on potentially valuable context provided by the rest of the collection. We introduce a general approach to these tasks motivated by syntopical reading, a reading process that emphasizes comparing and contrasting viewpoints in order to improve topic understanding. To capture collection-level context, we introduce the syntopical graph, a data structure for linking claims within a collection. A syntopical graph is a typed multi-graph where nodes represent claims and edges represent different possible pairwise relationships, such as entailment, paraphrase, or support. Experiments applying syntopical graphs to the problems of detecting stance and aspects demonstrate state-of-the-art performance in each domain, significantly outperforming approaches that do not utilize collection-level information.
|
6642e0f6f10a935613208605f482f051
| 2,021
|
[
"approaches to computational argumentation tasks such as stance detection and aspect detection have largely focused on the text of independent claims , losing out on potentially valuable context provided by the rest of the collection .",
"we introduce a general approach to these tasks motivated by syntopical reading , a reading process that emphasizes comparing and contrasting viewpoints in order to improve topic understanding .",
"to capture collection - level context , we introduce the syntopical graph , a data structure for linking claims within a collection .",
"a syntopical graph is a typed multi - graph where nodes represent claims and edges represent different possible pairwise relationships , such as entailment , paraphrase , or support .",
"experiments applying syntopical graphs to the problems of detecting stance and aspects demonstrate state - of - the - art performance in each domain , significantly outperforming approaches that do not utilize collection - level information ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3,
4
],
"text": "computational argumentation tasks",
"tokens": [
"computational",
"argumentation",
"tasks"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
14
],
"text": "focused",
"tokens": [
"focused"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4
],
"text": "approaches to computational argumentation tasks",
"tokens": [
"approaches",
"to",
"computational",
"argumentation",
"tasks"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
25,
26,
27
],
"text": "potentially valuable context",
"tokens": [
"potentially",
"valuable",
"context"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
22
],
"text": "losing",
"tokens": [
"losing"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
36
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
39,
40,
41,
42,
43,
44,
45,
46,
47
],
"text": "general approach to these tasks motivated by syntopical reading",
"tokens": [
"general",
"approach",
"to",
"these",
"tasks",
"motivated",
"by",
"syntopical",
"reading"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
37
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
53,
54
],
"text": "emphasizes comparing",
"tokens": [
"emphasizes",
"comparing"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
56,
57
],
"text": "contrasting viewpoints",
"tokens": [
"contrasting",
"viewpoints"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
62,
63
],
"text": "topic understanding",
"tokens": [
"topic",
"understanding"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
61
],
"text": "improve",
"tokens": [
"improve"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
66
],
"text": "capture",
"tokens": [
"capture"
]
},
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
72
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
82
],
"text": "linking",
"tokens": [
"linking"
]
},
{
"argument_type": "Content",
"nugget_type": "MOD",
"offsets": [
75,
76
],
"text": "syntopical graph",
"tokens": [
"syntopical",
"graph"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
73
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
67,
68,
69,
70
],
"text": "collection - level context",
"tokens": [
"collection",
"-",
"level",
"context"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
66
],
"text": "capture",
"tokens": [
"capture"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
83
],
"text": "claims",
"tokens": [
"claims"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
82
],
"text": "linking",
"tokens": [
"linking"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
98
],
"text": "nodes",
"tokens": [
"nodes"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
100
],
"text": "claims",
"tokens": [
"claims"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
99
],
"text": "represent",
"tokens": [
"represent"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
102
],
"text": "edges",
"tokens": [
"edges"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
104,
105,
106,
107
],
"text": "different possible pairwise relationships",
"tokens": [
"different",
"possible",
"pairwise",
"relationships"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
103
],
"text": "represent",
"tokens": [
"represent"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
131,
132,
133,
134,
135,
136,
137,
138
],
"text": "state - of - the - art performance",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance"
]
},
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
120,
121
],
"text": "syntopical graphs",
"tokens": [
"syntopical",
"graphs"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
130
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
}
] |
[
"approaches",
"to",
"computational",
"argumentation",
"tasks",
"such",
"as",
"stance",
"detection",
"and",
"aspect",
"detection",
"have",
"largely",
"focused",
"on",
"the",
"text",
"of",
"independent",
"claims",
",",
"losing",
"out",
"on",
"potentially",
"valuable",
"context",
"provided",
"by",
"the",
"rest",
"of",
"the",
"collection",
".",
"we",
"introduce",
"a",
"general",
"approach",
"to",
"these",
"tasks",
"motivated",
"by",
"syntopical",
"reading",
",",
"a",
"reading",
"process",
"that",
"emphasizes",
"comparing",
"and",
"contrasting",
"viewpoints",
"in",
"order",
"to",
"improve",
"topic",
"understanding",
".",
"to",
"capture",
"collection",
"-",
"level",
"context",
",",
"we",
"introduce",
"the",
"syntopical",
"graph",
",",
"a",
"data",
"structure",
"for",
"linking",
"claims",
"within",
"a",
"collection",
".",
"a",
"syntopical",
"graph",
"is",
"a",
"typed",
"multi",
"-",
"graph",
"where",
"nodes",
"represent",
"claims",
"and",
"edges",
"represent",
"different",
"possible",
"pairwise",
"relationships",
",",
"such",
"as",
"entailment",
",",
"paraphrase",
",",
"or",
"support",
".",
"experiments",
"applying",
"syntopical",
"graphs",
"to",
"the",
"problems",
"of",
"detecting",
"stance",
"and",
"aspects",
"demonstrate",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"in",
"each",
"domain",
",",
"significantly",
"outperforming",
"approaches",
"that",
"do",
"not",
"utilize",
"collection",
"-",
"level",
"information",
"."
] |
ACL
|
POS-Constrained Parallel Decoding for Non-autoregressive Generation
|
The multimodality problem has become a major challenge of existing non-autoregressive generation (NAG) systems. A common solution often resorts to sequence-level knowledge distillation by rebuilding the training dataset through autoregressive generation (hereinafter known as “teacher AG”). The success of such methods may largely depend on a latent assumption, i.e., the teacher AG is superior to the NAG model. However, in this work, we experimentally reveal that this assumption does not always hold for the text generation tasks like text summarization and story ending generation. To provide a feasible solution to the multimodality problem of NAG, we propose incorporating linguistic structure (Part-of-Speech sequence in particular) into NAG inference instead of relying on teacher AG. More specifically, the proposed POS-constrained Parallel Decoding (POSPD) method aims at providing a specific POS sequence to constrain the NAG model during decoding. Our experiments demonstrate that POSPD consistently improves NAG models on four text generation tasks to a greater extent compared to knowledge distillation. This observation validates the necessity of exploring the alternatives for sequence-level knowledge distillation.
|
847defa9acc9889b99cff5451802e0be
| 2,021
|
[
"the multimodality problem has become a major challenge of existing non - autoregressive generation ( nag ) systems .",
"a common solution often resorts to sequence - level knowledge distillation by rebuilding the training dataset through autoregressive generation ( hereinafter known as “ teacher ag ” ) .",
"the success of such methods may largely depend on a latent assumption , i . e . , the teacher ag is superior to the nag model .",
"however , in this work , we experimentally reveal that this assumption does not always hold for the text generation tasks like text summarization and story ending generation .",
"to provide a feasible solution to the multimodality problem of nag , we propose incorporating linguistic structure ( part - of - speech sequence in particular ) into nag inference instead of relying on teacher ag .",
"more specifically , the proposed pos - constrained parallel decoding ( pospd ) method aims at providing a specific pos sequence to constrain the nag model during decoding .",
"our experiments demonstrate that pospd consistently improves nag models on four text generation tasks to a greater extent compared to knowledge distillation .",
"this observation validates the necessity of exploring the alternatives for sequence - level knowledge distillation ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
115,
17
],
"text": "non - autoregressive generation ( nag ) systems",
"tokens": [
"nag",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
7
],
"text": "challenge",
"tokens": [
"challenge"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "DST",
"offsets": [
33,
34
],
"text": "training dataset",
"tokens": [
"training",
"dataset"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
25,
26,
27,
28,
29
],
"text": "sequence - level knowledge distillation",
"tokens": [
"sequence",
"-",
"level",
"knowledge",
"distillation"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
31
],
"text": "rebuilding",
"tokens": [
"rebuilding"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
88,
89,
90,
91
],
"text": "does not always hold",
"tokens": [
"does",
"not",
"always",
"hold"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
58,
59
],
"text": "latent assumption",
"tokens": [
"latent",
"assumption"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
88,
89,
90,
91
],
"text": "does not always hold",
"tokens": [
"does",
"not",
"always",
"hold"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
106
],
"text": "provide",
"tokens": [
"provide"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
133,
134
],
"text": "nag inference",
"tokens": [
"nag",
"inference"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
120,
121
],
"text": "linguistic structure",
"tokens": [
"linguistic",
"structure"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
119
],
"text": "incorporating",
"tokens": [
"incorporating"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
108,
109,
110,
111,
112,
113,
114,
115
],
"text": "feasible solution to the multimodality problem of nag",
"tokens": [
"feasible",
"solution",
"to",
"the",
"multimodality",
"problem",
"of",
"nag"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
106
],
"text": "provide",
"tokens": [
"provide"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
147,
148,
149,
150,
151
],
"text": "pos - constrained parallel decoding",
"tokens": [
"pos",
"-",
"constrained",
"parallel",
"decoding"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
158
],
"text": "providing",
"tokens": [
"providing"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
156
],
"text": "aims",
"tokens": [
"aims"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
160,
161,
162,
163,
164,
165,
10,
11,
12,
13,
167,
168,
169
],
"text": "specific pos sequence to constrain the nag model during decoding",
"tokens": [
"specific",
"pos",
"sequence",
"to",
"constrain",
"the",
"non",
"-",
"autoregressive",
"generation",
"model",
"during",
"decoding"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
158
],
"text": "providing",
"tokens": [
"providing"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
177
],
"text": "improves",
"tokens": [
"improves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
173
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
147,
148,
149,
150,
151
],
"text": "pos - constrained parallel decoding",
"tokens": [
"pos",
"-",
"constrained",
"parallel",
"decoding"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
191,
192
],
"text": "knowledge distillation",
"tokens": [
"knowledge",
"distillation"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
177
],
"text": "improves",
"tokens": [
"improves"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
176
],
"text": "consistently",
"tokens": [
"consistently"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
177
],
"text": "improves",
"tokens": [
"improves"
]
}
}
] |
[
"the",
"multimodality",
"problem",
"has",
"become",
"a",
"major",
"challenge",
"of",
"existing",
"non",
"-",
"autoregressive",
"generation",
"(",
"nag",
")",
"systems",
".",
"a",
"common",
"solution",
"often",
"resorts",
"to",
"sequence",
"-",
"level",
"knowledge",
"distillation",
"by",
"rebuilding",
"the",
"training",
"dataset",
"through",
"autoregressive",
"generation",
"(",
"hereinafter",
"known",
"as",
"“",
"teacher",
"ag",
"”",
")",
".",
"the",
"success",
"of",
"such",
"methods",
"may",
"largely",
"depend",
"on",
"a",
"latent",
"assumption",
",",
"i",
".",
"e",
".",
",",
"the",
"teacher",
"ag",
"is",
"superior",
"to",
"the",
"nag",
"model",
".",
"however",
",",
"in",
"this",
"work",
",",
"we",
"experimentally",
"reveal",
"that",
"this",
"assumption",
"does",
"not",
"always",
"hold",
"for",
"the",
"text",
"generation",
"tasks",
"like",
"text",
"summarization",
"and",
"story",
"ending",
"generation",
".",
"to",
"provide",
"a",
"feasible",
"solution",
"to",
"the",
"multimodality",
"problem",
"of",
"nag",
",",
"we",
"propose",
"incorporating",
"linguistic",
"structure",
"(",
"part",
"-",
"of",
"-",
"speech",
"sequence",
"in",
"particular",
")",
"into",
"nag",
"inference",
"instead",
"of",
"relying",
"on",
"teacher",
"ag",
".",
"more",
"specifically",
",",
"the",
"proposed",
"pos",
"-",
"constrained",
"parallel",
"decoding",
"(",
"pospd",
")",
"method",
"aims",
"at",
"providing",
"a",
"specific",
"pos",
"sequence",
"to",
"constrain",
"the",
"nag",
"model",
"during",
"decoding",
".",
"our",
"experiments",
"demonstrate",
"that",
"pospd",
"consistently",
"improves",
"nag",
"models",
"on",
"four",
"text",
"generation",
"tasks",
"to",
"a",
"greater",
"extent",
"compared",
"to",
"knowledge",
"distillation",
".",
"this",
"observation",
"validates",
"the",
"necessity",
"of",
"exploring",
"the",
"alternatives",
"for",
"sequence",
"-",
"level",
"knowledge",
"distillation",
"."
] |
ACL
|
SP-10K: A Large-scale Evaluation Set for Selectional Preference Acquisition
|
Selectional Preference (SP) is a commonly observed language phenomenon and proved to be useful in many natural language processing tasks. To provide a better evaluation method for SP models, we introduce SP-10K, a large-scale evaluation set that provides human ratings for the plausibility of 10,000 SP pairs over five SP relations, covering 2,500 most frequent verbs, nouns, and adjectives in American English. Three representative SP acquisition methods based on pseudo-disambiguation are evaluated with SP-10K. To demonstrate the importance of our dataset, we investigate the relationship between SP-10K and the commonsense knowledge in ConceptNet5 and show the potential of using SP to represent the commonsense knowledge. We also use the Winograd Schema Challenge to prove that the proposed new SP relations are essential for the hard pronoun coreference resolution problem.
|
482368d7e911254c437439f618798499
| 2,019
|
[
"selectional preference ( sp ) is a commonly observed language phenomenon and proved to be useful in many natural language processing tasks .",
"to provide a better evaluation method for sp models , we introduce sp - 10k , a large - scale evaluation set that provides human ratings for the plausibility of 10 , 000 sp pairs over five sp relations , covering 2 , 500 most frequent verbs , nouns , and adjectives in american english .",
"three representative sp acquisition methods based on pseudo - disambiguation are evaluated with sp - 10k . to demonstrate the importance of our dataset , we investigate the relationship between sp - 10k and the commonsense knowledge in conceptnet5 and show the potential of using sp to represent the commonsense knowledge .",
"we also use the winograd schema challenge to prove that the proposed new sp relations are essential for the hard pronoun coreference resolution problem ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
18,
19,
20
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
12
],
"text": "proved",
"tokens": [
"proved"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
24
],
"text": "provide",
"tokens": [
"provide"
]
},
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
33
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
35,
36,
37
],
"text": "sp - 10k",
"tokens": [
"sp",
"-",
"10k"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
34
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
26,
27,
28,
29,
0,
1,
31
],
"text": "better evaluation method for sp models",
"tokens": [
"better",
"evaluation",
"method",
"for",
"selectional",
"preference",
"models"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
24
],
"text": "provide",
"tokens": [
"provide"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
97
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
104
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117
],
"text": "relationship between sp - 10k and the commonsense knowledge in conceptnet5",
"tokens": [
"relationship",
"between",
"sp",
"-",
"10k",
"and",
"the",
"commonsense",
"knowledge",
"in",
"conceptnet5"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
105
],
"text": "investigate",
"tokens": [
"investigate"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
99,
100,
101,
102
],
"text": "importance of our dataset",
"tokens": [
"importance",
"of",
"our",
"dataset"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
97
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
79,
80,
81,
82,
83,
84,
85,
86,
87,
88
],
"text": "three representative sp acquisition methods based on pseudo - disambiguation",
"tokens": [
"three",
"representative",
"sp",
"acquisition",
"methods",
"based",
"on",
"pseudo",
"-",
"disambiguation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
91,
92,
93,
94
],
"text": "with sp - 10k",
"tokens": [
"with",
"sp",
"-",
"10k"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
90
],
"text": "evaluated",
"tokens": [
"evaluated"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
104
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
121
],
"text": "potential",
"tokens": [
"potential"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
119
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
131
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
135,
136,
137
],
"text": "winograd schema challenge",
"tokens": [
"winograd",
"schema",
"challenge"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
139
],
"text": "prove",
"tokens": [
"prove"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
133
],
"text": "use",
"tokens": [
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
142,
143,
0,
1,
145
],
"text": "proposed new sp relations",
"tokens": [
"proposed",
"new",
"selectional",
"preference",
"relations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
139
],
"text": "prove",
"tokens": [
"prove"
]
}
}
] |
[
"selectional",
"preference",
"(",
"sp",
")",
"is",
"a",
"commonly",
"observed",
"language",
"phenomenon",
"and",
"proved",
"to",
"be",
"useful",
"in",
"many",
"natural",
"language",
"processing",
"tasks",
".",
"to",
"provide",
"a",
"better",
"evaluation",
"method",
"for",
"sp",
"models",
",",
"we",
"introduce",
"sp",
"-",
"10k",
",",
"a",
"large",
"-",
"scale",
"evaluation",
"set",
"that",
"provides",
"human",
"ratings",
"for",
"the",
"plausibility",
"of",
"10",
",",
"000",
"sp",
"pairs",
"over",
"five",
"sp",
"relations",
",",
"covering",
"2",
",",
"500",
"most",
"frequent",
"verbs",
",",
"nouns",
",",
"and",
"adjectives",
"in",
"american",
"english",
".",
"three",
"representative",
"sp",
"acquisition",
"methods",
"based",
"on",
"pseudo",
"-",
"disambiguation",
"are",
"evaluated",
"with",
"sp",
"-",
"10k",
".",
"to",
"demonstrate",
"the",
"importance",
"of",
"our",
"dataset",
",",
"we",
"investigate",
"the",
"relationship",
"between",
"sp",
"-",
"10k",
"and",
"the",
"commonsense",
"knowledge",
"in",
"conceptnet5",
"and",
"show",
"the",
"potential",
"of",
"using",
"sp",
"to",
"represent",
"the",
"commonsense",
"knowledge",
".",
"we",
"also",
"use",
"the",
"winograd",
"schema",
"challenge",
"to",
"prove",
"that",
"the",
"proposed",
"new",
"sp",
"relations",
"are",
"essential",
"for",
"the",
"hard",
"pronoun",
"coreference",
"resolution",
"problem",
"."
] |
ACL
|
VIFIDEL: Evaluating the Visual Fidelity of Image Descriptions
|
We address the task of evaluating image description generation systems. We propose a novel image-aware metric for this task: VIFIDEL. It estimates the faithfulness of a generated caption with respect to the content of the actual image, based on the semantic similarity between labels of objects depicted in images and words in the description. The metric is also able to take into account the relative importance of objects mentioned in human reference descriptions during evaluation. Even if these human reference descriptions are not available, VIFIDEL can still reliably evaluate system descriptions. The metric achieves high correlation with human judgments on two well-known datasets and is competitive with metrics that depend on and rely exclusively on human references.
|
2187bbed861e54974c0ab02725f0d059
| 2,019
|
[
"we address the task of evaluating image description generation systems .",
"we propose a novel image - aware metric for this task : vifidel .",
"it estimates the faithfulness of a generated caption with respect to the content of the actual image , based on the semantic similarity between labels of objects depicted in images and words in the description .",
"the metric is also able to take into account the relative importance of objects mentioned in human reference descriptions during evaluation .",
"even if these human reference descriptions are not available , vifidel can still reliably evaluate system descriptions .",
"the metric achieves high correlation with human judgments on two well - known datasets and is competitive with metrics that depend on and rely exclusively on human references ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
3,
4,
5,
6,
7,
8,
9
],
"text": "task of evaluating image description generation systems",
"tokens": [
"task",
"of",
"evaluating",
"image",
"description",
"generation",
"systems"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
1
],
"text": "address",
"tokens": [
"address"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
11
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
15,
16,
17,
18
],
"text": "image - aware metric",
"tokens": [
"image",
"-",
"aware",
"metric"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
23
],
"text": "vifidel",
"tokens": [
"vifidel"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
12
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
28,
29,
30,
31,
32
],
"text": "faithfulness of a generated caption",
"tokens": [
"faithfulness",
"of",
"a",
"generated",
"caption"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
46,
47
],
"text": "semantic similarity",
"tokens": [
"semantic",
"similarity"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
33,
34,
35,
36,
37,
38,
39,
40,
41
],
"text": "with respect to the content of the actual image",
"tokens": [
"with",
"respect",
"to",
"the",
"content",
"of",
"the",
"actual",
"image"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
26
],
"text": "estimates",
"tokens": [
"estimates"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
71,
72,
73,
74
],
"text": "relative importance of objects",
"tokens": [
"relative",
"importance",
"of",
"objects"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
75,
76,
77,
78,
79,
80,
81
],
"text": "mentioned in human reference descriptions during evaluation",
"tokens": [
"mentioned",
"in",
"human",
"reference",
"descriptions",
"during",
"evaluation"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
67,
68,
69
],
"text": "take into account",
"tokens": [
"take",
"into",
"account"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
83,
84,
85,
86,
87,
88,
89,
90,
91
],
"text": "even if these human reference descriptions are not available",
"tokens": [
"even",
"if",
"these",
"human",
"reference",
"descriptions",
"are",
"not",
"available"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
98,
99
],
"text": "system descriptions",
"tokens": [
"system",
"descriptions"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
96,
97
],
"text": "reliably evaluate",
"tokens": [
"reliably",
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
104,
105
],
"text": "high correlation",
"tokens": [
"high",
"correlation"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
107,
108
],
"text": "human judgments",
"tokens": [
"human",
"judgments"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
110,
111,
112,
113,
114
],
"text": "two well - known datasets",
"tokens": [
"two",
"well",
"-",
"known",
"datasets"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
15,
16,
17,
18
],
"text": "image - aware metric",
"tokens": [
"image",
"-",
"aware",
"metric"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
103
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
117
],
"text": "competitive",
"tokens": [
"competitive"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
119
],
"text": "metrics",
"tokens": [
"metrics"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
120,
121,
122,
123,
124,
125,
126,
127,
128
],
"text": "that depend on and rely exclusively on human references",
"tokens": [
"that",
"depend",
"on",
"and",
"rely",
"exclusively",
"on",
"human",
"references"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
117
],
"text": "competitive",
"tokens": [
"competitive"
]
}
}
] |
[
"we",
"address",
"the",
"task",
"of",
"evaluating",
"image",
"description",
"generation",
"systems",
".",
"we",
"propose",
"a",
"novel",
"image",
"-",
"aware",
"metric",
"for",
"this",
"task",
":",
"vifidel",
".",
"it",
"estimates",
"the",
"faithfulness",
"of",
"a",
"generated",
"caption",
"with",
"respect",
"to",
"the",
"content",
"of",
"the",
"actual",
"image",
",",
"based",
"on",
"the",
"semantic",
"similarity",
"between",
"labels",
"of",
"objects",
"depicted",
"in",
"images",
"and",
"words",
"in",
"the",
"description",
".",
"the",
"metric",
"is",
"also",
"able",
"to",
"take",
"into",
"account",
"the",
"relative",
"importance",
"of",
"objects",
"mentioned",
"in",
"human",
"reference",
"descriptions",
"during",
"evaluation",
".",
"even",
"if",
"these",
"human",
"reference",
"descriptions",
"are",
"not",
"available",
",",
"vifidel",
"can",
"still",
"reliably",
"evaluate",
"system",
"descriptions",
".",
"the",
"metric",
"achieves",
"high",
"correlation",
"with",
"human",
"judgments",
"on",
"two",
"well",
"-",
"known",
"datasets",
"and",
"is",
"competitive",
"with",
"metrics",
"that",
"depend",
"on",
"and",
"rely",
"exclusively",
"on",
"human",
"references",
"."
] |
ACL
|
Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization
|
Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings. However, orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic. For non-isomorphic pairs, our method (Iterative Normalization) transforms monolingual embeddings to make orthogonal alignment easier by simultaneously enforcing that (1) individual word vectors are unit length, and (2) each language’s average vector is zero. Iterative Normalization consistently improves word translation accuracy of three CLWE methods, with the largest improvement observed on English-Japanese (from 2% to 44% test accuracy).
|
cd6693052f07194f38c18c5a516fe99c
| 2,019
|
[
"cross - lingual word embeddings ( clwe ) underlie many multilingual natural language processing systems , often through orthogonal transformations of pre - trained monolingual embeddings .",
"however , orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic .",
"for non - isomorphic pairs , our method ( iterative normalization ) transforms monolingual embeddings to make orthogonal alignment easier by simultaneously enforcing that ( 1 ) individual word vectors are unit length , and ( 2 ) each language ’ s average vector is zero .",
"iterative normalization consistently improves word translation accuracy of three clwe methods , with the largest improvement observed on english - japanese ( from 2 % to 44 % test accuracy ) ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
0,
1,
2,
3,
4
],
"text": "cross - lingual word embeddings",
"tokens": [
"cross",
"-",
"lingual",
"word",
"embeddings"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8
],
"text": "underlie",
"tokens": [
"underlie"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
29,
30
],
"text": "orthogonal mapping",
"tokens": [
"orthogonal",
"mapping"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
31,
32
],
"text": "only works",
"tokens": [
"only",
"works"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
31,
32
],
"text": "only works",
"tokens": [
"only",
"works"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
43,
44,
45,
46
],
"text": "non - isomorphic pairs",
"tokens": [
"non",
"-",
"isomorphic",
"pairs"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
55,
56
],
"text": "monolingual embeddings",
"tokens": [
"monolingual",
"embeddings"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
54
],
"text": "transforms",
"tokens": [
"transforms"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
69,
70,
71
],
"text": "individual word vectors",
"tokens": [
"individual",
"word",
"vectors"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
73,
74
],
"text": "unit length",
"tokens": [
"unit",
"length"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
58
],
"text": "make",
"tokens": [
"make"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
81,
82,
83,
84,
85
],
"text": "language ’ s average vector",
"tokens": [
"language",
"’",
"s",
"average",
"vector"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
87
],
"text": "zero",
"tokens": [
"zero"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
63,
64
],
"text": "simultaneously enforcing",
"tokens": [
"simultaneously",
"enforcing"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "STR",
"offsets": [
59,
60,
61
],
"text": "orthogonal alignment easier",
"tokens": [
"orthogonal",
"alignment",
"easier"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
58
],
"text": "make",
"tokens": [
"make"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
89,
90
],
"text": "iterative normalization",
"tokens": [
"iterative",
"normalization"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
93,
94,
95,
96,
97,
98,
99
],
"text": "word translation accuracy of three clwe methods",
"tokens": [
"word",
"translation",
"accuracy",
"of",
"three",
"clwe",
"methods"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
91
],
"text": "consistently",
"tokens": [
"consistently"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
92
],
"text": "improves",
"tokens": [
"improves"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
92
],
"text": "improves",
"tokens": [
"improves"
]
}
}
] |
[
"cross",
"-",
"lingual",
"word",
"embeddings",
"(",
"clwe",
")",
"underlie",
"many",
"multilingual",
"natural",
"language",
"processing",
"systems",
",",
"often",
"through",
"orthogonal",
"transformations",
"of",
"pre",
"-",
"trained",
"monolingual",
"embeddings",
".",
"however",
",",
"orthogonal",
"mapping",
"only",
"works",
"on",
"language",
"pairs",
"whose",
"embeddings",
"are",
"naturally",
"isomorphic",
".",
"for",
"non",
"-",
"isomorphic",
"pairs",
",",
"our",
"method",
"(",
"iterative",
"normalization",
")",
"transforms",
"monolingual",
"embeddings",
"to",
"make",
"orthogonal",
"alignment",
"easier",
"by",
"simultaneously",
"enforcing",
"that",
"(",
"1",
")",
"individual",
"word",
"vectors",
"are",
"unit",
"length",
",",
"and",
"(",
"2",
")",
"each",
"language",
"’",
"s",
"average",
"vector",
"is",
"zero",
".",
"iterative",
"normalization",
"consistently",
"improves",
"word",
"translation",
"accuracy",
"of",
"three",
"clwe",
"methods",
",",
"with",
"the",
"largest",
"improvement",
"observed",
"on",
"english",
"-",
"japanese",
"(",
"from",
"2",
"%",
"to",
"44",
"%",
"test",
"accuracy",
")",
"."
] |
ACL
|
Stochastic Tokenization with a Language Model for Neural Text Classification
|
For unsegmented languages such as Japanese and Chinese, tokenization of a sentence has a significant impact on the performance of text classification. Sentences are usually segmented with words or subwords by a morphological analyzer or byte pair encoding and then encoded with word (or subword) representations for neural networks. However, segmentation is potentially ambiguous, and it is unclear whether the segmented tokens achieve the best performance for the target task. In this paper, we propose a method to simultaneously learn tokenization and text classification to address these problems. Our model incorporates a language model for unsupervised tokenization into a text classifier and then trains both models simultaneously. To make the model robust against infrequent tokens, we sampled segmentation for each sentence stochastically during training, which resulted in improved performance of text classification. We conducted experiments on sentiment analysis as a text classification task and show that our method achieves better performance than previous methods.
|
d6d29c18133a67e711bdc1af673ad347
| 2,019
|
[
"for unsegmented languages such as japanese and chinese , tokenization of a sentence has a significant impact on the performance of text classification .",
"sentences are usually segmented with words or subwords by a morphological analyzer or byte pair encoding and then encoded with word ( or subword ) representations for neural networks .",
"however , segmentation is potentially ambiguous , and it is unclear whether the segmented tokens achieve the best performance for the target task .",
"in this paper , we propose a method to simultaneously learn tokenization and text classification to address these problems .",
"our model incorporates a language model for unsupervised tokenization into a text classifier and then trains both models simultaneously .",
"to make the model robust against infrequent tokens , we sampled segmentation for each sentence stochastically during training , which resulted in improved performance of text classification .",
"we conducted experiments on sentiment analysis as a text classification task and show that our method achieves better performance than previous methods ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
21,
22
],
"text": "text classification",
"tokens": [
"text",
"classification"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
16
],
"text": "impact",
"tokens": [
"impact"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
58,
59
],
"text": "potentially ambiguous",
"tokens": [
"potentially",
"ambiguous"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
58,
59
],
"text": "potentially ambiguous",
"tokens": [
"potentially",
"ambiguous"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
64
],
"text": "unclear",
"tokens": [
"unclear"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
67,
68
],
"text": "segmented tokens",
"tokens": [
"segmented",
"tokens"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
64
],
"text": "unclear",
"tokens": [
"unclear"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
82
],
"text": "we",
"tokens": [
"we"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
83
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
102,
103,
104,
105,
106
],
"text": "language model for unsupervised tokenization",
"tokens": [
"language",
"model",
"for",
"unsupervised",
"tokenization"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
109,
110
],
"text": "text classifier",
"tokens": [
"text",
"classifier"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
100
],
"text": "incorporates",
"tokens": [
"incorporates"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
127
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
119
],
"text": "make",
"tokens": [
"make"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
134,
135
],
"text": "during training",
"tokens": [
"during",
"training"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
140
],
"text": "improved",
"tokens": [
"improved"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
129,
130,
131,
132
],
"text": "segmentation for each sentence",
"tokens": [
"segmentation",
"for",
"each",
"sentence"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
133
],
"text": "stochastically",
"tokens": [
"stochastically"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
128
],
"text": "sampled",
"tokens": [
"sampled"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
121,
122
],
"text": "model robust",
"tokens": [
"model",
"robust"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
123,
124,
125
],
"text": "against infrequent tokens",
"tokens": [
"against",
"infrequent",
"tokens"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
119
],
"text": "make",
"tokens": [
"make"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
146
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
162
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
158
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
85,
86,
87,
88,
89,
90,
91,
92
],
"text": "method to simultaneously learn tokenization and text classification",
"tokens": [
"method",
"to",
"simultaneously",
"learn",
"tokenization",
"and",
"text",
"classification"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
166,
167
],
"text": "previous methods",
"tokens": [
"previous",
"methods"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
163,
164
],
"text": "better performance",
"tokens": [
"better",
"performance"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
162
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
146
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
148,
149,
150,
151
],
"text": "experiments on sentiment analysis",
"tokens": [
"experiments",
"on",
"sentiment",
"analysis"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
154,
155,
156
],
"text": "text classification task",
"tokens": [
"text",
"classification",
"task"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
147
],
"text": "conducted",
"tokens": [
"conducted"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
34,
35
],
"text": "morphological analyzer",
"tokens": [
"morphological",
"analyzer"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "MOD",
"offsets": [
37,
38,
39
],
"text": "byte pair encoding",
"tokens": [
"byte",
"pair",
"encoding"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
24
],
"text": "sentences",
"tokens": [
"sentences"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
28,
29,
30,
31
],
"text": "with words or subwords",
"tokens": [
"with",
"words",
"or",
"subwords"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
27
],
"text": "segmented",
"tokens": [
"segmented"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
44,
49
],
"text": "word ( or subword ) representations",
"tokens": [
"word",
"representations"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
51,
52
],
"text": "neural networks",
"tokens": [
"neural",
"networks"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
42
],
"text": "encoded",
"tokens": [
"encoded"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "STR",
"offsets": [
71,
72
],
"text": "best performance",
"tokens": [
"best",
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
73,
74,
75,
76
],
"text": "for the target task",
"tokens": [
"for",
"the",
"target",
"task"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
69
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
116
],
"text": "simultaneously",
"tokens": [
"simultaneously"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
102,
103,
104,
105,
106
],
"text": "language model for unsupervised tokenization",
"tokens": [
"language",
"model",
"for",
"unsupervised",
"tokenization"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
109,
110
],
"text": "text classifier",
"tokens": [
"text",
"classifier"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
113
],
"text": "trains",
"tokens": [
"trains"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
141,
142,
143,
144
],
"text": "performance of text classification",
"tokens": [
"performance",
"of",
"text",
"classification"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
140
],
"text": "improved",
"tokens": [
"improved"
]
}
}
] |
[
"for",
"unsegmented",
"languages",
"such",
"as",
"japanese",
"and",
"chinese",
",",
"tokenization",
"of",
"a",
"sentence",
"has",
"a",
"significant",
"impact",
"on",
"the",
"performance",
"of",
"text",
"classification",
".",
"sentences",
"are",
"usually",
"segmented",
"with",
"words",
"or",
"subwords",
"by",
"a",
"morphological",
"analyzer",
"or",
"byte",
"pair",
"encoding",
"and",
"then",
"encoded",
"with",
"word",
"(",
"or",
"subword",
")",
"representations",
"for",
"neural",
"networks",
".",
"however",
",",
"segmentation",
"is",
"potentially",
"ambiguous",
",",
"and",
"it",
"is",
"unclear",
"whether",
"the",
"segmented",
"tokens",
"achieve",
"the",
"best",
"performance",
"for",
"the",
"target",
"task",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"method",
"to",
"simultaneously",
"learn",
"tokenization",
"and",
"text",
"classification",
"to",
"address",
"these",
"problems",
".",
"our",
"model",
"incorporates",
"a",
"language",
"model",
"for",
"unsupervised",
"tokenization",
"into",
"a",
"text",
"classifier",
"and",
"then",
"trains",
"both",
"models",
"simultaneously",
".",
"to",
"make",
"the",
"model",
"robust",
"against",
"infrequent",
"tokens",
",",
"we",
"sampled",
"segmentation",
"for",
"each",
"sentence",
"stochastically",
"during",
"training",
",",
"which",
"resulted",
"in",
"improved",
"performance",
"of",
"text",
"classification",
".",
"we",
"conducted",
"experiments",
"on",
"sentiment",
"analysis",
"as",
"a",
"text",
"classification",
"task",
"and",
"show",
"that",
"our",
"method",
"achieves",
"better",
"performance",
"than",
"previous",
"methods",
"."
] |
ACL
|
Slangvolution: A Causal Analysis of Semantic Change and Frequency Dynamics in Slang
|
Languages are continuously undergoing changes, and the mechanisms that underlie these changes are still a matter of debate. In this work, we approach language evolution through the lens of causality in order to model not only how various distributional factors associate with language change, but how they causally affect it. In particular, we study slang, which is an informal language that is typically restricted to a specific group or social setting. We analyze the semantic change and frequency shift of slang words and compare them to those of standard, nonslang words. With causal discovery and causal inference techniques, we measure the effect that word type (slang/nonslang) has on both semantic change and frequency shift, as well as its relationship to frequency, polysemy and part of speech. Our analysis provides some new insights in the study of language change, e.g., we show that slang words undergo less semantic change but tend to have larger frequency shifts over time.
|
a50039508746831717fe56f6d01c9a88
| 2,022
|
[
"languages are continuously undergoing changes , and the mechanisms that underlie these changes are still a matter of debate .",
"in this work , we approach language evolution through the lens of causality in order to model not only how various distributional factors associate with language change , but how they causally affect it .",
"in particular , we study slang , which is an informal language that is typically restricted to a specific group or social setting .",
"we analyze the semantic change and frequency shift of slang words and compare them to those of standard , nonslang words .",
"with causal discovery and causal inference techniques , we measure the effect that word type ( slang / nonslang ) has on both semantic change and frequency shift , as well as its relationship to frequency , polysemy and part of speech .",
"our analysis provides some new insights in the study of language change , e . g . , we show that slang words undergo less semantic change but tend to have larger frequency shifts over time ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
0
],
"text": "languages",
"tokens": [
"languages"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
4
],
"text": "changes",
"tokens": [
"changes"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
24
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
26,
27
],
"text": "language evolution",
"tokens": [
"language",
"evolution"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
28,
29,
30,
31,
32
],
"text": "through the lens of causality",
"tokens": [
"through",
"the",
"lens",
"of",
"causality"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
36
],
"text": "model",
"tokens": [
"model"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
25
],
"text": "approach",
"tokens": [
"approach"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
40,
41,
42,
43,
44,
45,
46
],
"text": "various distributional factors associate with language change",
"tokens": [
"various",
"distributional",
"factors",
"associate",
"with",
"language",
"change"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
36
],
"text": "model",
"tokens": [
"model"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
58
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
60
],
"text": "slang",
"tokens": [
"slang"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
59
],
"text": "study",
"tokens": [
"study"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
79
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
82,
83,
84,
85,
86,
87,
88,
89
],
"text": "semantic change and frequency shift of slang words",
"tokens": [
"semantic",
"change",
"and",
"frequency",
"shift",
"of",
"slang",
"words"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
80
],
"text": "analyze",
"tokens": [
"analyze"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
79
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
82,
83,
84,
85,
86,
87,
88,
89
],
"text": "semantic change and frequency shift of slang words",
"tokens": [
"semantic",
"change",
"and",
"frequency",
"shift",
"of",
"slang",
"words"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
94,
95,
96,
97,
98,
99
],
"text": "those of standard , nonslang words",
"tokens": [
"those",
"of",
"standard",
",",
"nonslang",
"words"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
91
],
"text": "compare",
"tokens": [
"compare"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
109
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
101,
102,
103,
104,
105,
106,
107
],
"text": "with causal discovery and causal inference techniques",
"tokens": [
"with",
"causal",
"discovery",
"and",
"causal",
"inference",
"techniques"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
112
],
"text": "effect",
"tokens": [
"effect"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
134
],
"text": "relationship",
"tokens": [
"relationship"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
136,
137,
138,
139,
140,
141,
142
],
"text": "frequency , polysemy and part of speech",
"tokens": [
"frequency",
",",
"polysemy",
"and",
"part",
"of",
"speech"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
110
],
"text": "measure",
"tokens": [
"measure"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
152,
153,
154,
155
],
"text": "study of language change",
"tokens": [
"study",
"of",
"language",
"change"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
148,
149
],
"text": "new insights",
"tokens": [
"new",
"insights"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
146
],
"text": "provides",
"tokens": [
"provides"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
162
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
167
],
"text": "undergo",
"tokens": [
"undergo"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
172,
173,
174
],
"text": "tend to have",
"tokens": [
"tend",
"to",
"have"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
163
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "FEA",
"offsets": [
165,
166
],
"text": "slang words",
"tokens": [
"slang",
"words"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
168,
169,
170
],
"text": "less semantic change",
"tokens": [
"less",
"semantic",
"change"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
167
],
"text": "undergo",
"tokens": [
"undergo"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
175,
176,
177
],
"text": "larger frequency shifts",
"tokens": [
"larger",
"frequency",
"shifts"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
178,
179
],
"text": "over time",
"tokens": [
"over",
"time"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
172,
173,
174
],
"text": "tend to have",
"tokens": [
"tend",
"to",
"have"
]
}
}
] |
[
"languages",
"are",
"continuously",
"undergoing",
"changes",
",",
"and",
"the",
"mechanisms",
"that",
"underlie",
"these",
"changes",
"are",
"still",
"a",
"matter",
"of",
"debate",
".",
"in",
"this",
"work",
",",
"we",
"approach",
"language",
"evolution",
"through",
"the",
"lens",
"of",
"causality",
"in",
"order",
"to",
"model",
"not",
"only",
"how",
"various",
"distributional",
"factors",
"associate",
"with",
"language",
"change",
",",
"but",
"how",
"they",
"causally",
"affect",
"it",
".",
"in",
"particular",
",",
"we",
"study",
"slang",
",",
"which",
"is",
"an",
"informal",
"language",
"that",
"is",
"typically",
"restricted",
"to",
"a",
"specific",
"group",
"or",
"social",
"setting",
".",
"we",
"analyze",
"the",
"semantic",
"change",
"and",
"frequency",
"shift",
"of",
"slang",
"words",
"and",
"compare",
"them",
"to",
"those",
"of",
"standard",
",",
"nonslang",
"words",
".",
"with",
"causal",
"discovery",
"and",
"causal",
"inference",
"techniques",
",",
"we",
"measure",
"the",
"effect",
"that",
"word",
"type",
"(",
"slang",
"/",
"nonslang",
")",
"has",
"on",
"both",
"semantic",
"change",
"and",
"frequency",
"shift",
",",
"as",
"well",
"as",
"its",
"relationship",
"to",
"frequency",
",",
"polysemy",
"and",
"part",
"of",
"speech",
".",
"our",
"analysis",
"provides",
"some",
"new",
"insights",
"in",
"the",
"study",
"of",
"language",
"change",
",",
"e",
".",
"g",
".",
",",
"we",
"show",
"that",
"slang",
"words",
"undergo",
"less",
"semantic",
"change",
"but",
"tend",
"to",
"have",
"larger",
"frequency",
"shifts",
"over",
"time",
"."
] |
ACL
|
Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings
|
Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. This is the basis for novel diagnostic tests for an embedding’s content: we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes. Our main findings are: (i) Information about a sense is generally represented well in a single-vector embedding – if the sense is frequent. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) Although rare senses are not well represented in single-vector embeddings, this does not have negative impact on an NLP application whose performance depends on frequent senses.
|
b8d19f18eae23347546671b786f63ae5
| 2,019
|
[
"word embeddings typically represent different meanings of a word in a single conflated vector .",
"empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts .",
"we present a large dataset based on manual wikipedia annotations and word senses , where word senses from different words are related by semantic classes .",
"this is the basis for novel diagnostic tests for an embedding ’ s content : we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes .",
"our main findings are : ( i ) information about a sense is generally represented well in a single - vector embedding – if the sense is frequent .",
"( ii ) a classifier can accurately predict whether a word is single - sense or multi - sense , based only on its embedding .",
"( iii ) although rare senses are not well represented in single - vector embeddings , this does not have negative impact on an nlp application whose performance depends on frequent senses ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "word embeddings",
"tokens": [
"word",
"embeddings"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
3
],
"text": "represent",
"tokens": [
"represent"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
27,
28
],
"text": "small size",
"tokens": [
"small",
"size"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
27,
28
],
"text": "small size",
"tokens": [
"small",
"size"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
47
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
51,
52,
53,
54,
55,
56,
57,
58,
59
],
"text": "dataset based on manual wikipedia annotations and word senses",
"tokens": [
"dataset",
"based",
"on",
"manual",
"wikipedia",
"annotations",
"and",
"word",
"senses"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
48
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
96
],
"text": "analyze",
"tokens": [
"analyze"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
104,
105
],
"text": "semantic classes",
"tokens": [
"semantic",
"classes"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
102
],
"text": "embeddings",
"tokens": [
"embeddings"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
101
],
"text": "classifying",
"tokens": [
"classifying"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "MOD",
"offsets": [
98,
99
],
"text": "embedding space",
"tokens": [
"embedding",
"space"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
96
],
"text": "analyze",
"tokens": [
"analyze"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
115,
116,
117,
118
],
"text": "information about a sense",
"tokens": [
"information",
"about",
"a",
"sense"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
130,
131,
132,
133,
134
],
"text": "if the sense is frequent",
"tokens": [
"if",
"the",
"sense",
"is",
"frequent"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
122
],
"text": "well",
"tokens": [
"well"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
123,
124,
125,
126,
127,
128
],
"text": "in a single - vector embedding",
"tokens": [
"in",
"a",
"single",
"-",
"vector",
"embedding"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
120,
121
],
"text": "generally represented",
"tokens": [
"generally",
"represented"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
146
],
"text": "word",
"tokens": [
"word"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
156,
157,
158,
159,
160
],
"text": "based only on its embedding",
"tokens": [
"based",
"only",
"on",
"its",
"embedding"
]
},
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
140
],
"text": "classifier",
"tokens": [
"classifier"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
142,
143
],
"text": "accurately predict",
"tokens": [
"accurately",
"predict"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "FEA",
"offsets": [
166,
167
],
"text": "rare senses",
"tokens": [
"rare",
"senses"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
172,
173,
174,
175,
176
],
"text": "in single - vector embeddings",
"tokens": [
"in",
"single",
"-",
"vector",
"embeddings"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
169,
170,
171
],
"text": "not well represented",
"tokens": [
"not",
"well",
"represented"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "FEA",
"offsets": [
166,
167
],
"text": "rare senses",
"tokens": [
"rare",
"senses"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
184,
185,
186,
187,
188,
189,
190,
191,
192,
193
],
"text": "on an nlp application whose performance depends on frequent senses",
"tokens": [
"on",
"an",
"nlp",
"application",
"whose",
"performance",
"depends",
"on",
"frequent",
"senses"
]
},
{
"argument_type": "Object",
"nugget_type": "WEA",
"offsets": [
182,
183
],
"text": "negative impact",
"tokens": [
"negative",
"impact"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
179,
180,
181
],
"text": "does not have",
"tokens": [
"does",
"not",
"have"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
38,
39
],
"text": "word senses",
"tokens": [
"word",
"senses"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
43,
44,
45
],
"text": "unrelated individual concepts",
"tokens": [
"unrelated",
"individual",
"concepts"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
41
],
"text": "treated",
"tokens": [
"treated"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
88
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
90,
91,
92,
93,
94
],
"text": "word embeddings for semantic classes",
"tokens": [
"word",
"embeddings",
"for",
"semantic",
"classes"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
89
],
"text": "probe",
"tokens": [
"probe"
]
}
}
] |
[
"word",
"embeddings",
"typically",
"represent",
"different",
"meanings",
"of",
"a",
"word",
"in",
"a",
"single",
"conflated",
"vector",
".",
"empirical",
"analysis",
"of",
"embeddings",
"of",
"ambiguous",
"words",
"is",
"currently",
"limited",
"by",
"the",
"small",
"size",
"of",
"manually",
"annotated",
"resources",
"and",
"by",
"the",
"fact",
"that",
"word",
"senses",
"are",
"treated",
"as",
"unrelated",
"individual",
"concepts",
".",
"we",
"present",
"a",
"large",
"dataset",
"based",
"on",
"manual",
"wikipedia",
"annotations",
"and",
"word",
"senses",
",",
"where",
"word",
"senses",
"from",
"different",
"words",
"are",
"related",
"by",
"semantic",
"classes",
".",
"this",
"is",
"the",
"basis",
"for",
"novel",
"diagnostic",
"tests",
"for",
"an",
"embedding",
"’",
"s",
"content",
":",
"we",
"probe",
"word",
"embeddings",
"for",
"semantic",
"classes",
"and",
"analyze",
"the",
"embedding",
"space",
"by",
"classifying",
"embeddings",
"into",
"semantic",
"classes",
".",
"our",
"main",
"findings",
"are",
":",
"(",
"i",
")",
"information",
"about",
"a",
"sense",
"is",
"generally",
"represented",
"well",
"in",
"a",
"single",
"-",
"vector",
"embedding",
"–",
"if",
"the",
"sense",
"is",
"frequent",
".",
"(",
"ii",
")",
"a",
"classifier",
"can",
"accurately",
"predict",
"whether",
"a",
"word",
"is",
"single",
"-",
"sense",
"or",
"multi",
"-",
"sense",
",",
"based",
"only",
"on",
"its",
"embedding",
".",
"(",
"iii",
")",
"although",
"rare",
"senses",
"are",
"not",
"well",
"represented",
"in",
"single",
"-",
"vector",
"embeddings",
",",
"this",
"does",
"not",
"have",
"negative",
"impact",
"on",
"an",
"nlp",
"application",
"whose",
"performance",
"depends",
"on",
"frequent",
"senses",
"."
] |
ACL
|
Pretrained Transformers Improve Out-of-Distribution Robustness
|
Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts. We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformers’ performance declines are substantially smaller. Pretrained transformers are also more effective at detecting anomalous or OOD examples, while many previous models are frequently worse than chance. We examine which factors affect robustness, finding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness. Finally, we show where future work can improve OOD robustness.
|
f6d253d2b4a15754f8fee9d463aa04a9
| 2,020
|
[
"although pretrained transformers such as bert achieve high accuracy on in - distribution examples , do they generalize to new distributions ?",
"we systematically measure out - of - distribution ( ood ) generalization for seven nlp datasets by constructing a new robustness benchmark with realistic distribution shifts .",
"we measure the generalization of previous models including bag - of - words models , convnets , and lstms , and we show that pretrained transformers ’ performance declines are substantially smaller .",
"pretrained transformers are also more effective at detecting anomalous or ood examples , while many previous models are frequently worse than chance .",
"we examine which factors affect robustness , finding that larger models are not necessarily more robust , distillation can be harmful , and more diverse pretraining data can enhance robustness .",
"finally , we show where future work can improve ood robustness ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
1,
2
],
"text": "pretrained transformers",
"tokens": [
"pretrained",
"transformers"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
22
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
42,
43
],
"text": "robustness benchmark",
"tokens": [
"robustness",
"benchmark"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
24
],
"text": "measure",
"tokens": [
"measure"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
39
],
"text": "constructing",
"tokens": [
"constructing"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
25,
26,
27,
28,
29,
33
],
"text": "out - of - distribution ( ood ) generalization",
"tokens": [
"out",
"-",
"of",
"-",
"distribution",
"generalization"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
35,
36,
37
],
"text": "seven nlp datasets",
"tokens": [
"seven",
"nlp",
"datasets"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
24
],
"text": "measure",
"tokens": [
"measure"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
49
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
52,
53,
54,
55
],
"text": "generalization of previous models",
"tokens": [
"generalization",
"of",
"previous",
"models"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
50
],
"text": "measure",
"tokens": [
"measure"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
70
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
79,
80
],
"text": "substantially smaller",
"tokens": [
"substantially",
"smaller"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
71
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
73,
74,
75,
76,
77
],
"text": "pretrained transformers ’ performance declines",
"tokens": [
"pretrained",
"transformers",
"’",
"performance",
"declines"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
79,
80
],
"text": "substantially smaller",
"tokens": [
"substantially",
"smaller"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
82,
83
],
"text": "pretrained transformers",
"tokens": [
"pretrained",
"transformers"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
89
],
"text": "detecting",
"tokens": [
"detecting"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
86,
87
],
"text": "more effective",
"tokens": [
"more",
"effective"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
90,
93
],
"text": "anomalous examples",
"tokens": [
"anomalous",
"examples"
]
},
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
25,
26,
27,
28,
29,
93
],
"text": "ood examples",
"tokens": [
"out",
"-",
"of",
"-",
"distribution",
"examples"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
89
],
"text": "detecting",
"tokens": [
"detecting"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "WEA",
"offsets": [
101
],
"text": "worse",
"tokens": [
"worse"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
96,
97,
98
],
"text": "many previous models",
"tokens": [
"many",
"previous",
"models"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
101
],
"text": "worse",
"tokens": [
"worse"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
105
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
117,
118,
119,
120
],
"text": "not necessarily more robust",
"tokens": [
"not",
"necessarily",
"more",
"robust"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
125
],
"text": "harmful",
"tokens": [
"harmful"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
133
],
"text": "enhance",
"tokens": [
"enhance"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
112
],
"text": "finding",
"tokens": [
"finding"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
114,
115
],
"text": "larger models",
"tokens": [
"larger",
"models"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
117,
118,
119,
120
],
"text": "not necessarily more robust",
"tokens": [
"not",
"necessarily",
"more",
"robust"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
122
],
"text": "distillation",
"tokens": [
"distillation"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
125
],
"text": "harmful",
"tokens": [
"harmful"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "DST",
"offsets": [
128,
129,
130,
131
],
"text": "more diverse pretraining data",
"tokens": [
"more",
"diverse",
"pretraining",
"data"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
134
],
"text": "robustness",
"tokens": [
"robustness"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
133
],
"text": "enhance",
"tokens": [
"enhance"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
138
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
141,
142
],
"text": "future work",
"tokens": [
"future",
"work"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
139
],
"text": "show",
"tokens": [
"show"
]
}
}
] |
[
"although",
"pretrained",
"transformers",
"such",
"as",
"bert",
"achieve",
"high",
"accuracy",
"on",
"in",
"-",
"distribution",
"examples",
",",
"do",
"they",
"generalize",
"to",
"new",
"distributions",
"?",
"we",
"systematically",
"measure",
"out",
"-",
"of",
"-",
"distribution",
"(",
"ood",
")",
"generalization",
"for",
"seven",
"nlp",
"datasets",
"by",
"constructing",
"a",
"new",
"robustness",
"benchmark",
"with",
"realistic",
"distribution",
"shifts",
".",
"we",
"measure",
"the",
"generalization",
"of",
"previous",
"models",
"including",
"bag",
"-",
"of",
"-",
"words",
"models",
",",
"convnets",
",",
"and",
"lstms",
",",
"and",
"we",
"show",
"that",
"pretrained",
"transformers",
"’",
"performance",
"declines",
"are",
"substantially",
"smaller",
".",
"pretrained",
"transformers",
"are",
"also",
"more",
"effective",
"at",
"detecting",
"anomalous",
"or",
"ood",
"examples",
",",
"while",
"many",
"previous",
"models",
"are",
"frequently",
"worse",
"than",
"chance",
".",
"we",
"examine",
"which",
"factors",
"affect",
"robustness",
",",
"finding",
"that",
"larger",
"models",
"are",
"not",
"necessarily",
"more",
"robust",
",",
"distillation",
"can",
"be",
"harmful",
",",
"and",
"more",
"diverse",
"pretraining",
"data",
"can",
"enhance",
"robustness",
".",
"finally",
",",
"we",
"show",
"where",
"future",
"work",
"can",
"improve",
"ood",
"robustness",
"."
] |
ACL
|
An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models
|
We explore the utilities of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as barks in *The dogs barks. Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement. In this paper, we first demonstrate that appropriately using negative examples about particular constructions (e.g., subject-verb agreement) will boost the model’s robustness on them in English, with a negligible loss of perplexity. The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word. We then provide a detailed analysis of the trained models. One of our findings is the difficulty of object-relative clauses for RNNs. We find that even with our direct learning signals the models still suffer from resolving agreement across an object-relative clause. Augmentation of training sentences involving the constructions somewhat helps, but the accuracy still does not reach the level of subject-relative clauses. Although not directly cognitively appealing, our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions.
|
fcf1dd5d0bee3ecda0fa252d8f2f6896
| 2,020
|
[
"we explore the utilities of explicit negative examples in training neural language models .",
"negative examples here are incorrect words in a sentence , such as barks in * the dogs barks .",
"neural language models are commonly trained only on positive examples , a set of sentences in the training data , but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions , such as long - distance agreement .",
"in this paper , we first demonstrate that appropriately using negative examples about particular constructions ( e . g . , subject - verb agreement ) will boost the model ’ s robustness on them in english , with a negligible loss of perplexity .",
"the key to our success is an additional margin loss between the log - likelihoods of a correct word and an incorrect word .",
"we then provide a detailed analysis of the trained models .",
"one of our findings is the difficulty of object - relative clauses for rnns .",
"we find that even with our direct learning signals the models still suffer from resolving agreement across an object - relative clause .",
"augmentation of training sentences involving the constructions somewhat helps , but the accuracy still does not reach the level of subject - relative clauses .",
"although not directly cognitively appealing , our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions ."
] |
[
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
3,
4,
5,
6,
7
],
"text": "utilities of explicit negative examples",
"tokens": [
"utilities",
"of",
"explicit",
"negative",
"examples"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
9
],
"text": "training",
"tokens": [
"training"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
1
],
"text": "explore",
"tokens": [
"explore"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
33,
34,
35
],
"text": "neural language models",
"tokens": [
"neural",
"language",
"models"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
65,
66,
67,
68,
69
],
"text": "not capable of robustly handling",
"tokens": [
"not",
"capable",
"of",
"robustly",
"handling"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
85
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
108
],
"text": "boost",
"tokens": [
"boost"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
87
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
89,
90,
91,
92,
93,
94,
95
],
"text": "appropriately using negative examples about particular constructions",
"tokens": [
"appropriately",
"using",
"negative",
"examples",
"about",
"particular",
"constructions"
]
},
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
110,
111,
112,
113,
114,
115,
116,
117
],
"text": "model ’ s robustness on them in english",
"tokens": [
"model",
"’",
"s",
"robustness",
"on",
"them",
"in",
"english"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
108
],
"text": "boost",
"tokens": [
"boost"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
150
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
154,
155,
156,
157,
158,
159
],
"text": "detailed analysis of the trained models",
"tokens": [
"detailed",
"analysis",
"of",
"the",
"trained",
"models"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
152
],
"text": "provide",
"tokens": [
"provide"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
167,
168,
169,
170,
171,
172,
173,
174
],
"text": "difficulty of object - relative clauses for rnns",
"tokens": [
"difficulty",
"of",
"object",
"-",
"relative",
"clauses",
"for",
"rnns"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
164
],
"text": "findings",
"tokens": [
"findings"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
176
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
187,
188,
189,
190
],
"text": "still suffer from resolving",
"tokens": [
"still",
"suffer",
"from",
"resolving"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
177
],
"text": "find",
"tokens": [
"find"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
179,
180,
181,
182,
183,
184
],
"text": "even with our direct learning signals",
"tokens": [
"even",
"with",
"our",
"direct",
"learning",
"signals"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
158,
159
],
"text": "trained models",
"tokens": [
"trained",
"models"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
191,
192,
193,
194,
195,
196,
197
],
"text": "agreement across an object - relative clause",
"tokens": [
"agreement",
"across",
"an",
"object",
"-",
"relative",
"clause"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
187,
188,
189,
190
],
"text": "still suffer from resolving",
"tokens": [
"still",
"suffer",
"from",
"resolving"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
199,
200,
201,
202
],
"text": "augmentation of training sentences",
"tokens": [
"augmentation",
"of",
"training",
"sentences"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
205,
206,
207
],
"text": "constructions somewhat helps",
"tokens": [
"constructions",
"somewhat",
"helps"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
203
],
"text": "involving",
"tokens": [
"involving"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "FEA",
"offsets": [
194,
195,
196,
197
],
"text": "object - relative clause",
"tokens": [
"object",
"-",
"relative",
"clause"
]
},
{
"argument_type": "Result",
"nugget_type": "WEA",
"offsets": [
212,
213,
214,
215
],
"text": "still does not reach",
"tokens": [
"still",
"does",
"not",
"reach"
]
},
{
"argument_type": "Arg2",
"nugget_type": "FEA",
"offsets": [
219,
220,
221,
222
],
"text": "subject - relative clauses",
"tokens": [
"subject",
"-",
"relative",
"clauses"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
212,
213,
214,
215
],
"text": "still does not reach",
"tokens": [
"still",
"does",
"not",
"reach"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
230,
231
],
"text": "our method",
"tokens": [
"our",
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
237
],
"text": "analyze",
"tokens": [
"analyze"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
235
],
"text": "tool",
"tokens": [
"tool"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
239,
240,
241,
242,
243,
244
],
"text": "true architectural limitation of neural models",
"tokens": [
"true",
"architectural",
"limitation",
"of",
"neural",
"models"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
245,
246,
247,
248
],
"text": "on challenging linguistic constructions",
"tokens": [
"on",
"challenging",
"linguistic",
"constructions"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
237
],
"text": "analyze",
"tokens": [
"analyze"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
10,
11,
12
],
"text": "neural language models",
"tokens": [
"neural",
"language",
"models"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
9
],
"text": "training",
"tokens": [
"training"
]
}
}
] |
[
"we",
"explore",
"the",
"utilities",
"of",
"explicit",
"negative",
"examples",
"in",
"training",
"neural",
"language",
"models",
".",
"negative",
"examples",
"here",
"are",
"incorrect",
"words",
"in",
"a",
"sentence",
",",
"such",
"as",
"barks",
"in",
"*",
"the",
"dogs",
"barks",
".",
"neural",
"language",
"models",
"are",
"commonly",
"trained",
"only",
"on",
"positive",
"examples",
",",
"a",
"set",
"of",
"sentences",
"in",
"the",
"training",
"data",
",",
"but",
"recent",
"studies",
"suggest",
"that",
"the",
"models",
"trained",
"in",
"this",
"way",
"are",
"not",
"capable",
"of",
"robustly",
"handling",
"complex",
"syntactic",
"constructions",
",",
"such",
"as",
"long",
"-",
"distance",
"agreement",
".",
"in",
"this",
"paper",
",",
"we",
"first",
"demonstrate",
"that",
"appropriately",
"using",
"negative",
"examples",
"about",
"particular",
"constructions",
"(",
"e",
".",
"g",
".",
",",
"subject",
"-",
"verb",
"agreement",
")",
"will",
"boost",
"the",
"model",
"’",
"s",
"robustness",
"on",
"them",
"in",
"english",
",",
"with",
"a",
"negligible",
"loss",
"of",
"perplexity",
".",
"the",
"key",
"to",
"our",
"success",
"is",
"an",
"additional",
"margin",
"loss",
"between",
"the",
"log",
"-",
"likelihoods",
"of",
"a",
"correct",
"word",
"and",
"an",
"incorrect",
"word",
".",
"we",
"then",
"provide",
"a",
"detailed",
"analysis",
"of",
"the",
"trained",
"models",
".",
"one",
"of",
"our",
"findings",
"is",
"the",
"difficulty",
"of",
"object",
"-",
"relative",
"clauses",
"for",
"rnns",
".",
"we",
"find",
"that",
"even",
"with",
"our",
"direct",
"learning",
"signals",
"the",
"models",
"still",
"suffer",
"from",
"resolving",
"agreement",
"across",
"an",
"object",
"-",
"relative",
"clause",
".",
"augmentation",
"of",
"training",
"sentences",
"involving",
"the",
"constructions",
"somewhat",
"helps",
",",
"but",
"the",
"accuracy",
"still",
"does",
"not",
"reach",
"the",
"level",
"of",
"subject",
"-",
"relative",
"clauses",
".",
"although",
"not",
"directly",
"cognitively",
"appealing",
",",
"our",
"method",
"can",
"be",
"a",
"tool",
"to",
"analyze",
"the",
"true",
"architectural",
"limitation",
"of",
"neural",
"models",
"on",
"challenging",
"linguistic",
"constructions",
"."
] |
ACL
|
Multi-Hop Paragraph Retrieval for Open-Domain Question Answering
|
This paper is concerned with the task of multi-hop open-domain Question Answering (QA). This task is particularly challenging since it requires the simultaneous performance of textual reasoning and efficient searching. We present a method for retrieving multiple supporting paragraphs, nested amidst a large knowledge base, which contain the necessary evidence to answer a given question. Our method iteratively retrieves supporting paragraphs by forming a joint vector representation of both a question and a paragraph. The retrieval is performed by considering contextualized sentence-level representations of the paragraphs in the knowledge source. Our method achieves state-of-the-art performance over two well-known datasets, SQuAD-Open and HotpotQA, which serve as our single- and multi-hop open-domain QA benchmarks, respectively.
|
082dd70500903fcc805528f618b2cd13
| 2,019
|
[
"this paper is concerned with the task of multi - hop open - domain question answering ( qa ) .",
"this task is particularly challenging since it requires the simultaneous performance of textual reasoning and efficient searching .",
"we present a method for retrieving multiple supporting paragraphs , nested amidst a large knowledge base , which contain the necessary evidence to answer a given question .",
"our method iteratively retrieves supporting paragraphs by forming a joint vector representation of both a question and a paragraph .",
"the retrieval is performed by considering contextualized sentence - level representations of the paragraphs in the knowledge source .",
"our method achieves state - of - the - art performance over two well - known datasets , squad - open and hotpotqa , which serve as our single - and multi - hop open - domain qa benchmarks , respectively ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
8,
9,
10,
11,
12,
13,
14,
15
],
"text": "multi - hop open - domain question answering",
"tokens": [
"multi",
"-",
"hop",
"open",
"-",
"domain",
"question",
"answering"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
3
],
"text": "concerned",
"tokens": [
"concerned"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
38
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
41
],
"text": "method",
"tokens": [
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
43
],
"text": "retrieving",
"tokens": [
"retrieving"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
39
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "MOD",
"offsets": [
45,
46
],
"text": "supporting paragraphs",
"tokens": [
"supporting",
"paragraphs"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
43
],
"text": "retrieving",
"tokens": [
"retrieving"
]
}
},
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
70,
71
],
"text": "supporting paragraphs",
"tokens": [
"supporting",
"paragraphs"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
75,
76,
77,
78,
79,
80,
81,
82,
83,
84
],
"text": "joint vector representation of both a question and a paragraph",
"tokens": [
"joint",
"vector",
"representation",
"of",
"both",
"a",
"question",
"and",
"a",
"paragraph"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
69
],
"text": "retrieves",
"tokens": [
"retrieves"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
108,
109,
110,
111,
112,
113,
114,
115
],
"text": "state - of - the - art performance",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
106
],
"text": "method",
"tokens": [
"method"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
117,
118,
119,
120,
121
],
"text": "two well - known datasets",
"tokens": [
"two",
"well",
"-",
"known",
"datasets"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
107
],
"text": "achieves",
"tokens": [
"achieves"
]
}
}
] |
[
"this",
"paper",
"is",
"concerned",
"with",
"the",
"task",
"of",
"multi",
"-",
"hop",
"open",
"-",
"domain",
"question",
"answering",
"(",
"qa",
")",
".",
"this",
"task",
"is",
"particularly",
"challenging",
"since",
"it",
"requires",
"the",
"simultaneous",
"performance",
"of",
"textual",
"reasoning",
"and",
"efficient",
"searching",
".",
"we",
"present",
"a",
"method",
"for",
"retrieving",
"multiple",
"supporting",
"paragraphs",
",",
"nested",
"amidst",
"a",
"large",
"knowledge",
"base",
",",
"which",
"contain",
"the",
"necessary",
"evidence",
"to",
"answer",
"a",
"given",
"question",
".",
"our",
"method",
"iteratively",
"retrieves",
"supporting",
"paragraphs",
"by",
"forming",
"a",
"joint",
"vector",
"representation",
"of",
"both",
"a",
"question",
"and",
"a",
"paragraph",
".",
"the",
"retrieval",
"is",
"performed",
"by",
"considering",
"contextualized",
"sentence",
"-",
"level",
"representations",
"of",
"the",
"paragraphs",
"in",
"the",
"knowledge",
"source",
".",
"our",
"method",
"achieves",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"over",
"two",
"well",
"-",
"known",
"datasets",
",",
"squad",
"-",
"open",
"and",
"hotpotqa",
",",
"which",
"serve",
"as",
"our",
"single",
"-",
"and",
"multi",
"-",
"hop",
"open",
"-",
"domain",
"qa",
"benchmarks",
",",
"respectively",
"."
] |
ACL
|
MMGCN: Multimodal Fusion via Deep Graph Convolution Network for Emotion Recognition in Conversation
|
Emotion recognition in conversation (ERC) is a crucial component in affective dialogue systems, which helps the system understand users’ emotions and generate empathetic responses. However, most works focus on modeling speaker and contextual information primarily on the textual modality or simply leveraging multimodal information through feature concatenation. In order to explore a more effective way of utilizing both multimodal and long-distance contextual information, we propose a new model based on multimodal fused graph convolutional network, MMGCN, in this work. MMGCN can not only make use of multimodal dependencies effectively, but also leverage speaker information to model inter-speaker and intra-speaker dependency. We evaluate our proposed model on two public benchmark datasets, IEMOCAP and MELD, and the results prove the effectiveness of MMGCN, which outperforms other SOTA methods by a significant margin under the multimodal conversation setting.
|
0d6fcfb11b3cb40b77d0061f7aac74e3
| 2,021
|
[
"emotion recognition in conversation ( erc ) is a crucial component in affective dialogue systems , which helps the system understand users ’ emotions and generate empathetic responses .",
"however , most works focus on modeling speaker and contextual information primarily on the textual modality or simply leveraging multimodal information through feature concatenation .",
"in order to explore a more effective way of utilizing both multimodal and long - distance contextual information , we propose a new model based on multimodal fused graph convolutional network , mmgcn , in this work .",
"mmgcn can not only make use of multimodal dependencies effectively , but also leverage speaker information to model inter - speaker and intra - speaker dependency .",
"we evaluate our proposed model on two public benchmark datasets , iemocap and meld , and the results prove the effectiveness of mmgcn , which outperforms other sota methods by a significant margin under the multimodal conversation setting ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3
],
"text": "emotion recognition in conversation",
"tokens": [
"emotion",
"recognition",
"in",
"conversation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
11,
12,
13,
14
],
"text": "in affective dialogue systems",
"tokens": [
"in",
"affective",
"dialogue",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "component",
"tokens": [
"component"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
41,
42,
43,
44
],
"text": "on the textual modality",
"tokens": [
"on",
"the",
"textual",
"modality"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
31,
32
],
"text": "most works",
"tokens": [
"most",
"works"
]
},
{
"argument_type": "Fault",
"nugget_type": "FEA",
"offsets": [
36,
39
],
"text": "speaker information",
"tokens": [
"speaker",
"information"
]
},
{
"argument_type": "Fault",
"nugget_type": "FEA",
"offsets": [
38,
39
],
"text": "contextual information",
"tokens": [
"contextual",
"information"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
35
],
"text": "modeling",
"tokens": [
"modeling"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
31,
32
],
"text": "most works",
"tokens": [
"most",
"works"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
50,
51,
52
],
"text": "through feature concatenation",
"tokens": [
"through",
"feature",
"concatenation"
]
},
{
"argument_type": "Fault",
"nugget_type": "FEA",
"offsets": [
48,
49
],
"text": "multimodal information",
"tokens": [
"multimodal",
"information"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
47
],
"text": "leveraging",
"tokens": [
"leveraging"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
63
],
"text": "utilizing",
"tokens": [
"utilizing"
]
},
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
73
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
80,
81,
82,
83,
84
],
"text": "multimodal fused graph convolutional network",
"tokens": [
"multimodal",
"fused",
"graph",
"convolutional",
"network"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
74
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
65,
71
],
"text": "multimodal information",
"tokens": [
"multimodal",
"information"
]
},
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
67,
68,
69,
70,
71
],
"text": "long - distance contextual information",
"tokens": [
"long",
"-",
"distance",
"contextual",
"information"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
63
],
"text": "utilizing",
"tokens": [
"utilizing"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
99,
100
],
"text": "multimodal dependencies",
"tokens": [
"multimodal",
"dependencies"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
96,
97
],
"text": "make use",
"tokens": [
"make",
"use"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
106,
107
],
"text": "speaker information",
"tokens": [
"speaker",
"information"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
109
],
"text": "model",
"tokens": [
"model"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
105
],
"text": "leverage",
"tokens": [
"leverage"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
110,
111,
112,
117
],
"text": "inter - speaker dependency",
"tokens": [
"inter",
"-",
"speaker",
"dependency"
]
},
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
114,
115,
116,
117
],
"text": "intra - speaker dependency",
"tokens": [
"intra",
"-",
"speaker",
"dependency"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
109
],
"text": "model",
"tokens": [
"model"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
144
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
137
],
"text": "prove",
"tokens": [
"prove"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
130
],
"text": "iemocap",
"tokens": [
"iemocap"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
132
],
"text": "meld",
"tokens": [
"meld"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
152,
153,
154,
155,
156
],
"text": "under the multimodal conversation setting",
"tokens": [
"under",
"the",
"multimodal",
"conversation",
"setting"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
80,
81,
82,
83,
84
],
"text": "multimodal fused graph convolutional network",
"tokens": [
"multimodal",
"fused",
"graph",
"convolutional",
"network"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
144
],
"text": "outperforms",
"tokens": [
"outperforms"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
146,
147
],
"text": "sota methods",
"tokens": [
"sota",
"methods"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
144
],
"text": "outperforms",
"tokens": [
"outperforms"
]
}
}
] |
[
"emotion",
"recognition",
"in",
"conversation",
"(",
"erc",
")",
"is",
"a",
"crucial",
"component",
"in",
"affective",
"dialogue",
"systems",
",",
"which",
"helps",
"the",
"system",
"understand",
"users",
"’",
"emotions",
"and",
"generate",
"empathetic",
"responses",
".",
"however",
",",
"most",
"works",
"focus",
"on",
"modeling",
"speaker",
"and",
"contextual",
"information",
"primarily",
"on",
"the",
"textual",
"modality",
"or",
"simply",
"leveraging",
"multimodal",
"information",
"through",
"feature",
"concatenation",
".",
"in",
"order",
"to",
"explore",
"a",
"more",
"effective",
"way",
"of",
"utilizing",
"both",
"multimodal",
"and",
"long",
"-",
"distance",
"contextual",
"information",
",",
"we",
"propose",
"a",
"new",
"model",
"based",
"on",
"multimodal",
"fused",
"graph",
"convolutional",
"network",
",",
"mmgcn",
",",
"in",
"this",
"work",
".",
"mmgcn",
"can",
"not",
"only",
"make",
"use",
"of",
"multimodal",
"dependencies",
"effectively",
",",
"but",
"also",
"leverage",
"speaker",
"information",
"to",
"model",
"inter",
"-",
"speaker",
"and",
"intra",
"-",
"speaker",
"dependency",
".",
"we",
"evaluate",
"our",
"proposed",
"model",
"on",
"two",
"public",
"benchmark",
"datasets",
",",
"iemocap",
"and",
"meld",
",",
"and",
"the",
"results",
"prove",
"the",
"effectiveness",
"of",
"mmgcn",
",",
"which",
"outperforms",
"other",
"sota",
"methods",
"by",
"a",
"significant",
"margin",
"under",
"the",
"multimodal",
"conversation",
"setting",
"."
] |
ACL
|
ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations
|
In order to simplify a sentence, human editors perform multiple rewriting transformations: they split it into several shorter sentences, paraphrase words (i.e. replacing complex words or phrases by simpler synonyms), reorder components, and/or delete information deemed unnecessary. Despite these varied range of possible text alterations, current models for automatic sentence simplification are evaluated using datasets that are focused on a single transformation, such as lexical paraphrasing or splitting. This makes it impossible to understand the ability of simplification models in more realistic settings. To alleviate this limitation, this paper introduces ASSET, a new dataset for assessing sentence simplification in English. ASSET is a crowdsourced multi-reference corpus where each simplification was produced by executing several rewriting transformations. Through quantitative and qualitative experiments, we show that simplifications in ASSET are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task. Furthermore, we motivate the need for developing better methods for automatic evaluation using ASSET, since we show that current popular metrics may not be suitable when multiple simplification transformations are performed.
|
d1ba159388f1335ea21e065dd0201511
| 2,020
|
[
"in order to simplify a sentence , human editors perform multiple rewriting transformations : they split it into several shorter sentences , paraphrase words ( i . e . replacing complex words or phrases by simpler synonyms ) , reorder components , and / or delete information deemed unnecessary .",
"despite these varied range of possible text alterations , current models for automatic sentence simplification are evaluated using datasets that are focused on a single transformation , such as lexical paraphrasing or splitting .",
"this makes it impossible to understand the ability of simplification models in more realistic settings .",
"to alleviate this limitation , this paper introduces asset , a new dataset for assessing sentence simplification in english .",
"asset is a crowdsourced multi - reference corpus where each simplification was produced by executing several rewriting transformations .",
"through quantitative and qualitative experiments , we show that simplifications in asset are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task .",
"furthermore , we motivate the need for developing better methods for automatic evaluation using asset , since we show that current popular metrics may not be suitable when multiple simplification transformations are performed ."
] |
[
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
10,
11,
12
],
"text": "multiple rewriting transformations",
"tokens": [
"multiple",
"rewriting",
"transformations"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
3
],
"text": "simplify",
"tokens": [
"simplify"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
9
],
"text": "perform",
"tokens": [
"perform"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
5
],
"text": "sentence",
"tokens": [
"sentence"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
3
],
"text": "simplify",
"tokens": [
"simplify"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
5
],
"text": "sentence",
"tokens": [
"sentence"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
19,
20
],
"text": "shorter sentences",
"tokens": [
"shorter",
"sentences"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
15
],
"text": "split",
"tokens": [
"split"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
23
],
"text": "words",
"tokens": [
"words"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
22
],
"text": "paraphrase",
"tokens": [
"paraphrase"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
40
],
"text": "components",
"tokens": [
"components"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
39
],
"text": "reorder",
"tokens": [
"reorder"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
46,
47,
48
],
"text": "information deemed unnecessary",
"tokens": [
"information",
"deemed",
"unnecessary"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
45
],
"text": "delete",
"tokens": [
"delete"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
95,
96,
97,
98
],
"text": "in more realistic settings",
"tokens": [
"in",
"more",
"realistic",
"settings"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
91,
92,
93,
94
],
"text": "ability of simplification models",
"tokens": [
"ability",
"of",
"simplification",
"models"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
87
],
"text": "impossible",
"tokens": [
"impossible"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
89
],
"text": "understand",
"tokens": [
"understand"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
108
],
"text": "asset",
"tokens": [
"asset"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
114
],
"text": "assessing",
"tokens": [
"assessing"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
107
],
"text": "introduces",
"tokens": [
"introduces"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "WEA",
"offsets": [
103
],
"text": "limitation",
"tokens": [
"limitation"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
101
],
"text": "alleviate",
"tokens": [
"alleviate"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
115,
116,
117,
118
],
"text": "sentence simplification in english",
"tokens": [
"sentence",
"simplification",
"in",
"english"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
114
],
"text": "assessing",
"tokens": [
"assessing"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
152
],
"text": "better",
"tokens": [
"better"
]
},
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
145
],
"text": "we",
"tokens": [
"we"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
146
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
152
],
"text": "better",
"tokens": [
"better"
]
},
{
"argument_type": "Arg2",
"nugget_type": "DST",
"offsets": [
161,
162,
163,
164
],
"text": "other standard evaluation datasets",
"tokens": [
"other",
"standard",
"evaluation",
"datasets"
]
},
{
"argument_type": "Arg1",
"nugget_type": "DST",
"offsets": [
150
],
"text": "asset",
"tokens": [
"asset"
]
},
{
"argument_type": "Metrics",
"nugget_type": "TAK",
"offsets": [
148
],
"text": "simplifications",
"tokens": [
"simplifications"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
152
],
"text": "better",
"tokens": [
"better"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
186
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
193,
194,
195
],
"text": "not be suitable",
"tokens": [
"not",
"be",
"suitable"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
187
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
189,
190,
191
],
"text": "current popular metrics",
"tokens": [
"current",
"popular",
"metrics"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
196,
197,
198,
199,
200,
201
],
"text": "when multiple simplification transformations are performed",
"tokens": [
"when",
"multiple",
"simplification",
"transformations",
"are",
"performed"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
193,
194,
195
],
"text": "not be suitable",
"tokens": [
"not",
"be",
"suitable"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
171
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
174
],
"text": "need",
"tokens": [
"need"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
176
],
"text": "developing",
"tokens": [
"developing"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
172
],
"text": "motivate",
"tokens": [
"motivate"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
177,
178,
179,
180,
181
],
"text": "better methods for automatic evaluation",
"tokens": [
"better",
"methods",
"for",
"automatic",
"evaluation"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
182,
183
],
"text": "using asset",
"tokens": [
"using",
"asset"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
176
],
"text": "developing",
"tokens": [
"developing"
]
}
}
] |
[
"in",
"order",
"to",
"simplify",
"a",
"sentence",
",",
"human",
"editors",
"perform",
"multiple",
"rewriting",
"transformations",
":",
"they",
"split",
"it",
"into",
"several",
"shorter",
"sentences",
",",
"paraphrase",
"words",
"(",
"i",
".",
"e",
".",
"replacing",
"complex",
"words",
"or",
"phrases",
"by",
"simpler",
"synonyms",
")",
",",
"reorder",
"components",
",",
"and",
"/",
"or",
"delete",
"information",
"deemed",
"unnecessary",
".",
"despite",
"these",
"varied",
"range",
"of",
"possible",
"text",
"alterations",
",",
"current",
"models",
"for",
"automatic",
"sentence",
"simplification",
"are",
"evaluated",
"using",
"datasets",
"that",
"are",
"focused",
"on",
"a",
"single",
"transformation",
",",
"such",
"as",
"lexical",
"paraphrasing",
"or",
"splitting",
".",
"this",
"makes",
"it",
"impossible",
"to",
"understand",
"the",
"ability",
"of",
"simplification",
"models",
"in",
"more",
"realistic",
"settings",
".",
"to",
"alleviate",
"this",
"limitation",
",",
"this",
"paper",
"introduces",
"asset",
",",
"a",
"new",
"dataset",
"for",
"assessing",
"sentence",
"simplification",
"in",
"english",
".",
"asset",
"is",
"a",
"crowdsourced",
"multi",
"-",
"reference",
"corpus",
"where",
"each",
"simplification",
"was",
"produced",
"by",
"executing",
"several",
"rewriting",
"transformations",
".",
"through",
"quantitative",
"and",
"qualitative",
"experiments",
",",
"we",
"show",
"that",
"simplifications",
"in",
"asset",
"are",
"better",
"at",
"capturing",
"characteristics",
"of",
"simplicity",
"when",
"compared",
"to",
"other",
"standard",
"evaluation",
"datasets",
"for",
"the",
"task",
".",
"furthermore",
",",
"we",
"motivate",
"the",
"need",
"for",
"developing",
"better",
"methods",
"for",
"automatic",
"evaluation",
"using",
"asset",
",",
"since",
"we",
"show",
"that",
"current",
"popular",
"metrics",
"may",
"not",
"be",
"suitable",
"when",
"multiple",
"simplification",
"transformations",
"are",
"performed",
"."
] |
ACL
|
ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer
|
Learning high-quality sentence representations benefits a wide range of natural language processing tasks. Though BERT-based pre-trained language models achieve high performance on many downstream tasks, the native derived sentence representations are proved to be collapsed and thus produce a poor performance on the semantic textual similarity (STS) tasks. In this paper, we present ConSERT, a Contrastive Framework for Self-Supervised SEntence Representation Transfer, that adopts contrastive learning to fine-tune BERT in an unsupervised and effective way. By making use of unlabeled texts, ConSERT solves the collapse issue of BERT-derived sentence representations and make them more applicable for downstream tasks. Experiments on STS datasets demonstrate that ConSERT achieves an 8% relative improvement over the previous state-of-the-art, even comparable to the supervised SBERT-NLI. And when further incorporating NLI supervision, we achieve new state-of-the-art performance on STS tasks. Moreover, ConSERT obtains comparable results with only 1000 samples available, showing its robustness in data scarcity scenarios.
|
a4a3ea2eef957084ccb22132f29b6491
| 2,021
|
[
"learning high - quality sentence representations benefits a wide range of natural language processing tasks .",
"though bert - based pre - trained language models achieve high performance on many downstream tasks , the native derived sentence representations are proved to be collapsed and thus produce a poor performance on the semantic textual similarity ( sts ) tasks .",
"in this paper , we present consert , a contrastive framework for self - supervised sentence representation transfer , that adopts contrastive learning to fine - tune bert in an unsupervised and effective way .",
"by making use of unlabeled texts , consert solves the collapse issue of bert - derived sentence representations and make them more applicable for downstream tasks .",
"experiments on sts datasets demonstrate that consert achieves an 8 % relative improvement over the previous state - of - the - art , even comparable to the supervised sbert - nli .",
"and when further incorporating nli supervision , we achieve new state - of - the - art performance on sts tasks .",
"moreover , consert obtains comparable results with only 1000 samples available , showing its robustness in data scarcity scenarios ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2,
3,
4,
5
],
"text": "high - quality sentence representations",
"tokens": [
"high",
"-",
"quality",
"sentence",
"representations"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
],
"text": "benefits",
"tokens": [
"benefits"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
34,
35,
36,
37
],
"text": "native derived sentence representations",
"tokens": [
"native",
"derived",
"sentence",
"representations"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
42
],
"text": "collapsed",
"tokens": [
"collapsed"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
39
],
"text": "proved",
"tokens": [
"proved"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
34,
35,
36,
37
],
"text": "native derived sentence representations",
"tokens": [
"native",
"derived",
"sentence",
"representations"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
47,
48
],
"text": "poor performance",
"tokens": [
"poor",
"performance"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
49,
50,
51,
52,
53,
57
],
"text": "on the semantic textual similarity ( sts ) tasks",
"tokens": [
"on",
"the",
"semantic",
"textual",
"similarity",
"tasks"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
45
],
"text": "produce",
"tokens": [
"produce"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
63
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
65
],
"text": "consert",
"tokens": [
"consert"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
71,
72,
73,
74,
75,
76
],
"text": "self - supervised sentence representation transfer",
"tokens": [
"self",
"-",
"supervised",
"sentence",
"representation",
"transfer"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
64
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
80,
81
],
"text": "contrastive learning",
"tokens": [
"contrastive",
"learning"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
83,
84,
85
],
"text": "fine - tune",
"tokens": [
"fine",
"-",
"tune"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
79
],
"text": "adopts",
"tokens": [
"adopts"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
86
],
"text": "bert",
"tokens": [
"bert"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
87,
88,
89,
90,
91,
92
],
"text": "in an unsupervised and effective way",
"tokens": [
"in",
"an",
"unsupervised",
"and",
"effective",
"way"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
83,
84,
85
],
"text": "fine - tune",
"tokens": [
"fine",
"-",
"tune"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
98,
99
],
"text": "unlabeled texts",
"tokens": [
"unlabeled",
"texts"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
102
],
"text": "solves",
"tokens": [
"solves"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
96
],
"text": "use",
"tokens": [
"use"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
104,
105,
106,
107,
108,
109,
110,
111
],
"text": "collapse issue of bert - derived sentence representations",
"tokens": [
"collapse",
"issue",
"of",
"bert",
"-",
"derived",
"sentence",
"representations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
102
],
"text": "solves",
"tokens": [
"solves"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
128
],
"text": "achieves",
"tokens": [
"achieves"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
125
],
"text": "demonstrate",
"tokens": [
"demonstrate"
]
}
},
{
"arguments": [
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
123,
124
],
"text": "sts datasets",
"tokens": [
"sts",
"datasets"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
127
],
"text": "consert",
"tokens": [
"consert"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
130,
131
],
"text": "8 %",
"tokens": [
"8",
"%"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
136,
137,
138,
139,
140,
141,
142,
143
],
"text": "previous state - of - the - art",
"tokens": [
"previous",
"state",
"-",
"of",
"-",
"the",
"-",
"art"
]
},
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
149,
150,
151,
152
],
"text": "supervised sbert - nli",
"tokens": [
"supervised",
"sbert",
"-",
"nli"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
133
],
"text": "improvement",
"tokens": [
"improvement"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
128
],
"text": "achieves",
"tokens": [
"achieves"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
51,
52,
53,
174
],
"text": "sts tasks",
"tokens": [
"semantic",
"textual",
"similarity",
"tasks"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
155,
156,
157,
158,
159
],
"text": "when further incorporating nli supervision",
"tokens": [
"when",
"further",
"incorporating",
"nli",
"supervision"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
164,
165,
166,
167,
168,
169,
170,
171
],
"text": "state - of - the - art performance",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
65
],
"text": "consert",
"tokens": [
"consert"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
162
],
"text": "achieve",
"tokens": [
"achieve"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
178
],
"text": "consert",
"tokens": [
"consert"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
180,
181
],
"text": "comparable results",
"tokens": [
"comparable",
"results"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
182,
183,
184,
185,
186
],
"text": "with only 1000 samples available",
"tokens": [
"with",
"only",
"1000",
"samples",
"available"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
179
],
"text": "obtains",
"tokens": [
"obtains"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
178,
190
],
"text": "its robustness",
"tokens": [
"consert",
"robustness"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
191,
192,
193,
194
],
"text": "in data scarcity scenarios",
"tokens": [
"in",
"data",
"scarcity",
"scenarios"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
178
],
"text": "consert",
"tokens": [
"consert"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
188
],
"text": "showing",
"tokens": [
"showing"
]
}
}
] |
[
"learning",
"high",
"-",
"quality",
"sentence",
"representations",
"benefits",
"a",
"wide",
"range",
"of",
"natural",
"language",
"processing",
"tasks",
".",
"though",
"bert",
"-",
"based",
"pre",
"-",
"trained",
"language",
"models",
"achieve",
"high",
"performance",
"on",
"many",
"downstream",
"tasks",
",",
"the",
"native",
"derived",
"sentence",
"representations",
"are",
"proved",
"to",
"be",
"collapsed",
"and",
"thus",
"produce",
"a",
"poor",
"performance",
"on",
"the",
"semantic",
"textual",
"similarity",
"(",
"sts",
")",
"tasks",
".",
"in",
"this",
"paper",
",",
"we",
"present",
"consert",
",",
"a",
"contrastive",
"framework",
"for",
"self",
"-",
"supervised",
"sentence",
"representation",
"transfer",
",",
"that",
"adopts",
"contrastive",
"learning",
"to",
"fine",
"-",
"tune",
"bert",
"in",
"an",
"unsupervised",
"and",
"effective",
"way",
".",
"by",
"making",
"use",
"of",
"unlabeled",
"texts",
",",
"consert",
"solves",
"the",
"collapse",
"issue",
"of",
"bert",
"-",
"derived",
"sentence",
"representations",
"and",
"make",
"them",
"more",
"applicable",
"for",
"downstream",
"tasks",
".",
"experiments",
"on",
"sts",
"datasets",
"demonstrate",
"that",
"consert",
"achieves",
"an",
"8",
"%",
"relative",
"improvement",
"over",
"the",
"previous",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
",",
"even",
"comparable",
"to",
"the",
"supervised",
"sbert",
"-",
"nli",
".",
"and",
"when",
"further",
"incorporating",
"nli",
"supervision",
",",
"we",
"achieve",
"new",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"on",
"sts",
"tasks",
".",
"moreover",
",",
"consert",
"obtains",
"comparable",
"results",
"with",
"only",
"1000",
"samples",
"available",
",",
"showing",
"its",
"robustness",
"in",
"data",
"scarcity",
"scenarios",
"."
] |
ACL
|
Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
|
Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
|
77ac3b5405a06e6bfe38f1a2aae70d4c
| 2,020
|
[
"generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address .",
"they tend to produce generations that ( i ) rely too much on copying from the context , ( ii ) contain repetitions within utterances , ( iii ) overuse frequent words , and ( iv ) at a deeper level , contain logical flaws .",
"in this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss ( welleck et al . , 2019 ) to these cases .",
"we show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues .",
"for the last important general issue , we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency , potentially paving the way to generative models with greater reasoning ability .",
"we demonstrate the efficacy of our approach across several dialogue tasks ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "generative dialogue models",
"tokens": [
"generative",
"dialogue",
"models"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
4
],
"text": "suffer",
"tokens": [
"suffer"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
28,
29,
30
],
"text": "rely too much",
"tokens": [
"rely",
"too",
"much"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
28,
29,
30
],
"text": "rely too much",
"tokens": [
"rely",
"too",
"much"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
41
],
"text": "repetitions",
"tokens": [
"repetitions"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
42,
43
],
"text": "within utterances",
"tokens": [
"within",
"utterances"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
40
],
"text": "contain",
"tokens": [
"contain"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
49,
50
],
"text": "frequent words",
"tokens": [
"frequent",
"words"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
48
],
"text": "overuse",
"tokens": [
"overuse"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
62,
63
],
"text": "logical flaws",
"tokens": [
"logical",
"flaws"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
56,
57,
58,
59
],
"text": "at a deeper level",
"tokens": [
"at",
"a",
"deeper",
"level"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
61
],
"text": "contain",
"tokens": [
"contain"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
68
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
77
],
"text": "addressed",
"tokens": [
"addressed"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
83,
84
],
"text": "unlikelihood loss",
"tokens": [
"unlikelihood",
"loss"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
79
],
"text": "extending",
"tokens": [
"extending"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "WEA",
"offsets": [
28,
29,
30
],
"text": "rely too much",
"tokens": [
"rely",
"too",
"much"
]
},
{
"argument_type": "Aim",
"nugget_type": "WEA",
"offsets": [
40,
41
],
"text": "contain repetitions",
"tokens": [
"contain",
"repetitions"
]
},
{
"argument_type": "Aim",
"nugget_type": "WEA",
"offsets": [
48,
49,
50
],
"text": "overuse frequent words",
"tokens": [
"overuse",
"frequent",
"words"
]
},
{
"argument_type": "Aim",
"nugget_type": "WEA",
"offsets": [
61,
62,
63
],
"text": "contain logical flaws",
"tokens": [
"contain",
"logical",
"flaws"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
77
],
"text": "addressed",
"tokens": [
"addressed"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
97
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
112
],
"text": "effective",
"tokens": [
"effective"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
98
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
100,
101,
102
],
"text": "appropriate loss functions",
"tokens": [
"appropriate",
"loss",
"functions"
]
},
{
"argument_type": "Target",
"nugget_type": "WEA",
"offsets": [
28,
29,
30
],
"text": "rely too much",
"tokens": [
"rely",
"too",
"much"
]
},
{
"argument_type": "Target",
"nugget_type": "WEA",
"offsets": [
40,
41
],
"text": "contain repetitions",
"tokens": [
"contain",
"repetitions"
]
},
{
"argument_type": "Target",
"nugget_type": "WEA",
"offsets": [
48,
49,
50
],
"text": "overuse frequent words",
"tokens": [
"overuse",
"frequent",
"words"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
112
],
"text": "effective",
"tokens": [
"effective"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
143
],
"text": "improving",
"tokens": [
"improving"
]
},
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
128,
129
],
"text": "applying unlikelihood",
"tokens": [
"applying",
"unlikelihood"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
141
],
"text": "effective",
"tokens": [
"effective"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
144,
145
],
"text": "logical consistency",
"tokens": [
"logical",
"consistency"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
143
],
"text": "improving",
"tokens": [
"improving"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
126
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
131
],
"text": "collected",
"tokens": [
"collected"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
129
],
"text": "unlikelihood",
"tokens": [
"unlikelihood"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
128
],
"text": "applying",
"tokens": [
"applying"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "DST",
"offsets": [
132
],
"text": "data",
"tokens": [
"data"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
131
],
"text": "collected",
"tokens": [
"collected"
]
}
}
] |
[
"generative",
"dialogue",
"models",
"currently",
"suffer",
"from",
"a",
"number",
"of",
"problems",
"which",
"standard",
"maximum",
"likelihood",
"training",
"does",
"not",
"address",
".",
"they",
"tend",
"to",
"produce",
"generations",
"that",
"(",
"i",
")",
"rely",
"too",
"much",
"on",
"copying",
"from",
"the",
"context",
",",
"(",
"ii",
")",
"contain",
"repetitions",
"within",
"utterances",
",",
"(",
"iii",
")",
"overuse",
"frequent",
"words",
",",
"and",
"(",
"iv",
")",
"at",
"a",
"deeper",
"level",
",",
"contain",
"logical",
"flaws",
".",
"in",
"this",
"work",
"we",
"show",
"how",
"all",
"of",
"these",
"problems",
"can",
"be",
"addressed",
"by",
"extending",
"the",
"recently",
"introduced",
"unlikelihood",
"loss",
"(",
"welleck",
"et",
"al",
".",
",",
"2019",
")",
"to",
"these",
"cases",
".",
"we",
"show",
"that",
"appropriate",
"loss",
"functions",
"which",
"regularize",
"generated",
"outputs",
"to",
"match",
"human",
"distributions",
"are",
"effective",
"for",
"the",
"first",
"three",
"issues",
".",
"for",
"the",
"last",
"important",
"general",
"issue",
",",
"we",
"show",
"applying",
"unlikelihood",
"to",
"collected",
"data",
"of",
"what",
"a",
"model",
"should",
"not",
"do",
"is",
"effective",
"for",
"improving",
"logical",
"consistency",
",",
"potentially",
"paving",
"the",
"way",
"to",
"generative",
"models",
"with",
"greater",
"reasoning",
"ability",
".",
"we",
"demonstrate",
"the",
"efficacy",
"of",
"our",
"approach",
"across",
"several",
"dialogue",
"tasks",
"."
] |
ACL
|
It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information
|
The performance of neural machine translation systems is commonly evaluated in terms of BLEU. However, due to its reliance on target language properties and generation, the BLEU metric does not allow an assessment of which translation directions are more difficult to model. In this paper, we propose cross-mutual information (XMI): an asymmetric information-theoretic metric of machine translation difficulty that exploits the probabilistic nature of most neural machine translation models. XMI allows us to better evaluate the difficulty of translating text into the target language while controlling for the difficulty of the target-side generation component independent of the translation task. We then present the first systematic and controlled study of cross-lingual translation difficulties using modern neural translation systems. Code for replicating our experiments is available online at https://github.com/e-bug/nmt-difficulty.
|
8d3b76fb88d327a47637076aad29597f
| 2,020
|
[
"the performance of neural machine translation systems is commonly evaluated in terms of bleu .",
"however , due to its reliance on target language properties and generation , the bleu metric does not allow an assessment of which translation directions are more difficult to model .",
"in this paper , we propose cross - mutual information ( xmi ) : an asymmetric information - theoretic metric of machine translation difficulty that exploits the probabilistic nature of most neural machine translation models .",
"xmi allows us to better evaluate the difficulty of translating text into the target language while controlling for the difficulty of the target - side generation component independent of the translation task .",
"we then present the first systematic and controlled study of cross - lingual translation difficulties using modern neural translation systems .",
"code for replicating our experiments is available online at https : / / github . com / e - bug / nmt - difficulty ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
13
],
"text": "bleu",
"tokens": [
"bleu"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
9
],
"text": "evaluated",
"tokens": [
"evaluated"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
50
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
52,
53,
54,
55
],
"text": "cross - mutual information",
"tokens": [
"cross",
"-",
"mutual",
"information"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
51
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
73,
74,
75,
76,
77,
78,
79,
80
],
"text": "probabilistic nature of most neural machine translation models",
"tokens": [
"probabilistic",
"nature",
"of",
"most",
"neural",
"machine",
"translation",
"models"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
71
],
"text": "exploits",
"tokens": [
"exploits"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
89,
90,
91,
92,
93,
94,
95,
96
],
"text": "difficulty of translating text into the target language",
"tokens": [
"difficulty",
"of",
"translating",
"text",
"into",
"the",
"target",
"language"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
87
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113
],
"text": "difficulty of the target - side generation component independent of the translation task",
"tokens": [
"difficulty",
"of",
"the",
"target",
"-",
"side",
"generation",
"component",
"independent",
"of",
"the",
"translation",
"task"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
98
],
"text": "controlling",
"tokens": [
"controlling"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
115
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
119,
120,
121,
122,
123
],
"text": "first systematic and controlled study",
"tokens": [
"first",
"systematic",
"and",
"controlled",
"study"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
125,
126,
127,
128,
129
],
"text": "cross - lingual translation difficulties",
"tokens": [
"cross",
"-",
"lingual",
"translation",
"difficulties"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
117
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
131,
132,
133,
134
],
"text": "modern neural translation systems",
"tokens": [
"modern",
"neural",
"translation",
"systems"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
125,
126,
127,
128,
129
],
"text": "cross - lingual translation difficulties",
"tokens": [
"cross",
"-",
"lingual",
"translation",
"difficulties"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
130
],
"text": "using",
"tokens": [
"using"
]
}
}
] |
[
"the",
"performance",
"of",
"neural",
"machine",
"translation",
"systems",
"is",
"commonly",
"evaluated",
"in",
"terms",
"of",
"bleu",
".",
"however",
",",
"due",
"to",
"its",
"reliance",
"on",
"target",
"language",
"properties",
"and",
"generation",
",",
"the",
"bleu",
"metric",
"does",
"not",
"allow",
"an",
"assessment",
"of",
"which",
"translation",
"directions",
"are",
"more",
"difficult",
"to",
"model",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"cross",
"-",
"mutual",
"information",
"(",
"xmi",
")",
":",
"an",
"asymmetric",
"information",
"-",
"theoretic",
"metric",
"of",
"machine",
"translation",
"difficulty",
"that",
"exploits",
"the",
"probabilistic",
"nature",
"of",
"most",
"neural",
"machine",
"translation",
"models",
".",
"xmi",
"allows",
"us",
"to",
"better",
"evaluate",
"the",
"difficulty",
"of",
"translating",
"text",
"into",
"the",
"target",
"language",
"while",
"controlling",
"for",
"the",
"difficulty",
"of",
"the",
"target",
"-",
"side",
"generation",
"component",
"independent",
"of",
"the",
"translation",
"task",
".",
"we",
"then",
"present",
"the",
"first",
"systematic",
"and",
"controlled",
"study",
"of",
"cross",
"-",
"lingual",
"translation",
"difficulties",
"using",
"modern",
"neural",
"translation",
"systems",
".",
"code",
"for",
"replicating",
"our",
"experiments",
"is",
"available",
"online",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"e",
"-",
"bug",
"/",
"nmt",
"-",
"difficulty",
"."
] |
ACL
|
Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models
|
LSTM-based recurrent neural networks are the state-of-the-art for many natural language processing (NLP) tasks. Despite their performance, it is unclear whether, or how, LSTMs learn structural features of natural languages such as subject-verb number agreement in English. Lacking this understanding, the generality of LSTM performance on this task and their suitability for related tasks remains uncertain. Further, errors cannot be properly attributed to a lack of structural capability, training data omissions, or other exceptional faults. We introduce *influence paths*, a causal account of structural properties as carried by paths across gates and neurons of a recurrent neural network. The approach refines the notion of influence (the subject’s grammatical number has influence on the grammatical number of the subsequent verb) into a set of gate or neuron-level paths. The set localizes and segments the concept (e.g., subject-verb agreement), its constituent elements (e.g., the subject), and related or interfering elements (e.g., attractors). We exemplify the methodology on a widely-studied multi-layer LSTM language model, demonstrating its accounting for subject-verb number agreement. The results offer both a finer and a more complete view of an LSTM’s handling of this structural aspect of the English language than prior results based on diagnostic classifiers and ablation.
|
16705d99c77ef2227291f2fce6e3e8b5
| 2,020
|
[
"lstm - based recurrent neural networks are the state - of - the - art for many natural language processing ( nlp ) tasks .",
"despite their performance , it is unclear whether , or how , lstms learn structural features of natural languages such as subject - verb number agreement in english .",
"lacking this understanding , the generality of lstm performance on this task and their suitability for related tasks remains uncertain .",
"further , errors cannot be properly attributed to a lack of structural capability , training data omissions , or other exceptional faults .",
"we introduce * influence paths * , a causal account of structural properties as carried by paths across gates and neurons of a recurrent neural network .",
"the approach refines the notion of influence ( the subject ’ s grammatical number has influence on the grammatical number of the subsequent verb ) into a set of gate or neuron - level paths .",
"the set localizes and segments the concept ( e . g . , subject - verb agreement ) , its constituent elements ( e . g . , the subject ) , and related or interfering elements ( e . g . , attractors ) .",
"we exemplify the methodology on a widely - studied multi - layer lstm language model , demonstrating its accounting for subject - verb number agreement .",
"the results offer both a finer and a more complete view of an lstm ’ s handling of this structural aspect of the english language than prior results based on diagnostic classifiers and ablation ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
17,
18,
19
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8,
9,
10,
11,
12,
13,
14
],
"text": "state - of - the - art",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
38
],
"text": "learn",
"tokens": [
"learn"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
37
],
"text": "lstms",
"tokens": [
"lstms"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
31
],
"text": "unclear",
"tokens": [
"unclear"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
73
],
"text": "uncertain",
"tokens": [
"uncertain"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
59,
60,
61,
62
],
"text": "generality of lstm performance",
"tokens": [
"generality",
"of",
"lstm",
"performance"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
68,
69,
70,
71
],
"text": "suitability for related tasks",
"tokens": [
"suitability",
"for",
"related",
"tasks"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
72
],
"text": "remains",
"tokens": [
"remains"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
78,
79,
80,
81
],
"text": "cannot be properly attributed",
"tokens": [
"cannot",
"be",
"properly",
"attributed"
]
},
{
"argument_type": "Concern",
"nugget_type": "FEA",
"offsets": [
77
],
"text": "errors",
"tokens": [
"errors"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
84,
85,
86,
87,
88,
89,
90,
91,
92
],
"text": "lack of structural capability , training data omissions ,",
"tokens": [
"lack",
"of",
"structural",
"capability",
",",
"training",
"data",
"omissions",
","
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
78,
79,
80,
81
],
"text": "cannot be properly attributed",
"tokens": [
"cannot",
"be",
"properly",
"attributed"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
207
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
210
],
"text": "methodology",
"tokens": [
"methodology"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221
],
"text": "on a widely - studied multi - layer lstm language model",
"tokens": [
"on",
"a",
"widely",
"-",
"studied",
"multi",
"-",
"layer",
"lstm",
"language",
"model"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
208
],
"text": "exemplify",
"tokens": [
"exemplify"
]
}
},
{
"arguments": [
{
"argument_type": "Arg1",
"nugget_type": "FEA",
"offsets": [
234
],
"text": "results",
"tokens": [
"results"
]
},
{
"argument_type": "Arg2",
"nugget_type": "FEA",
"offsets": [
259,
260,
261,
262,
263,
264,
265,
266
],
"text": "prior results based on diagnostic classifiers and ablation",
"tokens": [
"prior",
"results",
"based",
"on",
"diagnostic",
"classifiers",
"and",
"ablation"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
238,
243
],
"text": "finer view",
"tokens": [
"finer",
"view"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
241,
242,
243
],
"text": "more complete view",
"tokens": [
"more",
"complete",
"view"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
235
],
"text": "offer",
"tokens": [
"offer"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "FEA",
"offsets": [
39,
40,
41,
42,
43
],
"text": "structural features of natural languages",
"tokens": [
"structural",
"features",
"of",
"natural",
"languages"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
38
],
"text": "learn",
"tokens": [
"learn"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
98
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
101,
102
],
"text": "influence paths",
"tokens": [
"influence",
"paths"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
99
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
129,
130,
131
],
"text": "notion of influence",
"tokens": [
"notion",
"of",
"influence"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
154,
157,
158,
159
],
"text": "gate - level paths",
"tokens": [
"gate",
"-",
"level",
"paths"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
156,
157,
158,
159
],
"text": "neuron - level paths",
"tokens": [
"neuron",
"-",
"level",
"paths"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
127
],
"text": "refines",
"tokens": [
"refines"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
167
],
"text": "concept",
"tokens": [
"concept"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
181,
182
],
"text": "constituent elements",
"tokens": [
"constituent",
"elements"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
162
],
"text": "set",
"tokens": [
"set"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
194,
197
],
"text": "related elements",
"tokens": [
"related",
"elements"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
196,
197
],
"text": "interfering elements",
"tokens": [
"interfering",
"elements"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
163,
164,
165
],
"text": "localizes and segments",
"tokens": [
"localizes",
"and",
"segments"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
225,
226,
227,
228,
229,
230,
231
],
"text": "accounting for subject - verb number agreement",
"tokens": [
"accounting",
"for",
"subject",
"-",
"verb",
"number",
"agreement"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
223
],
"text": "demonstrating",
"tokens": [
"demonstrating"
]
}
}
] |
[
"lstm",
"-",
"based",
"recurrent",
"neural",
"networks",
"are",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"for",
"many",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"tasks",
".",
"despite",
"their",
"performance",
",",
"it",
"is",
"unclear",
"whether",
",",
"or",
"how",
",",
"lstms",
"learn",
"structural",
"features",
"of",
"natural",
"languages",
"such",
"as",
"subject",
"-",
"verb",
"number",
"agreement",
"in",
"english",
".",
"lacking",
"this",
"understanding",
",",
"the",
"generality",
"of",
"lstm",
"performance",
"on",
"this",
"task",
"and",
"their",
"suitability",
"for",
"related",
"tasks",
"remains",
"uncertain",
".",
"further",
",",
"errors",
"cannot",
"be",
"properly",
"attributed",
"to",
"a",
"lack",
"of",
"structural",
"capability",
",",
"training",
"data",
"omissions",
",",
"or",
"other",
"exceptional",
"faults",
".",
"we",
"introduce",
"*",
"influence",
"paths",
"*",
",",
"a",
"causal",
"account",
"of",
"structural",
"properties",
"as",
"carried",
"by",
"paths",
"across",
"gates",
"and",
"neurons",
"of",
"a",
"recurrent",
"neural",
"network",
".",
"the",
"approach",
"refines",
"the",
"notion",
"of",
"influence",
"(",
"the",
"subject",
"’",
"s",
"grammatical",
"number",
"has",
"influence",
"on",
"the",
"grammatical",
"number",
"of",
"the",
"subsequent",
"verb",
")",
"into",
"a",
"set",
"of",
"gate",
"or",
"neuron",
"-",
"level",
"paths",
".",
"the",
"set",
"localizes",
"and",
"segments",
"the",
"concept",
"(",
"e",
".",
"g",
".",
",",
"subject",
"-",
"verb",
"agreement",
")",
",",
"its",
"constituent",
"elements",
"(",
"e",
".",
"g",
".",
",",
"the",
"subject",
")",
",",
"and",
"related",
"or",
"interfering",
"elements",
"(",
"e",
".",
"g",
".",
",",
"attractors",
")",
".",
"we",
"exemplify",
"the",
"methodology",
"on",
"a",
"widely",
"-",
"studied",
"multi",
"-",
"layer",
"lstm",
"language",
"model",
",",
"demonstrating",
"its",
"accounting",
"for",
"subject",
"-",
"verb",
"number",
"agreement",
".",
"the",
"results",
"offer",
"both",
"a",
"finer",
"and",
"a",
"more",
"complete",
"view",
"of",
"an",
"lstm",
"’",
"s",
"handling",
"of",
"this",
"structural",
"aspect",
"of",
"the",
"english",
"language",
"than",
"prior",
"results",
"based",
"on",
"diagnostic",
"classifiers",
"and",
"ablation",
"."
] |
ACL
|
He said “who’s gonna take care of your children when you are at ACL?”: Reported Sexist Acts are Not Sexist
|
In a context of offensive content mediation on social media now regulated by European laws, it is important not only to be able to automatically detect sexist content but also to identify if a message with a sexist content is really sexist or is a story of sexism experienced by a woman. We propose: (1) a new characterization of sexist content inspired by speech acts theory and discourse analysis studies, (2) the first French dataset annotated for sexism detection, and (3) a set of deep learning experiments trained on top of a combination of several tweet’s vectorial representations (word embeddings, linguistic features, and various generalization strategies). Our results are encouraging and constitute a first step towards offensive content moderation.
|
cfb5b96ab6b77dba61d425558c28d923
| 2,020
|
[
"in a context of offensive content mediation on social media now regulated by european laws , it is important not only to be able to automatically detect sexist content but also to identify if a message with a sexist content is really sexist or is a story of sexism experienced by a woman .",
"we propose : ( 1 ) a new characterization of sexist content inspired by speech acts theory and discourse analysis studies , ( 2 ) the first french dataset annotated for sexism detection , and ( 3 ) a set of deep learning experiments trained on top of a combination of several tweet ’ s vectorial representations ( word embeddings , linguistic features , and various generalization strategies ) .",
"our results are encouraging and constitute a first step towards offensive content moderation ."
] |
[
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
54
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
62,
63,
64,
65
],
"text": "characterization of sexist content",
"tokens": [
"characterization",
"of",
"sexist",
"content"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
81,
82
],
"text": "french dataset",
"tokens": [
"french",
"dataset"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
93,
94,
95,
96,
97
],
"text": "set of deep learning experiments",
"tokens": [
"set",
"of",
"deep",
"learning",
"experiments"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
55
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
125
],
"text": "results",
"tokens": [
"results"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
127
],
"text": "encouraging",
"tokens": [
"encouraging"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
125
],
"text": "results",
"tokens": [
"results"
]
},
{
"argument_type": "Object",
"nugget_type": "STR",
"offsets": [
131,
132
],
"text": "first step",
"tokens": [
"first",
"step"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
134,
135,
136
],
"text": "offensive content moderation",
"tokens": [
"offensive",
"content",
"moderation"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
129
],
"text": "constitute",
"tokens": [
"constitute"
]
}
}
] |
[
"in",
"a",
"context",
"of",
"offensive",
"content",
"mediation",
"on",
"social",
"media",
"now",
"regulated",
"by",
"european",
"laws",
",",
"it",
"is",
"important",
"not",
"only",
"to",
"be",
"able",
"to",
"automatically",
"detect",
"sexist",
"content",
"but",
"also",
"to",
"identify",
"if",
"a",
"message",
"with",
"a",
"sexist",
"content",
"is",
"really",
"sexist",
"or",
"is",
"a",
"story",
"of",
"sexism",
"experienced",
"by",
"a",
"woman",
".",
"we",
"propose",
":",
"(",
"1",
")",
"a",
"new",
"characterization",
"of",
"sexist",
"content",
"inspired",
"by",
"speech",
"acts",
"theory",
"and",
"discourse",
"analysis",
"studies",
",",
"(",
"2",
")",
"the",
"first",
"french",
"dataset",
"annotated",
"for",
"sexism",
"detection",
",",
"and",
"(",
"3",
")",
"a",
"set",
"of",
"deep",
"learning",
"experiments",
"trained",
"on",
"top",
"of",
"a",
"combination",
"of",
"several",
"tweet",
"’",
"s",
"vectorial",
"representations",
"(",
"word",
"embeddings",
",",
"linguistic",
"features",
",",
"and",
"various",
"generalization",
"strategies",
")",
".",
"our",
"results",
"are",
"encouraging",
"and",
"constitute",
"a",
"first",
"step",
"towards",
"offensive",
"content",
"moderation",
"."
] |
ACL
|
Boosting Neural Machine Translation with Similar Translations
|
This paper explores data augmentation methods for training Neural Machine Translation to make use of similar translations, in a comparable way a human translator employs fuzzy matches. In particular, we show how we can simply present the neural model with information of both source and target sides of the fuzzy matches, we also extend the similarity to include semantically related translations retrieved using sentence distributed representations. We show that translations based on fuzzy matching provide the model with “copy” information while translations based on embedding similarities tend to extend the translation “context”. Results indicate that the effect from both similar sentences are adding up to further boost accuracy, combine naturally with model fine-tuning and are providing dynamic adaptation for unseen translation pairs. Tests on multiple data sets and domains show consistent accuracy improvements. To foster research around these techniques, we also release an Open-Source toolkit with efficient and flexible fuzzy-match implementation.
|
18fb4a983d773a8106e8dd9dd29776da
| 2,020
|
[
"this paper explores data augmentation methods for training neural machine translation to make use of similar translations , in a comparable way a human translator employs fuzzy matches .",
"in particular , we show how we can simply present the neural model with information of both source and target sides of the fuzzy matches , we also extend the similarity to include semantically related translations retrieved using sentence distributed representations .",
"we show that translations based on fuzzy matching provide the model with “ copy ” information while translations based on embedding similarities tend to extend the translation “ context ” .",
"results indicate that the effect from both similar sentences are adding up to further boost accuracy , combine naturally with model fine - tuning and are providing dynamic adaptation for unseen translation pairs .",
"tests on multiple data sets and domains show consistent accuracy improvements .",
"to foster research around these techniques , we also release an open - source toolkit with efficient and flexible fuzzy - match implementation ."
] |
[
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
3,
4,
5
],
"text": "data augmentation methods",
"tokens": [
"data",
"augmentation",
"methods"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
7
],
"text": "training",
"tokens": [
"training"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
2
],
"text": "explores",
"tokens": [
"explores"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
32
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
40,
41
],
"text": "neural model",
"tokens": [
"neural",
"model"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
42,
43,
44,
45,
46,
47,
48,
49
],
"text": "with information of both source and target sides",
"tokens": [
"with",
"information",
"of",
"both",
"source",
"and",
"target",
"sides"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
38
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
55
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
62,
63,
64
],
"text": "semantically related translations",
"tokens": [
"semantically",
"related",
"translations"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
66,
67,
68,
69
],
"text": "using sentence distributed representations",
"tokens": [
"using",
"sentence",
"distributed",
"representations"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
61
],
"text": "include",
"tokens": [
"include"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
8,
9,
10
],
"text": "neural machine translation",
"tokens": [
"neural",
"machine",
"translation"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
7
],
"text": "training",
"tokens": [
"training"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
71
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
79
],
"text": "provide",
"tokens": [
"provide"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
93,
94,
95
],
"text": "tend to extend",
"tokens": [
"tend",
"to",
"extend"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
72
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Object",
"nugget_type": "APP",
"offsets": [
81
],
"text": "model",
"tokens": [
"model"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
82,
83,
84,
85,
86
],
"text": "with “ copy ” information",
"tokens": [
"with",
"“",
"copy",
"”",
"information"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
74
],
"text": "translations",
"tokens": [
"translations"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
79
],
"text": "provide",
"tokens": [
"provide"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "TAK",
"offsets": [
88,
89,
90,
91,
92
],
"text": "translations based on embedding similarities",
"tokens": [
"translations",
"based",
"on",
"embedding",
"similarities"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
97,
98,
99,
100
],
"text": "translation “ context ”",
"tokens": [
"translation",
"“",
"context",
"”"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
93,
94,
95
],
"text": "tend to extend",
"tokens": [
"tend",
"to",
"extend"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
112,
113
],
"text": "adding up",
"tokens": [
"adding",
"up"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
103
],
"text": "indicate",
"tokens": [
"indicate"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
106
],
"text": "effect",
"tokens": [
"effect"
]
},
{
"argument_type": "Subject",
"nugget_type": "MOD",
"offsets": [
108,
109,
110
],
"text": "both similar sentences",
"tokens": [
"both",
"similar",
"sentences"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
115,
116,
117
],
"text": "further boost accuracy",
"tokens": [
"further",
"boost",
"accuracy"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
112,
113
],
"text": "adding up",
"tokens": [
"adding",
"up"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
137,
138,
139,
140,
141,
142
],
"text": "on multiple data sets and domains",
"tokens": [
"on",
"multiple",
"data",
"sets",
"and",
"domains"
]
},
{
"argument_type": "Subject",
"nugget_type": "STR",
"offsets": [
144,
145,
146
],
"text": "consistent accuracy improvements",
"tokens": [
"consistent",
"accuracy",
"improvements"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
143
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
149
],
"text": "foster",
"tokens": [
"foster"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
155
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
159,
160,
161,
162
],
"text": "open - source toolkit",
"tokens": [
"open",
"-",
"source",
"toolkit"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
163,
164,
165,
166,
167,
168,
169,
170
],
"text": "with efficient and flexible fuzzy - match implementation",
"tokens": [
"with",
"efficient",
"and",
"flexible",
"fuzzy",
"-",
"match",
"implementation"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
157
],
"text": "release",
"tokens": [
"release"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
150,
151,
152,
153
],
"text": "research around these techniques",
"tokens": [
"research",
"around",
"these",
"techniques"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
149
],
"text": "foster",
"tokens": [
"foster"
]
}
}
] |
[
"this",
"paper",
"explores",
"data",
"augmentation",
"methods",
"for",
"training",
"neural",
"machine",
"translation",
"to",
"make",
"use",
"of",
"similar",
"translations",
",",
"in",
"a",
"comparable",
"way",
"a",
"human",
"translator",
"employs",
"fuzzy",
"matches",
".",
"in",
"particular",
",",
"we",
"show",
"how",
"we",
"can",
"simply",
"present",
"the",
"neural",
"model",
"with",
"information",
"of",
"both",
"source",
"and",
"target",
"sides",
"of",
"the",
"fuzzy",
"matches",
",",
"we",
"also",
"extend",
"the",
"similarity",
"to",
"include",
"semantically",
"related",
"translations",
"retrieved",
"using",
"sentence",
"distributed",
"representations",
".",
"we",
"show",
"that",
"translations",
"based",
"on",
"fuzzy",
"matching",
"provide",
"the",
"model",
"with",
"“",
"copy",
"”",
"information",
"while",
"translations",
"based",
"on",
"embedding",
"similarities",
"tend",
"to",
"extend",
"the",
"translation",
"“",
"context",
"”",
".",
"results",
"indicate",
"that",
"the",
"effect",
"from",
"both",
"similar",
"sentences",
"are",
"adding",
"up",
"to",
"further",
"boost",
"accuracy",
",",
"combine",
"naturally",
"with",
"model",
"fine",
"-",
"tuning",
"and",
"are",
"providing",
"dynamic",
"adaptation",
"for",
"unseen",
"translation",
"pairs",
".",
"tests",
"on",
"multiple",
"data",
"sets",
"and",
"domains",
"show",
"consistent",
"accuracy",
"improvements",
".",
"to",
"foster",
"research",
"around",
"these",
"techniques",
",",
"we",
"also",
"release",
"an",
"open",
"-",
"source",
"toolkit",
"with",
"efficient",
"and",
"flexible",
"fuzzy",
"-",
"match",
"implementation",
"."
] |
ACL
|
Using Automatically Extracted Minimum Spans to Disentangle Coreference Evaluation from Boundary Detection
|
The common practice in coreference resolution is to identify and evaluate the maximum span of mentions. The use of maximum spans tangles coreference evaluation with the challenges of mention boundary detection like prepositional phrase attachment. To address this problem, minimum spans are manually annotated in smaller corpora. However, this additional annotation is costly and therefore, this solution does not scale to large corpora. In this paper, we propose the MINA algorithm for automatically extracting minimum spans to benefit from minimum span evaluation in all corpora. We show that the extracted minimum spans by MINA are consistent with those that are manually annotated by experts. Our experiments show that using minimum spans is in particular important in cross-dataset coreference evaluation, in which detected mention boundaries are noisier due to domain shift. We have integrated MINA into https://github.com/ns-moosavi/coval for reporting standard coreference scores based on both maximum and automatically detected minimum spans.
|
24fd2785f45d9c92068fc83f5b93a7a6
| 2,019
|
[
"the common practice in coreference resolution is to identify and evaluate the maximum span of mentions .",
"the use of maximum spans tangles coreference evaluation with the challenges of mention boundary detection like prepositional phrase attachment .",
"to address this problem , minimum spans are manually annotated in smaller corpora .",
"however , this additional annotation is costly and therefore , this solution does not scale to large corpora .",
"in this paper , we propose the mina algorithm for automatically extracting minimum spans to benefit from minimum span evaluation in all corpora .",
"we show that the extracted minimum spans by mina are consistent with those that are manually annotated by experts .",
"our experiments show that using minimum spans is in particular important in cross - dataset coreference evaluation , in which detected mention boundaries are noisier due to domain shift .",
"we have integrated mina into https : / / github . com / ns - moosavi / coval for reporting standard coreference scores based on both maximum and automatically detected minimum spans ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5
],
"text": "coreference resolution",
"tokens": [
"coreference",
"resolution"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8,
9,
10
],
"text": "identify and evaluate",
"tokens": [
"identify",
"and",
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
20,
21
],
"text": "maximum spans",
"tokens": [
"maximum",
"spans"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
22
],
"text": "tangles",
"tokens": [
"tangles"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
22
],
"text": "tangles",
"tokens": [
"tangles"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
57
],
"text": "costly",
"tokens": [
"costly"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
64,
65,
66,
67,
68
],
"text": "not scale to large corpora",
"tokens": [
"not",
"scale",
"to",
"large",
"corpora"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
57
],
"text": "costly",
"tokens": [
"costly"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
74
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
77,
78
],
"text": "mina algorithm",
"tokens": [
"mina",
"algorithm"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
80,
81
],
"text": "automatically extracting",
"tokens": [
"automatically",
"extracting"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
75
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
82,
83
],
"text": "minimum spans",
"tokens": [
"minimum",
"spans"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
80,
81
],
"text": "automatically extracting",
"tokens": [
"automatically",
"extracting"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
94
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
104
],
"text": "consistent",
"tokens": [
"consistent"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
95
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
104
],
"text": "consistent",
"tokens": [
"consistent"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
98,
99,
100,
101,
102
],
"text": "extracted minimum spans by mina",
"tokens": [
"extracted",
"minimum",
"spans",
"by",
"mina"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
104
],
"text": "consistent",
"tokens": [
"consistent"
]
}
},
{
"arguments": [
{
"argument_type": "Finder",
"nugget_type": "OG",
"offsets": [
114,
115
],
"text": "our experiments",
"tokens": [
"our",
"experiments"
]
},
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
123,
124
],
"text": "particular important",
"tokens": [
"particular",
"important"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
116
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
125,
126,
127,
128,
129,
130
],
"text": "in cross - dataset coreference evaluation",
"tokens": [
"in",
"cross",
"-",
"dataset",
"coreference",
"evaluation"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
118,
119,
120
],
"text": "using minimum spans",
"tokens": [
"using",
"minimum",
"spans"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
123,
124
],
"text": "particular important",
"tokens": [
"particular",
"important"
]
}
}
] |
[
"the",
"common",
"practice",
"in",
"coreference",
"resolution",
"is",
"to",
"identify",
"and",
"evaluate",
"the",
"maximum",
"span",
"of",
"mentions",
".",
"the",
"use",
"of",
"maximum",
"spans",
"tangles",
"coreference",
"evaluation",
"with",
"the",
"challenges",
"of",
"mention",
"boundary",
"detection",
"like",
"prepositional",
"phrase",
"attachment",
".",
"to",
"address",
"this",
"problem",
",",
"minimum",
"spans",
"are",
"manually",
"annotated",
"in",
"smaller",
"corpora",
".",
"however",
",",
"this",
"additional",
"annotation",
"is",
"costly",
"and",
"therefore",
",",
"this",
"solution",
"does",
"not",
"scale",
"to",
"large",
"corpora",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"the",
"mina",
"algorithm",
"for",
"automatically",
"extracting",
"minimum",
"spans",
"to",
"benefit",
"from",
"minimum",
"span",
"evaluation",
"in",
"all",
"corpora",
".",
"we",
"show",
"that",
"the",
"extracted",
"minimum",
"spans",
"by",
"mina",
"are",
"consistent",
"with",
"those",
"that",
"are",
"manually",
"annotated",
"by",
"experts",
".",
"our",
"experiments",
"show",
"that",
"using",
"minimum",
"spans",
"is",
"in",
"particular",
"important",
"in",
"cross",
"-",
"dataset",
"coreference",
"evaluation",
",",
"in",
"which",
"detected",
"mention",
"boundaries",
"are",
"noisier",
"due",
"to",
"domain",
"shift",
".",
"we",
"have",
"integrated",
"mina",
"into",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"ns",
"-",
"moosavi",
"/",
"coval",
"for",
"reporting",
"standard",
"coreference",
"scores",
"based",
"on",
"both",
"maximum",
"and",
"automatically",
"detected",
"minimum",
"spans",
"."
] |
ACL
|
End-to-End Neural Pipeline for Goal-Oriented Dialogue Systems using GPT-2
|
The goal-oriented dialogue system needs to be optimized for tracking the dialogue flow and carrying out an effective conversation under various situations to meet the user goal. The traditional approach to build such a dialogue system is to take a pipelined modular architecture, where its modules are optimized individually. However, such an optimization scheme does not necessarily yield the overall performance improvement of the whole system. On the other hand, end-to-end dialogue systems with monolithic neural architecture are often trained only with input-output utterances, without taking into account the entire annotations available in the corpus. This scheme makes it difficult for goal-oriented dialogues where the system needs to integrate with external systems or to provide interpretable information about why the system generated a particular response. In this paper, we present an end-to-end neural architecture for dialogue systems that addresses both challenges above. In the human evaluation, our dialogue system achieved the success rate of 68.32%, the language understanding score of 4.149, and the response appropriateness score of 4.287, which ranked the system at the top position in the end-to-end multi-domain dialogue system task in the 8th dialogue systems technology challenge (DSTC8).
|
2404839b071bfa1cc48ff2742de35495
| 2,020
|
[
"the goal - oriented dialogue system needs to be optimized for tracking the dialogue flow and carrying out an effective conversation under various situations to meet the user goal .",
"the traditional approach to build such a dialogue system is to take a pipelined modular architecture , where its modules are optimized individually .",
"however , such an optimization scheme does not necessarily yield the overall performance improvement of the whole system .",
"on the other hand , end - to - end dialogue systems with monolithic neural architecture are often trained only with input - output utterances , without taking into account the entire annotations available in the corpus .",
"this scheme makes it difficult for goal - oriented dialogues where the system needs to integrate with external systems or to provide interpretable information about why the system generated a particular response .",
"in this paper , we present an end - to - end neural architecture for dialogue systems that addresses both challenges above .",
"in the human evaluation , our dialogue system achieved the success rate of 68 . 32 % , the language understanding score of 4 . 149 , and the response appropriateness score of 4 . 287 , which ranked the system at the top position in the end - to - end multi - domain dialogue system task in the 8th dialogue systems technology challenge ( dstc8 ) ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2,
3,
4,
5
],
"text": "goal - oriented dialogue system",
"tokens": [
"goal",
"-",
"oriented",
"dialogue",
"system"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
9
],
"text": "optimized",
"tokens": [
"optimized"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
43,
44,
45
],
"text": "pipelined modular architecture",
"tokens": [
"pipelined",
"modular",
"architecture"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
34
],
"text": "build",
"tokens": [
"build"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
31,
32
],
"text": "traditional approach",
"tokens": [
"traditional",
"approach"
]
}
],
"event_type": "RWS",
"trigger": {
"offsets": [
41
],
"text": "take",
"tokens": [
"take"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "APP",
"offsets": [
37,
38
],
"text": "dialogue system",
"tokens": [
"dialogue",
"system"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
34
],
"text": "build",
"tokens": [
"build"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
148
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
151,
152,
153,
154,
155,
156,
157
],
"text": "end - to - end neural architecture",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"neural",
"architecture"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
159,
160
],
"text": "dialogue systems",
"tokens": [
"dialogue",
"systems"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
149
],
"text": "present",
"tokens": [
"present"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
167,
168,
169,
170
],
"text": "in the human evaluation",
"tokens": [
"in",
"the",
"human",
"evaluation"
]
},
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
151,
152,
153,
154,
155,
156,
157
],
"text": "end - to - end neural architecture",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"neural",
"architecture"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
180,
181,
182,
183
],
"text": "68 . 32 %",
"tokens": [
"68",
".",
"32",
"%"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
177,
178
],
"text": "success rate",
"tokens": [
"success",
"rate"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
175
],
"text": "achieved",
"tokens": [
"achieved"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
151,
152,
153,
154,
155,
156,
157
],
"text": "end - to - end neural architecture",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"neural",
"architecture"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
186,
187,
188
],
"text": "language understanding score",
"tokens": [
"language",
"understanding",
"score"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
190,
191,
192
],
"text": "4 . 149",
"tokens": [
"4",
".",
"149"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
167,
168,
169,
170
],
"text": "in the human evaluation",
"tokens": [
"in",
"the",
"human",
"evaluation"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
175
],
"text": "achieved",
"tokens": [
"achieved"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "APP",
"offsets": [
151,
152,
153,
154,
155,
156,
157
],
"text": "end - to - end neural architecture",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"neural",
"architecture"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
167,
168,
169,
170
],
"text": "in the human evaluation",
"tokens": [
"in",
"the",
"human",
"evaluation"
]
},
{
"argument_type": "Object",
"nugget_type": "TAK",
"offsets": [
196,
197,
198
],
"text": "response appropriateness score",
"tokens": [
"response",
"appropriateness",
"score"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
200,
201,
202
],
"text": "4 . 287",
"tokens": [
"4",
".",
"287"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
175
],
"text": "achieved",
"tokens": [
"achieved"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
113,
114,
115
],
"text": "makes it difficult",
"tokens": [
"makes",
"it",
"difficult"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
117,
118,
119,
120
],
"text": "goal - oriented dialogues",
"tokens": [
"goal",
"-",
"oriented",
"dialogues"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
113,
114,
115
],
"text": "makes it difficult",
"tokens": [
"makes",
"it",
"difficult"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88
],
"text": "end - to - end dialogue systems with monolithic neural architecture",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"dialogue",
"systems",
"with",
"monolithic",
"neural",
"architecture"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
92,
93,
94,
95,
96,
97
],
"text": "only with input - output utterances",
"tokens": [
"only",
"with",
"input",
"-",
"output",
"utterances"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"text": "without taking into account the entire annotations available in the corpus",
"tokens": [
"without",
"taking",
"into",
"account",
"the",
"entire",
"annotations",
"available",
"in",
"the",
"corpus"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
91
],
"text": "trained",
"tokens": [
"trained"
]
}
}
] |
[
"the",
"goal",
"-",
"oriented",
"dialogue",
"system",
"needs",
"to",
"be",
"optimized",
"for",
"tracking",
"the",
"dialogue",
"flow",
"and",
"carrying",
"out",
"an",
"effective",
"conversation",
"under",
"various",
"situations",
"to",
"meet",
"the",
"user",
"goal",
".",
"the",
"traditional",
"approach",
"to",
"build",
"such",
"a",
"dialogue",
"system",
"is",
"to",
"take",
"a",
"pipelined",
"modular",
"architecture",
",",
"where",
"its",
"modules",
"are",
"optimized",
"individually",
".",
"however",
",",
"such",
"an",
"optimization",
"scheme",
"does",
"not",
"necessarily",
"yield",
"the",
"overall",
"performance",
"improvement",
"of",
"the",
"whole",
"system",
".",
"on",
"the",
"other",
"hand",
",",
"end",
"-",
"to",
"-",
"end",
"dialogue",
"systems",
"with",
"monolithic",
"neural",
"architecture",
"are",
"often",
"trained",
"only",
"with",
"input",
"-",
"output",
"utterances",
",",
"without",
"taking",
"into",
"account",
"the",
"entire",
"annotations",
"available",
"in",
"the",
"corpus",
".",
"this",
"scheme",
"makes",
"it",
"difficult",
"for",
"goal",
"-",
"oriented",
"dialogues",
"where",
"the",
"system",
"needs",
"to",
"integrate",
"with",
"external",
"systems",
"or",
"to",
"provide",
"interpretable",
"information",
"about",
"why",
"the",
"system",
"generated",
"a",
"particular",
"response",
".",
"in",
"this",
"paper",
",",
"we",
"present",
"an",
"end",
"-",
"to",
"-",
"end",
"neural",
"architecture",
"for",
"dialogue",
"systems",
"that",
"addresses",
"both",
"challenges",
"above",
".",
"in",
"the",
"human",
"evaluation",
",",
"our",
"dialogue",
"system",
"achieved",
"the",
"success",
"rate",
"of",
"68",
".",
"32",
"%",
",",
"the",
"language",
"understanding",
"score",
"of",
"4",
".",
"149",
",",
"and",
"the",
"response",
"appropriateness",
"score",
"of",
"4",
".",
"287",
",",
"which",
"ranked",
"the",
"system",
"at",
"the",
"top",
"position",
"in",
"the",
"end",
"-",
"to",
"-",
"end",
"multi",
"-",
"domain",
"dialogue",
"system",
"task",
"in",
"the",
"8th",
"dialogue",
"systems",
"technology",
"challenge",
"(",
"dstc8",
")",
"."
] |
ACL
|
Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images
|
In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation. Training such dialogue systems generally requires a large-scale dataset consisting of multi-turn dialogues that involve images, but such datasets rarely exist. In response, this paper proposes a 45k multi-modal dialogue dataset created with minimal human intervention. Our method to create such a dataset consists of (1) preparing and pre-processing text dialogue datasets, (2) creating image-mixed dialogues by using a text-to-image replacement technique, and (3) employing a contextual-similarity-based filtering step to ensure the contextual coherence of the dataset. To evaluate the validity of our dataset, we devise a simple retrieval model for dialogue sentence prediction tasks. Automatic metrics and human evaluation results on such tasks show that our dataset can be effectively used as training data for multi-modal dialogue systems which require an understanding of images and text in a context-aware manner. Our dataset and generation code is available at https://github.com/shh1574/multi-modal-dialogue-dataset.
|
9af469cdb232e938e1b8894d6aa00b79
| 2,021
|
[
"in multi - modal dialogue systems , it is important to allow the use of images as part of a multi - turn conversation .",
"training such dialogue systems generally requires a large - scale dataset consisting of multi - turn dialogues that involve images , but such datasets rarely exist .",
"in response , this paper proposes a 45k multi - modal dialogue dataset created with minimal human intervention .",
"our method to create such a dataset consists of ( 1 ) preparing and pre - processing text dialogue datasets , ( 2 ) creating image - mixed dialogues by using a text - to - image replacement technique , and ( 3 ) employing a contextual - similarity - based filtering step to ensure the contextual coherence of the dataset .",
"to evaluate the validity of our dataset , we devise a simple retrieval model for dialogue sentence prediction tasks .",
"automatic metrics and human evaluation results on such tasks show that our dataset can be effectively used as training data for multi - modal dialogue systems which require an understanding of images and text in a context - aware manner .",
"our dataset and generation code is available at https : / / github . com / shh1574 / multi - modal - dialogue - dataset ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2,
3,
4,
5
],
"text": "multi - modal dialogue systems",
"tokens": [
"multi",
"-",
"modal",
"dialogue",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
9
],
"text": "important",
"tokens": [
"important"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
49,
50
],
"text": "rarely exist",
"tokens": [
"rarely",
"exist"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
49,
50
],
"text": "rarely exist",
"tokens": [
"rarely",
"exist"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
138,
139
],
"text": "45k multi - modal dialogue dataset",
"tokens": [
"our",
"dataset"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
57
],
"text": "proposes",
"tokens": [
"proposes"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
134
],
"text": "evaluate",
"tokens": [
"evaluate"
]
},
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
141
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
144,
145,
146
],
"text": "simple retrieval model",
"tokens": [
"simple",
"retrieval",
"model"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
148,
149,
150,
151
],
"text": "dialogue sentence prediction tasks",
"tokens": [
"dialogue",
"sentence",
"prediction",
"tasks"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
142
],
"text": "devise",
"tokens": [
"devise"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
136,
137,
138,
139
],
"text": "validity of our dataset",
"tokens": [
"validity",
"of",
"our",
"dataset"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
134
],
"text": "evaluate",
"tokens": [
"evaluate"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-FAC",
"offsets": [
169
],
"text": "used",
"tokens": [
"used"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
162
],
"text": "show",
"tokens": [
"show"
]
}
},
{
"arguments": [
{
"argument_type": "Subject",
"nugget_type": "DST",
"offsets": [
138,
139
],
"text": "45k multi - modal dialogue dataset",
"tokens": [
"our",
"dataset"
]
},
{
"argument_type": "Extent",
"nugget_type": "DEG",
"offsets": [
168
],
"text": "effectively",
"tokens": [
"effectively"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
171,
172
],
"text": "training data",
"tokens": [
"training",
"data"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
174,
175,
176,
177,
178
],
"text": "multi - modal dialogue systems",
"tokens": [
"multi",
"-",
"modal",
"dialogue",
"systems"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"text": "which require an understanding of images and text in a context - aware manner",
"tokens": [
"which",
"require",
"an",
"understanding",
"of",
"images",
"and",
"text",
"in",
"a",
"context",
"-",
"aware",
"manner"
]
}
],
"event_type": "FAC",
"trigger": {
"offsets": [
169
],
"text": "used",
"tokens": [
"used"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "DST",
"offsets": [
88,
89,
90
],
"text": "text dialogue datasets",
"tokens": [
"text",
"dialogue",
"datasets"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
83,
84,
85,
86,
87
],
"text": "preparing and pre - processing",
"tokens": [
"preparing",
"and",
"pre",
"-",
"processing"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
103,
104,
105,
106,
107,
108,
109
],
"text": "text - to - image replacement technique",
"tokens": [
"text",
"-",
"to",
"-",
"image",
"replacement",
"technique"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
96,
97,
98,
99
],
"text": "image - mixed dialogues",
"tokens": [
"image",
"-",
"mixed",
"dialogues"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
95
],
"text": "creating",
"tokens": [
"creating"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
117,
118,
119,
120,
121,
122,
123
],
"text": "contextual - similarity - based filtering step",
"tokens": [
"contextual",
"-",
"similarity",
"-",
"based",
"filtering",
"step"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
125,
126,
127,
128,
129,
130,
131
],
"text": "ensure the contextual coherence of the dataset",
"tokens": [
"ensure",
"the",
"contextual",
"coherence",
"of",
"the",
"dataset"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
115
],
"text": "employing",
"tokens": [
"employing"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
127,
128,
129,
130,
131
],
"text": "contextual coherence of the dataset",
"tokens": [
"contextual",
"coherence",
"of",
"the",
"dataset"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
125
],
"text": "ensure",
"tokens": [
"ensure"
]
}
}
] |
[
"in",
"multi",
"-",
"modal",
"dialogue",
"systems",
",",
"it",
"is",
"important",
"to",
"allow",
"the",
"use",
"of",
"images",
"as",
"part",
"of",
"a",
"multi",
"-",
"turn",
"conversation",
".",
"training",
"such",
"dialogue",
"systems",
"generally",
"requires",
"a",
"large",
"-",
"scale",
"dataset",
"consisting",
"of",
"multi",
"-",
"turn",
"dialogues",
"that",
"involve",
"images",
",",
"but",
"such",
"datasets",
"rarely",
"exist",
".",
"in",
"response",
",",
"this",
"paper",
"proposes",
"a",
"45k",
"multi",
"-",
"modal",
"dialogue",
"dataset",
"created",
"with",
"minimal",
"human",
"intervention",
".",
"our",
"method",
"to",
"create",
"such",
"a",
"dataset",
"consists",
"of",
"(",
"1",
")",
"preparing",
"and",
"pre",
"-",
"processing",
"text",
"dialogue",
"datasets",
",",
"(",
"2",
")",
"creating",
"image",
"-",
"mixed",
"dialogues",
"by",
"using",
"a",
"text",
"-",
"to",
"-",
"image",
"replacement",
"technique",
",",
"and",
"(",
"3",
")",
"employing",
"a",
"contextual",
"-",
"similarity",
"-",
"based",
"filtering",
"step",
"to",
"ensure",
"the",
"contextual",
"coherence",
"of",
"the",
"dataset",
".",
"to",
"evaluate",
"the",
"validity",
"of",
"our",
"dataset",
",",
"we",
"devise",
"a",
"simple",
"retrieval",
"model",
"for",
"dialogue",
"sentence",
"prediction",
"tasks",
".",
"automatic",
"metrics",
"and",
"human",
"evaluation",
"results",
"on",
"such",
"tasks",
"show",
"that",
"our",
"dataset",
"can",
"be",
"effectively",
"used",
"as",
"training",
"data",
"for",
"multi",
"-",
"modal",
"dialogue",
"systems",
"which",
"require",
"an",
"understanding",
"of",
"images",
"and",
"text",
"in",
"a",
"context",
"-",
"aware",
"manner",
".",
"our",
"dataset",
"and",
"generation",
"code",
"is",
"available",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"shh1574",
"/",
"multi",
"-",
"modal",
"-",
"dialogue",
"-",
"dataset",
"."
] |
ACL
|
GlobalWoZ: Globalizing MultiWoZ to Develop Multilingual Task-Oriented Dialogue Systems
|
Over the last few years, there has been a move towards data curation for multilingual task-oriented dialogue (ToD) systems that can serve people speaking different languages. However, existing multilingual ToD datasets either have a limited coverage of languages due to the high cost of data curation, or ignore the fact that dialogue entities barely exist in countries speaking these languages. To tackle these limitations, we introduce a novel data curation method that generates GlobalWoZ — a large-scale multilingual ToD dataset globalized from an English ToD dataset for three unexplored use cases of multilingual ToD systems. Our method is based on translating dialogue templates and filling them with local entities in the target-language countries. Besides, we extend the coverage of target languages to 20 languages. We will release our dataset and a set of strong baselines to encourage research on multilingual ToD systems for real use cases.
|
16ece471bfb74fda115cb8beec46dd7d
| 2,022
|
[
"over the last few years , there has been a move towards data curation for multilingual task - oriented dialogue ( tod ) systems that can serve people speaking different languages .",
"however , existing multilingual tod datasets either have a limited coverage of languages due to the high cost of data curation , or ignore the fact that dialogue entities barely exist in countries speaking these languages .",
"to tackle these limitations , we introduce a novel data curation method that generates globalwoz — a large - scale multilingual tod dataset globalized from an english tod dataset for three unexplored use cases of multilingual tod systems .",
"our method is based on translating dialogue templates and filling them with local entities in the target - language countries .",
"besides , we extend the coverage of target languages to 20 languages .",
"we will release our dataset and a set of strong baselines to encourage research on multilingual tod systems for real use cases ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
15,
16,
17,
18,
19,
23
],
"text": "multilingual task - oriented dialogue ( tod ) systems",
"tokens": [
"multilingual",
"task",
"-",
"oriented",
"dialogue",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "move",
"tokens": [
"move"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "DST",
"offsets": [
34,
35,
36,
37
],
"text": "existing multilingual tod datasets",
"tokens": [
"existing",
"multilingual",
"tod",
"datasets"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
41,
42,
43,
44
],
"text": "limited coverage of languages",
"tokens": [
"limited",
"coverage",
"of",
"languages"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
48,
49,
50,
51,
52
],
"text": "high cost of data curation",
"tokens": [
"high",
"cost",
"of",
"data",
"curation"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
39
],
"text": "have",
"tokens": [
"have"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "DST",
"offsets": [
34,
35,
36,
37
],
"text": "existing multilingual tod datasets",
"tokens": [
"existing",
"multilingual",
"tod",
"datasets"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
55
],
"text": "ignore",
"tokens": [
"ignore"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
55
],
"text": "ignore",
"tokens": [
"ignore"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
74
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
78,
79,
80
],
"text": "data curation method",
"tokens": [
"data",
"curation",
"method"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
70
],
"text": "tackle",
"tokens": [
"tackle"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
75
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "WEA",
"offsets": [
72
],
"text": "limitations",
"tokens": [
"limitations"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
70
],
"text": "tackle",
"tokens": [
"tackle"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
83
],
"text": "globalwoz",
"tokens": [
"globalwoz"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
99,
100,
101,
102,
103,
104,
105,
106
],
"text": "three unexplored use cases of multilingual tod systems",
"tokens": [
"three",
"unexplored",
"use",
"cases",
"of",
"multilingual",
"tod",
"systems"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
82
],
"text": "generates",
"tokens": [
"generates"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
113,
114,
115
],
"text": "translating dialogue templates",
"tokens": [
"translating",
"dialogue",
"templates"
]
},
{
"argument_type": "TriedComponent",
"nugget_type": "FEA",
"offsets": [
120,
121
],
"text": "local entities",
"tokens": [
"local",
"entities"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
117
],
"text": "filling",
"tokens": [
"filling"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
131
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
134,
135,
136,
137,
138,
139,
140
],
"text": "coverage of target languages to 20 languages",
"tokens": [
"coverage",
"of",
"target",
"languages",
"to",
"20",
"languages"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
132
],
"text": "extend",
"tokens": [
"extend"
]
}
}
] |
[
"over",
"the",
"last",
"few",
"years",
",",
"there",
"has",
"been",
"a",
"move",
"towards",
"data",
"curation",
"for",
"multilingual",
"task",
"-",
"oriented",
"dialogue",
"(",
"tod",
")",
"systems",
"that",
"can",
"serve",
"people",
"speaking",
"different",
"languages",
".",
"however",
",",
"existing",
"multilingual",
"tod",
"datasets",
"either",
"have",
"a",
"limited",
"coverage",
"of",
"languages",
"due",
"to",
"the",
"high",
"cost",
"of",
"data",
"curation",
",",
"or",
"ignore",
"the",
"fact",
"that",
"dialogue",
"entities",
"barely",
"exist",
"in",
"countries",
"speaking",
"these",
"languages",
".",
"to",
"tackle",
"these",
"limitations",
",",
"we",
"introduce",
"a",
"novel",
"data",
"curation",
"method",
"that",
"generates",
"globalwoz",
"—",
"a",
"large",
"-",
"scale",
"multilingual",
"tod",
"dataset",
"globalized",
"from",
"an",
"english",
"tod",
"dataset",
"for",
"three",
"unexplored",
"use",
"cases",
"of",
"multilingual",
"tod",
"systems",
".",
"our",
"method",
"is",
"based",
"on",
"translating",
"dialogue",
"templates",
"and",
"filling",
"them",
"with",
"local",
"entities",
"in",
"the",
"target",
"-",
"language",
"countries",
".",
"besides",
",",
"we",
"extend",
"the",
"coverage",
"of",
"target",
"languages",
"to",
"20",
"languages",
".",
"we",
"will",
"release",
"our",
"dataset",
"and",
"a",
"set",
"of",
"strong",
"baselines",
"to",
"encourage",
"research",
"on",
"multilingual",
"tod",
"systems",
"for",
"real",
"use",
"cases",
"."
] |
ACL
|
QuASE: Question-Answer Driven Sentence Encoding
|
Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)? For example, can we use QAMR (Michael et al., 2017) to improve named entity recognition? We suggest that simply further pre-training BERT is often not the best option, and propose the question-answer driven sentence encoding (QuASE) framework. QuASE learns representations from QA data, using BERT or other state-of-the-art contextual language models. In particular, we observe the need to distinguish between two types of sentence encodings, depending on whether the target task is a single- or multi-sentence input; in both cases, the resulting encoding is shown to be an easy-to-use plugin for many downstream tasks. This work may point out an alternative way to supervise NLP tasks.
|
711ed4b3a433a56a613515d5040038c3
| 2,020
|
[
"question - answering ( qa ) data often encodes essential information in many facets .",
"this paper studies a natural question : can we get supervision from qa data for other tasks ( typically , non - qa ones ) ?",
"for example , can we use qamr ( michael et al . , 2017 ) to improve named entity recognition ?",
"we suggest that simply further pre - training bert is often not the best option , and propose the question - answer driven sentence encoding ( quase ) framework .",
"quase learns representations from qa data , using bert or other state - of - the - art contextual language models .",
"in particular , we observe the need to distinguish between two types of sentence encodings , depending on whether the target task is a single - or multi - sentence input ; in both cases , the resulting encoding is shown to be an easy - to - use plugin for many downstream tasks .",
"this work may point out an alternative way to supervise nlp tasks ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "question - answering",
"tokens": [
"question",
"-",
"answering"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
8
],
"text": "encodes",
"tokens": [
"encodes"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
62
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
81,
82,
83,
84,
85,
86
],
"text": "question - answer driven sentence encoding",
"tokens": [
"question",
"-",
"answer",
"driven",
"sentence",
"encoding"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
79
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
117
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "FEA",
"offsets": [
123,
124,
125,
126,
127,
128
],
"text": "between two types of sentence encodings",
"tokens": [
"between",
"two",
"types",
"of",
"sentence",
"encodings"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
122
],
"text": "distinguish",
"tokens": [
"distinguish"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
175,
176
],
"text": "alternative way",
"tokens": [
"alternative",
"way"
]
},
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
178
],
"text": "supervise",
"tokens": [
"supervise"
]
}
],
"event_type": "WKS",
"trigger": {
"offsets": [
172
],
"text": "point",
"tokens": [
"point"
]
}
},
{
"arguments": [
{
"argument_type": "Aim",
"nugget_type": "TAK",
"offsets": [
179,
180
],
"text": "nlp tasks",
"tokens": [
"nlp",
"tasks"
]
}
],
"event_type": "PUR",
"trigger": {
"offsets": [
178
],
"text": "supervise",
"tokens": [
"supervise"
]
}
}
] |
[
"question",
"-",
"answering",
"(",
"qa",
")",
"data",
"often",
"encodes",
"essential",
"information",
"in",
"many",
"facets",
".",
"this",
"paper",
"studies",
"a",
"natural",
"question",
":",
"can",
"we",
"get",
"supervision",
"from",
"qa",
"data",
"for",
"other",
"tasks",
"(",
"typically",
",",
"non",
"-",
"qa",
"ones",
")",
"?",
"for",
"example",
",",
"can",
"we",
"use",
"qamr",
"(",
"michael",
"et",
"al",
".",
",",
"2017",
")",
"to",
"improve",
"named",
"entity",
"recognition",
"?",
"we",
"suggest",
"that",
"simply",
"further",
"pre",
"-",
"training",
"bert",
"is",
"often",
"not",
"the",
"best",
"option",
",",
"and",
"propose",
"the",
"question",
"-",
"answer",
"driven",
"sentence",
"encoding",
"(",
"quase",
")",
"framework",
".",
"quase",
"learns",
"representations",
"from",
"qa",
"data",
",",
"using",
"bert",
"or",
"other",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"contextual",
"language",
"models",
".",
"in",
"particular",
",",
"we",
"observe",
"the",
"need",
"to",
"distinguish",
"between",
"two",
"types",
"of",
"sentence",
"encodings",
",",
"depending",
"on",
"whether",
"the",
"target",
"task",
"is",
"a",
"single",
"-",
"or",
"multi",
"-",
"sentence",
"input",
";",
"in",
"both",
"cases",
",",
"the",
"resulting",
"encoding",
"is",
"shown",
"to",
"be",
"an",
"easy",
"-",
"to",
"-",
"use",
"plugin",
"for",
"many",
"downstream",
"tasks",
".",
"this",
"work",
"may",
"point",
"out",
"an",
"alternative",
"way",
"to",
"supervise",
"nlp",
"tasks",
"."
] |
ACL
|
Can Unsupervised Knowledge Transfer from Social Discussions Help Argument Mining?
|
Identifying argument components from unstructured texts and predicting the relationships expressed among them are two primary steps of argument mining. The intrinsic complexity of these tasks demands powerful learning models. While pretrained Transformer-based Language Models (LM) have been shown to provide state-of-the-art results over different NLP tasks, the scarcity of manually annotated data and the highly domain-dependent nature of argumentation restrict the capabilities of such models. In this work, we propose a novel transfer learning strategy to overcome these challenges. We utilize argumentation-rich social discussions from the ChangeMyView subreddit as a source of unsupervised, argumentative discourse-aware knowledge by finetuning pretrained LMs on a selectively masked language modeling task. Furthermore, we introduce a novel prompt-based strategy for inter-component relation prediction that compliments our proposed finetuning method while leveraging on the discourse context. Exhaustive experiments show the generalization capability of our method on these two tasks over within-domain as well as out-of-domain datasets, outperforming several existing and employed strong baselines.
|
1c3c37fcf68d673153d9e3568b58b6d9
| 2,022
|
[
"identifying argument components from unstructured texts and predicting the relationships expressed among them are two primary steps of argument mining .",
"the intrinsic complexity of these tasks demands powerful learning models .",
"while pretrained transformer - based language models ( lm ) have been shown to provide state - of - the - art results over different nlp tasks , the scarcity of manually annotated data and the highly domain - dependent nature of argumentation restrict the capabilities of such models .",
"in this work , we propose a novel transfer learning strategy to overcome these challenges .",
"we utilize argumentation - rich social discussions from the changemyview subreddit as a source of unsupervised , argumentative discourse - aware knowledge by finetuning pretrained lms on a selectively masked language modeling task .",
"furthermore , we introduce a novel prompt - based strategy for inter - component relation prediction that compliments our proposed finetuning method while leveraging on the discourse context .",
"exhaustive experiments show the generalization capability of our method on these two tasks over within - domain as well as out - of - domain datasets , outperforming several existing and employed strong baselines ."
] |
[
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "identifying argument components from unstructured texts",
"tokens": [
"identifying",
"argument",
"components",
"from",
"unstructured",
"texts"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
7,
8,
9,
10,
11,
1,
2
],
"text": "predicting the relationships expressed among them",
"tokens": [
"predicting",
"the",
"relationships",
"expressed",
"among",
"argument",
"components"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
16
],
"text": "steps",
"tokens": [
"steps"
]
}
},
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
75
],
"text": "restrict",
"tokens": [
"restrict"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
77,
78,
79,
80
],
"text": "capabilities of such models",
"tokens": [
"capabilities",
"of",
"such",
"models"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
61,
62,
63,
64,
65
],
"text": "scarcity of manually annotated data",
"tokens": [
"scarcity",
"of",
"manually",
"annotated",
"data"
]
}
],
"event_type": "RWF",
"trigger": {
"offsets": [
75
],
"text": "restrict",
"tokens": [
"restrict"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
86
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
90,
91,
92
],
"text": "transfer learning strategy",
"tokens": [
"transfer",
"learning",
"strategy"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
87
],
"text": "propose",
"tokens": [
"propose"
]
}
},
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
111,
112,
113,
114,
115,
116,
117,
118,
119
],
"text": "source of unsupervised , argumentative discourse - aware knowledge",
"tokens": [
"source",
"of",
"unsupervised",
",",
"argumentative",
"discourse",
"-",
"aware",
"knowledge"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "FEA",
"offsets": [
100,
101,
102,
103,
104
],
"text": "argumentation - rich social discussions",
"tokens": [
"argumentation",
"-",
"rich",
"social",
"discussions"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
105,
106,
107,
108
],
"text": "from the changemyview subreddit",
"tokens": [
"from",
"the",
"changemyview",
"subreddit"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
99
],
"text": "utilize",
"tokens": [
"utilize"
]
}
},
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
124,
125,
126,
127,
128,
129,
130
],
"text": "on a selectively masked language modeling task",
"tokens": [
"on",
"a",
"selectively",
"masked",
"language",
"modeling",
"task"
]
},
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
122,
123
],
"text": "pretrained lms",
"tokens": [
"pretrained",
"lms"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
121
],
"text": "finetuning",
"tokens": [
"finetuning"
]
}
},
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
134
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
138,
139,
140,
141
],
"text": "prompt - based strategy",
"tokens": [
"prompt",
"-",
"based",
"strategy"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
143,
144,
145,
146,
147
],
"text": "inter - component relation prediction",
"tokens": [
"inter",
"-",
"component",
"relation",
"prediction"
]
}
],
"event_type": "PRP",
"trigger": {
"offsets": [
135
],
"text": "introduce",
"tokens": [
"introduce"
]
}
},
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "APP",
"offsets": [
151,
152,
153
],
"text": "proposed finetuning method",
"tokens": [
"proposed",
"finetuning",
"method"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
154,
155,
156,
157,
158,
159
],
"text": "while leveraging on the discourse context",
"tokens": [
"while",
"leveraging",
"on",
"the",
"discourse",
"context"
]
}
],
"event_type": "MDS",
"trigger": {
"offsets": [
149
],
"text": "compliments",
"tokens": [
"compliments"
]
}
},
{
"arguments": [
{
"argument_type": "Arg2",
"nugget_type": "APP",
"offsets": [
189,
190,
191,
192,
193,
194
],
"text": "several existing and employed strong baselines",
"tokens": [
"several",
"existing",
"and",
"employed",
"strong",
"baselines"
]
},
{
"argument_type": "Arg1",
"nugget_type": "APP",
"offsets": [
138,
139,
140,
141
],
"text": "prompt - based strategy",
"tokens": [
"prompt",
"-",
"based",
"strategy"
]
},
{
"argument_type": "Result",
"nugget_type": "STR",
"offsets": [
188
],
"text": "outperforming",
"tokens": [
"outperforming"
]
},
{
"argument_type": "Dataset",
"nugget_type": "DST",
"offsets": [
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186
],
"text": "within - domain as well as out - of - domain datasets",
"tokens": [
"within",
"-",
"domain",
"as",
"well",
"as",
"out",
"-",
"of",
"-",
"domain",
"datasets"
]
}
],
"event_type": "CMP",
"trigger": {
"offsets": [
188
],
"text": "outperforming",
"tokens": [
"outperforming"
]
}
},
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "E-CMP",
"offsets": [
188
],
"text": "outperforming",
"tokens": [
"outperforming"
]
}
],
"event_type": "FIN",
"trigger": {
"offsets": [
163
],
"text": "show",
"tokens": [
"show"
]
}
}
] |
[
"identifying",
"argument",
"components",
"from",
"unstructured",
"texts",
"and",
"predicting",
"the",
"relationships",
"expressed",
"among",
"them",
"are",
"two",
"primary",
"steps",
"of",
"argument",
"mining",
".",
"the",
"intrinsic",
"complexity",
"of",
"these",
"tasks",
"demands",
"powerful",
"learning",
"models",
".",
"while",
"pretrained",
"transformer",
"-",
"based",
"language",
"models",
"(",
"lm",
")",
"have",
"been",
"shown",
"to",
"provide",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"over",
"different",
"nlp",
"tasks",
",",
"the",
"scarcity",
"of",
"manually",
"annotated",
"data",
"and",
"the",
"highly",
"domain",
"-",
"dependent",
"nature",
"of",
"argumentation",
"restrict",
"the",
"capabilities",
"of",
"such",
"models",
".",
"in",
"this",
"work",
",",
"we",
"propose",
"a",
"novel",
"transfer",
"learning",
"strategy",
"to",
"overcome",
"these",
"challenges",
".",
"we",
"utilize",
"argumentation",
"-",
"rich",
"social",
"discussions",
"from",
"the",
"changemyview",
"subreddit",
"as",
"a",
"source",
"of",
"unsupervised",
",",
"argumentative",
"discourse",
"-",
"aware",
"knowledge",
"by",
"finetuning",
"pretrained",
"lms",
"on",
"a",
"selectively",
"masked",
"language",
"modeling",
"task",
".",
"furthermore",
",",
"we",
"introduce",
"a",
"novel",
"prompt",
"-",
"based",
"strategy",
"for",
"inter",
"-",
"component",
"relation",
"prediction",
"that",
"compliments",
"our",
"proposed",
"finetuning",
"method",
"while",
"leveraging",
"on",
"the",
"discourse",
"context",
".",
"exhaustive",
"experiments",
"show",
"the",
"generalization",
"capability",
"of",
"our",
"method",
"on",
"these",
"two",
"tasks",
"over",
"within",
"-",
"domain",
"as",
"well",
"as",
"out",
"-",
"of",
"-",
"domain",
"datasets",
",",
"outperforming",
"several",
"existing",
"and",
"employed",
"strong",
"baselines",
"."
] |
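Each record above pairs a flat token list with event annotations whose `offsets` index into that list (trigger and argument `tokens` are the document tokens at those positions, while the `text` field may hold a resolved mention instead). A minimal sketch of a consistency check — the sample record is abbreviated from the multi-modal dialogue row above, and `check_offsets` is a hypothetical helper, not part of any released loader:

```python
# Minimal sketch: verify that trigger/argument offsets index correctly
# into the document token list. The record is truncated from one row
# of the dump; field names follow the JSON structure shown above.
document = ["in", "multi", "-", "modal", "dialogue", "systems", ",",
            "it", "is", "important"]
events = [
    {
        "event_type": "ITT",
        "trigger": {"offsets": [9], "tokens": ["important"]},
        "arguments": [
            {"argument_type": "Target", "nugget_type": "TAK",
             "offsets": [1, 2, 3, 4, 5],
             "tokens": ["multi", "-", "modal", "dialogue", "systems"]},
        ],
    },
]

def check_offsets(document, events):
    """Return True if every trigger's and argument's stored tokens
    equal the document tokens at its recorded offsets."""
    for event in events:
        for span in [event["trigger"]] + event["arguments"]:
            if [document[i] for i in span["offsets"]] != span["tokens"]:
                return False
    return True

print(check_offsets(document, events))  # True for this sample
```

Note the check compares `tokens`, not `text`: rows such as the PRP event for the 45k dataset store a coreference-resolved `text` ("45k multi - modal dialogue dataset") alongside the literal document tokens ("our", "dataset").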