1 introduction: Since its first appearance in (Huang and Chiang, 2005), the Cube Pruning (CP) algorithm has quickly gained popularity in statistical natural language processing. Informally, this algorithm applies to scenarios in which we have the k-best solutions for two input sub-problems, and we need to compute the k-best solutions for the new problem representing the combination of the two sub-problems.
CP has applications in tree-based and phrase-based machine translation (Chiang, 2007; Huang and Chiang, 2007; Pust and Knight, 2009), parsing (Huang and Chiang, 2005), and sentence alignment (Riesa and Marcu, 2010), and more generally in all systems that combine inexact beam decoding with dynamic programming under certain monotonicity conditions on the definition of the scores in the search space.
Standard implementations of CP run in time O(k log(k)), with k being the size of the input/output beams (Huang and Chiang, 2005). Gesmundo and Henderson (2010) propose Faster CP (FCP) which optimizes the algorithm but keeps the O(k log(k)) time complexity. Here, we propose a novel heuristic algorithm for CP running in time O(k) and evaluate its impact on the efficiency and performance of a real-world machine translation system.
2 preliminaries: Let L = 〈x0, . . . , xk−1〉 be a list over R, that is, an ordered sequence of real numbers, possibly with repetitions. We write |L| = k to denote the length of L. We say that L is descending if xi ≥ xj for every i, j with 0 ≤ i < j < k. Let L1 = 〈x0, . . . , xk−1〉 and L2 = 〈y0, . . . , yk′−1〉 be two descending lists over R. We write L1 ⊕ L2 to denote the descending list with elements xi + yj for every i, j with 0 ≤ i < k and 0 ≤ j < k′.
In cube pruning (CP) we are given as input two descending lists L1, L2 over R with |L1| = |L2| = k, and we are asked to compute the descending list consisting of the first k elements of L1 ⊕ L2.
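As a point of reference, the first k elements of L1 ⊕ L2 can be computed with the standard lazy best-first strategy over the implicit k × k grid of sums, using a max-heap, in O(k log(k)) time. A minimal Python sketch (the function name is ours, not from the paper):

```python
import heapq

def cube_pruning(L1, L2, k):
    """First k elements, in descending order, of L1 (+) L2.

    Lazy best-first enumeration over index pairs (i, j): pop the largest
    candidate sum, then push its two grid neighbours. O(k log k) time.
    """
    out = []
    heap = [(-(L1[0] + L2[0]), 0, 0)]  # max-heap via negated sums
    seen = {(0, 0)}
    while heap and len(out) < k:
        neg, i, j = heapq.heappop(heap)
        out.append(-neg)
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(L1) and nj < len(L2) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-(L1[ni] + L2[nj]), ni, nj))
    return out
```

On the running example used later in the paper, L1 = 〈12, 7, 5, 0〉 and L2 = 〈9, 6, 3, 0〉, the first four sums are 21, 18, 16, 15.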
A problem related to CP is the k-way merge problem (Horowitz and Sahni, 1983). Given descending lists Li for every i with 0 ≤ i < k, we write merge_{i=0}^{k−1} Li to denote the “merge” of all the lists Li, that is, the descending list with all elements from the lists Li, including repetitions.
For ∆ ∈ R we define shift(L, ∆) = L ⊕ 〈∆〉. In words, shift(L, ∆) is the descending list whose elements are obtained by “shifting” the elements of L by ∆, preserving the order. Let L1, L2 be descending lists of length k, with L2 = 〈y0, . . . , yk−1〉. Then we can express the output of CP on L1, L2 as the list
merge_{i=0}^{k−1} shift(L1, yi)    (1)
truncated after the first k elements. This shows that the CP problem is a particular instance of the k-way merge problem, in which all input lists are related by k independent shifts.
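Formulation (1) can be checked directly with a generic k-way merge; the sketch below (our naming) uses Python's heapq.merge over the k shifted copies of L1, and does not improve on the bounds discussed next:

```python
import heapq

def cp_via_kway_merge(L1, L2, k):
    """CP as in formulation (1): the k-way merge of the shifted lists
    shift(L1, y_j), one per element y_j of L2, truncated after k elements."""
    shifted = [[x + y for x in L1] for y in L2]       # shift(L1, y_j) per column
    merged = heapq.merge(*shifted, reverse=True)      # descending k-way merge
    return [next(merged) for _ in range(k)]
```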
Computation of the solution of the k-way merge problem takes time O(q log(k)), where q is the size of the output list. In case each input list has length k, this becomes O(k² log(k)), and by restricting the computation to the first k elements, as required by the CP problem, we can further reduce it to O(k log(k)). This is the already known upper bound on the CP problem (Huang and Chiang, 2005; Gesmundo and Henderson, 2010). Unfortunately, there seems to be no way to achieve an asymptotically faster algorithm by exploiting the restriction that the input lists are all related by some shifts. Nonetheless, in the next sections we use the above ideas to develop a heuristic algorithm running in time linear in k.
3 cube pruning with constant slope: Consider lists L1, L2 defined as in section 2. We say that L2 has constant slope if yi−1 − yi = ∆ > 0 for every i with 0 < i < k. Throughout this section we assume that L2 has constant slope, and we develop an (exact) linear time algorithm for solving the CP problem under this assumption.
For each i ≥ 0, let Ii be the left-open interval (x0 − (i + 1) · ∆, x0 − i · ∆] of R. Let also s = ⌊(x0 − xk−1)/∆⌋ + 1. We split L1 into (possibly empty) sublists σi, 0 ≤ i < s, called segments, such that each σi is the descending sublist consisting of all elements from L1 that belong to Ii. Thus, moving down one segment in L1 is the closest equivalent to moving down one element in L2.
Let t = min{k, s}; we define descending lists Mi, 0 ≤ i < t, as follows. We set M0 = shift(σ0, y0), and for 1 ≤ i < t we let
Mi = merge{shift(σi, y0), shift(Mi−1, −∆)}    (2)
We claim that the ordered concatenation of M0, M1, . . . , Mt−1 truncated after the first k elements is exactly the output of CP on input L1, L2.
To prove our claim, it helps to visualize the descending list L1 ⊕ L2 (of size k²) as a k × k matrix L whose j-th column is shift(L1, yj), 0 ≤ j < k. For an interval I = (x, x′], we define shift(I, y) = (x + y, x′ + y]. Similarly to what we have done with L1, we can split each column of L into s segments. For each i, j with 0 ≤ i < s and 0 ≤ j < k, we define the i-th segment of the j-th column, written σi,j, as the descending sublist consisting of all elements of that column that belong to shift(Ii, yj). Then we have σi,j = shift(σi, yj).
For any d with 0 ≤ d < t, consider now all segments σi,j with i + j = d, forming a sub-antidiagonal in L. We observe that these segments contain all and only those elements of L that belong to the interval Id. It is not difficult to show by induction that these elements are exactly the elements that appear in descending order in the list Md defined in (2).
We can then directly use relation (2) to iteratively compute CP on two lists of length k, under our assumption that one of the two lists has constant slope. Using the fact that the merge of two lists as in (2) can be computed in time linear in the size of the output list, it is not difficult to implement the above algorithm to run in time O(k).
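The recurrence in (2) can be transcribed almost literally. The Python sketch below (our naming) builds the segments σi of L1 and concatenates M0, M1, . . . ; it is written for clarity and checks correctness only, omitting the early truncation one would add to actually meet the O(k) bound:

```python
def merge_desc(a, b):
    """Merge two descending lists into one descending list, in linear time."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] >= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def constant_slope_cp(L1, L2, k):
    """Exact CP assuming L2 has constant slope: y[i-1] - y[i] == delta > 0."""
    delta = L2[0] - L2[1]
    x0, y0 = L1[0], L2[0]
    s = int((x0 - L1[-1]) // delta) + 1
    # sigma[i] collects the elements of L1 in the interval (x0-(i+1)*d, x0-i*d]
    sigma = [[] for _ in range(s)]
    for x in L1:
        sigma[int((x0 - x) // delta)].append(x)
    out = [x + y0 for x in sigma[0]]  # M_0 = shift(sigma_0, y_0)
    M = out[:]
    for i in range(1, min(k, s)):
        if len(out) >= k:
            break
        # M_i = merge(shift(sigma_i, y_0), shift(M_{i-1}, -delta)), as in (2)
        M = merge_desc([x + y0 for x in sigma[i]], [m - delta for m in M])
        out.extend(M)
    return out[:k]
```

On L1 = 〈12, 7, 5, 0〉 and L2 = 〈9, 6, 3, 0〉 (constant slope ∆ = 3) this returns 〈21, 18, 16, 15〉, matching the exact CP output.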
4 linear time heuristic solution: In this section we further elaborate on the exact algorithm of section 3 for the constant slope case, and develop a heuristic solution for the general CP problem. Let L1, L2, L and k be defined as in sections 2 and 3. Even though L2 does not have a constant slope, we can still split each column of L into segments, as follows.
Let Ĩi, 0 ≤ i < k − 1, be the left-open interval (x0 + yi+1, x0 + yi] of R. Note that, unlike the case of section 3, the intervals Ĩi are not all of the same size now. Let also Ĩk−1 = [xk−1 + yk−1, x0 + yk−1]. For each i, j with 0 ≤ j < k and 0 ≤ i < k − j, we define segment σ̃i,j as the descending sublist consisting of all elements of the j-th column of L that belong to Ĩi+j. In this way, the j-th column of L is split according to the intervals Ĩj, Ĩj+1, . . . , Ĩk−1, and we have a variable number of segments per column. Note that the segments σ̃i,j with a constant value of i + j contain all and only those elements of L that belong to the left-open interval Ĩi+j.
Similarly to section 3, we define descending lists M̃i, 0 ≤ i < k, by setting M̃0 = σ̃0,0 and, for 1 ≤ i < k, by letting
M̃i = merge{σ̃i,0, path(M̃i−1, L)}    (3)
Note that the function path(M̃i−1, L) should not return shift(M̃i−1, −∆), for some value ∆, as in the case of (2). This is because input list L2 does not have constant slope in general. In an exact algorithm, path(M̃i−1, L) should return the descending list L⋆i−1 = merge_{j=1}^{i} σ̃i−j,j. Unfortunately, we do not know how to compute such an i-way merge without introducing a logarithmic factor.

1: Algorithm 1 (L1, L2) : L̃⋆
2: L̃⋆.insert(L[0, 0]);
3: referColumn ← 0;
4: xfollow ← L[0, 1];
5: xdeviate ← L[1, 0];
6: C ← CircularList([0, 1]);
7: C-iterator ← C.begin();
8: while |L̃⋆| < k do
9:   if xfollow > xdeviate then
10:    L̃⋆.insert(xfollow);
11:    if C-iterator.current() = [0, 1] then
12:      referColumn++;
13:    [i, j] ← C-iterator.next();
14:    xfollow ← L[i, referColumn + j];
15:  else
16:    L̃⋆.insert(xdeviate);
17:    i ← xdeviate.row();
18:    C-iterator.insert([i, −referColumn]);
19:    xdeviate ← L[i + 1, 0];
Our solution is to define path(M̃i−1, L) in such a way that it computes a list L̃i−1 which is a permutation of the correct solution L⋆i−1. To do this, we consider the “relative” path starting at x0 + yi−1 that we need to follow in L in order to collect all the elements of M̃i−1 in the given order. We then apply the same path starting at x0 + yi and return the list of collected elements. Finally, we compute the output list L̃⋆ as the concatenation of all lists M̃i, up to the first k elements.
It is not difficult to see that when L2 has constant slope we have M̃i = Mi for all i with 0 ≤ i < k, and the list L̃⋆ is the exact solution to the CP problem. When L2 does not have a constant slope, the list L̃⋆ might depart from the exact solution in two respects: it might not be a descending list, because of local variations in the ordering of the elements; and it might not be a permutation of the exact solution, because of local variations at the end of the list. In the next section we evaluate the impact that our heuristic solution has on the performance of a real-world machine translation system.

Figure 1: A running example for Algorithm 1.
Algorithm 1 implements the idea presented in (3). The algorithm takes as input two descending lists L1, L2 of length k and outputs the list L̃⋆ which approximates the desired solution. Element L[i, j] denotes the combined value xi + yj, and is always computed on demand.
We encode a relative path (mentioned above) as a sequence of elements, called displacements, each of the form [i, δ]. Here i is the index of the next row, and δ represents the relative displacement needed to reach the next column, to be summed to a variable called referColumn denoting the index of the column of the first element of the path. The reason why only the second coordinate is a relative value is that we shift paths only horizontally (row indices are preserved). The relative path is stored in a circular list C, with displacement [0, 1] marking the starting point (paths are always shifted one element to the right). When merging the list obtained through the path for M̃i−1 with segment σ̃i,0, as specified in (3), we update C accordingly, so that the new relative path can be used at the next round for M̃i. The merge operator is implemented by the while cycle at lines 8 to 19 of Algorithm 1. The if statement at line 9 tests whether the next step should follow the relative path for M̃i−1 stored in C (lines 10 to 14) or else depart by visiting an element from σ̃i,0 in the first column of L (lines 16 to 19). In the latter case, we update C with the new displacement (line 18), where the function insert() inserts a new element before the one currently pointed to. The function next() at line 13 moves the iterator to the next element and then returns its value.

Figure 2: Search-score loss relative to standard CP. (Plot: score loss (%) against beam size, 1 to 1000; curves for the Baseline, LCP, and FCP score loss over CP.)
A running example of Algorithm 1 is reported in Figure 1. The input lists are L1 = 〈12, 7, 5, 0〉 and L2 = 〈9, 6, 3, 0〉. Each picture in the sequence represents the state of the algorithm when the test at line 9 is executed. The value in the shaded cell in the first column is xdeviate, while the value in the other shaded cell is xfollow.
5 experiments: We implement Linear CP (LCP) on top of cdec (Dyer et al., 2010), a widely-used hierarchical MT system that includes implementations of the standard CP and FCP algorithms. The experiments were executed on the NIST 2003 Chinese-English parallel corpus. The training corpus contains 239k sentence pairs. A binary translation grammar was extracted using a suffix array rule extractor (Lopez, 2007). The model was tuned using MERT (Och, 2003). The algorithms are compared on the NIST-03 test set, which contains 919 sentence pairs. The features used are basic lexical features, word penalty and a 3-gram Language Model (Heafield, 2011).
Since we compare decoding algorithms on the same search space, the accuracy comparison is done in terms of search score. For each algorithm we compute the average score of the best translation found for the test sentences. In Figure 2 we plot the score loss relative to the standard CP average score. Note that the FCP loss is always < 3%, and the LCP loss is always < 7%. The dotted line plots the loss of a baseline linear time heuristic algorithm which assumes that both input lists have constant slope, and that scans L along parallel lines whose slope is the ratio of the average slopes of the two input lists. The baseline greatly deteriorates the accuracy: this shows that finding a reasonable linear time heuristic algorithm is not trivial. We can assume a bounded loss in accuracy, because for larger beam sizes all the algorithms tend to converge to exhaustive search.

Figure 3: Linear CP relative speed gain. (Plot: speed gain (%) against beam size, 1 to 1000; curves for the LCP speed gain over CP and over FCP.)
We found that these differences in search score resulted in no significant variations in BLEU score (e.g., with k = 30, CP reaches 32.2 while LCP reaches 32.3).
The speed comparison is done in terms of algorithm run-time. Figure 3 plots the relative speed gain of LCP over standard CP and over FCP. Given the log scale used for the beam size k, the linear shape of the speed gain over FCP (and CP) in Figure 3 empirically confirms that LCP has a log(k) asymptotic advantage over FCP and CP.
In addition to Chinese-English, we ran experiments on translating English to French (on the Europarl corpus (Koehn, 2005)), and found that the LCP score loss relative to CP is < 9%, while the relative speed advantage of LCP over CP increases on average by 11.4% every time the beam size is multiplied by 10 (e.g., with k = 1000 the speed advantage is 34.3%). These results confirm the bounded accuracy loss and the log(k) speed advantage of LCP.
6 conclusion: We propose a novel heuristic algorithm for Cube Pruning running in time linear in the beam size. Empirically, we show a gain in the running time of a standard machine translation system, at a small loss in accuracy.
Authors: Andrea Gesmundo, Giorgio Satta, James Henderson
SP:314f6dada911571f98ada4fc471cd0d2da046314
References:
- David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228.
- Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Hendra Setiawan, Ferhan Ture, Vladimir Eidelman, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models.
- Andrea Gesmundo and James Henderson. 2010. Faster Cube Pruning. In IWSLT '10: Proceedings of the 7th International Workshop on Spoken Language Translation, Paris, France.
- Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries.
- E. Horowitz and S. Sahni. 1983. Fundamentals of Data Structures. Computer Software Engineering Series.
- Liang Huang and David Chiang. 2005. Better k-best parsing. In IWPT '05: Proceedings of the 9th International Workshop on Parsing Technology, Vancouver, British Columbia, Canada.
- Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In ACL '07: Proceedings of the 45th Conference of the Association for Computational Linguistics, Prague, Czech Republic.
- Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation.
- Adam Lopez. 2007. Hierarchical phrase-based translation.
- Franz Josef Och. 2003. Minimum error rate training.
- Jason Riesa and Daniel Marcu. 2010. Hierarchical search for word alignment.
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null |
1 introduction :Since its first appearance in (Huang and Chiang, 2005), the Cube Pruning (CP) algorithm has quickly gained popularity in statistical natural language processing. Informally, this algorithm applies to scenarios in which we have thek-best solutions for two input sub-problems, and we need to compute thekbest solutions for the new problem representing the combination of the two sub-problems.
CP has applications in tree and phrase based machine translation (Chiang, 2007; Huang and Chiang, 2007; Pust and Knight, 2009), parsing (Huang and Chiang, 2005), sentence alignment (Riesa and Marcu, 2010), and in general in all systems combining inexact beam decoding with dynamic programming under certain monotonic conditions on the definition of the scores in the search space.
Standard implementations of CP run in time O(k log(k)), with k being the size of the input/output beams (Huang and Chiang, 2005). Gesmundo and Henderson (2010) propose Faster CP (FCP) which optimizes the algorithm but keeps the O(k log(k)) time complexity. Here, we propose a novel heuristic algorithm for CP running in time O(k) and evaluate its impact on the efficiency and performance of a real-world machine translation system. 2 preliminaries :Let L = 〈x0, . . . , xk−1〉 be a list overR, that is, an ordered sequence of real numbers, possibly with repetitions. We write|L| = k to denote the length of L. We say thatL is descending if xi ≥ xj for every i, j with 0 ≤ i < j < k. Let L1 = 〈x0, . . . , xk−1〉 andL2 = 〈y0, . . . , yk′−1〉 be two descending lists overR. We writeL1 ⊕ L2 to denote the descending list with elementsxi +yj for everyi, j with 0 ≤ i < k and0 ≤ j < k′.
In cube pruning (CP) we are given as input two descending listsL1, L2 overR with |L1| = |L2| = k, and we are asked to compute the descending list consisting of the firstk elements ofL1 ⊕L2.
A problem related to CP is thek-way merge problem (Horowitz and Sahni, 1983). Given descending listsLi for every i with 0 ≤ i < k, we write mergek−1i=0 Li to denote the “merge” of all the listsLi, that is, the descending list with all elements from the listsLi, including repetitions.
For∆ ∈ R we defineshift(L,∆) = L ⊕ 〈∆〉. In words,shift(L,∆) is the descending list whose elements are obtained by “shifting” the elements ofL by ∆, preserving the order. LetL1,L2 be descending lists of lengthk, with L2 = 〈y0, . . . , yk−1〉. Then we can express the output of CP onL1,L2 as the list
mergek−1i=0 shift(L1, yi) (1)
truncated after the firstk elements. This shows that the CP problem is a particular instance of thek-way merge problem, in which all input lists are related by k independent shifts.
296
Computation of the solution of thek-way merge problem takes timeO(q log(k)), where q is the size of the output list. In case each input list has lengthk this becomesO(k2 log(k)), and by restricting the computation to the firstk elements, as required by the CP problem, we can further reduce to O(k log(k)). This is the already known upper bound on the CP problem (Huang and Chiang, 2005; Gesmundo and Henderson, 2010). Unfortunately, there seems to be no way to achieve an asymptotically faster algorithm by exploiting the restriction that the input lists are all related by some shifts. Nonetheless, in the next sections we use the above ideas to develop a heuristic algorithm running in time linear in k. 3 cube pruning with constant slope :Consider listsL1,L2 defined as in section 2. We say thatL2 hasconstant slope if yi−1− yi = ∆ > 0 for everyi with 0 < i < k. Throughout this section we assume thatL2 has constant slope, and we develop an (exact) linear time algorithm for solving the CP problem under this assumption.
For eachi ≥ 0, let Ii be the left-open interval (x0 − (i + 1) · ∆, x0 − i · ∆] of R. Let alsos = ⌊(x0 − xk−1)/∆⌋ + 1. We splitL1 into (possibly empty) sublistsσi, 0 ≤ i < s, calledsegments, such that eachσi is the descending sublist consisting of all elements fromL1 that belong toIi. Thus, moving down one segment inL1 is the closest equivalent to moving down one element inL2.
Let t = min{k, s}; we define descending lists Mi, 0 ≤ i < t, as follows. We setM0 = shift(σ0, y0), and for1 ≤ i < t we let
Mi = merge{shift(σi, y0), shift(Mi−1,−∆)} (2)
We claim that the ordered concatenation ofM0, M1, . . . , Mt−1 truncated after the firstk elements is exactly the output of CP on inputL1,L2.
To prove our claim, it helps to visualize the descending listL1 ⊕ L2 (of sizek2) as ak × k matrix L whosej-th column isshift(L1, yj), 0 ≤ j < k. For an intervalI = (x, x′], we defineshift(I, y) = (x+ y, x′ + y]. Similarly to what we have done with L1, we can split each column ofL into s segments. For eachi, j with 0 ≤ i < s and0 ≤ j < k, we define thei-th segment of thej-th column, writtenσi,j,
as the descending sublist consisting of all elements of that column that belong toshift(Ii, yj). Then we haveσi,j = shift(σi, yj).
For any d with 0 ≤ d < t, consider now all segmentsσi,j with i + j = d, forming a subantidiagonal inL. We observe that these segments containall and only those elements ofL that belong to the intervalId. It is not difficult to show by induction that these elements are exactly the elements that appear in descending order in the listMi defined in (2).
We can then directly use relation (2) to iteratively c mpute CP on two lists of lengthk, under our assumption that one of the two lists has constant slope. Using the fact that the merge of two lists as in (2) can be computed in time linear in the size of the output list, it is not difficult to implement the above algorithm to run in timeO(k). 4 linear time heuristic solution :In this section we further elaborate on the exact algorithm of section 3 for the constant slope case, and develop a heuristic solution for the general CP problem. LetL1,L2, L andk be defined as in sections 2 and 3. Despite the fact thatL2 does not have a constant slope, we can still split each column ofL into segments, as follows.
Let Ĩi, 0 ≤ i < k − 1, be the left-open interval (x0 + yi+1, x0 + yi] of R. Note that, unlike the case of section 3, intervals̃Ii’s are not all of the same size now. Let alsoĨk−1 = [xk−1 + yk−1, x0 + yk−1]. For eachi, j with 0 ≤ j < k and0 ≤ i < k − j, we define segment̃σi,j as the descending sublist consisting of all elements of thej-th column ofL that belong toĨi+j. In this way, thej-th column of L is split into segments̃Ij , Ĩj+1, . . . , Ĩk−1, and we have a variable number of segments per column. Note that segments̃σi,j with a constant value ofi+j containall and only those elements ofL that belong to the left-open interval̃Ii+j .
Similarly to section 3, we define descending lists M̃i, 0 ≤ i < k, by settingM̃0 = σ̃0,0 and, for 1 ≤ i < k, by letting
M̃i = merge{σ̃i,0 , path(M̃i−1, L)} (3)
Note that the functionpath(M̃i−1, L) should not return shift(M̃i−1,−∆), for some value∆, as in the
1: Algorithm 1 (L1, L2) : L̃⋆ 2: L̃⋆.insert(L[0, 0]); 3: referColumn← 0; 4: xfollow ← L[0, 1]; 5: xdeviate ← L[1, 0]; 6: C ← CircularList([0, 1]); 7: C-iterator← C.begin(); 8: while |L̃⋆| < k do 9: if xfollow > xdeviate then
10: L̃⋆.insert(xfollow ); 11: if C-iterator.current()=[0, 1] then 12: referColumn++; 13: [i, j]← C-iterator.next(); 14: xfollow ← L[i,referColumn+j]; 15: else 16: L̃⋆.insert(xdeviate ); 17: i← xdeviate .row(); 18: C-iterator.insert([i,−referColumn]); 19: xdeviate ← L[i + 1, 0];
case of (2). This is because input listL2 does not have constant slope in general. In an exact algorithm, path(M̃i−1, L) should return the descending list L⋆i−1 = merge i j=1 σ̃i−j,j: Unfortunately, we do not know how to compute such ai-way merge without introducing a logarithmic factor.
Our solution is to define path(M̃i−1, L) in such a way that it computes a list L̃i−1 which is a permutation of the correct solution L⋆i−1. To do this, we consider the “relative” path starting at x0 + yi−1 that we need to follow in L in order to collect all the elements of M̃i−1 in the given order. We then apply such a path starting at x0 + yi and return the list of collected elements. Finally, we compute the output list L̃⋆ as the concatenation of all lists M̃i up to the first k elements.

It is not difficult to see that when L2 has constant slope we have M̃i = Mi for all i with 0 ≤ i < k, and list L̃⋆ is the exact solution to the CP problem. When L2 does not have a constant slope, list L̃⋆ might depart from the exact solution in two respects: it might not be a descending list, because of local variations in the ordering of the elements; and it might not be a permutation of the exact solution, because of local variations at the end of the list. In the next section we evaluate the impact that

Figure 1: A running example for Algorithm 1.

our heuristic solution has on the performance of a real-world machine translation system.
Algorithm 1 implements the idea presented in (3). The algorithm takes as input two descending lists L1, L2 of length k and outputs the list L̃⋆ which approximates the desired solution. Element L[i, j] denotes the combined value xi + yj, and is always computed on demand.

We encode a relative path (mentioned above) as a sequence of elements, called displacements, each of the form [i, δ]. Here i is the index of the next row, and δ represents the relative displacement needed to reach the next column, to be summed to a variable called referColumn denoting the index of the column of the first element of the path. The reason why only the second coordinate is a relative value is that we shift paths only horizontally (row indices are preserved). The relative path is stored in a circular list C, with displacement [0, 1] marking the starting point (paths are always shifted one element to the right). When merging the list obtained through the path for M̃i−1 with segment σ̃i,0, as specified in (3), we update C accordingly, so that the new relative path can be used at the next round for M̃i. The merge operator is implemented by the while cycle at lines 8 to 19 of Algorithm 1. The if statement at line 9 tests whether the next step should follow the relative path for M̃i−1 stored in C (lines 10 to 14) or
[Figure 2 plot: score loss (%) on the y-axis (−5 to 45) against beam size on a log-scale x-axis (1 to 1000); curves: Baseline score loss over CP, LCP score loss over CP, FCP score loss over CP.]
Figure 2: Search-score loss relative to standard CP.
else depart visiting an element from σ̃i,0 in the first column of L (lines 16 to 19). In the latter case, we update C with the new displacement (line 18), where the function insert() inserts a new element before the one currently pointed to. The function next() at line 13 moves the iterator to the next element and then returns its value.
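The merge loop just described can be sketched in Python as follows (a hypothetical reimplementation for illustration: the circular list is a plain Python list with an index, names follow the pseudocode, and bounds checks are omitted, so it assumes k is small enough that no index leaves the grid):

```python
def linear_cp(L1, L2, k):
    """Sketch of Algorithm 1: approximate top-k list of combined values
    L[i, j] = L1[i] + L2[j] for descending input lists L1, L2."""
    L = lambda i, j: L1[i] + L2[j]            # combined value, computed on demand
    out = [L(0, 0)]                           # line 2
    refer_column = 0                          # line 3
    x_follow = L(0, 1)                        # line 4
    x_deviate, deviate_row = L(1, 0), 1       # line 5 (also track the row of x_deviate)
    C, pos = [[0, 1]], 0                      # lines 6-7: circular list plus iterator
    while len(out) < k:                       # line 8
        if x_follow > x_deviate:              # line 9: follow the stored relative path
            out.append(x_follow)              # line 10
            if C[pos] == [0, 1]:              # line 11: starting marker reached
                refer_column += 1             # line 12
            pos = (pos + 1) % len(C)          # line 13: next() moves, then ...
            i, j = C[pos]                     # ... returns the displacement
            x_follow = L(i, refer_column + j) # line 14
        else:                                 # line 15: deviate into the first column
            out.append(x_deviate)             # line 16
            i = deviate_row                   # line 17
            C.insert(pos, [i, -refer_column]) # line 18: insert before current element
            pos += 1                          # keep the iterator on the same element
            deviate_row += 1
            x_deviate = L(deviate_row, 0)     # line 19
    return out
```

On the running example of Figure 1, with L1 = 〈12, 7, 5, 0〉 and L2 = 〈9, 6, 3, 0〉, the sketch returns 〈21, 18, 16, 15〉 for k = 4, which here coincides with the exact top-4 combined values.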
A running example of Algorithm 1 is reported in Figure 1. The input lists are L1 = 〈12, 7, 5, 0〉 and L2 = 〈9, 6, 3, 0〉. Each picture in the sequence represents the state of the algorithm when the test at line 9 is executed. The value in the shaded cell in the first column is xdeviate, while the value in the other shaded cell is xfollow.

5 experiments : We implement Linear CP (LCP) on top of cdec (Dyer et al., 2010), a widely-used hierarchical MT system that includes implementations of the standard CP and FCP algorithms. The experiments were executed on the NIST 2003 Chinese-English parallel corpus. The training corpus contains 239k sentence pairs. A binary translation grammar was extracted using a suffix array rule extractor (Lopez, 2007). The model was tuned using MERT (Och, 2003). The algorithms are compared on the NIST-03 test set, which contains 919 sentence pairs. The features used are basic lexical features, a word penalty and a 3-gram language model (Heafield, 2011).
Since we compare decoding algorithms on the same search space, the accuracy comparison is done in terms of search score. For each algorithm we
[Figure 3 plot: speed gain (%) on the y-axis (0 to 25) against beam size on a log-scale x-axis (1 to 1000); curves: LCP speed gain over CP, LCP speed gain over FCP.]
Figure 3: Linear CP relative speed gain.
compute the average score of the best translation found for the test sentences. In Figure 2 we plot the score loss relative to the standard CP average score. Note that the FCP loss is always < 3%, and the LCP loss is always < 7%. The dotted line plots the loss of a baseline linear-time heuristic algorithm which assumes that both input lists have constant slope, and which scans L along parallel lines whose slope is the ratio of the average slopes of the two input lists. The baseline greatly deteriorates the accuracy: this shows that finding a reasonable linear-time heuristic algorithm is not trivial. We can expect a bounded loss in accuracy, because for larger beam sizes all the algorithms tend to converge to exhaustive search.
We found that these differences in search score resulted in no significant variations in BLEU score (e.g., with k = 30, CP reaches 32.2 while LCP reaches 32.3).
The speed comparison is done in terms of algorithm run-time. Figure 3 plots the relative speed gain of LCP over standard CP and over FCP. Given the log-scale used for the beam size k, the linear shape of the speed gain over FCP (and CP) in Figure 3 empirically confirms that LCP has a log(k) asymptotic advantage over FCP and CP.
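This reading of Figure 3 can be checked against an idealized cost model (a sketch with arbitrary constants, not measured run-times): if FCP costs c·k·log k and LCP costs c′·k, the relative gain grows by the same additive amount each time k is multiplied by 10, i.e., it is linear in log k:

```python
import math

def relative_gain(k, c_fcp=1.0, c_lcp=4.0):
    """Idealized run-times t_fcp = c_fcp * k * log2(k) and t_lcp = c_lcp * k;
    the relative gain t_fcp / t_lcp - 1 equals (c_fcp / c_lcp) * log2(k) - 1."""
    return (c_fcp * k * math.log2(k)) / (c_lcp * k) - 1

# multiplying k by 10 always adds the same amount to the gain,
# which on a log-scale x-axis plots as a straight line
gains = [relative_gain(k) for k in (10, 100, 1000)]
steps = [gains[1] - gains[0], gains[2] - gains[1]]
```

The constants c_fcp and c_lcp are invented for illustration; only the shape of the curve, not its magnitude, is the point.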
In addition to Chinese-English, we ran experiments on translating English to French (from the Europarl corpus (Koehn, 2005)), and find that the LCP score loss relative to CP is < 9%, while the relative speed advantage of LCP over CP increases on average by 11.4% every time the beam size is multiplied by 10 (e.g., with k = 1000 the speed advantage is 34.3%). These results confirm the bounded accuracy loss and log(k) speed advantage of LCP.

6 conclusion : We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy.

Andrea Gesmundo, Giorgio Satta, James Henderson

references :
David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228.
Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Hendra Setiawan, Ferhan Ture, Vladimir Eidelman, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models.
Andrea Gesmundo and James Henderson. 2010. Faster Cube Pruning. In IWSLT '10: Proceedings of the 7th International Workshop on Spoken Language Translation, Paris, France.
Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries.
E. Horowitz and S. Sahni. 1983. Fundamentals of Data Structures. Computer Software Engineering Series.
Liang Huang and David Chiang. 2005. Better k-best parsing. In IWPT '05: Proceedings of the 9th International Workshop on Parsing Technology, Vancouver, British Columbia, Canada.
Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In ACL '07: Proceedings of the 45th Conference of the Association for Computational Linguistics, Prague, Czech Republic.
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation.
Adam Lopez. 2007. Hierarchical phrase-based translation.
Franz Josef Och. 2003. Minimum error rate training.
Jason Riesa and Daniel Marcu. 2010. Hierarchical search for word alignment.
1 introduction : Speakers present already known and yet to be established information according to principles referred to as information structure (Prince, 1981; Lambrecht, 1994; Kruijff-Korbayová and Steedman, 2003, inter alia). While information structure affects all kinds of constituents in a sentence, we here adopt the more restricted notion of information status, which concerns only discourse entities realized as noun phrases, i.e., mentions.1 Information status (IS henceforth) describes the degree to which a discourse entity is available to the hearer with regard to the speaker's assumptions about the hearer's knowledge and beliefs (Nissim et al., 2004). Old mentions are known to the hearer and have been referred to previously. Mediated mentions have not been mentioned before but are also not autonomous, i.e., they can only be correctly interpreted by reference to another mention or to prior world knowledge. All other mentions are new.

1 Since not all noun phrases are referential, we call noun phrases which carry information status mentions.
IS can be beneficial for a number of NLP tasks, though the results have been mixed. Nenkova et al. (2007) used IS as a feature for generating pitch accent in conversational speech. As IS is restricted to noun phrases, while pitch accent can be assigned to any word in an utterance, the experiments were not conclusive. For determining constituent order of German sentences, Cahill and Riester (2009) incorporate features modeling IS to good effect. Rahman and Ng (2011) showed that IS is a useful feature for coreference resolution.
Previous work on learning IS (Nissim, 2006; Rahman and Ng, 2011) is restricted in several ways. It deals with conversational dialogue, in particular with the corpus annotated by Nissim et al. (2004). However, many applications that can profit from IS concentrate on written texts, such as summarization. For example, Siddharthan et al. (2011) show that solving the IS subproblem of whether a person proper name is already known to the reader improves automatic summarization of news. Therefore, we here model IS in written text, creating a new dataset which adds an IS layer to the already existing comprehensive annotation in the OntoNotes corpus (Weischedel et al., 2011). We also report the first results on fine-grained IS classification by modelling further distinctions within the category of mediated mentions, such as comparative and bridging anaphora (see Examples 1 and 2, respectively).2 Fine-grained IS is a prerequisite to full bridging/comparative anaphora resolution, and therefore necessary to fill gaps in entity grids (Barzilay and Lapata, 2008) based on coreference only. Thus, Examples 1 and 2 do not exhibit any coreferential entity coherence, but coherence can be established when the comparative anaphor others is resolved to others than freeway survivor Buck Helm, and the bridging anaphor the streets is resolved to the streets of Oranjemund, respectively.
(1) the condition of freeway survivor Buck Helm . . . , improved, hospital officials said. Rescue crews, however, gave up hope that others would be found.

(2) Oranjemund, the mine headquarters, is a lonely corporate oasis of 9,000 residents. Jackals roam the streets at night . . .
We approach the challenge of modeling IS via collective classification, using several novel linguistically motivated features. We reimplement Nissim's (2006) and Rahman and Ng's (2011) approaches as baselines and show that our approach outperforms these by a large margin for both coarse- and fine-grained IS classification.
2 related work : IS annotation schemes and corpora. We enhance the approach in Nissim et al. (2004) in two major ways (see also Section 3.1). First, comparative anaphora are not specifically handled in Nissim et al. (2004) (and follow-on work such as Ritz et al. (2008) and Riester et al. (2010)), although some of them might be included in their respective bridging subcategories. Second, we apply the annotation scheme reliably to a new genre, namely news. This is a non-trivial extension: Ritz et al. (2008) applied a variation of the Nissim et al. (2004) scheme to a small set of 220 NPs in a German news/commentary corpus but found that reliability then dropped significantly to the range of κ = 0.55 to 0.60. They attributed this to the higher syntactic complexity and semantic vagueness in the commentary corpus. Riester et al. (2010) annotated a German news corpus marginally reliably (κ = 0.66) for their overall scheme, but their confusion matrix shows even lower reliability for several subcategories, most importantly deixis and bridging.

2 All examples in this paper are from the OntoNotes corpus. The mention in question is typed in boldface; antecedents, where applicable, are displayed in italics.
While standard coreference corpora do not contain IS annotation, some corpora annotated for bridging are emerging (Poesio, 2004; Korzen and Buch-Kromann, 2011) but they are (i) not annotated for comparative anaphora or other IS categories, (ii) often not tested for reliability or reach only low reliability, (iii) often very small (Poesio, 2004).
To the best of our knowledge, we therefore present the first English corpus reliably annotated for a wide range of IS categories as well as full anaphoric information for three main anaphora types (coreference, bridging, comparative).
Automatic recognition of IS. Vieira and Poesio (2000) describe heuristics for processing definite descriptions in news text. As their approach is restricted to definites, they only analyse a subset of the mentions we consider as carrying IS. Siddharthan et al. (2011) also concentrate on a subproblem of IS only, namely the hearer-old/hearer-new distinction for person proper names.
Nissim (2006) and Rahman and Ng (2011) both present algorithms for IS detection on Nissim et al.’s (2004) Switchboard corpus. Both papers treat IS classification as a local classification problem whereas we look at dependencies between the IS status of different mentions, leading to collective classification. In addition, they only distinguish the three main categories old, mediated and new. Finally, we work on news corpora which poses different problems from dialogue.
Anaphoricity determination (Ng, 2009; Zhou and Kong, 2009) identifies many or most old mentions. However, no distinction between mediated and new mentions is made. Most approaches to bridging resolution (Meyer and Dale, 2002; Poesio et al., 2004) or comparative anaphora (Modjeska et al., 2003; Markert and Nissim, 2005) address only the selection of the antecedent for the bridging/comparative anaphor, not its recognition. Sasano and Kurohashi (2009) also tackle bridging recognition, but they depend on language-specific, non-transferable features for Japanese.
3 corpus creation : Our scheme follows Nissim et al. (2004) in distinguishing three major IS categories: old, new and mediated. A mention is old if it is either coreferential with an already introduced entity or a generic or deictic pronoun. We follow the OntoNotes (Weischedel et al., 2011) definition of coreference to be able to integrate our annotations with it. This definition includes coreference with noun phrase as well as verb phrase antecedents.3
Mediated refers to entities which have not yet been introduced in the text but are inferrable via other mentions or are known via world knowledge. We distinguish the following six subcategories: The category mediated/comparative comprises mentions compared via either a contrast or a similarity to another one (see Example 1). This category is novel in our scheme. We also include a category mediated/bridging (see Examples 2, 3 and 4). Bridging anaphora can be any noun phrase and are not limited to definite NPs as in Poesio et al. (2004), Gardent and Manuélian (2005) and Riester et al. (2010). In contrast to Nissim et al. (2004), antecedents for both the comparative and bridging categories are annotated and can be noun phrases, verb phrases or even clauses. The category mediated/knowledge is inspired by the hearer-old distinction introduced by Prince (1992) and covers entities generally known to the hearer. It includes many proper names, such as Poland.4 Mentions that are syntactically linked via a possessive relation or a PP modification to other old or mediated mentions fall into the type mediated/synt (see Examples 5 and 6).5 With no change to Nissim et al.'s scheme, coordinated mentions where at least one element in the conjunction is old or mediated are covered by the category mediated/aggregate, and mentions referring to a value of a previously mentioned function by the type mediated/func.
All other mentions are annotated as new, including most generics as well as newly introduced, specific mentions such as Example 7.

3 In contrast to Nissim et al. (2004), but in accordance with OntoNotes, we do not consider generics for coreference.
4 This class corresponds roughly to Nissim et al.'s (2004) mediated/general.
5 This class expands Nissim et al.'s (2004) poss category that only considers possessives but not PP modification.
(3) Initial steps were taken at Poland's first environmental conference, which I attended last month. . . . it was no accident that participants urged the free flow of information

(4) The Bakersfield supermarket went out of business last May. The reason was . . .

(5) One Washington couple sold their liquor store

(6) the main artery into San Francisco

(7) the owner was murdered by robbers

We carried out an agreement study with 3 annotators, of which Annotator A was the scheme developer and first author of this paper. All texts used were from the Wall Street Journal (WSJ) portion of OntoNotes. There were no restrictions on which texts to include apart from (i) exclusion of letters to the editor, as they contain cross-document links, and (ii) a preference for longer texts with potentially richer discourse structure.
Mentions were automatically preselected for the annotators using the gold-standard syntactic annotation.6 The existing coreference annotation was automatically carried over to the IS task by marking all mentions in a coreference chain (apart from the first mention in the chain) as old. The annotation task consisted of marking all mentions for their IS (old, mediated or new) as well as marking mediated subcategories (see Section 3.1) and the antecedents for comparative and bridging anaphora.
The scheme was developed on 9 texts, which were also used for training the annotators. Inter-annotator agreement was measured on 26 new texts, which included 5905 pre-marked potential mentions. The annotations of 1499 of these were carried over from OntoNotes, leaving 4406 potential mentions for annotation and agreement measurement. In addition to percentage agreement, we measured Cohen's κ (Artstein and Poesio, 2008) between all 3 possible annotator pairings. We also report single-category agreement for each category, where all categories but one are merged and then κ is computed as usual. Table 1 shows agreement results for the overall scheme at the coarse-grained (4 categories: non-mention, old, new, mediated) and the fine-grained level (9 categories: non-mention, old, new and the 6 mediated subtypes). The results show that the scheme is overall reliable, with not too many differences between the different annotator pairings.7

6 Some non-mentions such as idioms could not be filtered out via the syntactic annotation and had to be excluded during human annotation.
Table 2 shows the individual category agreement for all 9 categories. We achieve high reliability for most categories.8 Particularly interesting is the fact that hearer-old entities (mediated/knowledge) can be identified reliably although all annotators had substantially different backgrounds. The reliability of the category bridging is more annotator-dependent, although still higher, sometimes considerably, than in other previous attempts at bridging annotation (Poesio et al., 2004; Gardent and Manuélian, 2005; Riester et al., 2010). Our final gold standard corpus consists of 50 texts from the WSJ portion of the OntoNotes corpus. The corpus will be made publicly available as an OntoNotes annotation layer via http://www.h-its.org/nlp/download.

7 Often, annotation is considered highly reliable when κ exceeds 0.80 and marginally reliable when between 0.67 and 0.80 (Carletta, 1996). However, the interpretation of κ is still under discussion (Artstein and Poesio, 2008).
8 The low reliability of the rare category func, when involving Annotator B, was explained by Annotator B forgetting about this category after having used it once. Pair A-C achieved high reliability (κ = 83.2).
Disagreements in the 35 texts used for annotator training (9 texts) and testing (26 texts) were resolved via discussion between the annotators. An additional 15 texts were annotated by Annotator A. Finally, Annotator A carried out consistency checks over all texts. The gold standard includes 10,980 true mentions (see Table 3).
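The agreement measures used in this study, Cohen's κ and the single-category variant where all categories but one are merged, can be made precise with a short sketch (standard formulas; the toy label sequences in the test below are invented for illustration):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is
    chance agreement from the annotators' marginal label distributions.
    Undefined (division by zero) when p_e = 1, i.e. both annotators
    always use one single label."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb.get(c, 0) for c in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def single_category_kappa(a, b, category):
    """Single-category agreement: merge all categories but one,
    then compute kappa as usual."""
    collapse = lambda seq: [l if l == category else "other" for l in seq]
    return cohen_kappa(collapse(a), collapse(b))
```

For example, two annotators labelling four mentions as 〈old, old, new, new〉 and 〈old, new, new, new〉 agree on 3 of 4 (p_o = 0.75) with p_e = 0.5, giving κ = 0.5.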
4 features : In this section, we describe both the local and the relational features we use. We use the following local features, including the features in Nissim (2006) and Rahman and Ng (2011), to be able to gauge how their systems fare on our corpus and as a comparison point for our novel collective classification approach.
The features developed by Nissim (2006) are shown in Table 4. Nissim shows clearly that these features are useful for IS classification. Thus, subjects are more likely to be old, as assumed by, e.g., centering theory (Grosz et al., 1995). Also, previously unmentioned proper names are more likely to be hearer-old and therefore mediated/knowledge, although their exact status will depend on how well known a particular proper name is.
Rahman and Ng (2011) add all unigrams appearing in any mention in the training set as features. They also integrated (via a convolution tree-kernel SVM (Collins and Duffy, 2001)) partial parse trees that capture the generalised syntactic context of a mention e and include the mention’s parent and sibling nodes without lexical leaves. However, they use no structure underneath the mention node e itself, assuming that “any NP-internal information has presumably been captured by the flat features”.
To these feature sets, we add a small set of other local features, otherlocal. These track partial previous mentions by also counting partial previous mention times as well as previous mentions of content words only. We also add a mention's number as one of singular, plural or unknown, and whether the mention is modified by an adjective. Another feature encapsulates whether the mention is modified by a comparative marker, using a small set of 10 markers such as another, such, similar . . . and the presence of adjectives or adverbs in the comparative. Finally, we include the mention's semantic class as one of 12 coarse-grained classes, including location, organisation, person and several classes for numbers (such as date, money or percent). Both Nissim (2006) and Rahman and Ng (2011) classify each mention individually in a standard supervised ML setting, not considering potential dependencies between the IS categories of different mentions. However, collective or joint classification has made a substantial impact in other NLP tasks, such as opinion mining (Pang and Lee, 2004; Somasundaran et al., 2009), text categorization (Yang et al., 2002; Taskar et al., 2002) and the related task of coreference resolution (Denis and Baldridge, 2007). We investigate two types of relations between mentions that might impact on IS classification.

9 We changed the value of "full prev mention" from "numeric" to {yes, no, NA}.
Syntactic parent-child relations. Two mediated subcategories account for accessibility via syntactic links to another old or mediated mention: mediated/synt is used when at least one child of a mention is mediated or old, with child relations restricted to pre- or postnominal possessives as well as PP children in our scheme (see Section 3.1). mediated/aggregate is for coordinations in which at least one of the children is old or mediated. In these two cases, a mention’s IS depends directly on the IS of its children. We therefore link a mention m1 to a mention m2 via a hasChild relation if (i) m2 is a possessive or prepositional modification of m1, or (ii) m1 is a coordination and m2 is one of its children.
Using such a relational feature catches two birds with one stone: firstly, it integrates the internal structure of a mention into the algorithm, which Rahman and Ng (2011) ignore; secondly, it captures dependencies between parent and child classification, which would not be possible if we integrated the internal structure via flat features or additional tree kernels. We hypothesise that the higher syntactic complexity of our news genre (14.5% of all mentions are mediated/synt) will make this feature highly effective in distinguishing between new and mediated categories.
Syntactic precedence relations. IS is said to influence word order (Birner and Ward, 1998; Cahill and Riester, 2009) and this fact has been exploited in work on generation (Prevost, 1996; Filippova and Strube, 2007; Cahill and Riester, 2009). Therefore, we integrate dependencies between the IS classification of mentions in precedence relations.
m1 precedes m2 if (i) m1 and m2 are in the same clause, allowing for trace subjects in gerund and infinitive constructions, (ii) m1 and m2 are dependent on the same verb or noun, allowing for intervening nodes via modal, auxiliary, gerund and infinitive constructions, (iii) m1 is neither a child nor a parent of m2, and (iv) m1 occurs before m2.
For Example 8 (slightly simplified) we extract the precedence relations shown in Table 5.

(8) She was sent by her mother to a white . . .
Proper names behave differently from common nouns. For example, they can occur at many different places in the clause when functioning as spatial or temporal scene-setting elements, such as In New York. We therefore exclude all precedence relations where one element of the pair is a proper name.
We extract 2855 precedence relations. Table 6 shows the statistics on precedence, with the first mention of a pair in rows and the second in columns. Mediated and new mentions indeed rarely precede old mentions, so precedence should improve the separation of old vs. other mentions.
5 experiments : We use our gold standard corpus (see Section 3.3) via 10-fold cross-validation on documents for all experiments. Following Nissim (2006) and Rahman and Ng (2011), we perform all experiments on gold standard mentions and use the human WSJ syntactic annotation for feature extraction, when necessary. For the extraction of semantic class, we use OntoNotes entity type annotation for proper names and an automatic assignment of semantic class via WordNet hypernyms for common nouns.
Coarse-grained versions of all algorithms distinguish only between the three categories old, mediated and new. Fine-grained versions distinguish between the category old, the six mediated subtypes, and new. We report overall accuracy as well as precision, recall and F-measure per category. Significance tests are conducted using McNemar's test on overall algorithm accuracy, at the level of 1%. We reimplemented the algorithms in Nissim (2006) and Rahman and Ng (2011) as comparison baselines, using their feature and algorithm choices. Algorithm Nissim is therefore a decision tree (J48 with standard settings in WEKA) with the features in Table 4. Algorithm RahmanNg is an SVM with a composite kernel and one-vs-all training/testing (toolkit SVMLight). It uses the features in Table 4 plus the unigram and tree kernel features described in Section 4.1. We add our additional set of otherlocal features to both baseline algorithms (yielding Nissim+ol and RahmanNg+ol), as they aim specifically at improving fine-grained classification. For incorporating our inter-mention links, we use a variant of Iterative Collective classification (ICA), which has shown good performance over a variety of tasks (Lu and Getoor, 2003) and has been used in NLP, for example, for opinion mining (Somasundaran et al., 2009). ICA is normally faster than Gibbs sampling and, in initial experiments, did not yield significantly different results from it.
ICA initializes each mention with its most likely IS, according to the local classifier and features. It then iterates a relational classifier, which uses both local and relational features (our hasChild and precedes features) taking IS assignments to neighbouring mentions into account. We use the exist aggregator to define the dependence between mentions.
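A minimal sketch of this ICA loop with an exist aggregator over hasChild links (hypothetical toy rule-based classifiers stand in for the trained local and relational models; only the control flow mirrors the setup described here):

```python
LABELS = ("old", "mediated", "new")

def exist_features(i, labels, neighbors):
    """'exist' aggregator: for each class c, does any hasChild
    neighbour of mention i currently carry label c?"""
    return {c: any(labels[j] == c for j in neighbors.get(i, ())) for c in LABELS}

def ica(mentions, neighbors, local_predict, rel_predict, max_iter=10):
    """Iterative Collective classification: bootstrap every mention with
    the local classifier, then repeatedly re-classify with the relational
    classifier, which also sees the aggregated labels of linked mentions."""
    labels = [local_predict(m) for m in mentions]
    for _ in range(max_iter):
        new = [rel_predict(m, exist_features(i, labels, neighbors))
               for i, m in enumerate(mentions)]
        if new == labels:                      # fixed point reached
            break
        labels = new
    return labels

# Toy stand-ins for the trained classifiers: locally, only pronouns look
# old; relationally, a mention with an old or mediated child is mediated
# (mirroring the mediated/synt rule).
local_predict = lambda m: "old" if m["pronoun"] else "new"

def rel_predict(m, agg):
    if m["pronoun"]:
        return "old"
    if agg["old"] or agg["mediated"]:
        return "mediated"
    return "new"
```

For a chain m2 →hasChild m1 →hasChild m0 with m0 a pronoun, the local bootstrap labels only m0 as old; the first relational pass makes m1 mediated, and the second pass propagates mediated on to m2, which a purely local classifier could not do.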
We use NetKit (Macskassy and Provost, 2007) with its standard ICA settings for collective inference, as it allows direct comparison between local and collective classification. The relational classifiers are always exactly the same classifiers as the local ones with the relational features added: thus, if the local classifier is a tree kernel SVM, so is the relational one. One problem when using the SVM tree kernel as relational classifier is that it allows only for binary classification, so that we need to train several binary networks in a one-vs-all paradigm (see also Rahman and Ng (2011)), which will not be able to use the multiclass dependencies of the relational features to optimum effect. Table 7 shows the comparison of collective classification to local classification, using Nissim's framework and features, and Table 8 the equivalent table for Rahman and Ng's approach.
The improvements using the additional local features over the original local classifiers are statistically significant in all cases. In particular, the inclusion of semantic classes improves mediated/knowledge and mediated/func, and comparative anaphora are recognised highly reliably via a small set of comparative markers.
The hasChild relation leads to significant improvement in accuracy over local classification in all cases, showing the value of collective classification. The improvement here is centered on the categories mediated/synt (for both cases) and mediated/aggregate (for Nissim+ol+hasChild), as well as their distinction from new.10 It is also interesting that collective classification with a concise feature set and a simple decision tree, as used in Nissim+ol+hasChild, performs equally well as RahmanNg+ol+hasChild, which uses thousands of unigram and tree features and a more sophisticated local classifier. It also shows more consistent improvements over all fine-grained classes.
The precedes relation does not lead to any further improvement. We investigated several variations of the precedence link, such as restricting it to certain grammatical relations or taking into account definiteness or NP type, but none of them led to any improvement. We think there are two reasons for this lack of success. First, the precedence of mediated vs. new mentions does not follow a clear order and is therefore not a very predictive feature (see Table 6). At first, this seems to contradict studies such as Cahill and Riester (2009) that find a variety of precedences according to information status. However, many of the clearest precedences they find are more specific variants of the old >p mediated or old >p new precedence, or they are preferences at an even finer level than the one we annotate, including for example the identification of generics. Second, the clear old >p mediated and old >p new preferences are partially already captured by the local features, especially the grammatical role, as, for example, subjects are often both old as well as early in a sentence.

10 For RahmanNg+ol+hasChild, the aggregate class suffers from collective classification. We hypothesise that this is an artefact of the one-vs-all training/testing for rare categories.
With regard to fine-grained classification, many categories, including comparative anaphora, are identified quite reliably, especially in the multiclass classification setting (Nissim+ol+hasChild). Bridging seems to be by far the most difficult category to identify, with final best F-measures still very low. Most bridging mentions do not have any clear internal structure or external syntactic context that signals their presence. Instead, they rely more on lexical and world knowledge for recognition. Unigrams could potentially encapsulate some of this lexical knowledge but, without generalization, are too sparse for a relatively rare category such as bridging (6% of all mentions) to perform well. The difficulty of bridging recognition is an important insight of this paper, as it casts doubt on the strategy in previous research of concentrating almost exclusively on antecedent selection (see Section 2).
Previous work on classifying information status (Nissim, 2006; Rahman and Ng, 2011) is restricted to coarse-grained classification and focuses on conversational dialogue. We introduce the task of classifying fine-grained information status and work on written text, adding a fine-grained information status layer to the Wall Street Journal portion of the OntoNotes corpus. We claim that the information status of a mention depends not only on the mention itself but also on other mentions in its vicinity, and solve the task by collectively classifying the information status of all mentions. Our approach strongly outperforms reimplementations of previous work.
Authors: Katja Markert, Yufang Hou, and Michael Strube
References

Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596.
Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34.
Betty J. Birner and Gregory Ward. 1998. Information Status and Noncanonical Word Order in English. John Benjamins, Amsterdam, The Netherlands.
Aoife Cahill and Arndt Riester. 2009. Incorporating information status into generation ranking. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing.
Jie Cai, Éva Mújdricza-Maydt, and Michael Strube. 2011. Unrestricted coreference resolution via global hypergraph partitioning. In Proceedings of the Shared Task of the 15th Conference on Computational Natural Language Learning, Portland, Oreg., 23–24 June 2011.
Jean Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249–254.
Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14, pages 625–632. MIT Press, Cambridge, Mass.
Pascal Denis and Jason Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics.
Katja Filippova and Michael Strube. 2007. Generating constituent order in German clauses. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Prague, Czech Republic, pages 320–327.
Claire Gardent and Hélène Manuélian. 2005. Création d'un corpus annoté pour le traitement des descriptions définies. Traitement Automatique des Langues, 46(1):115–140.
Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225.
Ivana Kruijff-Korbayová and Mark Steedman. 2003. Discourse and information structure. Journal of Logic, Language and Information, 12(3):149–259.
Knud Lambrecht. 1994. Information Structure and Sentence Form. Cambridge University Press, Cambridge, U.K.
Qing Lu and Lise Getoor. 2003. Link-based classification. In Proceedings of the 20th International Conference on Machine Learning, Washington, D.C., pages 496–503.
Sofus A. Macskassy and Foster Provost. 2007. Classification in networked data: A toolkit and a univariate case study. Journal of Machine Learning Research, 8:935–983.
Katja Markert and Malvina Nissim. 2005. Comparing knowledge sources for nominal anaphora resolution. Computational Linguistics, 31(3):367–401.
Josef Meyer and Robert Dale. 2002. Mining a corpus to support associative anaphora resolution. In Proceedings of the 4th International Conference on Discourse Anaphora and Anaphor Resolution, Lisbon, Portugal.
Natalia M. Modjeska, Katja Markert, and Malvina Nissim. 2003. Using the web in machine learning for other-anaphora resolution. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan.
Ani Nenkova, Jason Brenier, Anubha Kothari, Sasha Calhoun, Laura Whitton, David Beaver, and Dan Jurafsky. 2007. To memorize or to predict: Prominence labeling in conversational speech. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics.
Vincent Ng. 2009. Graph-cut-based anaphoricity determination for coreference resolution. In Proceedings of Human Language Technologies 2009: The Conference of the North American Chapter of the Association for Computational Linguistics, Boulder, Col.
Malvina Nissim, Shipra Dingare, Jean Carletta, and Mark Steedman. 2004. An annotation scheme for information status in dialogue. In Proceedings of the 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal.
Malvina Nissim. 2006. Learning information status of discourse entities. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia, pages 94–102.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain.
Massimo Poesio, Rahul Mehta, Axel Maroudas, and Janet Hitzeman. 2004. Learning to resolve bridging references. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain, pages 143–150.
Massimo Poesio. 2004. The MATE/GNOME proposals for anaphoric annotation, revisited. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue, Cambridge, Mass., pages 154–162.
Scott Prevost. 1996. An information structural approach to spoken language generation. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, Cal., pages 294–301.
Ellen F. Prince. 1981. Towards a taxonomy of given-new information. In P. Cole, editor, Radical Pragmatics, pages 223–255. Academic Press, New York, N.Y.
Ellen F. Prince. 1992. The ZPG letter: Subjects, definiteness, and information-status. In W.C. Mann and S.A. Thompson, editors, Discourse Description: Diverse Linguistic Analyses of a Fund-Raising Text, pages 295–325. John Benjamins, Amsterdam.
Altaf Rahman and Vincent Ng. 2011. Learning the information status of noun phrases in spoken dialogues. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, Scotland, U.K., pages 1069–1080.
Arndt Riester, David Lorenz, and Nina Seemann. 2010. A recursive annotation scheme for referential information status. In Proceedings of the 7th International Conference on Language Resources and Evaluation, La Valetta, Malta, pages 717–722.
Julia Ritz, Stefanie Dipper, and Michael Götze. 2008. Annotation of information structure: An evaluation across different types of texts. In Proceedings of the 6th International Conference on Language Resources and Evaluation, Marrakech, Morocco.
Ryohei Sasano and Sadao Kurohashi. 2009. A probabilistic model for associative anaphora resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.
Advaith Siddharthan, Ani Nenkova, and Kathleen McKeown. 2011. Information status distinctions and referring expressions: An empirical study of references to people in news summaries. Computational Linguistics, 37(4):811–842.
Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.
Ben Taskar, Pieter Abbeel, and Daphne Koller. 2002. Discriminative probabilistic models for relational data. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, Edmonton, Alberta, Canada, pages 485–492.
Renata Vieira and Massimo Poesio. 2000. An empirically-based system for processing definite descriptions. Computational Linguistics, 26(4):539–593.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2011. OntoNotes release 4.0.
Yiming Yang, Seán Slattery, and Rayid Ghani. 2002. A study of approaches to hypertext categorization. Journal of Intelligent Information Systems, 18(2–3):219–241.
Guodong Zhou and Fang Kong. 2009. Global learning of noun phrase anaphoricity in coreference resolution via label propagation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, Singapore, pages 978–986.
6 Conclusions

We presented a new approach to information status classification in written text, for which we also provide the first reliably annotated English-language corpus. Based on linguistic intuition, we define features for classifying mentions collectively. We show that our collective classification approach outperforms the state of the art in coarse-grained IS classification by about 10% (Nissim, 2006) and 5% (Rahman and Ng, 2011) accuracy. The gain is almost entirely due to improvements in distinguishing between new and mediated mentions. For the latter, we also report what are, to our knowledge, the first fine-grained IS classification results.
Since the work reported in this paper relied – following Nissim (2006) and Rahman and Ng (2011) – on gold standard mentions and syntactic annotations, we plan to perform experiments with predicted mentions as well. We also have to improve the recognition of bridging, ideally combining recognition and antecedent selection for a complete resolution component. In addition, we plan to integrate IS resolution with our coreference resolution system (Cai et al., 2011) to provide us with a more comprehensive discourse processing system.
Acknowledgements. Katja Markert received a Fellowship for Experienced Researchers from the Alexander von Humboldt Foundation, and Yufang Hou is funded by a PhD scholarship from the Research Training Group Coherence in Language Processing at Heidelberg University. We thank the Heidelberg Institute for Theoretical Studies for hosting Katja Markert and funding the annotation study, and the annotators for their diligent work.
1 Introduction

Speakers present already known and yet-to-be-established information according to principles referred to as information structure (Prince, 1981; Lambrecht, 1994; Kruijff-Korbayová and Steedman, 2003, inter alia). While information structure affects all kinds of constituents in a sentence, we here adopt the more restricted notion of information status, which concerns only discourse entities realized as noun phrases, i.e. mentions.1 Information status (IS henceforth) describes the degree to which a discourse entity is available to the hearer with regard to the speaker's assumptions about the hearer's knowledge and beliefs (Nissim et al., 2004). Old mentions are known to the hearer and have been referred
1 Since not all noun phrases are referential, we call noun phrases which carry information status mentions.
to previously. Mediated mentions have not been mentioned before but are also not autonomous, i.e., they can only be correctly interpreted by reference to another mention or to prior world knowledge. All other mentions are new.
IS can be beneficial for a number of NLP tasks, though the results have been mixed. Nenkova et al. (2007) used IS as a feature for generating pitch accent in conversational speech. As IS is restricted to noun phrases, while pitch accent can be assigned to any word in an utterance, the experiments were not conclusive. For determining constituent order of German sentences, Cahill and Riester (2009) incorporate features modeling IS to good effect. Rahman and Ng (2011) showed that IS is a useful feature for coreference resolution.
Previous work on learning IS (Nissim, 2006; Rahman and Ng, 2011) is restricted in several ways. It deals with conversational dialogue, in particular with the corpus annotated by Nissim et al. (2004). However, many applications that can profit from IS concentrate on written texts, such as summarization. For example, Siddharthan et al. (2011) show that solving the IS subproblem of whether a person proper name is already known to the reader improves automatic summarization of news. Therefore, we here model IS in written text, creating a new dataset which adds an IS layer to the already existing comprehensive annotation in the OntoNotes corpus (Weischedel et al., 2011). We also report the first results on fine-grained IS classification by modelling further distinctions within the category of mediated mentions, such as comparative and bridging anaphora (see Examples 1 and 2, respectively).2 Fine-grained IS is a prerequisite to full bridging/comparative anaphora resolution, and is therefore necessary to fill gaps in entity grids (Barzilay and Lapata, 2008) based on coreference only. Thus, Examples 1 and 2 do not exhibit any coreferential entity coherence, but coherence can be established when the comparative anaphor others is resolved to others than freeway survivor Buck Helm, and the bridging anaphor the streets is resolved to the streets of Oranjemund, respectively.
(1) the condition of freeway survivor Buck Helm . . . , improved, hospital officials said. Rescue crews, however, gave up hope that others would be found.

(2) Oranjemund, the mine headquarters, is a lonely corporate oasis of 9,000 residents. Jackals roam the streets at night . . .
We approach the challenge of modeling IS via collective classification, using several novel linguistically motivated features. We reimplement Nissim's (2006) and Rahman and Ng's (2011) approaches as baselines and show that our approach outperforms these by a large margin for both coarse- and fine-grained IS classification.

2 Related Work

IS annotation schemes and corpora. We enhance the approach in Nissim et al. (2004) in two major ways (see also Section 3.1). First, comparative anaphora are not specifically handled in Nissim et al. (2004) (or in follow-on work such as Ritz et al. (2008) and Riester et al. (2010)), although some of them might be included in their respective bridging subcategories. Second, we apply the annotation scheme reliably to a new genre, namely news. This is a non-trivial extension: Ritz et al. (2008) applied a variation of the Nissim et al. (2004) scheme to a small set of 220 NPs in a German news/commentary corpus but found that reliability dropped significantly, to the range of κ = 0.55 to 0.60. They attributed this to the higher syntactic complexity and semantic vagueness of the commentary corpus. Riester et al. (2010) annotated a German news corpus marginally reliably (κ = 0.66) for their overall scheme, but their confusion matrix shows even lower reliability for several subcategories, most importantly deixis and bridging.

2 All examples in this paper are from the OntoNotes corpus. The mention in question is typed in boldface; antecedents, where applicable, are displayed in italics.
While standard coreference corpora do not contain IS annotation, some corpora annotated for bridging are emerging (Poesio, 2004; Korzen and Buch-Kromann, 2011) but they are (i) not annotated for comparative anaphora or other IS categories, (ii) often not tested for reliability or reach only low reliability, (iii) often very small (Poesio, 2004).
To the best of our knowledge, we therefore present the first English corpus reliably annotated for a wide range of IS categories as well as full anaphoric information for three main anaphora types (coreference, bridging, comparative).
Automatic recognition of IS. Vieira and Poesio (2000) describe heuristics for processing definite descriptions in news text. As their approach is restricted to definites, they only analyse a subset of the mentions we consider carrying IS. Siddharthan et al. (2011) also concentrate on a subproblem of IS only, namely the hearer-old/hearer-new distinctions for person proper names.
Nissim (2006) and Rahman and Ng (2011) both present algorithms for IS detection on Nissim et al.'s (2004) Switchboard corpus. Both papers treat IS classification as a local classification problem, whereas we look at dependencies between the IS status of different mentions, leading to collective classification. In addition, they only distinguish the three main categories old, mediated and new. Finally, we work on news corpora, which pose different problems from dialogue.
Anaphoricity determination (Ng, 2009; Zhou and Kong, 2009) identifies many or most old mentions. However, no distinction between mediated and new mentions is made. Most approaches to bridging resolution (Meyer and Dale, 2002; Poesio et al., 2004) or comparative anaphora (Modjeska et al., 2003; Markert and Nissim, 2005) address only the selection of the antecedent for the bridging/comparative anaphor, not its recognition. Sasano and Kurohashi (2009) do tackle bridging recognition, but they depend on language-specific, non-transferrable features for Japanese.

3 Corpus Creation

Our scheme follows Nissim et al. (2004) in distinguishing three major IS categories: old, new and mediated. A mention is old if it is either coreferential with an already introduced entity or a generic or deictic pronoun. We follow the OntoNotes (Weischedel et al., 2011) definition of coreference to be able to integrate our annotations with it. This definition includes coreference with noun phrase as well as verb phrase antecedents.3
Mediated refers to entities which have not yet been introduced in the text but are inferrable via other mentions or are known via world knowledge. We distinguish the following six subcategories: The category mediated/comparative comprises mentions compared via either a contrast or similarity to another one (see Example 1). This category is novel in our scheme. We also include a category mediated/bridging (see Examples 2, 3 and 4). Bridging anaphora can be any noun phrase and are not limited to definite NPs as in Poesio et al. (2004), Gardent and Manuélian (2005), and Riester et al. (2010). In contrast to Nissim et al. (2004), antecedents for both comparative and bridging categories are annotated and can be noun phrases, verb phrases or even clauses. The category mediated/knowledge is inspired by the hearer-old distinction introduced by Prince (1992) and covers entities generally known to the hearer. It includes many proper names, such as Poland.4 Mentions that are syntactically linked via a possessive relation or a PP modification to other old or mediated mentions fall into the type mediated/synt (see Examples 5 and 6).5 With no change to Nissim et al.'s scheme, coordinated mentions where at least one element in the conjunction is old or mediated are covered by the category mediated/aggregate, and mentions referring to a value of a previously mentioned function by the type mediated/func.
All other mentions are annotated as new, including most generics as well as newly introduced, specific mentions such as Example 7.

3 In contrast to Nissim et al. (2004), but in accordance with OntoNotes, we do not consider generics for coreference.
4 This class corresponds roughly to Nissim et al.'s (2004) mediated/general.
5 This class expands Nissim et al.'s (2004) poss category, which only considers possessives but not PP modification.
(3) Initial steps were taken at Poland's first environmental conference, which I attended last month. . . . it was no accident that participants urged the free flow of information

(4) The Bakersfield supermarket went out of business last May. The reason was . . .

(5) One Washington couple sold their liquor store

(6) the main artery into San Francisco
(7) the owner was murdered by robbers

We carried out an agreement study with 3 annotators, of whom Annotator A was the scheme developer and first author of this paper. All texts used were from the Wall Street Journal (WSJ) portion of OntoNotes. There were no restrictions on which texts to include apart from (i) the exclusion of letters to the editor, as they contain cross-document links, and (ii) a preference for longer texts with potentially richer discourse structure.
Mentions were automatically preselected for the annotators using the gold-standard syntactic annotation.6 The existing coreference annotation was automatically carried over to the IS task by marking all mentions in a coreference chain (apart from the first mention in the chain) as old. The annotation task consisted of marking all mentions for their IS (old, mediated or new) as well as marking mediated subcategories (see Section 3.1) and the antecedents for comparative and bridging anaphora.
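Carrying the existing coreference annotation over to IS is mechanical: every mention in a chain except the chain-initial one is necessarily old. A minimal sketch (the chain representation is a hypothetical simplification, not the OntoNotes file format):

```python
def premark_old(chains):
    """chains: coreference chains as lists of mention ids in document order.
    Every mention in a chain except the chain-initial one is necessarily old."""
    labels = {}
    for chain in chains:
        for mention in chain[1:]:   # skip the first (introducing) mention
            labels[mention] = "old"
    return labels

labels = premark_old([["m1", "m4", "m9"], ["m2", "m7"]])
# m4, m9 and m7 are pre-marked old; m1 and m2 remain to be annotated
```

Chain-initial mentions are deliberately left unlabelled: they may be old for other reasons (e.g. deictic pronouns), mediated, or new, and so still require human annotation.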
The scheme was developed on 9 texts, which were also used for training the annotators. Inter-annotator agreement was measured on 26 new texts, which included 5905 pre-marked potential mentions. The annotations of 1499 of these were carried over from OntoNotes, leaving 4406 potential mentions for annotation and agreement measurement. In addition to percentage agreement, we measured Cohen's κ (Artstein and Poesio, 2008) between all 3 possible annotator pairings. We also report single-category agreement for each category, where all categories but one are merged and then κ is computed as usual. Table 1 shows agreement results for the overall scheme at the coarse-grained (4 categories: non-mention, old, new, mediated) and the fine-grained level (9 categories: non-mention, old, new and the 6 mediated subtypes). The results show that the scheme is overall reliable, with not too many differences between the annotator pairings.7

6 Some non-mentions such as idioms could not be filtered out via the syntactic annotation and had to be excluded during human annotation.
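Both agreement measures are standard; a small sketch of Cohen's κ and the single-category variant described above (plain Python, no external libraries):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length sequences of category labels."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # chance agreement from the two annotators' marginal distributions
    p_exp = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

def single_category_kappa(a, b, cat):
    """Merge all categories but `cat`, then compute kappa as usual."""
    collapse = lambda seq: [x if x == cat else "other" for x in seq]
    return cohen_kappa(collapse(a), collapse(b))
```

With three annotators, the pairwise figures reported in Table 1 correspond to calling `cohen_kappa` once per annotator pair.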
Table 2 shows the individual category agreement for all 9 categories. We achieve high reliability for most categories.8 Particularly interesting is the fact that hearer-old entities (mediated/knowledge) can be identified reliably although all annotators had substantially different backgrounds. The reliability of the category bridging is more annotator-dependent, although still higher, sometimes considerably so, than in other previous attempts at bridging annotation (Poesio et al., 2004; Gardent and Manuélian, 2005; Riester et al., 2010). Our final gold standard corpus consists of 50 texts from the WSJ portion of the OntoNotes corpus. The corpus will be made publicly available as an OntoNotes annotation layer via http://www.h-its.org/nlp/download.

7 Often, annotation is considered highly reliable when κ exceeds 0.80 and marginally reliable when between 0.67 and 0.80 (Carletta, 1996). However, the interpretation of κ is still under discussion (Artstein and Poesio, 2008).
8 The low reliability of the rare category func, when involving Annotator B, was explained by Annotator B forgetting about this category after having used it once. Pair A–C achieved high reliability (κ = 83.2).
Disagreements in the 35 texts used for annotator training (9 texts) and testing (26 texts) were resolved via discussion between the annotators. An additional 15 texts were annotated by Annotator A. Finally, Annotator A carried out consistency checks over all texts. The gold standard includes 10,980 true mentions (see Table 3).

4 Features

In this section, we describe both the local and the relational features we use. We use the following local features, including the features in Nissim (2006) and Rahman and Ng (2011), to be able to gauge how their systems fare on our corpus and as a comparison point for our novel collective classification approach.
The features developed by Nissim (2006) are shown in Table 4. Nissim shows clearly that these features are useful for IS classification. Thus, subjects are more likely to be old, as assumed by, e.g., centering theory (Grosz et al., 1995). Also, previously unmentioned proper names are more likely to be hearer-old and therefore mediated/knowledge, although their exact status will depend on how well known a particular proper name is.
Rahman and Ng (2011) add all unigrams appearing in any mention in the training set as features. They also integrate, via a convolution tree-kernel SVM (Collins and Duffy, 2001), partial parse trees that capture the generalised syntactic context of a mention e, including the mention's parent and sibling nodes without lexical leaves. However, they use no structure underneath the mention node e itself, assuming that "any NP-internal information has presumably been captured by the flat features".
To these feature sets, we add a small set of other local features, otherlocal. These track partial previous mentions, also counting the time of the partial previous mention as well as previous mentions of content words only. We also add a mention's number as one of singular, plural or unknown, and whether the mention is modified by an adjective. Another feature encapsulates whether the mention is modified by a comparative marker, using a small set of 10 markers such as another, such, similar . . . and the presence of adjectives or adverbs in the comparative. Finally, we include the mention's semantic class as one of 12 coarse-grained classes, including location, organisation, person and several classes for numbers (such as date, money or percent).

Both Nissim (2006) and Rahman and Ng (2011) classify each mention individually in a standard supervised ML setting, not considering potential dependencies between the IS categories of different mentions. However, collective or joint classification has made a substantial impact in other NLP tasks, such as opinion mining (Pang and Lee, 2004; Somasundaran et al., 2009), text categorization (Yang et al., 2002; Taskar et al., 2002) and the related task of coreference resolution (Denis and Baldridge, 2007). We investigate two types of relations between mentions that might impact IS classification.

9 We changed the value of "full prev mention" from "numeric" to {yes, no, NA}.
Syntactic parent-child relations. Two mediated subcategories account for accessibility via syntactic links to another old or mediated mention: mediated/synt is used when at least one child of a mention is mediated or old, with child relations restricted to pre- or postnominal possessives as well as PP children in our scheme (see Section 3.1). mediated/aggregate is for coordinations in which at least one of the children is old or mediated. In these two cases, a mention’s IS depends directly on the IS of its children. We therefore link a mention m1 to a mention m2 via a hasChild relation if (i) m2 is a possessive or prepositional modification of m1, or (ii) m1 is a coordination and m2 is one of its children.
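The hasChild link extraction can be illustrated with a toy sketch; the flat mention records and relation labels below are assumptions for illustration, not the paper's actual data format:

```python
def has_child_links(mentions):
    """mentions: dicts with "id", "parent" (enclosing mention id or None)
    and "relation" in {"poss", "pp", "conj", None}.
    Returns (parent, child) pairs, i.e. parent hasChild child."""
    return [(m["parent"], m["id"]) for m in mentions
            if m["parent"] is not None and m["relation"] in {"poss", "pp", "conj"}]

mentions = [
    {"id": "one Washington couple", "parent": None, "relation": None},
    {"id": "their liquor store", "parent": None, "relation": None},
    # "their" is a prenominal possessive inside "their liquor store" (Example 5)
    {"id": "their", "parent": "their liquor store", "relation": "poss"},
]
links = has_child_links(mentions)   # [("their liquor store", "their")]
```

The returned pairs become edges of the mention graph over which the collective classifier operates: if the child is labelled old or mediated, the parent becomes a candidate for mediated/synt (or mediated/aggregate for coordinations).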
Using such a relational feature catches two birds with one stone: firstly, it integrates the internal structure of a mention into the algorithm, which Rahman and Ng (2011) ignore; secondly, it captures dependencies between parent and child classification, which would not be possible if we integrated the internal structure via flat features or additional tree kernels. We hypothesise that the higher syntactic complexity of our news genre (14.5% of all mentions are mediated/synt) will make this feature highly effective in distinguishing between new and mediated categories.
Syntactic precedence relations. IS is said to influence word order (Birner and Ward, 1998; Cahill and Riester, 2009) and this fact has been exploited in work on generation (Prevost, 1996; Filippova and Strube, 2007; Cahill and Riester, 2009). Therefore, we integrate dependencies between the IS classification of mentions in precedence relations.
m1 precedes m2 if (i) m1 and m2 are in the same clause, allowing for trace subjects in gerund and infinitive constructions, (ii) m1 and m2 are dependent on the same verb or noun, allowing for intervening nodes via modal, auxiliary, gerund and infinitive constructions, (iii) m1 is neither a child nor a parent of m2, and (iv) m1 occurs before m2.
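The four conditions can be written as a single predicate. The mention attributes below (clause id, governing head, children, span start, proper-name flag) are hypothetical stand-ins; we also fold in the proper-name exclusion applied later in this section:

```python
def precedes(m1, m2):
    """The precedes link over two mention records (hypothetical attributes)."""
    return (m1["clause"] == m2["clause"]            # (i) same clause
            and m1["head_gov"] == m2["head_gov"]    # (ii) same governing verb/noun
            and m2["id"] not in m1["children"]      # (iii) no parent-child link
            and m1["id"] not in m2["children"]
            and m1["start"] < m2["start"]           # (iv) m1 occurs first
            and not (m1["proper"] or m2["proper"])) # proper names are excluded
```

Applied to every mention pair in a clause, this yields the directed precedence edges summarised in Table 5 for Example 8.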
For Example 8 (slightly simplified) we extract the precedence relations shown in Table 5.
(8) She was sent by her mother to a white
Proper names behave differently from common nouns. For example, they can occur at many different places in the clause when functioning as spatial or temporal scene-setting elements, such as In New York. We therefore exclude all precedence relations where one element of the pair is a proper name.
We extract 2855 precedence relations. Table 6 shows statistics on precedence, with the first mention in a pair in rows and the second in columns. Mediated and new mentions indeed rarely precede old mentions, so that precedence should improve the separation of old vs. other mentions.

5 Experiments

We use our gold standard corpus (see Section 3.3) via 10-fold cross-validation on documents for all experiments. Following Nissim (2006) and Rahman and Ng (2011), we perform all experiments on gold standard mentions and use the human WSJ syntactic annotation for feature extraction, when necessary. For the extraction of semantic class, we use
OntoNotes entity type annotation for proper names and an automatic assignment of semantic class via WordNet hypernyms for common nouns.
Coarse-grained versions of all algorithms distinguish only between the three categories old, mediated and new. Fine-grained versions distinguish between old, the six mediated subtypes, and new. We report overall accuracy as well as precision, recall and F-measure per category. Significance tests are conducted using McNemar's test on overall algorithm accuracy, at the 1% level.

We reimplemented the algorithms in Nissim (2006) and Rahman and Ng (2011) as comparison baselines, using their feature and algorithm choices. Algorithm Nissim is therefore a J48 decision tree with standard settings in WEKA, using the features in Table 4. Algorithm RahmanNg is an SVM with a composite kernel and one-vs-all training/testing (toolkit SVMLight), using the features in Table 4 plus the unigram and tree kernel features described in Section 4.1. We add our additional set of otherlocal features to both baseline algorithms (yielding Nissim+ol and RahmanNg+ol) as they aim specifically at improving fine-grained classification.

For incorporating our inter-mention links, we use a variant of Iterative Collective Classification (ICA), which has shown good performance over a variety of tasks (Lu and Getoor, 2003) and has been used in NLP, for example, for opinion mining (Somasundaran et al., 2009). ICA is normally faster than Gibbs sampling and, in initial experiments, did not yield significantly different results from it.
ICA initializes each mention with its most likely IS according to the local classifier and features. It then iteratively applies a relational classifier, which uses both local and relational features (our hasChild and precedes features), taking the IS assignments of neighbouring mentions into account. We use the exist aggregator to define the dependence between mentions.
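This inference loop can be sketched as follows; `local_clf`, `relational_clf` and `neighbours` are hypothetical interfaces for illustration, not NetKit's actual API:

```python
def ica(mentions, local_clf, relational_clf, neighbours, max_iters=10):
    """Iterative collective classification (ICA) sketch.

    local_clf(m) returns an IS label from local features alone;
    relational_clf(m, neighbour_labels) also sees the labels currently
    assigned to m's neighbours via hasChild/precedes links.
    """
    # Bootstrap: label every mention with the local classifier.
    labels = {m: local_clf(m) for m in mentions}
    for _ in range(max_iters):
        changed = False
        for m in mentions:
            # 'exist' aggregation: the set of labels seen on neighbours.
            neigh = {labels[n] for n in neighbours(m)}
            new = relational_clf(m, neigh)
            if new != labels[m]:
                labels[m], changed = new, True
        if not changed:  # converged
            break
    return labels
```

A usage sketch: with three mentions and toy classifiers, the relational pass can flip a mention from new to mediated once one of its neighbours is labelled old.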
We use NetKit (Macskassy and Provost, 2007) with its standard ICA settings for collective inference, as it allows direct comparison between local and collective classification. The relational classifiers are always exactly the same classifiers as the
local ones with the relational features added: thus, if the local classifier is a tree kernel SVM, so is the relational one. One problem with using the SVM tree kernel as the relational classifier is that it allows only binary classification, so we need to train several binary networks in a one-vs-all paradigm (see also Rahman and Ng, 2011), which cannot exploit the multiclass dependencies of the relational features to optimum effect. Table 7 shows the comparison of collective classification to local classification using Nissim’s framework and features, and Table 8 the equivalent comparison for Rahman and Ng’s approach.
The improvements using the additional local features over the original local classifiers are statistically significant in all cases. In particular, the inclusion of semantic classes improves mediated/knowledge and mediated/func, and comparative anaphora are recognised highly reliably via a small set of comparative markers.
The hasChild relation leads to significant improvement in accuracy over local classification in all cases, showing the value of collective classification. The improvement here is centered on the categories mediated/synt (in both cases) and mediated/aggregate (for Nissim+ol+hasChild), as well as their distinction from new.10 It is also interesting that collective classification with a concise feature set and a simple decision tree, as used in Nissim+ol+hasChild, performs as well as RahmanNg+ol+hasChild, which uses thousands of unigram and tree features and a more sophisticated local classifier. It also shows more consistent improvements across all fine-grained classes.
The precedes relation does not lead to any further improvement. We investigated several variations of the precedence link, such as restricting it to certain grammatical relations or taking into account definiteness or NP type, but none of them led to any improvement. We think there are two reasons for this lack of success. First, the precedence of mediated vs. new mentions does not follow a clear order and is therefore not a very predictive feature (see Table 6). At first, this seems to contradict studies such as Cahill and Riester (2009) that find a variety of precedences according to information status. However, many of the clearest precedences they find are more specific variants of the old >p mediated or old >p new precedence, or they are preferences at an even finer level than the one we annotate, including, for example, the identification of generics. Second, the clear old >p mediated and old >p new preferences are partially already captured by the local features, especially grammatical role: subjects, for example, are often both old and early in the sentence.

10 For RahmanNg+ol+hasChild, the aggregate class suffers under collective classification. We hypothesise that this is an artefact of the one-vs-all training/testing for rare categories.
With regard to fine-grained classification, many categories, including comparative anaphora, are identified quite reliably, especially in the multiclass classification setting (Nissim+ol+hasChild). Bridging seems to be by far the most difficult category to identify, with final best F-measures still very low. Most bridging mentions do not have any clear internal structure or external syntactic context that signals their presence. Instead, they rely more on lexical and world knowledge for recognition. Unigrams could potentially encapsulate some of this lexical knowledge but, without generalization, are too sparse for a relatively rare category such as bridging (6% of all mentions) to perform well. The difficulty of bridging recognition is an important insight of this paper, as it casts doubt on the strategy in previous research of concentrating almost exclusively on antecedent selection (see Section 2).

Previous work on classifying information status (Nissim, 2006; Rahman and Ng, 2011) is restricted to coarse-grained classification and focuses on conversational dialogue. We here introduce the task of classifying fine-grained information status and work on written text. We add a fine-grained information status layer to the Wall Street Journal portion of the OntoNotes corpus. We claim that the information status of a mention depends not only on the mention itself but also on other mentions in the vicinity, and we solve the task by collectively classifying the information status of all mentions. Our approach strongly outperforms reimplementations of previous work.
Katja Markert, Yufang Hou, Michael Strube

6 conclusions : We presented a new approach to information status classification in written text, for which we also provide the first reliably annotated English-language corpus. Based on linguistic intuition, we define features for classifying mentions collectively. We show that our collective classification approach outperforms the state of the art in coarse-grained IS classification by about 10% (Nissim, 2006) and 5% (Rahman and Ng, 2011) accuracy. The gain is almost entirely due to improvements in distinguishing between new and mediated mentions. For the latter, we also report what are, to our knowledge, the first fine-grained IS classification results.

Since the work reported in this paper relied, following Nissim (2006) and Rahman and Ng (2011), on gold standard mentions and syntactic annotations, we plan to perform experiments with predicted mentions as well. We also have to improve the recognition of bridging, ideally combining recognition and antecedent selection into a complete resolution component. In addition, we plan to integrate IS resolution with our coreference resolution system (Cai et al., 2011) to provide a more comprehensive discourse processing system.

Acknowledgements. Katja Markert received a Fellowship for Experienced Researchers from the Alexander von Humboldt Foundation and Yufang Hou is funded by a PhD scholarship from the Research Training Group Coherence in Language Processing at Heidelberg University. We thank the Heidelberg Institute for Theoretical Studies for hosting Katja Markert and funding the annotation study, and the annotators for their diligent work.

References:
Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596.
Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34.
Betty J. Birner and Gregory Ward. 1998. Information Status and Noncanonical Word Order in English. John Benjamins, Amsterdam, The Netherlands.
Aoife Cahill and Arndt Riester. 2009. Incorporating information status into generation ranking. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing.
Jie Cai, Éva Mújdricza-Maydt and Michael Strube. 2011. Unrestricted coreference resolution via global hypergraph partitioning. In Proceedings of the Shared Task of the 15th Conference on Computational Natural Language Learning, Portland, Oreg., 23–24 June 2011.
Jean Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249–254.
Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14, pages 625–632. MIT Press, Cambridge, Mass.
Pascal Denis and Jason Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics.
Katja Filippova and Michael Strube. 2007. Generating constituent order in German clauses. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 320–327.
Claire Gardent and Hélène Manuélian. 2005. Création d’un corpus annoté pour le traitement des descriptions définies. Traitement Automatique des Langues, 46(1):115–140.
Barbara J. Grosz, Aravind K. Joshi and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225.
Ivana Kruijff-Korbayová and Mark Steedman. 2003. Discourse and information structure. Journal of Logic, Language and Information, 12(3):149–259.
Knud Lambrecht. 1994. Information Structure and Sentence Form. Cambridge University Press, Cambridge, U.K.
Qing Lu and Lise Getoor. 2003. Link-based classification. In Proceedings of the 20th International Conference on Machine Learning, pages 496–503.
Sofus A. Macskassy and Foster Provost. 2007. Classification in networked data: A toolkit and a univariate case study. Journal of Machine Learning Research, 8:935–983.
Katja Markert and Malvina Nissim. 2005. Comparing knowledge sources for nominal anaphora resolution. Computational Linguistics, 31(3):367–401.
Josef Meyer and Robert Dale. 2002. Mining a corpus to support associative anaphora resolution. In Proceedings of the 4th International Conference on Discourse Anaphora and Anaphor Resolution.
Natalia M. Modjeska, Katja Markert and Malvina Nissim. 2003. Using the web in machine learning for other-anaphora resolution. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing.
Ani Nenkova, Jason Brenier, Anubha Kothari, Sasha Calhoun, Laura Whitton, David Beaver and Dan Jurafsky. 2007. To memorize or to predict: Prominence labeling in conversational speech. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics.
Vincent Ng. 2009. Graph-cut-based anaphoricity determination for coreference resolution. In Proceedings of Human Language Technologies 2009: The Conference of the North American Chapter of the Association for Computational Linguistics.
Malvina Nissim, Shipara Dingare, Jean Carletta and Mark Steedman. 2004. An annotation scheme for information status in dialogue. In Proceedings of the 4th International Conference on Language Resources and Evaluation.
Malvina Nissim. 2006. Learning information status of discourse entities. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 94–102.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics.
Massimo Poesio, Rahul Mehta, Axel Maroudas and Janet Hitzeman. 2004. Learning to resolve bridging references. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 143–150.
Massimo Poesio. 2004. The MATE/GNOME proposals for anaphoric annotation, revisited. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue, pages 154–162.
Scott Prevost. 1996. An information structural approach to spoken language generation. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 294–301.
Ellen F. Prince. 1981. Towards a taxonomy of given-new information. In P. Cole, editor, Radical Pragmatics, pages 223–255. Academic Press, New York, N.Y.
Ellen F. Prince. 1992. The ZPG letter: Subjects, definiteness, and information-status. In W.C. Mann and S.A. Thompson, editors, Discourse Description: Diverse Linguistic Analyses of a Fund-Raising Text, pages 295–325. John Benjamins, Amsterdam.
Altaf Rahman and Vincent Ng. 2011. Learning the information status of noun phrases in spoken dialogues. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1069–1080.
Arndt Riester, David Lorenz and Nina Seemann. 2010. A recursive annotation scheme for referential information status. In Proceedings of the 7th International Conference on Language Resources and Evaluation, pages 717–722.
Julia Ritz, Stefanie Dipper and Michael Götze. 2008. Annotation of information structure: An evaluation across different types of texts. In Proceedings of the 6th International Conference on Language Resources and Evaluation.
Ryohei Sasano and Sadao Kurohashi. 2009. A probabilistic model for associative anaphora resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.
Advaith Siddharthan, Ani Nenkova and Kathleen McKeown. 2011. Information status distinctions and referring expressions: An empirical study of references to people in news summaries. Computational Linguistics, 37(4):811–842.
Swapna Somasundaran, Galileo Namata, Janyce Wiebe and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.
Ben Taskar, Pieter Abbeel and Daphne Koller. 2002. Discriminative probabilistic models for relational data. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, pages 485–492.
Renata Vieira and Massimo Poesio. 2000. An empirically-based system for processing definite descriptions. Computational Linguistics, 26(4):539–593.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin and Ann Houston. 2011. OntoNotes release 4.0.
Yiming Yang, Seán Slattery and Rayid Ghani. 2002. A study of approaches to hypertext categorization. Journal of Intelligent Information Systems, 18(2–3):219–241.
Guodong Zhou and Fang Kong. 2009. Global learning of noun phrase anaphoricity in coreference resolution via label propagation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 978–986.
|
1 introduction :The builders of affective lexica face the vexing task of distilling the many and varied pragmatic uses of a word or concept into an overall semantic measure of affect. The task is greatly complicated by the fact that in each context of use, speakers may implicitly agree to focus on just a subset of the salient features of a concept, and it is these features that determine contextual affect. Naturally, disagreements arise when speakers do not implicitly arrive at such a consensus, as when people disagree about hackers: advocates often focus on qualities that emphasize curiosity or technical virtuosity, while opponents focus on qualities that emphasize criminality and a disregard for the law. In each case, it is the same concept, Hacker, that is being described, yet speakers can focus on different qualities to arrive at different affective stances.
Any gross measure of affect (such as e.g., that hackers are good or bad) must thus be grounded in a nuanced model of the stereotypical properties and behaviors of the underlying word-concept. As different stereotypical qualities are highlighted or
de-emphasized in a given context – a particular metaphor, say, might describe hackers as terrorists or hackers as artists – we need to be able to recalculate the perceived affect of the word-concept.
This paper presents such a stereotype-grounded model of the affective lexicon. After reviewing the relevant background in section 2, we present the basis of the model in section 3. Here we describe how a large body of feature-rich stereotypes is acquired from the web and from local n-grams. The model is evaluated in section 4. We conclude by showing the utility of the model to that most contextual of NLP phenomena – affective metaphor.
|
2 related work and ideas :In its simplest form, an affect lexicon assigns an affective score – along one or more dimensions – to each word or sense. For instance, Whissell’s (1989) Dictionary of Affect (or DoA) assigns a trio of scores to each of its 8000+ words to describe three psycholinguistic dimensions: pleasantness, activation and imagery. In the DoA, the lowest pleasantness score of 1.0 is assigned to words like abnormal and ugly, while the highest, 3.0, is assigned to words like wedding and winning. Though Whissell’s DoA is based on human ratings, Turney (2002) shows how affective valence can be derived from measures of word association in web texts. Human intuitions are prized in matters of lexical affect. For reliable results on a large scale, Mohammad & Turney (2010) and Mohammad & Yang (2011) thus used Mechanical Turk to elicit human ratings of the emotional content of words. Ratings were sought along the eight dimensions identified in Plutchik (1980) as primary emotions: trust, anger, anticipation, disgust, fear, joy, sadness and surprise. Automated tests were used to exclude unsuitable raters. In all, 24,000+ word-sense pairs were annotated by five different raters.
Liu et al. (2003) also present a multidimensional affective model that uses the six basic emotion categories of Ekman (1993) as its dimensions: happy, sad, angry, fearful, disgusted and surprised. These authors base estimates of affect on the contents of Open Mind, a common-sense knowledge base (Singh, 2002) harvested from contributions of web volunteers. These contents are treated as sentential objects, and a range of NLP models is used to derive affective labels for the subset of contents (~10%) that appear to convey an emotional stance. These labels are then propagated to related concepts (e.g., excitement is propagated from rollercoasters to amusement parks) so that the implicit affect of many other concepts can be determined. Strapparava and Valitutti (2004) provide a set of affective annotations for a subset of WordNet’s synsets in a resource called Wordnet-affect. The annotation labels, called a-labels, focus on the cognitive dynamics of emotion, allowing one to distinguish e.g. between words that denote an emotion-eliciting situation and those that denote an emotional response. Esuli and Sebastiani (2006) also build directly on WordNet as their lexical platform, using a semi-supervised learning algorithm to assign a trio of numbers – positivity, negativity and neutrality – to word senses in their newly derived resource, SentiWordNet. (Wordnet-affect also supports these three dimensions as a-labels, and adds a fourth, ambiguous). Esuli & Sebastiani (2007) improve on their affect scores by running a variant of the PageRank algorithm (see also Mihalcea and Tarau, 2004) on the graph structure that tacitly connects word-senses in WordNet to each other via the words used in their textual glosses. These lexica attempt to capture the affective profile of a word/sense when it is used in its most normative and stereotypical guise, but they do so without an explicit model of stereotypical meaning. 
Veale & Hao (2007) describe a web-based approach to acquiring such a model. They note that since the simile pattern “as ADJ as DET NOUN” presupposes that NOUN is an exemplar of ADJness, it follows that ADJ must be a highly salient property of NOUN. Veale & Hao harvested tens of thousands of instances of this pattern from the Web, to extract sets of adjectival properties for thousands of commonplace nouns. They show that if one estimates the pleasantness of a term like snake or artist as a weighted average of the pleasantness of its properties (like sneaky or creative) in a resource like Whissell’s DoA, then the estimated scores show a reliable correlation with the DoA’s own scores. It thus makes computational sense to calculate the affect of a word-concept as a function of the affect of its most salient properties. Veale (2011) later built on this work to show how a property-rich stereotypical representation could be used for non-literal matching and retrieval of creative texts, such as metaphors and analogies. Both Liu et al. (2003) and Veale & Hao (2010) argue for the importance of common-sense knowledge in the determination of affect. We incorporate ideas from both here, while choosing to build mainly on the latter, to construct a nuanced, two-level model of the affective lexicon.
|
3 an affective lexicon of stereotypes :We construct the stereotype-based lexicon in two stages. For the first layer, a large collection of stereotypical descriptions is harvested from the web. As in Liu et al. (2003), our goal is to acquire a lightweight common-sense representation of many everyday concepts. For the second layer, we link these common-sense qualities in a support graph that captures how they mutually support each other in their co-description of a stereotypical idea. From this graph we can estimate pleasantness and unpleasantness valence scores for each property and behavior, and for the stereotypes that exhibit them. Expanding on the approach in Veale (2011), we use two kinds of query for harvesting stereotypes from the web. The first, “as ADJ as a NOUN”, acquires typical adjectival properties for noun concepts; the second, “VERB+ing like a NOUN” and “VERB+ed like a NOUN”, acquires typical verb behaviors. Rather than use a wildcard * in both positions (ADJ and NOUN, or VERB and NOUN), which gives limited results with a search engine like Google, we generate fully instantiated similes from hypotheses generated via the Google n-grams (Brants & Franz, 2006). Thus, from the 3-gram “a drooling zombie” we generate the query “drooling like a zombie”, and from the 3-gram “a mindless zombie” we generate “as mindless as a zombie”. Only those queries that retrieve one or more web documents via the Google API are retained, as these indicate the most promising associations. This still gives us over 250,000 web-validated simile associations for our stereotypical model, and we filter these manually, to ensure that the lexicon is both reusable and
of the highest quality. We obtain rich descriptions for many stereotypical ideas, such as Baby, which is described via 163 typical properties and behaviors like crying, drooling and guileless. After this phase, the lexicon maps each of 9,479 stereotypes to a mix of 7,898 properties and behaviors.
We construct the second level of the lexicon by automatically linking these properties and behaviors to each other in a support graph. The intuition here is that properties which reinforce each other in a single description (e.g. “as lush and green as a jungle” or “as hot and humid as a sauna”) are more likely to have a similar affect than properties which do not support each other. We first gather all Google 3-grams in which a pair of stereotypical properties or behaviors X and Y are linked via coordination, as in “hot and humid” or “kicking and screaming”. A bidirectional link between X and Y is added to the support graph if one or more stereotypes in the lexicon contain both X and Y. If this is not so, we also ask whether both descriptors ever reinforce each other in Web similes, by posing the web query “as X and Y as”. If this query has nonzero hits, we still add a link between X and Y.
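The two linking criteria can be sketched as follows, with toy data standing in for the Google 3-grams, and `web_hits` a hypothetical stand-in for the search-engine check:

```python
def build_support_graph(stereotypes, coordinations, web_hits):
    """Link properties X and Y when they co-describe a stereotype, or
    are coordinated ("X and Y") with web support.

    stereotypes:   {noun: set of typical properties/behaviors}
    coordinations: pairs (X, Y) seen in "X and Y" 3-grams
    web_hits:      predicate for the reinforcing query "as X and Y as"
    """
    graph = {}
    def link(x, y):  # bidirectional edge
        graph.setdefault(x, set()).add(y)
        graph.setdefault(y, set()).add(x)
    for x, y in coordinations:
        # Keep the link if some stereotype exhibits both descriptors...
        if any(x in props and y in props for props in stereotypes.values()):
            link(x, y)
        # ...or if the reinforcing simile pattern is attested on the web.
        elif web_hits(f"as {x} and {y} as"):
            link(x, y)
    return graph
```

For example, "hot and humid" is linked because the Sauna stereotype exhibits both properties, while a pair supported by no stereotype falls back to the web query.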
Let N denote this support graph, and N(p) denote the set of neighboring terms to p, that is, the set of properties and behaviors that can mutually support a property p. Since every edge in N represents an affective context, we can estimate the likelihood that p is ever used in a positive or negative context if we know the positive or negative affect of enough members of N(p). So if we label enough vertices of N with + / – labels, we can interpolate a positive/negative affect for all vertices p in N.
We thus build a reference set -R of typically negative words, and a set +R of typically positive words. Given a few seed members of -R (such as sad, evil, etc.) and a few seed members of +R (such as happy, wonderful, etc.), we find many other candidates to add to +R and -R by considering neighbors of these seeds in N. After just three iterations, +R and -R contain ~2000 words each. For a property p, we define N+(p) and N-(p) as
(1) N+(p) = N(p) ∩ +R
(2) N-(p) = N(p) ∩ -R
We assign pos/neg valence scores to each property p by interpolating from reference values to their neighbors in N. Unlike that of Takamura et al. (2005), the approach is non-iterative and involves
no feedback between the nodes of N, and thus, no inter-dependence between adjacent affect scores:
(3) pos(p) = |N+(p)| / |N+(p) ∪ N-(p)|
(4) neg(p) = 1 - pos(p)
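Equations (1)-(4) reduce to counting labelled neighbours; a direct transcription follows (the 0.5 fallback for a property with no labelled neighbours is our assumption, not part of the model):

```python
def valence(p, graph, pos_ref, neg_ref):
    """pos(p)/neg(p) per equations (1)-(4): the fraction of p's
    labelled neighbours N(p) that fall in +R vs -R."""
    n = graph.get(p, set())
    n_pos = n & pos_ref               # (1) N+(p)
    n_neg = n & neg_ref               # (2) N-(p)
    labelled = n_pos | n_neg
    if not labelled:                  # no labelled neighbours: assume neutral
        return 0.5, 0.5
    pos = len(n_pos) / len(labelled)  # (3)
    return pos, 1.0 - pos             # (4) neg(p) = 1 - pos(p)
```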
If a term S denotes a stereotypical idea and is described via a set of typical properties and behaviors typical(S) in the lexicon, then:
(5) pos(S) = Σp∈typical(S) pos(p) / |typical(S)|
(6) neg(S) = 1 - pos(S)
Thus, (5) and (6) calculate the mean affect of the properties and behaviors of S, as represented via typical(S). We can now use (3) and (4) to separate typical(S) into those elements that are more negative than positive (putting an unpleasant spin on S in context) and those that are more positive than negative (putting a pleasant spin on S in context):
(7) posTypical(S) = {p | p ∈ typical(S) ∧ pos(p) > 0.5}
(8) negTypical(S) = {p | p ∈ typical(S) ∧ neg(p) > 0.5}
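Given per-property scores from (3), equations (5)-(8) follow directly; a minimal sketch with a hypothetical score table `pos`:

```python
def stereotype_affect(props, pos):
    """Equations (5)-(8): mean affect of a stereotype's typical
    properties, and the split into positive/negative subsets.

    props: typical(S); pos: dict mapping property -> pos(p) score.
    """
    pos_s = sum(pos[p] for p in props) / len(props)        # (5)
    neg_s = 1.0 - pos_s                                    # (6)
    pos_typical = {p for p in props if pos[p] > 0.5}       # (7)
    neg_typical = {p for p in props if 1 - pos[p] > 0.5}   # (8)
    return pos_s, neg_s, pos_typical, neg_typical
```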
|
4 empirical evaluation :In the process of populating +R and -R, we identify a reference set of 478 positive stereotype nouns (such as saint and hero) and 677 negative stereotype nouns (such as tyrant and monster). We can use these reference stereotypes to test the effectiveness of (5) and (6), and thus, indirectly, of (3) and (4) and of the affective lexicon itself. Thus, we find that 96.7% of the stereotypes in +R are correctly assigned a positivity score greater than 0.5 (pos(S) > neg(S)) by (5), while 96.2% of the stereotypes in -R are correctly assigned a negativity score greater than 0.5 (neg(S) > pos(S)) by (6).
We can also use +R and -R as a gold standard for evaluating the separation of typical(S) into distinct positive and negative subsets posTypical(S) and negTypical(S) via (7) and (8). The lexicon contains 6,230 stereotypes with at least one property in +R∪-R. On average, +R∪-R contains 6.51 of the properties of each of these stereotypes, where, on average, 2.95 are in +R while 3.56 are in -R.
In a perfect separation, (7) should yield a positive subset that contains only those properties in
typical(S)∩+R, while (8) should yield a negative subset that contains only those in typical(S)∩-R.
Viewing the problem as a retrieval task then, in which (7) and (8) are used to retrieve distinct positive and negative property sets for a stereotype S, we report the encouraging results of Table 1 above.
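Viewed as retrieval, per-stereotype precision and recall against the gold subsets typical(S)∩+R and typical(S)∩-R reduce to set overlap; a generic sketch, not the authors' evaluation script:

```python
def retrieval_scores(retrieved, gold):
    """Precision/recall of a retrieved property set against a gold set.
    Empty retrieved/gold sets are scored as 1.0 by convention (an assumption)."""
    tp = len(retrieved & gold)
    precision = tp / len(retrieved) if retrieved else 1.0
    recall = tp / len(gold) if gold else 1.0
    return precision, recall
```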
|
5 re-shaping affect in figurative contexts :The Google n-grams are a rich source of affective metaphors of the form Target is Source, such as “politicians are crooks”, “Apple is a cult”, “racism is a disease” and “Steve Jobs is a god”. Let src(T) denote the set of stereotypes that are commonly used to describe T, where commonality is defined as the presence of the corresponding copula metaphor in the Google n-grams. Thus, for example:
src(racism) = {problem, disease, poison, sin, crime, ideology, weapon, …}
src(Hitler) = {monster, criminal, tyrant, idiot, madman, vegetarian, racist, …}
Let srcTypical(T) denote the aggregation of all properties ascribable to T via metaphors in src(T):
(9) srcTypical(T) = ∪M∈src(T) typical(M)
We can also use the posTypical and negTypical variants in (7) and (8) to focus only on metaphors that project positive or negative qualities onto T. In effect, (9) provides a feature representation for a topic T as viewed through the prism of metaphor. This is useful when the source S in the metaphor T is S is not a known stereotype in the lexicon, as happens e.g. in Apple is Scientology. We can also estimate whether a given term S is more positive than negative by taking the average pos/neg valence of src(S). Such estimates are 87% correct when evaluated using +R and -R examples.
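Equation (9) is a simple union over metaphor vehicles; a sketch with toy `src` and `typical` mappings standing in for the harvested data:

```python
def src_typical(t, src, typical):
    """Equation (9): union of typical(M) over all M in src(T)."""
    props = set()
    for m in src.get(t, []):
        props |= typical.get(m, set())
    return props
```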
The properties and behaviors that are contextually relevant to the interpretation of T is S are given by
(10) salient(T,S) = (srcTypical(T) ∪ typical(T)) ∩ (srcTypical(S) ∪ typical(S))
In the context of T is S, the figurative perspective M ∈ src(S)∪src(T)∪{S} is deemed apt for T if:
(11) apt(M, T,S) = |salient(T,S) ∩ typical(M)| > 0
and the degree to which M is apt for T is given by:
(12) aptness(M,T,S) = |salient(T,S) ∩ typical(M)| / |typical(M)|
We can construct an interpretation for T is S by considering not just {S}, but the stereotypes in src(T) that are apt for T in the context of T is S, as well as the stereotypes that are commonly used to describe S – that is, src(S) – that are also apt for T:
(13) interpretation(T, S) = {M|M ∈ src(T)∪src(S)∪{S} ∧ apt(M, T, S)}
The elements {Mi} of interpretation(T, S) can now be sorted by aptness(Mi, T, S) to produce a ranked list of interpretations (M1, M2 … Mn). For any interpretation M, the salient features of M are thus:
(14) salient(M, T,S) = typical(M) ∩ salient (T,S)
So interpretation(T, S) is an expansion of the affective metaphor T is S that includes the common metaphors that are consistent with T qua S. For instance, “Google is -Microsoft” (where - indicates a negative spin) produces {monopoly, threat, bully, giant, dinosaur, demon, …}. For each Mi in interpretation(T, S), salient(Mi, T, S) is an expansion of Mi that includes all of the qualities that are apt for T qua Mi (e.g. threatening, sprawling, evil, etc.).
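The interpretation machinery of (9)–(14) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' implementation: the `typical` and `src` entries below are tiny invented stand-ins for the harvested lexicon and n-gram metaphors.

```python
# Hypothetical sketch of formulas (9)-(14) with toy lexicon entries.
typical = {
    "cult":     {"secretive", "controlling", "devoted"},
    "company":  {"profitable", "controlling", "innovative"},
    "religion": {"devoted", "secretive", "ritualistic"},
}
src = {
    "Apple":       ["cult", "religion"],   # copula metaphors seen in n-grams
    "Scientology": ["cult"],
}

def src_typical(t):
    # (9): union of the typical features of every stereotype used of t
    return set().union(*(typical[m] for m in src.get(t, [])))

def salient(t, s):
    # (10): features contextually relevant to the metaphor "t is s"
    return ((src_typical(t) | typical.get(t, set()))
            & (src_typical(s) | typical.get(s, set())))

def aptness(m, t, s):
    # (12): fraction of m's typical features that are salient for "t is s"
    overlap = salient(t, s) & typical[m]
    return len(overlap) / len(typical[m]) if typical[m] else 0.0

def interpretation(t, s):
    # (13), ranked per the aptness of each perspective as in (11)-(12)
    candidates = set(src.get(t, [])) | set(src.get(s, [])) | {s}
    apt = [m for m in candidates if m in typical and aptness(m, t, s) > 0]
    return sorted(apt, key=lambda m: aptness(m, t, s), reverse=True)

print(interpretation("Apple", "Scientology"))  # ['cult', 'religion']
```

With these toy entries, cult is maximally apt (all of its features are salient for Apple qua Scientology), while religion ranks second.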
Abstract: Since we can ‘spin’ words and concepts to suit our affective needs, context is a major determinant of the perceived affect of a word or concept. We view this re-profiling as a selective emphasis or de-emphasis of the qualities that underpin our shared stereotype of a concept or a word meaning, and construct our model of the affective lexicon accordingly. We show how a large body of affective stereotypes can be acquired from the web, and also show how these are used to create and interpret affective metaphors.
Author: Tony Veale
References:
Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. Linguistic Data Consortium.
Paul Ekman. 1993. Facial expression of emotion. American Psychologist, 48:384–392.
Andrea Esuli and Fabrizio Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In Proc. of LREC-2006, the 5th Conference on Language Resources and Evaluation, 417–422.
Andrea Esuli and Fabrizio Sebastiani. 2007. PageRanking WordNet synsets: An application to opinion mining. In Proc. of ACL-2007, the 45th Annual Meeting of the Association for Computational Linguistics.
Hugo Liu, Henry Lieberman and Ted Selker. 2003. A model of textual affect sensing using real-world knowledge. In Proceedings of the 8th International Conference on Intelligent User Interfaces, 125–132.
Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order to texts. In Proceedings of EMNLP-04, the 2004 Conference on Empirical Methods in Natural Language Processing.
Saif F. Mohammad and Peter D. Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotional lexicon. In Proceedings of the NAACL-HLT 2010 Workshop on Computational Approaches to Analysis and Generation.
Saif F. Mohammad and Tony Yang. 2011. Tracking sentiment in mail: How genders differ on emotional axes. In Proceedings of the ACL 2011 WASSA Workshop on Computational Approaches to Subjectivity and Sentiment Analysis, Portland, Oregon.
Robert Plutchik. 1980. A general psycho-evolutionary theory of emotion. Emotion: Theory, Research and Experience, 2(1–2):1–135.
Push Singh. 2002. The public acquisition of commonsense knowledge. In Proceedings of the AAAI Spring Symposium on Acquiring (and Using) Linguistic (and World) Knowledge for Information Access, Palo Alto, CA.
Carlo Strapparava and Alessandro Valitutti. 2004. WordNet-Affect: An affective extension of WordNet. In Proceedings of LREC-2004, the 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal.
Hiroya Takamura, Takashi Inui and Manabu Okumura. 2005. Extracting semantic orientation of words using spin model. In Proceedings of the 43rd Annual Meeting of the ACL, 133–140.
P. D. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of ACL-2002.
T. Veale and Y. Hao. 2007. Making lexical ontologies functional and context-sensitive. In Proceedings of ACL-2007.
T. Veale and Y. Hao. 2010. Detecting ironic intent in creative comparisons. In Proceedings of ECAI-2010.
T. Veale. 2011. Creative language retrieval: A robust hybrid of information retrieval and linguistic creativity. In Proceedings of ACL-2011, the 49th Annual Meeting of the Association for Computational Linguistics.
C. Whissell. 1989. The dictionary of affect in language. In R. Plutchik and H. Kellerman (Eds.), Emotion: Theory and Research. Harcourt Brace.
6 Concluding remarks

Metaphor is the perfect tool for influencing the perceived affect of words and concepts in context. The web application Metaphor Magnet provides a proof-of-concept demonstration of this re-shaping process at work, using the stereotype lexicon of §3, the selective highlighting of (7)–(8), and the model of metaphor in (9)–(14). It can be accessed at: http://boundinanutshell.com/metaphor-magnet
Acknowledgements

This research was supported by the WCU (World Class University) program under the National Research Foundation of Korea, and funded by the Ministry of Education, Science and Technology of Korea (Project No: R31-30007).
1 Introduction

The builders of affective lexica face the vexing task of distilling the many and varied pragmatic uses of a word or concept into an overall semantic measure of affect. The task is greatly complicated by the fact that in each context of use, speakers may implicitly agree to focus on just a subset of the salient features of a concept, and it is these features that determine contextual affect. Naturally, disagreements arise when speakers do not implicitly arrive at such a consensus, as when people disagree about hackers: advocates often focus on qualities that emphasize curiosity or technical virtuosity, while opponents focus on qualities that emphasize criminality and a disregard for the law. In each case, it is the same concept, Hacker, that is being described, yet speakers can focus on different qualities to arrive at different affective stances.
Any gross measure of affect (e.g., that hackers are good or bad) must thus be grounded in a nuanced model of the stereotypical properties and behaviors of the underlying word-concept. As different stereotypical qualities are highlighted or de-emphasized in a given context – a particular metaphor, say, might describe hackers as terrorists or hackers as artists – we need to be able to recalculate the perceived affect of the word-concept.
This paper presents such a stereotype-grounded model of the affective lexicon. After reviewing the relevant background in section 2, we present the basis of the model in section 3. Here we describe how a large body of feature-rich stereotypes is acquired from the web and from local n-grams. The model is evaluated in section 4. We conclude by showing the utility of the model to that most contextual of NLP phenomena – affective metaphor.

2 Related work and ideas

In its simplest form, an affect lexicon assigns an affective score – along one or more dimensions – to each word or sense. For instance, Whissell’s (1989) Dictionary of Affect (or DoA) assigns a trio of scores to each of its 8000+ words to describe three psycholinguistic dimensions: pleasantness, activation and imagery. In the DoA, the lowest pleasantness score of 1.0 is assigned to words like abnormal and ugly, while the highest, 3.0, is assigned to words like wedding and winning. Though Whissell’s DoA is based on human ratings, Turney (2002) shows how affective valence can be derived from measures of word association in web texts. Human intuitions are prized in matters of lexical affect. For reliable results on a large scale, Mohammad & Turney (2010) and Mohammad & Yang (2011) thus used the Mechanical Turk to elicit human ratings of the emotional content of words. Ratings were sought along the eight dimensions identified in Plutchik (1980) as primary emotions: trust, anger, anticipation, disgust, fear, joy, sadness and surprise. Automated tests were used to exclude unsuitable raters. In all, 24,000+ word-sense pairs were annotated by five different raters.
Liu et al. (2003) also present a multidimensional affective model that uses the six basic emotion categories of Ekman (1993) as its dimensions: happy, sad, angry, fearful, disgusted and surprised. These authors base estimates of affect on the contents of Open Mind, a common-sense knowledgebase (Singh, 2002) harvested from contributions of web volunteers. These contents are treated as sentential objects, and a range of NLP models is used to derive affective labels for the subset of contents (~10%) that appear to convey an emotional stance. These labels are then propagated to related concepts (e.g., excitement is propagated from rollercoasters to amusement parks) so that the implicit affect of many other concepts can be determined. Strapparava and Valitutti (2004) provide a set of affective annotations for a subset of WordNet’s synsets in a resource called Wordnet-affect. The annotation labels, called a-labels, focus on the cognitive dynamics of emotion, allowing one to distinguish e.g. between words that denote an emotion-eliciting situation and those that denote an emotional response. Esuli and Sebastiani (2006) also build directly on WordNet as their lexical platform, using a semi-supervised learning algorithm to assign a trio of numbers – positivity, negativity and neutrality – to word senses in their newly derived resource, SentiWordNet. (Wordnet-affect also supports these three dimensions as a-labels, and adds a fourth, ambiguous). Esuli & Sebastiani (2007) improve on their affect scores by running a variant of the PageRank algorithm (see also Mihalcea and Tarau, 2004) on the graph structure that tacitly connects word-senses in WordNet to each other via the words used in their textual glosses. These lexica attempt to capture the affective profile of a word/sense when it is used in its most normative and stereotypical guise, but they do so without an explicit model of stereotypical meaning.
Veale & Hao (2007) describe a web-based approach to acquiring such a model. They note that since the simile pattern “as ADJ as DET NOUN” presupposes that NOUN is an exemplar of ADJness, it follows that ADJ must be a highly salient property of NOUN. Veale & Hao harvested tens of thousands of instances of this pattern from the Web, to extract sets of adjectival properties for thousands of commonplace nouns. They show that if one estimates the pleasantness of a term like snake or artist as a weighted average of the pleasantness of its properties (like sneaky or creative) in a resource like Whissell’s DoA, then the estimated scores show a reliable correlation with the DoA’s own scores. It thus makes computational sense to calculate the affect of a word-concept as a function of the affect of its most salient properties. Veale (2011) later built on this work to show how a property-rich stereotypical representation could be used for non-literal matching and retrieval of creative texts, such as metaphors and analogies. Both Liu et al. (2003) and Veale & Hao (2010) argue for the importance of common-sense knowledge in the determination of affect. We incorporate ideas from both here, while choosing to build mainly on the latter, to construct a nuanced, two-level model of the affective lexicon.

3 An affective lexicon of stereotypes

We construct the stereotype-based lexicon in two stages. For the first layer, a large collection of stereotypical descriptions is harvested from the web. As in Liu et al. (2003), our goal is to acquire a lightweight common-sense representation of many everyday concepts. For the second layer, we link these common-sense qualities in a support graph that captures how they mutually support each other in their co-description of a stereotypical idea. From this graph we can estimate pleasantness and unpleasantness valence scores for each property and behavior, and for the stereotypes that exhibit them.
Expanding on the approach in Veale (2011), we use two kinds of query for harvesting stereotypes from the web. The first, “as ADJ as a NOUN”, acquires typical adjectival properties for noun concepts; the second, “VERB+ing like a NOUN” and “VERB+ed like a NOUN”, acquires typical verb behaviors. Rather than use a wildcard * in both positions (ADJ and NOUN, or VERB and NOUN), which gives limited results with a search engine like Google, we generate fully instantiated similes from hypotheses generated via the Google n-grams (Brants & Franz, 2006). Thus, from the 3-gram “a drooling zombie” we generate the query “drooling like a zombie”, and from the 3-gram “a mindless zombie” we generate “as mindless as a zombie”. Only those queries that retrieve one or more web documents via the Google API are taken to indicate promising associations. This still gives us over 250,000 web-validated simile associations for our stereotypical model, and we filter these manually, to ensure that the lexicon is both reusable and of the highest quality. We obtain rich descriptions for many stereotypical ideas, such as Baby, which is described via 163 typical properties and behaviors like crying, drooling and guileless. After this phase, the lexicon maps each of 9,479 stereotypes to a mix of 7,898 properties and behaviors.
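The 3-gram-to-query step above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; in particular, treating an "-ing" modifier as a behavior hypothesis is my simplifying assumption.

```python
# Hypothetical sketch: turn an "a/an MODIFIER NOUN" 3-gram into the
# fully instantiated simile queries of Section 3.
def simile_queries(three_gram):
    det, modifier, noun = three_gram.split()
    if det.lower() not in ("a", "an"):
        return []
    if modifier.endswith("ing"):                 # behavior hypothesis
        return [f'"{modifier} like a {noun}"']
    return [f'"as {modifier} as a {noun}"']      # property hypothesis

print(simile_queries("a drooling zombie"))  # ['"drooling like a zombie"']
print(simile_queries("a mindless zombie"))  # ['"as mindless as a zombie"']
```

Each generated query would then be issued via a search API, and only queries with nonzero hits would be retained as candidate stereotype associations.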
We construct the second level of the lexicon by automatically linking these properties and behaviors to each other in a support graph. The intuition here is that properties which reinforce each other in a single description (e.g. “as lush and green as a jungle” or “as hot and humid as a sauna”) are more likely to have a similar affect than properties which do not support each other. We first gather all Google 3-grams in which a pair of stereotypical properties or behaviors X and Y are linked via coordination, as in “hot and humid” or “kicking and screaming”. A bidirectional link between X and Y is added to the support graph if one or more stereotypes in the lexicon contain both X and Y. If this is not so, we also ask whether both descriptors ever reinforce each other in Web similes, by posing the web query “as X and Y as”. If this query has nonzero hits, we still add a link between X and Y.
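The edge-filtering logic just described can be sketched as below. This is a toy reconstruction under invented data, and the web-simile fallback query (“as X and Y as”) is omitted for brevity.

```python
# Illustrative sketch: build the support graph from coordination 3-grams
# like "hot and humid", keeping an edge X-Y only if some stereotype in the
# lexicon already exhibits both X and Y.
from collections import defaultdict

trigrams = [("hot", "and", "humid"), ("kicking", "and", "screaming"),
            ("hot", "and", "bothered")]
stereotypes = {"sauna": {"hot", "humid"}, "brat": {"kicking", "screaming"}}

def build_support_graph(trigrams, stereotypes):
    graph = defaultdict(set)
    for x, conj, y in trigrams:
        if conj != "and":
            continue
        # keep the edge only if X and Y co-describe some stereotype
        if any(x in props and y in props for props in stereotypes.values()):
            graph[x].add(y)   # bidirectional link
            graph[y].add(x)
    return graph

g = build_support_graph(trigrams, stereotypes)
print(sorted(g["hot"]))  # ['humid'] -- "bothered" is not stereotype-backed
```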
Let N denote this support graph, and N(p) denote the set of neighboring terms to p, that is, the set of properties and behaviors that can mutually support a property p. Since every edge in N represents an affective context, we can estimate the likelihood that p is ever used in a positive or negative context if we know the positive or negative affect of enough members of N(p). So if we label enough vertices of N with + / – labels, we can interpolate a positive/negative affect for all vertices p in N.
We thus build a reference set -R of typically negative words, and a set +R of typically positive words. Given a few seed members of -R (such as sad, evil, etc.) and a few seed members of +R (such as happy, wonderful, etc.), we find many other candidates to add to +R and -R by considering neighbors of these seeds in N. After just three iterations, +R and -R contain ~2000 words each. For a property p, we define N+(p) and N-(p) as
(1) N+(p) = N(p) ∩ +R
(2) N-(p) = N(p) ∩ -R
We assign pos/neg valence scores to each property p by interpolating from reference values to their neighbors in N. Unlike that of Takamura et al. (2005), the approach is non-iterative and involves no feedback between the nodes of N, and thus, no inter-dependence between adjacent affect scores:
(3) pos(p) = |N+(p)| / |N+(p) ∪ N-(p)|
(4) neg(p) = 1 - pos(p)
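Equations (1)–(4) amount to a one-step neighborhood interpolation, which can be sketched as follows. The graph and seed sets here are tiny invented stand-ins, not the actual support graph or ±R reference sets.

```python
# Minimal sketch of equations (1)-(4): interpolate a pos/neg valence for a
# property p from its labeled neighbors in the support graph N.
N = {
    "humid": {"hot", "sticky", "lush"},
    "lush":  {"green", "humid"},
}
POS_R = {"lush", "green"}      # toy +R reference words
NEG_R = {"sticky", "hot"}      # toy -R reference words

def pos(p):
    n_pos = N.get(p, set()) & POS_R              # (1) N+(p)
    n_neg = N.get(p, set()) & NEG_R              # (2) N-(p)
    labeled = n_pos | n_neg
    # (3); fall back to neutral 0.5 when no neighbor is labeled
    return len(n_pos) / len(labeled) if labeled else 0.5

def neg(p):
    return 1.0 - pos(p)                          # (4)

print(pos("humid"))  # 1 positive neighbor (lush) of 3 labeled -> 1/3
```

Because the score of each p depends only on the fixed ±R labels of its neighbors, no iteration or feedback is needed, matching the non-iterative design noted above.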
If a term S denotes a stereotypical idea and is described via a set of typical properties and behaviors typical(S) in the lexicon, then:
(5) pos(S) = (Σp∈typical(S) pos(p)) / |typical(S)|
(6) neg(S) = 1 - pos(S)
Thus, (5) and (6) calculate the mean affect of the properties and behaviors of S, as represented via typical(S). We can now use (3) and (4) to separate typical(S) into those elements that are more negative than positive (putting an unpleasant spin on S in context) and those that are more positive than negative (putting a pleasant spin on S in context):
(7) posTypical(S) = {p | p ∈ typical(S) ∧ pos(p) > 0.5}
(8) negTypical(S) = {p | p ∈ typical(S) ∧ neg(p) > 0.5}

4 Empirical evaluation

In the process of populating +R and -R, we identify a reference set of 478 positive stereotype nouns (such as saint and hero) and 677 negative stereotype nouns (such as tyrant and monster). We can use these reference stereotypes to test the effectiveness of (5) and (6), and thus, indirectly, of (3) and (4) and of the affective lexicon itself. Thus, we find that 96.7% of the stereotypes in +R are correctly assigned a positivity score greater than 0.5 (pos(S) > neg(S)) by (5), while 96.2% of the stereotypes in -R are correctly assigned a negativity score greater than 0.5 (neg(S) > pos(S)) by (6).
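The mean-valence and spin-separation formulas (5)–(8) can be sketched as below, reusing the Hacker example from the introduction; the per-property scores are invented toy values, not scores from the harvested lexicon.

```python
# Sketch of (5)-(8): mean valence of a stereotype and its positive/negative
# "spin" subsets, given per-property pos() scores (toy values).
prop_pos = {"creative": 0.9, "sneaky": 0.1, "curious": 0.8, "criminal": 0.05}
typical = {"hacker": {"creative", "sneaky", "curious", "criminal"}}

def pos_s(s):
    props = typical[s]
    return sum(prop_pos[p] for p in props) / len(props)        # (5)

def neg_s(s):
    return 1.0 - pos_s(s)                                      # (6)

def pos_typical(s):
    return {p for p in typical[s] if prop_pos[p] > 0.5}        # (7)

def neg_typical(s):
    return {p for p in typical[s] if 1 - prop_pos[p] > 0.5}    # (8)

print(sorted(pos_typical("hacker")))  # ['creative', 'curious']
print(sorted(neg_typical("hacker")))  # ['criminal', 'sneaky']
```

The two subsets are exactly the qualities an advocate or an opponent of hackers would emphasize, which is the contextual "spin" the section describes.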
We can also use +R and -R as a gold standard for evaluating the separation of typical(S) into distinct positive and negative subsets posTypical(S) and negTypical(S) via (7) and (8). The lexicon contains 6,230 stereotypes with at least one property in +R∪-R. On average, +R∪-R contains 6.51 of the properties of each of these stereotypes, where, on average, 2.95 are in +R while 3.56 are in -R.
In a perfect separation, (7) should yield a positive subset that contains only those properties in typical(S)∩+R, while (8) should yield a negative subset that contains only those in typical(S)∩-R.
Viewing the problem as a retrieval task then, in which (7) and (8) are used to retrieve distinct positive and negative property sets for a stereotype S, we report the encouraging results of Table 1 above.
1 introduction :In recent years, standardized examinations have proved a fertile source of evaluati(...TRUNCATED)
2 related work :The past work which is most similar to ours is derived from the lexical substitutio(...TRUNCATED)
3 sentence completion via language modeling :Perhaps the most straightforward approach to solving t(...TRUNCATED)
4 sentence completion via latent semantic analysis :Latent Semantic Analysis (LSA) (Deerwester et a(...TRUNCATED)
5 experimental results :We present results with two datasets. The first is taken from 11 Practice (...TRUNCATED)
This paper studies the problem of sentence-level semantic coherence by answering SAT-style sentence c(...TRUNCATED)
Authors: Geoffrey Zweig, John C. P(...TRUNCATED)
References: J. Bellegarda. Exploiting latent semantic information in statist(...TRUNCATED)
6 discussion :To verify that the differences in accuracy between the different algorithms are not s(...TRUNCATED)
7 conclusion :In this paper we have investigated methods for answering sentence-completion question(...TRUNCATED)
1 introduction :The rapid childhood development from a seemingly blank slate to language mastery is(...TRUNCATED)
2 data :To identify trends in child language learning we need a corpus of child speech samples, whi(...TRUNCATED)
3 experiments :Learning Individual Child Metrics Our first task is to predict the age at which a he(...TRUNCATED)
4 discussion :Our first set of experiments verified that we can achieve a decrease in mean squared (...TRUNCATED)
We propose a new approach for the creation of child language development metrics. A set of linguist(...TRUNCATED)
Authors: Sam Sahakian, Benjamin Sn(...TRUNCATED)
References: R.H. Baayen, R. Piepenbrock, L. Gulikers. The CELEX lexi(...TRUNCATED)
1 introduction :There is a deep tension in statistical modeling of grammatical structure between pr(...TRUNCATED)
2 probabilistic model :In the basic nonparametric TSG model, there is an independent DP for every g(...TRUNCATED)
3 inference :Given this model, our inference task is to explore optimal derivations underlying the (...TRUNCATED)
4 evaluation results :We use the standard Penn treebank methodology of training on sections 2–21 (...TRUNCATED)
5 conclusion :We described a nonparametric Bayesian inference scheme for estimating TIG grammars an(...TRUNCATED)
We present a Bayesian nonparametric model for estimating tree insertion grammars (TIG), building up(...TRUNCATED)
Authors: Elif Yamangil, Stuart M. (...TRUNCATED)
References: Xavier Carreras, Michael Collins, Terry Koo. TAG, dynami(...TRUNCATED)
acknowledgements :The first author was supported in part by a Google PhD Fellowship in Natural Lan(...TRUNCATED)
1 introduction :Domain adaptation has been shown useful to many natural language processing applica(...TRUNCATED)
2 related work :The work closely related to ours was done by Dai et al. (2007), where they proposed(...TRUNCATED)
3 our model :Intuitively, source-specific and target-specific features can be drawn together by min(...TRUNCATED)
4 consistency of multiple views :In this section, we present how the consistency of document cluste(...TRUNCATED)
5 experiments and results :Data and Setup
Cora (McCallum et al., 2000) is an online archive of com(...TRUNCATED)
We use multiple views for cross-domain document classification. The main idea is to strengthen the (...TRUNCATED)
Authors: Pei Yang, Wei Gao, (...TRUNCATED)
References: Steven Abney. Bootstrapping. Proceedings of the 4(...TRUNCATED)
6 conclusion :We presented a novel feature-level multi-view domain adaptation approach. The thrust (...TRUNCATED)
1 introduction :Many scientific subjects, such as psychology, learning sciences, and biology, have (...TRUNCATED)
2 related work :Natural Language Processing (NLP) methods for automatically understanding and ident(...TRUNCATED)
3 data :We have collected a corpus of slavery-related United States supreme court legal opinions fr(...TRUNCATED)
4 the sparse mixed-effects model :To address the over-parameterization, lack of expressiveness and (...TRUNCATED)
5 prediction experiments :We perform three quantitative experiments to evaluate the predictive powe(...TRUNCATED)
We propose a latent variable model to enhance historical analysis of large corpora. This work exten(...TRUNCATED)
| "[{\"affiliations\": [], \"name\": \"William Yang Wang\"}, {\"affiliations\": [], \"name\": \"Elijah(...TRUNCATED)
|
SP:bf138cd51c096f01743627deefddf365d96e86eb
| "[{\"authors\": [\"Lalit R. Bahl\", \"Peter F. Brown.\", \"Peter V. de Souza\", \"Robert L. Mercer.\(...TRUNCATED)
| "6 conclusion and future work :In this work, we propose a sparse mixed-effects model for historical (...TRUNCATED)
| null | null | "acknowledgments :We thank Jacob Eisenstein, Noah Smith, and anonymous reviewers for valuable sugge(...TRUNCATED)
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | "1 introduction :Many scientific subjects, such as psychology, learning sciences, and biology, have (...TRUNCATED)
| "5 prediction experiments :We perform three quantitative experiments to evaluate the predictive powe(...TRUNCATED)
|
"1 introduction :Reranking techniques are commonly used for improving the accuracy of parsing (Charn(...TRUNCATED)
| "2 background :Combinatory Categorial Grammar (CCG, Steedman, 2000) is a lexicalised grammar formali(...TRUNCATED)
| "3 dependency hashing :To address this problem of semantically equivalent n-best parses, we define a(...TRUNCATED)
| "4 analysing parser errors :A substantial gap exists between the oracle F-score of our improved n-be(...TRUNCATED)
| "5 conclusion :We have described how a mismatch between the way CCG parses are modeled and evaluated(...TRUNCATED)
| "Optimising for one grammatical representation, but evaluating over a different one is a particular (...TRUNCATED)
| "[{\"affiliations\": [], \"name\": \"Dominick Ng\"}, {\"affiliations\": [], \"name\": \"James R. Cur(...TRUNCATED)
|
SP:4efc0652f6a37b82ef000806e90ef64068587481
| "[{\"authors\": [\"UK. Forrest Brennan\"], \"title\": \"k-best Parsing Algorithms for a\", \"year\":(...TRUNCATED)
| null | null | null | "acknowledgments :We would like to thank the reviewers for their comments. This work was supported (...TRUNCATED)
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | "1 introduction :Reranking techniques are commonly used for improving the accuracy of parsing (Charn(...TRUNCATED)
| "5 conclusion :We have described how a mismatch between the way CCG parses are modeled and evaluated(...TRUNCATED)
|
"1 introduction :Current top performing parsing algorithms rely on the availability of annotated dat(...TRUNCATED)
| "2 related work :Traditionally, parallel corpora have been a mainstay of multilingual parsing (Wu, 1(...TRUNCATED)
| "3 linguistic motivation :Language-Independent Dependency Properties Despite significant syntactic d(...TRUNCATED)
| "4 model :We propose a probabilistic model for generating dependency trees that facilitates paramete(...TRUNCATED)
| "5 parameter learning :Our model is parameterized by the parameters θsel, θsize and word. We learn(...TRUNCATED)
| "We present a novel algorithm for multilingual dependency parsing that uses annotations from a diver(...TRUNCATED)
| "[{\"affiliations\": [], \"name\": \"Tahira Naseem\"}, {\"affiliations\": [], \"name\": \"Regina Bar(...TRUNCATED)
|
SP:7a2220d007cd44dfd157d625d9e12351339c55f2
| "[{\"authors\": [\"Taylor Berg-Kirkpatrick\", \"Dan Klein.\"], \"title\": \"Phylogenetic grammar ind(...TRUNCATED)
| "6 experimental setup :Datasets and Evaluation We test the effectiveness of our approach on 17 langu(...TRUNCATED)
| null | "7 results :Table 2 summarizes the performance for different configurations of our model and the bas(...TRUNCATED)
| "acknowledgments :The authors acknowledge the support of the NSF (IIS-0835445), the MURI program (W(...TRUNCATED)
| "8 conclusions :We present a novel algorithm for multilingual dependency parsing that uses annotatio(...TRUNCATED)
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | "1 introduction :Current top performing parsing algorithms rely on the availability of annotated dat(...TRUNCATED)
| null |
# 📚 ACL & NeurIPS Future Work Dataset
This dataset contains curated "Future Work" sections from ACL and NeurIPS research papers. It is designed to support tasks like scientific document understanding, future work generation, citation intent analysis, and summarization of research directions.
## 📦 Dataset Details

### 🔍 Dataset Description

This dataset includes:

- `ACL_2012.csv` to `ACL_2024.csv`: tabular data where each row is a paper and each column is a paper section (e.g., Abstract, Introduction, Future Work).
- `NeurIPS_2021.csv`, `NeurIPS_2022.csv`: same format as the ACL `.csv` files.
- `ACL_2023.json`, `ACL_2024.json`: paper-wise parsed output, including section headers and content; "Future Work" sections are extracted and added when found.
Each record is either a paper (in `.csv`) or a structured section-by-section breakdown of a paper (in `.json`). If a paper does not contain a "Future Work" section, it has been excluded from the `.json` files.
- **Languages:** English
- **Total papers (after filtering):** varies by year (see the Statistics section on HF for a breakdown)
- **Data format:** `.csv`, `.json`
### ✍️ Curated by

Ibrahim Al Azher, Northern Illinois University, DATALab
## 📑 Dataset Structure

### For `.csv` files

Each file contains:

- **Columns:** `title`, `abstract`, `introduction`, `related work`, ..., `future work`
- **Rows:** one paper per row
- Year-specific files (`ACL_2012.csv` to `ACL_2024.csv`)
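As a quick illustration, the row/column layout above can be read with the standard-library `csv` module. The snippet below uses a tiny in-memory stand-in file; the column names follow the description above, but the row contents are invented:

```python
import csv
import io

# In-memory stand-in for one of the year-specific CSV files
# (column names follow the dataset description; the row is made up).
sample = io.StringIO(
    "title,abstract,introduction,related work,future work\n"
    "Paper A,An abstract.,Intro text.,Related text.,We plan to extend X.\n"
)

reader = csv.DictReader(sample)
future_work = [row["future work"] for row in reader]
print(future_work)  # -> ['We plan to extend X.']
```

For a real file, replace the `io.StringIO` object with `open("ACL_2012.csv", newline="")` (or load it with `pandas.read_csv` if you prefer a DataFrame).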
### For `.json` files

Each key is a paper ID (e.g., `"ACL23_1.pdf"`), and its value includes:
```json
{
  "abstractText": "string",
  "sections": [
    { "heading": "Introduction", "text": "..." },
    ...
    { "heading": "Future Work", "text": "..." }
  ],
  "title": "string",
  "year": "int"
}
```
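A minimal sketch of pulling the "Future Work" text out of one such record; the record below is a made-up stand-in that follows the schema above, not an actual dataset entry:

```python
import json

# Hypothetical record mirroring the schema above (real files key
# records by paper ID, e.g., "ACL23_1.pdf").
record = json.loads("""
{
  "abstractText": "An abstract.",
  "sections": [
    {"heading": "Introduction", "text": "Intro text."},
    {"heading": "Future Work", "text": "We plan to extend this work."}
  ],
  "title": "Paper A",
  "year": 2023
}
""")

# Match the section heading case-insensitively and fall back to None
# when a record has no "Future Work" section.
future_work = next(
    (s["text"] for s in record["sections"] if s["heading"].lower() == "future work"),
    None,
)
print(future_work)  # -> We plan to extend this work.
```

To process a whole file, load it with `json.load` and apply the same lookup to each paper-ID key.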