table_id_paper
stringlengths
15
15
caption
stringlengths
14
1.88k
row_header_level
int32
1
9
row_headers
large_stringlengths
15
1.75k
column_header_level
int32
1
6
column_headers
large_stringlengths
7
1.01k
contents
large_stringlengths
18
2.36k
metrics_loc
stringclasses
2 values
metrics_type
large_stringlengths
5
532
target_entity
large_stringlengths
2
330
table_html_clean
large_stringlengths
274
7.88k
table_name
stringclasses
9 values
table_id
stringclasses
9 values
paper_id
stringlengths
8
8
page_no
int32
1
13
dir
stringclasses
8 values
description
large_stringlengths
103
3.8k
class_sentence
stringlengths
3
120
sentences
large_stringlengths
110
3.92k
header_mention
stringlengths
12
1.8k
valid
int32
0
1
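The listing above pairs each field name with its dtype and its min/max string length (or number of classes). As a minimal sketch of that data structure, the schema can be mirrored as a Python mapping and used to type-check a record; the field names come from the listing, the Python types are inferred from the `int32`/`string` markers, and the empty sample record below is hypothetical, not a real dataset row.

```python
# Sketch: the schema from the listing as a field -> expected-type mapping.
# str fields cover string, stringclasses, and large_string dtypes; int fields
# cover int32. Sample values are placeholders, not actual dataset content.

SCHEMA = {
    "table_id_paper": str,
    "caption": str,
    "row_header_level": int,
    "row_headers": str,          # serialized list of row-header paths
    "column_header_level": int,
    "column_headers": str,       # serialized list of column-header paths
    "contents": str,             # serialized list of table body rows
    "metrics_loc": str,          # "row" or "column"
    "metrics_type": str,
    "target_entity": str,
    "table_html_clean": str,
    "table_name": str,
    "table_id": str,
    "paper_id": str,
    "page_no": int,
    "dir": str,
    "description": str,
    "class_sentence": str,
    "sentences": str,
    "header_mention": str,
    "valid": int,                # 0 or 1
}

def validate(record: dict) -> bool:
    """True iff the record carries every schema field with the expected type."""
    return all(
        name in record and isinstance(record[name], typ)
        for name, typ in SCHEMA.items()
    )

# Hypothetical empty record: "" for string fields, 0 for integer fields.
sample = {name: ("" if typ is str else 0) for name, typ in SCHEMA.items()}
print(validate(sample))  # True
```

A record missing a field, or carrying `page_no` as a string, would fail the same check.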
D19-1543table_2
Performance improvement of the neural semantic parser on Spider with different hardness levels.
2
[['Method', 'SyntaxSQLNet (Yu et al., 2018b)'], ['Method', 'SyntaxSQLNet + DAE'], ['Method', 'SyntaxSQLNetAug (Yu et al., 2018b)'], ['Method', 'SyntaxSQLNetAug + DAE']]
1
[['Easy (%)'], ['Medium (%)'], ['Hard (%)'], ['Extra Hard (%)'], ['All (%)']]
[['38.4', '15.0', '16.1', '3.5', '18.9'], ['39.6(+1.2)', '18.2(+3.2)', '20.7(+4.6)', '7.6(+4.1)', '22.1(+3.2)'], ['44.4', '23.0', '23.0', '2.9', '24.9'], ['44.8(+0.4)', '27.0(+4.0)', '24.1(+1.1)', '5.9(+3.0)', '27.4(+2.5)']]
column
['Easy (%)', 'Medium (%)', 'Hard (%)', 'Extra Hard (%)', 'All (%)']
['SyntaxSQLNet + DAE', 'SyntaxSQLNetAug + DAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Easy (%)</th> <th>Medium (%)</th> <th>Hard (%)</th> <th>Extra Hard (%)</th> <th>All (%)</th> </tr> </thead> <tbody> <tr> <td>Method || SyntaxSQLNet (Yu et al., 2018b)</td> <td>3...
Table 2
table_2
D19-1543
7
emnlp2019
On Spider, the performance is evaluated by exact-match accuracy on different difficulty levels of SQL queries, i.e., easy, medium, hard and extra hard (Yu et al., 2018c). Table 2 shows the results. First, the overall accuracy can be improved by 3.2% and 2.5% respectively. Furthermore, performances on medium, hard and ...
[2, 1, 1, 1]
['On Spider, the performance is evaluated by exact-match accuracy on different difficulty levels of SQL queries, i.e., easy, medium, hard and extra hard (Yu et al., 2018c).', 'Table 2 shows the results.', 'First, the overall accuracy can be improved by 3.2% and 2.5% respectively.', 'Furthermore, performances on medium...
[None, None, ['SyntaxSQLNet + DAE', 'SyntaxSQLNetAug + DAE'], ['Medium (%)', 'Hard (%)', 'Extra Hard (%)', 'Easy (%)']]
1
D19-1543table_5
Performances of different anonymization models on WikiSQL.
2
[['Method', 'TypeSQL (Yu et al., 2018a)'], ['Method', 'AnnotatedSeq2Seq (Wang et al., 2018b)'], ['Method', 'DAE']]
2
[['Dev (%)', 'ACCSC'], ['Dev (%)', 'ACCOC'], ['Dev (%)', 'ACCCE'], ['Test (%)', 'ACCSC'], ['Test (%)', 'ACCOC'], ['Test (%)', 'ACCCE']]
[['75.9', '92.9', '−', '76.0', '92.9', '−'], ['88.8', '64.6', '−', '88.8', '63.6', '−'], ['92.6', '93.6', '86.7', '92.0', '93.7', '86.2']]
column
['ACCSC', 'ACCOC', 'ACCCE', 'ACCSC', 'ACCOC', 'ACCCE']
['DAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev (%) || ACCSC</th> <th>Dev (%) || ACCOC</th> <th>Dev (%) || ACCCE</th> <th>Test (%) || ACCSC</th> <th>Test (%) || ACCOC</th> <th>Test (%) || ACCCE</th> </tr> </thead> <tbody> <tr>...
Table 5
table_5
D19-1543
8
emnlp2019
Table 5 shows that DAE significantly outperforms TypeSQL and AnnotatedSeq2Seq on all the evaluation metrics. First, for ACCSC, DAE outperforms TypeSQL and AnnotatedSeq2Seq by 16% and 3.5% on test data; for ACCOC, DAE outperforms TypeSQL and AnnotatedSeq2Seq by 0.8% and 28% on test data. Moreover, DAE can achieve around ...
[1, 1, 1]
['Table 5 shows that DAE significantly outperforms TypeSQL and AnnotatedSeq2Seq on all the evaluation metrics.', 'First, for ACCSC, DAE outperforms TypeSQL and AnnotatedSeq2Seq by 16% and 3.5% on test data; for ACCOC, DAE outperforms TypeSQL and AnnotatedSeq2Seq by 0.8% and 28% on test data.', 'Moreover, DAE can achieve...
[['DAE', 'TypeSQL (Yu et al., 2018a)', 'AnnotatedSeq2Seq (Wang et al., 2018b)'], ['DAE', 'TypeSQL (Yu et al., 2018a)', 'ACCOC', 'AnnotatedSeq2Seq (Wang et al., 2018b)'], ['ACCCE', 'DAE']]
1
D19-1544table_5
Labeled F1 score (including senses) for all languages on the CoNLL-2009 in-domain test sets. For previous best result, Japanese (Ja) is from Watanabe et al. (2010), Catalan (Ca) is from Zhao et al. (2009), Spanish (Es) and German (De) are from Roth and Lapata (2016), Czech (Cz) is from Henderson et al. (2013), Chinese ...
2
[['Model', 'Previous Best Single Model'], ['Model', 'Baseline Model'], ['Model', 'CapsuleNet SRL (This Work)']]
1
[['Ja'], ['Es'], ['Ca'], ['De'], ['Cz'], ['Zh'], ['En'], ['Avg.']]
[['78.69', '80.50', '80.32', '80.10', '86.02', '84.30', '90.40', '82.90'], ['80.12', '81.0', '81.39', '76.01', '87.79', '81.05', '90.49', '82.55'], ['81.26', '81.32', '81.65', '76.44', '88.08', '81.65', '91.06', '83.07']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['CapsuleNet SRL (This Work)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ja</th> <th>Es</th> <th>Ca</th> <th>De</th> <th>Cz</th> <th>Zh</th> <th>En</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Model || Previous Best Single Model</td> ...
Table 5
table_5
D19-1544
8
emnlp2019
Table 5 gives the results of the proposed CapsuleNet SRL (with global node) on the in-domain test sets of all languages from CoNLL-2009. As shown in Table 5, the proposed model consistently outperforms the non-refinement baseline model and achieves state-of-the-art performance on Catalan (Ca), Czech (Cz), English (En),...
[1, 2, 1]
['Table 5 gives the results of the proposed CapsuleNet SRL (with global node) on the in-domain test sets of all languages from CoNLL-2009.', 'As shown in Table 5, the proposed model consistently outperforms the non-refinement baseline model and achieves state-of-the-art performance on Catalan (Ca), Czech (Cz), English ...
[['CapsuleNet SRL (This Work)'], ['CapsuleNet SRL (This Work)', 'Baseline Model', 'Ca', 'Cz', 'En', 'Es'], None]
1
D19-1545table_1
Exact Match and BLEU scores for our simplified model (Iyer-Simp) with and without idioms, compared with results from Iyer et al. (2018)† on the test (validation) set of CONCODE. Iyer-Simp achieves significantly better EM and BLEU score and reduces training time from 40 hours to 27 hours. Augmenting the decoding process...
2
[['Model', 'Seq2Seq'], ['Model', 'Seq2Prod'], ['Model', 'Iyer et al. (2018)†'], ['Model', 'Iyer-Simp'], ['Model', 'Iyer-Simp + 200 idioms']]
1
[['Exact'], ['BLEU']]
[['3.2 (2.9)', '23.5 (21.0)'], ['6.7 (5.6)', '21.3 (20.6)'], ['8.6 (7.1)', '22.1 (21.3)'], ['12.5 (9.8)', '24.4 (23.2)'], ['12.2 (9.8)', '26.6 (24.0)']]
column
['Exact', 'BLEU']
['Iyer-Simp + 200 idioms', 'Iyer-Simp']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq</td> <td>3.2 (2.9)</td> <td>23.5 (21.0)</td> </tr> <tr> <td>Model || Seq2Prod</td> <td>6.7 (5.6)<...
Table 1
table_1
D19-1545
6
emnlp2019
7 Results and Discussion. Table 1 presents exact match and BLEU scores on the original CONCODE train/validation/test split. Iyer-Simp yields a large improvement of 3.9 EM and 2.2 BLEU over the best model of Iyer et al. (2018), while also being significantly faster (27 hours for 30 training epochs as compared to 40 hour...
[2, 1, 1, 2, 1, 2, 2]
['7 Results and Discussion.', 'Table 1 presents exact match and BLEU scores on the original CONCODE train/validation/test split.', 'Iyer-Simp yields a large improvement of 3.9 EM and 2.2 BLEU over the best model of Iyer et al. (2018), while also being significantly faster (27 hours for 30 training epochs as compared to...
[None, ['Exact', 'BLEU'], ['Iyer-Simp', 'Exact', 'BLEU', 'Iyer et al. (2018)†'], None, ['Iyer-Simp + 200 idioms', 'BLEU', 'Exact'], ['Iyer-Simp + 200 idioms'], None]
1
D19-1547table_5
Human evaluation on 100 random examples for MISP-SQL agents based on SQLNet, SQLova and SyntaxSQLNet, respectively.
3
[['System', 'SQLNet', 'no interaction'], ['System', 'SQLNet', 'MISP-SQL (simulation)'], ['System', 'SQLNet', 'MISP-SQL (real user)'], ['System', 'SQLova', 'no interaction'], ['System', 'SQLova', 'MISP-SQL (simulation)'], ['System', 'SQLova', 'MISP-SQL (real user)'], ['System', 'SQLova', '+ w/ full info.'], ['System', '...
1
[['Accqm/em'], ['Accex'], ['Avg. #q']]
[['0.580', '0.660', 'N/A'], ['0.770', '0.810', '1.800'], ['0.633', '0.717', '1.510'], ['0.830', '0.890', 'N/A'], ['0.920', '0.950', '0.550'], ['0.837', '0.880', '0.533'], ['0.907', '0.937', '0.547'], ['0.180', 'N/A', 'N/A'], ['0.290', 'N/A', '2.730'], ['0.230', 'N/A', '2.647']]
column
['Accqm/em', 'Accex', 'Avg. #q']
['MISP-SQL (real user)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accqm/em</th> <th>Accex</th> <th>Avg. #q</th> </tr> </thead> <tbody> <tr> <td>System || SQLNet || no interaction</td> <td>0.580</td> <td>0.660</td> <td>N/A</td> </tr> <tr>...
Table 5
table_5
D19-1547
8
emnlp2019
Table 5 shows the results. In all settings, MISP-SQL improves the base parser’s performance, demonstrating the benefit of involving human interaction. However, we also notice that the gain is not as large as in simulation, especially on SQLova. Through interviews with the human evaluators, we found that the major rea...
[1, 1, 1, 2, 2]
['Table 5 shows the results.', 'In all settings, MISP-SQL improves the base parser’s performance, demonstrating the benefit of involving human interaction.', 'However, we also notice that the gain is not as large as in simulation, especially on SQLova.', 'Through interviews with the human evaluators, we found that th...
[None, ['MISP-SQL (real user)'], ['MISP-SQL (real user)', 'SQLova'], None, None]
1
D19-1548table_2
Ablation results of our baseline system on the LDC2015E86 development set.
2
[['Model', 'Baseline'], ['Model', '-BPE'], ['Model', '-Share Vocab.'], ['Model', '-Both']]
1
[['BLEU'], ['Meteor'], ['CHRF++']]
[['24.93', '33.2', '60.3'], ['23.02', '31.6', '58.09'], ['23.24', '31.78', '58.43'], ['18.77', '28.04', '51.88']]
column
['BLEU', 'Meteor', 'CHRF++']
['-Both']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>Meteor</th> <th>CHRF++</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>24.93</td> <td>33.2</td> <td>60.3</td> </tr> <tr> <td>Model || -B...
Table 2
table_2
D19-1548
5
emnlp2019
3.2 Experimental Results. We first show the performance of our baseline system. As mentioned before, BPE and sharing vocabulary are two techniques we applied to relieve data sparsity. Table 2 presents the results of the ablation test on the development set of LDC2015E86 by either removing BPE, or vocabulary sharing, ...
[2, 2, 2, 1, 1]
['3.2 Experimental Results.', 'We first show the performance of our baseline system.', 'As mentioned before, BPE and sharing vocabulary are two techniques we applied to relieve data sparsity.', 'Table 2 presents the results of the ablation test on the development set of LDC2015E86 by either removing BPE, or vocabular...
[None, None, None, ['-BPE', '-Share Vocab.', '-Both'], ['-Both', 'Baseline']]
1
D19-1548table_3
Comparison results of our approaches and related studies on the test sets of LDC2015E86 and LDC2017T10. #P indicates the size of parameters in millions. ∗ indicates seq2seq-based systems while † for graph-based models, and ‡ for other models. All our proposed systems are significant over the baseline at 0.01, tested by...
3
[['System', 'Baseline', 'Baseline'], ['System', 'Our Approach', 'feature-based'], ['System', 'Our Approach', 'avg-based'], ['System', 'Our Approach', 'sum-based'], ['System', 'Our Approach', 'SA-based'], ['System', 'Our Approach', 'CNN-based'], ['System', 'Previous works with single models', 'Konstas et al. (2017)'], [...
2
[['LDC2015E86', 'BLEU'], ['LDC2015E86', 'Meteor'], ['LDC2015E86', 'CHRF++'], ['LDC2015E86', '#P (M)'], ['LDC2017T10', 'BLEU'], ['LDC2017T10', 'Meteor'], ['LDC2017T10', 'CHRF++']]
[['25.50', '33.16', '59.88', '49.1', '27.43', '34.62', '61.85'], ['27.23', '34.53', '61.55', '49.4', '30.18', '35.83', '63.20'], ['28.37', '35.10', '62.29', '49.1', '29.56', '35.24', '62.86'], ['28.69', '34.97', '62.05', '49.1', '29.92', '35.68', '63.04'], ['29.66', '35.45', '63.00', '49.3', '31.54', '36.02', '63.84'],...
column
['BLEU', 'Meteor', 'CHRF++', '#P (M)', 'BLEU', 'Meteor', 'CHRF++']
['Our Approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LDC2015E86 || BLEU</th> <th>LDC2015E86 || Meteor</th> <th>LDC2015E86 || CHRF++</th> <th>LDC2015E86 || #P (M)</th> <th>LDC2017T10 || BLEU</th> <th>LDC2017T10 || Meteor</th> <th>LDC2017T1...
Table 3
table_3
D19-1548
6
emnlp2019
Table 3 presents the comparison of our approach and related works on the test sets of LDC2015E86 and LDC2017T10. From the results we can see that the Transformer-based baseline outperforms most graph-to-sequence models and is comparable with the latest work by Guo et al. (2019). The strong performance of the baselin...
[1, 1, 2, 2, 1, 1, 1, 1]
['Table 3 presents the comparison of our approach and related works on the test sets of LDC2015E86 and LDC2017T10.', 'From the results we can see that the Transformer-based baseline outperforms most graph-to-sequence models and is comparable with the latest work by Guo et al. (2019).', 'The strong performance of the...
[['Our Approach', 'LDC2015E86', 'LDC2017T10'], ['Our Approach', 'Previous works with single models', 'Previous works with either ensemble models or unlabelled data or both', 'Guo et al. (2019)'], ['Baseline'], None, ['Our Approach', 'Baseline', 'BLEU', 'LDC2015E86', 'LDC2017T10'], ['SA-based', 'CNN-based', 'feature-bas...
1
D19-1548table_4
Performance on the test set of our approach with or without modeling structural information of indirectly connected concept pairs.
2
[['System', 'Baseline'], ['System', 'Our approach'], ['System', 'No indirectly connected concept pairs']]
1
[['BLEU']]
[['27.43'], ['31.82'], ['29.92']]
column
['BLEU']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>System || Baseline</td> <td>27.43</td> </tr> <tr> <td>System || Our approach</td> <td>31.82</td> </tr> <tr> <td>System || No ind...
Table 4
table_4
D19-1548
6
emnlp2019
Table 4 compares the performance of our approach with or without modeling structural information of indirectly connected concept pairs. It shows that by modeling structural information of indirectly connected concept pairs, our approach improves the performance on the test set from 29.92 to 31.82 in BLEU scores. It als...
[1, 1, 1]
['Table 4 compares the performance of our approach with or without modeling structural information of indirectly connected concept pairs.', 'It shows that by modeling structural information of indirectly connected concept pairs, our approach improves the performance on the test set from 29.92 to 31.82 in BLEU scores.',...
[['Our approach', 'No indirectly connected concept pairs'], ['No indirectly connected concept pairs', 'Our approach'], ['Our approach', 'Baseline']]
1
D19-1554table_7
Effects of different replacement actions. As we can see, neighbor and synonymy words contribute most to the performance.
2
[['Setting', 'all words (RNN)'], ['Setting', '-super words'], ['Setting', '-subordinate words'], ['Setting', '-synonymy words'], ['Setting', '-neighbor words'], ['Setting', 'all words (CNN)'], ['Setting', '-super words'], ['Setting', '-subordinate words'], ['Setting', '-synonymy words'], ['Setting', '-neighbor words']]
1
[['SST-2'], ['SST-5'], ['RT'], ['Average']]
[['81.60', '41.14', '75.76', '66.17'], ['80.72', '41.17', '75.02', '65.64'], ['80.56', '41.67', '75.48', '65.90'], ['80.45', '41.99', '76.12', '66.19'], ['80.56', '40.91', '74.66', '65.38'], ['80.18', '41.86', '74.84', '65.63'], ['79.96', '41.67', '76.22', '65.95'], ['81.58', '41.49', '75.39', '66.15'], ['79.24', '41.6...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['-neighbor words']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-2</th> <th>SST-5</th> <th>RT</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>Setting || all words (RNN)</td> <td>81.60</td> <td>41.14</td> <td>75.76</td> <td...
Table 7
table_7
D19-1554
7
emnlp2019
4.5 Analysis. What is the effect of each action? To show the effect of different actions, we take RNN and CNN on SST-2, SST-5, and RT as examples and conduct experiments by dropping one action at a time. As Table 7 shows, only some actions are useful for the robustness improvement. The average performance becomes bett...
[2, 2, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
['4.5 Analysis.', 'What is the effect of each action?', 'To show the effect of different actions, we take RNN and CNN on SST-2, SST-5, and RT as examples and conduct experiments by dropping one action at a time.', 'As Table 7 shows, only some actions are useful for the robustness improvement.', 'The average performanc...
[None, None, ['all words (RNN)', 'all words (CNN)', 'SST-2', 'SST-5', 'RT'], None, ['Average'], ['-neighbor words'], ['-neighbor words', 'all words (RNN)', 'all words (CNN)'], ['-neighbor words'], ['-neighbor words'], None, None, None]
1
D19-1555table_1
Summary of the evaluation datasets.
2
[['Statistic', '# of reviews'], ['Statistic', '# of sentences'], ['Statistic', 'Sentence/Review'], ['Statistic', 'Words/Sentence']]
1
[['HotelUser'], ['ResUser'], ['HotelType'], ['HotelLoc']]
[['28165', '23873', '22984', '136446'], ['362153', '276008', '302920', '1428722'], ['13', '12', '13', '10'], ['8', '7', '7', '7']]
row
['# of reviews', '# of sentences', 'Sentence/Review', 'Words/Sentence']
['# of reviews']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HotelUser</th> <th>ResUser</th> <th>HotelType</th> <th>HotelLoc</th> </tr> </thead> <tbody> <tr> <td>Statistic || # of reviews</td> <td>28165</td> <td>23873</td> <td>22984<...
Table 1
table_1
D19-1555
5
emnlp2019
To assess Trait’s effectiveness, we select the hotel and restaurant domains and prepare four review datasets associated with three attributes: author, trip type, and location. HotelUser, HotelType, and HotelLoc are sets of hotel reviews collected from TripAdvisor. HotelUser contains 28165 reviews posted by 202 random...
[2, 2, 1, 2, 1, 2, 1, 1, 2]
['To assess Trait’s effectiveness, we select the hotel and restaurant domains and prepare four review datasets associated with three attributes: author, trip type, and location.', 'HotelUser, HotelType, and HotelLoc are sets of hotel reviews collected from TripAdvisor.', 'HotelUser contains 28165 reviews posted by 20...
[None, ['HotelUser', 'HotelType', 'HotelLoc'], ['HotelUser', '# of reviews'], ['HotelType'], ['HotelLoc', '# of reviews'], ['ResUser'], ['ResUser', '# of reviews'], None, None]
1
D19-1557table_3
This table reports performance (Accuracy) on the MR and SST data sets. Results with * indicate that the performance metric is reported on the test dataset after training on a subset of the original data set. Boldface indicates the best-performing algorithm.
2
[['Algorithm', 'BoW (Generic)'], ['Algorithm', 'BoW (DA embeddings)'], ['Algorithm', 'Vanilla CNN'], ['Algorithm', 'Vanilla BiLSTM'], ['Algorithm', 'LR-Bi-LSTM'], ['Algorithm', 'Self-attention'], ['Algorithm', 'Adapted CNN'], ['Algorithm', 'Adapted BiLSTM'], ['Algorithm', 'BERT']]
1
[['MR'], ['SST']]
[['75.7*', '48.9*'], ['77.0*', '49.2*'], ['72.5*', '49.06*'], ['81.8*', '50.3*'], ['82.1', '50.6'], ['81.7', '48.9'], ['80.8*', '50.0*'], ['83.1*', '51.2'], ['74.4*', '51.5']]
column
['accuracy', 'accuracy']
['Adapted BiLSTM', 'Adapted CNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>SST</th> </tr> </thead> <tbody> <tr> <td>Algorithm || BoW (Generic)</td> <td>75.7*</td> <td>48.9*</td> </tr> <tr> <td>Algorithm || BoW (DA embeddings)</td> <td...
Table 3
table_3
D19-1557
8
emnlp2019
4.4 Results. Table 2 presents results on the LibCon and the balanced (B) and imbalanced (I) Beauty, Book and Music data sets and Table 3 presents results on the SST and MR data sets. The performance metric reported in both tables is accuracy, with additional micro f-scores reported in Table 2. From Table 2, it is o...
[2, 1, 2, 1, 1]
['4.4 Results.', 'Table 2 presents results on the LibCon and the balanced (B) and imbalanced (I) Beauty, Book and Music data sets and Table 3 presents results on the SST and MR data sets.', 'The performance metric reported in both tables is accuracy, with additional micro f-scores reported in Table 2.', 'From Table...
[None, ['MR', 'SST'], None, ['Adapted BiLSTM', 'Adapted CNN', 'Vanilla CNN', 'Vanilla BiLSTM'], ['Adapted BiLSTM', 'BERT']]
1
D19-1563table_2
Experimental results on the Chinese dataset. Superscript ∗ indicates the results are reported in (Gui et al., 2017) and the rest are reprinted from the corresponding publications (p <0.001).
2
[['Method', 'RB*'], ['Method', 'CB*'], ['Method', 'SVM*'], ['Method', 'Word2vec*'], ['Method', 'Multi-kernel*'], ['Method', 'LambdaMART*'], ['Method', 'CNN*'], ['Method', 'ConvMS-Memnet*'], ['Method', 'CANN'], ['Method', 'HCS'], ['Method', 'MANN'], ['Method', 'RHNN']]
1
[['P'], ['R'], ['F']]
[['0.6747', '0.4287', '0.5243'], ['0.2672', '0.713', '0.3887'], ['0.42', '0.4375', '0.4285'], ['0.4301', '0.4233', '0.4136'], ['0.6588', '0.6972', '0.6752'], ['0.772', '0.7499', '0.7608'], ['0.6472', '0.5493', '0.5915'], ['0.7076', '0.6838', '0.6955'], ['0.7721', '0.6891', '0.7266'], ['0.7388', '0.7154', '0.7269'], ['0...
column
['P', 'R', 'F']
['RB*', 'CB*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Method || RB*</td> <td>0.6747</td> <td>0.4287</td> <td>0.5243</td> </tr> <tr> <td>Method || CB*</td> ...
Table 2
table_2
D19-1563
6
emnlp2019
5.1 Main Results. The experimental results on both datasets are shown in Table 2 and Table 3, respectively. RB yields high precision but with low recall. CB has an opposite scenario from RB. A possible reason is that these linguistic-based methods depend on some cue words to identify the emotion cause, different rules...
[2, 1, 1, 1, 2]
['5.1 Main Results.', 'The experimental results on both datasets are shown in Table 2 and Table 3, respectively.', 'RB yields high precision but with low recall.', 'CB has an opposite scenario from RB.', 'A possible reason is that these linguistic-based methods depend on some cue words to identify the emotion cause, d...
[None, None, ['RB*', 'P', 'R'], ['CB*', 'RB*'], None]
1
D19-1565table_6
Fragment-level experiments (FLC task). Shown are two evaluations: (i) Spans checks only whether the model has identified the fragment spans correctly, while (ii) Full task is evaluation wrt the actual task of identifying the spans and also assigning the correct propaganda technique for each span.
3
[['Model', 'Metrics', 'BERT'], ['Model', 'Metrics', 'Joint'], ['Model', 'Metrics', 'Granu'], ['Model', 'Multi-Granularity', 'ReLU'], ['Model', 'Multi-Granularity', 'Sigmoid']]
2
[['Spans', 'P'], ['Spans', 'R'], ['Spans', 'F1'], ['Full Task', 'P'], ['Full Task', 'R'], ['Full Task', 'F1']]
[['39.57', '36.42', '37.9', '21.48', '21.39', '21.39'], ['39.26', '35.48', '37.25', '20.11', '19.74', '19.92'], ['43.08', '33.98', '37.93', '23.85', '20.14', '21.8'], ['43.29', '34.74', '38.28', '23.98', '20.33', '21.82'], ['44.12', '35.01', '38.98', '24.42', '21.05', '22.58']]
column
['P', 'R', 'F1', 'P', 'R', 'F1']
['Multi-Granularity']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Spans || P</th> <th>Spans || R</th> <th>Spans || F1</th> <th>Full Task || P</th> <th>Full Task || R</th> <th>Full Task || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Metric...
Table 6
table_6
D19-1565
8
emnlp2019
Table 6 shows that joint learning (BERT-Joint) hurts the performance compared to single-task BERT. However, using additional information from the sentence-level for the token-level classification (BERT-Granularity) yields small improvements. The multi-granularity models outperform all baselines thanks to their higher p...
[1, 1, 1, 2]
['Table 6 shows that joint learning (BERT-Joint) hurts the performance compared to single-task BERT.', 'However, using additional information from the sentence-level for the token-level classification (BERT-Granularity) yields small improvements.', 'The multi-granularity models outperform all baselines thanks to their ...
[['Joint', 'BERT'], ['Granu', 'BERT'], ['Multi-Granularity', 'P'], None]
1
D19-1569table_2
Performance comparison on different models on the benchmark datasets. The best performance are bold-typed.
2
[['Model', 'SOTA'], ['Model', 'CNN+Position'], ['Model', 'LSTM+Position'], ['Model', 'CNN+ATT'], ['Model', 'Tnet (Li et al., 2018a)'], ['Model', 'PRET+MULT (He et al., 2018b)'], ['Model', 'SA-LSTM-P (Wang and Lu, 2018)'], ['Model', 'LSTM+SynATT+TarRep (He et al., 2018a)'], ['Model', 'MGAN (Fan et al., 2018b)'], ['Model...
2
[['Rest14', 'ACC'], ['Rest14', 'F1'], ['Laptop', 'ACC'], ['Laptop', 'F1'], ['Twitter', 'ACC'], ['Twitter', 'F1'], ['Rest16', 'ACC'], ['Rest16', 'F1']]
[['81.6', '71.91', '76.54', '71.75', '74.97', '73.6', '85.58', '69.76'], ['79.37', '68.64', '72.73', '68.28', '72.69', '70.92', '84.63', '64.75'], ['77.59', '67.05', '70.06', '64.46', '71.39', '69.45', '83.47', '62.69'], ['79.46', '69.44', '70.53', '64.27', '73.12', '71.01', '84.28', '60.86'], ['80.79', '70.84', '76.54...
column
['ACC', 'F1', 'ACC', 'F1', 'ACC', 'F1', 'ACC', 'F1']
['ASP-GCN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || ACC</th> <th>Rest14 || F1</th> <th>Laptop || ACC</th> <th>Laptop || F1</th> <th>Twitter || ACC</th> <th>Twitter || F1</th> <th>Rest16 || ACC</th> <th>Rest16 || F1</th> 
Table 2
table_2
D19-1569
6
emnlp2019
From Table 2 and Figure 4, it is clear that GCN complements the BiLSTM to improve model performance. This means that the BiLSTM can identify opinion words within the context with respect to a specific aspect. However, in some complicated contexts, it might perform poorly. But the GCN can build upon BiLSTM to attend to ...
[1, 1, 1, 2]
['From Table 2 and Figure 4, it is clear that GCN complements the BiLSTM to improve model performance.', 'This means that the BiLSTM can identify opinion words within the context with respect to a specific aspect.', 'However, in some complicated contexts, it might perform poorly.', 'But the GCN can build upon BiLSTM to...
[['ASP-GCN'], ['ASP-GCN'], ['ASP-GCN'], ['ASP-GCN']]
1
D19-1570table_1
Main BLEU results (CTC=0.62).
2
[['Method', 'Baseline'], ['Method', 'RAML'], ['Method', 'SO'], ['Method', 'ST'], ['Method', 'TA'], ['Method', 'BT']]
1
[['Fr→En'], ['En→Fr'], ['Zh→En'], ['En→De']]
[['38.38 (5)', '38.88 (6)', '17.25 (6)', '26.19 (4)'], ['+0.22 (3)', '+0.67 (3)', '+0.23 (4)', '-0.16 (6)'], ['+0.01 (4)', '+0.62 (4)', '+0.02 (5)', '-0.15 (5)'], ['-0.13 (6)', '+0.46 (5)', '+1.51 (2)', '+0.83 (2)'], ['+0.62 (2)', '+1.13 (1)', '+2.41 (1)', '+1.01 (1)'], ['+0.82 (1)', '+0.99 (2)', '+1.06 (3)', '+0.39 (3...
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['RAML', 'SO', 'ST', 'TA', 'BT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fr→En</th> <th>En→Fr</th> <th>Zh→En</th> <th>En→De</th> </tr> </thead> <tbody> <tr> <td>Method || Baseline</td> <td>38.38 (5)</td> <td>38.88 (6)</td> <td>17.25 (6)</td> 
Table 1
table_1
D19-1570
2
emnlp2019
CTC Table 1 shows the main BLEU results of different methods on the test set. However, we cannot identify the best DA method because their rankings across the four translation tasks vary a bit. To measure the degree of consistency, we use a correlation measure called Kendall's coefficient of concordance (Kendall and Sm...
[1, 1, 2, 2, 2, 1, 2, 2, 2]
['CTC Table 1 shows the main BLEU results of different methods on the test set.', 'However, we cannot identify the best DA method because their rankings across the four translation tasks vary a bit.', "To measure the degree of consistency, we use a correlation measure called Kendall's coefficient of concordance (Kendal...
[None, None, None, None, None, None, None, None, None]
1
D19-1571table_1
Online decoding accuracy for a direct model (DIR), ensembling two direct models (DIR ENS) and the channel approach (CH+DIR+LM). We ablate the impact of using per word scores. Results are on WMT De-En. Table 4 in the appendix shows standard deviations.
1
[['DIR'], ['DIR ENS'], ['DIR+LM'], ['CH+DIR+LM'], [' - per word scores']]
1
[['news2016'], ['news2017']]
[['39', '34.3'], ['40', '35.3'], ['39.8', '35.2'], ['41', '36.2'], ['40', '35.1']]
column
['accuracy', 'accuracy']
['DIR+LM', 'CH+DIR+LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>news2016</th> <th>news2017</th> </tr> </thead> <tbody> <tr> <td>DIR</td> <td>39</td> <td>34.3</td> </tr> <tr> <td>DIR ENS</td> <td>40</td> <td>35.3</td> </tr> <...
Table 1
table_1
D19-1571
4
emnlp2019
Next, we evaluate online decoding with a noisy channel setup compared to just a direct model (DIR) as well as an ensemble of two direct models (DIR ENS). Table 1 shows that adding a language model to DIR (DIR+LM) gives a good improvement (Gulcehre et al., 2015) over a single direct model but ensembling two direct model...
[1, 1, 1, 1, 1]
['Next, we evaluate online decoding with a noisy channel setup compared to just a direct model (DIR) as well as an ensemble of two direct models (DIR ENS).', 'Table 1 shows that adding a language model to DIR (DIR+LM) gives a good improvement (Gulcehre et al., 2015) over a single direct model but ensembling two direct ...
[['DIR', 'DIR ENS'], ['DIR+LM', 'DIR ENS'], ['CH+DIR+LM'], [' - per word scores', 'DIR', 'CH+DIR+LM'], ['DIR ENS', 'CH+DIR+LM']]
1
D19-1576table_3
Comparison of the recent state-of-the-art approaches and G/G+I. Avg: Average DDA over 15 languages.
2
[['CODE', 'ET'], ['CODE', 'FI'], ['CODE', 'NL'], ['CODE', 'EN'], ['CODE', 'DE'], ['CODE', 'NO'], ['CODE', 'GRC'], ['CODE', 'HI'], ['CODE', 'JA'], ['CODE', 'FR'], ['CODE', 'IT'], ['CODE', 'LA'], ['CODE', 'BG'], ['CODE', 'SL'], ['CODE', 'EU'], ['Metrics', 'Avg']]
1
[['Convex MST'], ['LC-DMV'], ['D-J'], ['G'], ['G+I']]
[['49.4', '31.8', '44', '56', '56.4'], ['44.7', '26.9', '43.5', '50.7', '49.3'], ['45.3', '34.1', '43.5', '50.4', '50.6'], ['54', '56', '60.1', '51.7', '52.7'], ['51.4', '50.5', '55.7', '59.6', '61.4'], ['55.3', '45.5', '60.8', '61', '61.3'], ['43.4', '33.1', '44.9', '46.8', '46.2'], ['56.8', '54.2', '60', '47.4', '46....
column
['DDA', 'DDA', 'DDA', 'DDA', 'DDA']
['G+I']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Convex MST</th> <th>LC-DMV</th> <th>D-J</th> <th>G</th> <th>G+I</th> </tr> </thead> <tbody> <tr> <td>CODE || ET</td> <td>49.4</td> <td>31.8</td> <td>44</td> <td>5...
Table 3
table_3
D19-1576
4
emnlp2019
To measure the statistical significance of the advantage of our method, we performed the nonparametric Friedman's test to support/reject the claim (null hypothesis): there is no difference between the G+I model and the NDMV model in a multilingual setting. Based on the above sample data, the P-value 7.8911 × 10^-4 would...
[0, 0, 1, 2, 1, 1]
["To measure the statistical significance of the advantage of our method, we performed the nonparametric Friedman's test to support/reject the claim (null hypothesis): there is no difference between the G+I model and the NDMV model in a multilingual setting.", 'Based on the above sample data, the P-value 7.8911 × 10^-4 ...
[None, None, ['LC-DMV', 'D-J'], ['LC-DMV', 'D-J'], ['G+I', 'LC-DMV'], ['G+I', 'D-J']]
1
D19-1581table_3
Performance of various models on the ACP test set.
4
[['Training dataset', 'AL', 'Encoder', 'BiGRU'], ['Training dataset', 'AL', 'Encoder', 'BERT'], ['Training dataset', 'AL+CA+CO', 'Encoder', 'BiGRU'], ['Training dataset', 'AL+CA+CO', 'Encoder', 'BERT'], ['Training dataset', 'ACP', 'Encoder', 'BiGRU'], ['Training dataset', 'ACP', 'Encoder', 'BERT'], ['Training dataset',...
1
[['Acc']]
[['0.843'], ['0.863'], ['0.866'], ['0.835'], ['0.919'], ['0.933'], ['0.917'], ['0.913'], ['0.5'], ['0.503']]
column
['Acc']
['Random+Seed']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> </tr> </thead> <tbody> <tr> <td>Training dataset || AL || Encoder || BiGRU</td> <td>0.843</td> </tr> <tr> <td>Training dataset || AL || Encoder || BERT</td> <td>0.863</td> ...
Table 3
table_3
D19-1581
6
emnlp2019
4.3 Results and Discussion . Table 3 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can...
[2, 1, 2, 2, 1]
['4.3 Results and Discussion .', 'Table 3 shows accuracy.', 'As the Random baseline suggests, positive and negative labels were distributed evenly.', "The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexi...
[None, ['Acc'], ['Random'], ['Random+Seed'], ['Random+Seed']]
1
D19-1582table_1
Overall performance of different methods on the test set with gold-standard entities. † indicates that the method uses dependency arcs.
2
[['Method', 'Cross Event'], ['Method', 'DMCNN'], ['Method', 'JRNN'], ['Method', 'DEEB-RNN'], ['Method', 'dbRNN'], ['Method', 'GCN-ED'], ['Method', 'JMEE'], ['Method', 'MOGANED']]
1
[['P'], ['R'], ['F1']]
[['68.7', '68.9', '68.8'], ['75.6', '63.6', '69.1'], ['66', '73.9', '69.3'], ['72.3', '75.8', '74'], ['74.1', '69.8', '71.9'], ['77.9', '68.8', '73.1'], ['76.3', '71.3', '73.7'], ['79.5', '72.3', '75.7']]
column
['P', 'R', 'F1']
['MOGANED']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Cross Event</td> <td>68.7</td> <td>68.9</td> <td>68.8</td> </tr> <tr> <td>Method || DMCNN</td>...
Table 1
table_1
D19-1582
4
emnlp2019
Table 1 presents the performance comparison between different methods. We can see that MOGANED achieves 1.6% and 1.7% improvement on precision and F1-measure, respectively, compared with the best baselines. MOGANED reaches a lower recall than two sequence based methods, JRNN and DEEB-RNN.
[1, 1, 1]
['Table 1 presents the performance comparison between different methods.', 'We can see that MOGANED achieves 1.6% and 1.7% improvement on precision and F1-measure, respectively, compared with the best baselines.', 'MOGANED reaches a lower recall than two sequence based methods, JRNN and DEEB-RNN.']
[None, ['MOGANED', 'P', 'F1'], ['MOGANED', 'JRNN', 'DEEB-RNN', 'R']]
1
D19-1585table_1
DYGIE++ achieves state-of-the-art results. Test set F1 scores of best model, on all tasks and datasets. We define the following notations for events: Trig: Trigger, Arg: argument, ID: Identification, C: Classification. * indicates the use of a 4-model ensemble for trigger detection. See Appendix E for details. The resu...
4
[['Dataset', 'ACE05', 'Task', 'Entity'], ['Dataset', 'ACE06', 'Task', 'Relation'], ['Dataset', 'ACE05-Event*', 'Task', 'Entity'], ['Dataset', 'ACE05-Event*', 'Task', 'Trig-ID'], ['Dataset', 'ACE05-Event*', 'Task', 'Trig-C'], ['Dataset', 'ACE05-Event*', 'Task', 'Arg-ID'], ['Dataset', 'ACE05-Event*', 'Task', 'Arg-C'], ['...
1
[['SOTA'], ['Ours'], ['D%']]
[['88.4', '88.6', '1.7'], ['63.2', '63.4', '0.5'], ['87.1', '90.7', '27.9'], ['73.9', '76.5', '9.6'], ['72', '73.6', '5.7'], ['57.2', '55.4', '-4.2'], ['52.4', '52.5', '0.2'], ['65.2', '67.5', '6.6'], ['41.6', '48.4', '11.6'], ['76.2', '77.9', '7.1'], ['79.5', '79.7', '1'], ['64.1', '65.9', '5']]
column
['F1', 'F1', 'F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SOTA</th> <th>Ours</th> <th>D%</th> </tr> </thead> <tbody> <tr> <td>Dataset || ACE05 || Task || Entity</td> <td>88.4</td> <td>88.6</td> <td>1.7</td> </tr> <tr> <td>Da...
Table 1
table_1
D19-1585
3
emnlp2019
4 Results and Analyses . State-of-the-art Results . Table 1 shows test set F1 on the entity, relation and event extraction tasks. Our framework establishes a new state-of-the-art on all three high-level tasks, and on all subtasks except event argument identification. Relative error reductions range from 0.2 - 27.9% ove...
[2, 2, 1, 1, 1, 0, 0, 0]
['4 Results and Analyses .', 'State-of-the-art Results .', 'Table 1 shows test set F1 on the entity, relation and event extraction tasks.', 'Our framework establishes a new state-of-the-art on all three high-level tasks, and on all subtasks except event argument identification.', 'Relative error reductions range from 0...
[None, None, None, ['Ours'], ['Ours', 'SOTA', 'D%'], None, None, None]
1
D19-1588table_1
OntoNotes: BERT improves the c2f-coref model on English by 0.9% and 3.9% respectively for base and large variants. The main evaluation is the average F1 of three metrics – MUC, B3, and CEAFφ4 on the test set.
1
[['Martschat and Strube (2015)'], ['(Clark and Manning, 2015)'], ['(Wiseman et al., 2015)'], ['Wiseman et al. (2016)'], ['Clark and Manning (2016)'], ['e2e-coref (Lee et al., 2017)'], ['c2f-coref (Lee et al., 2018)'], ['Fei et al. (2019)'], ['EE (Kantor and Globerson, 2019)'], ['BERT-base + c2f-coref (independent)'], [...
2
[['MUC', 'P'], ['MUC', 'R'], ['MUC', 'F1'], ['B3', 'P'], ['B3', 'R'], ['B3', 'F1'], ['CEAFφ4', 'P'], ['CEAFφ4', 'R'], ['CEAFφ4', 'F1'], ['Metrics', 'Avg. F1']]
[['76.7', '68.1', '72.2', '66.1', '54.2', '59.6', '59.5', '52.3', '55.7', '62.5'], ['76.1', '69.4', '72.6', '65.6', '56', '60.4', '59.4', '53', '56', '63'], ['76.2', '69.3', '72.6', '66.2', '55.8', '60.5', '59.4', '54.9', '57.1', '63.4'], ['77.5', '69.8', '73.4', '66.8', '57', '61.5', '62.1', '53.9', '57.7', '64.2'], [...
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'Avg. F1']
['BERT-large + c2f-coref (overlap)', 'BERT-large + c2f-coref (independent)', 'BERT-base + c2f-coref (overlap)', 'BERT-base + c2f-coref (independent)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC || P</th> <th>MUC || R</th> <th>MUC || F1</th> <th>B3 || P</th> <th>B3 || R</th> <th>B3 || F1</th> <th>CEAFφ4 || P</th> <th>CEAFφ4 || R</th> <th>CEAFφ4 || F1</th> <th...
Table 1
table_1
D19-1588
3
emnlp2019
Table 1 shows that BERT-base offers an improvement of 0.9% over the ELMo-based c2f-coref model. Given how gains on coreference resolution have been hard to come by as evidenced by the table, this is still a considerable improvement. However, the magnitude of gains is relatively modest considering BERT's arguably better ...
[1, 1, 2, 2, 1, 1]
['Table 1 shows that BERT-base offers an improvement of 0.9% over the ELMo-based c2f-coref model.', 'Given how gains on coreference resolution have been hard to come by as evidenced by the table, this is still a considerable improvement.', "However, the magnitude of gains is relatively modest considering BERT's arguably...
[['BERT-base + c2f-coref (independent)'], None, None, None, ['BERT-large + c2f-coref (independent)'], ['BERT-large + c2f-coref (overlap)']]
1
D19-1589table_3
Comparison of different delta functions.
1
[['SUBTRACT'], ['ADD'], ['MLP']]
1
[['M'], ['VE']]
[['3.35', '67.2'], ['3.45', '65.35'], ['3.32', '62.97']]
column
['macro-averaged', 'macro-averaged']
['SUBTRACT', 'ADD', 'MLP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>M</th> <th>VE</th> </tr> </thead> <tbody> <tr> <td>SUBTRACT</td> <td>3.35</td> <td>67.2</td> </tr> <tr> <td>ADD</td> <td>3.45</td> <td>65.35</td> </tr> <tr> ...
Table 3
table_3
D19-1589
4
emnlp2019
Table 3 shows performance comparison among different delta operations: SUBTRACT, ADD, and MLP which is a multi-layer perceptron network. All scores are macro-averaged across datasets. While ADD shows good performance on METEOR, SUBTRACT does on the soft metric (i.e., VecExt), indicating that subtraction can help the mo...
[1, 2, 1]
['Table 3 shows performance comparison among different delta operations: SUBTRACT, ADD, and MLP which is a multi-layer perceptron network.', 'All scores are macro-averaged across datasets.', 'While ADD shows good performance on METEOR, SUBTRACT does on the soft metric (i.e., VecExt), indicating that subtraction can hel...
[['SUBTRACT', 'ADD', 'MLP'], None, ['ADD', 'M', 'SUBTRACT', 'VE']]
1
D19-1590table_5
Results of ProposedRU and its variants
2
[['Model', 'ProposedRUwiki'], ['Model', 'ProposedRUweb'], ['Model', 'ProposedRU'], ['Model', 'ProposedRUweb+web'], ['Model', 'ProposedRUweb+pair'], ['Model', 'ProposedRU+BK']]
1
[['R'], ['P'], ['F'], ['Avg.P']]
[['57.4', '49.6', '53.2?', '53.3'], ['59', '50.9', '54.6?', '54.5'], ['64', '52', '57.4', '57.4'], ['62.5', '49', '54.9?', '54.8'], ['64.3', '48.2', '55.1?', '55.3'], ['67.4', '52.3', '58.9', '59.9']]
column
['R', 'P', 'F', 'Avg.P']
['ProposedRU+BK']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R</th> <th>P</th> <th>F</th> <th>Avg.P</th> </tr> </thead> <tbody> <tr> <td>Model || ProposedRUwiki</td> <td>57.4</td> <td>49.6</td> <td>53.2?</td> <td>53.3</td> </...
Table 5
table_5
D19-1590
5
emnlp2019
in which text fragments embodying background knowledge are concatenated to the input sentence as explained in Section 2. Table 5 shows that ProposedRU+BK improved the average precision over ProposedRU by about 2.5% (i.e., ProposedRU+BK significantly outperformed the state-of-the-art method, MCNN, by about 5%), suggesti...
[2, 1, 2]
['in which text fragments embodying background knowledge are concatenated to the input sentence as explained in Section 2.', 'Table 5 shows that ProposedRU+BK improved the average precision over ProposedRU by about 2.5% (i.e., ProposedRU+BK significantly outperformed the state-of-the-art method, MCNN, by about 5%), sug...
[None, ['ProposedRU+BK', 'ProposedRU'], None]
1
D19-1596table_1
Performance comparison of baseline VQA trained on VQA2.0, baseline VQA finetuned on ConVQA, and VQA trained using our CTM. L-ConVQA is the human-cleaned Logical Consistent QA dataset, CS-ConVQA is the human annotated Common-sense Consistency Dataset and VG is Visual Genome. CTM-based training produces the best results ...
3
[['a) VQA', 'DATA', 'VQA2.0'], ['b) FineTune', 'DATA', 'CS-ConVQA'], ['c) FineTune', 'DATA', 'L/CS-ConVQA'], ['d) +CTM', 'DATA', 'L/CS-ConVQA'], ['e) FineTune', 'DATA', 'L/CS-ConVQA,VG'], ['f) +CTMvg', 'DATA', 'L/CS-ConVQA,VG']]
2
[['L-ConVQA', 'Perf Con'], ['L-ConVQA', 'Avg Con'], ['L-ConVQA', 'Top1'], ['CS-ConVQA', 'Perf Con'], ['CS-ConVQA', 'Avg Con'], ['CS-ConVQA', 'Top1'], ['CS-ConVQA', 'Yes/No'], ['CS-ConVQA', 'Num']]
[['36.25', '71.36', '70.34', '26.13', '59.61', '60.03', '65.49', '31.39'], ['34.54', '70.39', '69.48', '26.39', '59.65', '60.07', '65.8', '35.92'], ['54.68', '83.42', '83.16', '24.7', '59.3', '59.6', '65.14', '33.33'], ['54.6', '83.23', '82.79', '25.94', '60.39', '60.78', '66.63', '36.89'], ['36.4', '71.6', '70.94', '2...
column
['Perf Con', 'Avg Con', 'Top1', 'Perf Con', 'Avg Con', 'Top1', 'Yes/No', 'Num']
['L-ConVQA', 'CS-ConVQA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>L-ConVQA || Perf Con</th> <th>L-ConVQA || Avg Con</th> <th>L-ConVQA || Top1</th> <th>CS-ConVQA || Perf Con</th> <th>CS-ConVQA || Avg Con</th> <th>CS-ConVQA || Top1</th> <th>CS-ConVQA ||...
Table 1
table_1
D19-1596
5
emnlp2019
Table 1 shows quantitative results on our L-ConVQA and CS-ConVQA datasets. We make a number of observations below. The state-of-the-art VQA has low consistency. The baseline VQA system (row a) retains similarly high top-1 accuracy on the ConVQA splits (63.58% on VQAv2 vs 70.34% / 60.03% on L-ConVQA / CS-ConVQA); however,...
[1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 1 shows quantitative results on our L-ConVQA and CS-ConVQA datasets.', 'We make a number of observations below.', 'The state-of-the-art VQA has low consistency.', 'The baseline VQA system (row a) retains similarly high top-1 accuracy on the ConVQA splits (63.58% on VQAv2 vs 70.34% / 60.03% on L-ConVQA / CS-ConVQA...
[['L-ConVQA', 'CS-ConVQA'], None, ['L/CS-ConVQA'], ['VQA2.0', 'L-ConVQA'], ['L-ConVQA'], ['L-ConVQA', 'L/CS-ConVQA'], ['L-ConVQA'], ['b) FineTune', 'c) FineTune', 'e) FineTune'], ['b) FineTune', 'c) FineTune', 'e) FineTune', 'CS-ConVQA'], ['b) FineTune', 'c) FineTune', 'e) FineTune', 'L/CS-ConVQA', 'Avg Con', 'L/CS-Con...
1
D19-1599table_1
Results on the validation set of OpenSQuAD.
2
[['Model', 'Single-sentence'], ['Model', 'Length-50'], ['Model', 'Length-100'], ['Model', 'Length-200'], ['Model', 'w/o sliding-window (same as (3))'], ['Model', 'w/ sliding-window'], ['Model', 'w/o passage ranker (same as (6))'], ['Model', 'w/ passage ranker'], ['Model', 'w/ passage scores'], ['Model', 'BERT+QANet'], ...
1
[['EM'], ['F1']]
[['34.8', '44.4'], ['35.5', '45.2'], ['35.7', '45.7'], ['34.8', '44.7'], ['35.7', '45.7'], ['40.4', '49.8'], ['40.4', '49.8'], ['41.3', '51.7'], ['42.8', '53.4'], ['18.3', '27.8'], ['35.5', '45.9'], ['36.2', '46.4']]
column
['EM', 'F1']
['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Single-sentence</td> <td>34.8</td> <td>44.4</td> </tr> <tr> <td>Model || Length-50</td> <td>35.5</td> <td...
Table 1
table_1
D19-1599
3
emnlp2019
Does explicit inter-sentence matching matter? . Almost all previous state-of-the-art QA and RC models find answers by matching passages with questions, aka inter-sentence matching (Wang and Jiang, 2017; Wang et al., 2016; Seo et al., 2017; Wang et al., 2017; Song et al., 2017). However, BERT model simply concatenates a...
[2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Does explicit inter-sentence matching matter? .', 'Almost all previous state-of-the-art QA and RC models find answers by matching passages with questions, aka inter-sentence matching (Wang and Jiang, 2017; Wang et al., 2016; Seo et al., 2017; Wang et al., 2017; Song et al., 2017).', 'However, BERT model simply concat...
[None, None, ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet', 'BERT+QANet (fix BERT)', 'BERT+QANet (init. from (11))'], ['BERT+QANet'],...
1
D19-1616table_2
Evaluation results of our models on development (dev) and testing (test) sets. The automatic evaluation scores in terms of Rouge (R1 F1, R2 F1, RL F1) and BLEU for the output summaries are shown in the table.
4
[['Setting', 'S1', 'Dataset', 'Dev'], ['Setting', 'S2', 'Dataset', 'Test'], ['Setting', 'S2', 'Dataset', 'Dev'], ['Setting', 'S3', 'Dataset', 'Test'], ['Setting', 'S3', 'Dataset', 'Dev'], ['Setting', 'S4', 'Dataset', 'Test']]
1
[['R1_F1'], ['R2_F1'], ['RL_F1'], ['BLEU']]
[['43.9', '28.5', '46.3', '12.6'], ['39.7', '22.9', '42.2', '9'], ['45.4', '29.8', '47.4', '14'], ['55.7', '41.8', '57.6', '20.8'], ['44.3', '28.5', '46.4', '13.1'], ['40', '23', '42.3', '9.4']]
column
['R1_F1', 'R2_F1', 'RL_F1', 'BLEU']
['Dev', 'Test']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1_F1</th> <th>R2_F1</th> <th>RL_F1</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Setting || S1 || Dataset || Dev</td> <td>43.9</td> <td>28.5</td> <td>46.3</td> <...
Table 2
table_2
D19-1616
3
emnlp2019
5 Evaluation and Discussion . We evaluate the results for every 10,000 iterations on the dev and test set. The automatic evaluation results based on the dev and test set are shown in Table 2. To evaluate the proposed algorithms, we use ROUGE (Recall-Oriented Understudy for Gisting Evaluation) score, which is a popular ...
[2, 2, 1, 2, 2, 2, 1]
['5 Evaluation and Discussion .', 'We evaluate the results for every 10,000 iterations on the dev and test set.', 'The automatic evaluation results based on the dev and test set are shown in Table 2.', 'To evaluate the proposed algorithms, we use ROUGE (Recall-Oriented Understudy for Gisting Evaluation) score, which is...
[None, None, None, None, ['R1_F1', 'R2_F1', 'RL_F1'], ['BLEU'], ['R1_F1', 'R2_F1', 'RL_F1', 'S3', 'S1', 'S2']]
1
D19-1627table_4
The results on Word-in-Context (WiC) data.
2
[['Model', 'Lee and Chen (2017)'], ['Model', 'Neelakantan et al. (2015)'], ['Model', 'Mancini et al. (2016)'], ['Model', 'Guo et al. (2019)'], ['Model', 'Chang et al. (2018)'], ['Model', 'Pilehvar and Collier (2016)'], ['Model', 'Proposed (BERT-base)']]
1
[['Accuracy (%)']]
[['52.14'], ['54'], ['54.56'], ['55.27'], ['57'], ['58.55'], ['68.64']]
column
['Accuracy (%)']
['Proposed (BERT-base)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Lee and Chen (2017)</td> <td>52.14</td> </tr> <tr> <td>Model || Neelakantan et al. (2015)</td> <td>54</td> </tr> <tr...
Table 4
table_4
D19-1627
5
emnlp2019
3.2 Word Sense Selection in Context. We further examine if the captured sense-specific cues help word sense disambiguation via Word-in-Context data (WiC) (Pilehvar and Camacho-Collados, 2018), in which each instance contains a pair of two contexts sharing a target word, and the task is to decide whether their word senses...
[2, 2, 2, 1, 2]
['3.2 Word Sense Selection in Context.', 'We further examine if the captured sense-specific cues help word sense disambiguation via Word-in-Context data (WiC) (Pilehvar and Camacho-Collados, 2018), in which each instance contains a pair of two contexts sharing a target word, and the task is to decide whether their word s...
[None, None, None, ['Proposed (BERT-base)'], None]
1
D19-1634table_7
Comparison of copying accuracies.
2
[['System', 'MS UEDIN'], ['System', 'COPYNET'], ['System', 'Ours']]
1
[[' Accuracy']]
[['64.63'], ['64.72'], ['65.61']]
column
['Accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || MS UEDIN</td> <td>64.63</td> </tr> <tr> <td>System || COPYNET</td> <td>64.72</td> </tr> <tr> <td>System || Ours</t...
Table 7
table_7
D19-1634
8
emnlp2019
Table 7 shows the comparison of copying accuracies between MS UEDIN, COPYNET, and our approach. We find that our approach outperforms the two baselines. However, the copying accuracy of our approach is almost 20% lower than the prediction accuracy (i.e., 65.61% vs. 85.09%), indicating that it is much more challenging t...
[1, 1, 1]
['Table 7 shows the comparison of copying accuracies between MS UEDIN, COPYNET, and our approach.', 'We find that our approach outperforms the two baselines.', 'However, the copying accuracy of our approach is almost 20% lower than the prediction accuracy (i.e., 65.61% vs. 85.09%), indicating that it is much more chall...
[['MS UEDIN', 'COPYNET', 'Ours'], ['Ours'], ['Ours', ' Accuracy']]
1
D19-1638table_5
Evaluation III: Informativeness: The values represent percentage of times instructions generated by the model is chosen by a human evaluator.
2
[['Method', 'Set2MultipleSeq'], ['Method', 'Set2MultipleSeq+opt'], ['Method', 'Ambigous']]
1
[['%']]
[['30'], ['63'], ['7']]
column
['%']
['Set2MultipleSeq+opt']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>%</th> </tr> </thead> <tbody> <tr> <td>Method || Set2MultipleSeq</td> <td>30</td> </tr> <tr> <td>Method || Set2MultipleSeq+opt</td> <td>63</td> </tr> <tr> <td>Method || A...
Table 5
table_5
D19-1638
6
emnlp2019
We conducted separate evaluations for two pairs: (Set2SingleSeq, Set2MultipleSeq) and (Set2SingleSeq, Set2MultipleSeq+Opt). The results are shown in Tables 5 and 6 respectively. Results shown in Table 5 explain that incorporating neural components for subset selection and content ordering helps in improving informative
[1, 1, 1, 1, 0, 0]
['We conducted separate evaluations for two pairs: (Set2SingleSeq, Set2MultipleSeq) and (Set2SingleSeq, Set2MultipleSeq+Opt).', 'The results are shown in Tables 5 and 6 respectively.', 'Results shown in Table 5 explain that incorporating neural components for subset selection and content ordering helps in improving info...
[['Set2MultipleSeq', 'Set2MultipleSeq+opt'], None, ['Set2MultipleSeq+opt'], ['Set2MultipleSeq+opt'], None, None]
1
D19-1647table_2
Performance of the three proposed approaches in comparison with the baseline.
2
[['approach', 'baseline'], ['approach', 'WD'], ['approach', 'NER'], ['approach', 'BLSTM']]
1
[[' prec'], [' rec'], [' f1']]
[['49.80%', ' —', ' —'], ['67.30%', '93.00%', '78.10%'], ['71.80%', '81.30%', '76.20%'], ['86.90%', '85.30%', '86.10%']]
column
['prec', 'rec', 'f1']
['BLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>prec</th> <th>rec</th> <th>f1</th> </tr> </thead> <tbody> <tr> <td>approach || baseline</td> <td>49.80%</td> <td>—</td> <td>—</td> </tr> <tr> <td>approach || WD</td> ...
Table 2
table_2
D19-1647
4
emnlp2019
5.2 Baseline VA Candidates . We cannot determine recall for our approaches; instead, we compute precision, recall and f-score based on the baseline dataset (see Section 3), which is shown in Table 2 as well as precision of the baseline dataset. The automated approaches using Wikidata and named entity recognition can boos...
[2, 1, 1, 1, 1]
['5.2 Baseline VA Candidates .', 'We cannot determine recall for our approaches; instead, we compute precision, recall and f-score based on the baseline dataset (see Section 3), which is shown in Table 2 as well as precision of the baseline dataset.', 'The automated approaches using Wikidata and named entity recognition ...
[None, None, ['WD', 'NER', 'baseline', ' prec', ' rec'], ['BLSTM'], ['BLSTM', 'WD', ' rec', ' prec']]
1
D19-1648table_1
Experimental results. +P indicates consideration of paraphrases and -P does not.
2
[['Model', 'Baseline'], ['Model', 'VE-P'], ['Model', 'HanPaNE-P'], ['Model', 'VE+P'], ['Model', 'HanPaNE+P (Proposed)']]
1
[[' Precision'], [' Recall'], [' F-score']]
[['92.75', '92.15', '92.45'], ['93.11', '91.4', '92.25'], ['92.71', '91.94', '92.32'], ['93.15', '91.79', '92.47'], ['92.81', '92.33', '92.57']]
column
['Precision', 'Recall', 'F-score']
['HanPaNE+P (Proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F-score</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>92.75</td> <td>92.15</td> <td>92.45</td> </tr> <tr> <td>Mod...
Table 1
table_1
D19-1648
4
emnlp2019
3.2 Experimental Results. Table 1 shows the experimental results. We can see that HanPaNE+P showed the highest accuracy and HanPaNE+P and VE+P, with consideration of paraphrases, showed a higher accuracy than Baseline. In contrast, HanPaNE-P and VE-P, without consideration of paraphrases, did not. The results indicate ...
[2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
['3.2 Experimental Results.', 'Table 1 shows the experimental results.', 'We can see that HanPaNE+P showed the highest accuracy and HanPaNE+P and VE+P, with consideration of paraphrases, showed a higher accuracy than Baseline.', 'In contrast, HanPaNE-P and VE-P, without consideration of paraphrases, did not.', 'The res...
[None, None, ['HanPaNE+P (Proposed)', 'VE+P', 'Baseline'], ['HanPaNE-P', 'VE-P'], ['HanPaNE+P (Proposed)', 'VE+P'], None, ['HanPaNE-P', 'HanPaNE+P (Proposed)'], ['Baseline'], None, ['HanPaNE-P', 'HanPaNE+P (Proposed)'], None, ['HanPaNE-P', 'HanPaNE+P (Proposed)']]
1
D19-1653table_4
Performance of the average F1. Max score if bold and significant differences with p < 0.05 if *.
2
[['BLC variant', 'sentence-level + structural'], ['BLC variant', 'sentence-level'], ['BLC variant', 'post-level']]
2
[[' unit identification', 'B'], [' unit identification', ' I'], [' unit identification', ' O'], [' unit identification', ' macro'], ['unit classification', ' V'], ['unit classification', ' R'], ['unit classification', ' P'], ['unit classification', ' T'], ['unit classification', ' F'], ['unit classification', ' macro']...
[['75.4', '92.8', '62.1', ' *76.8', ' *80.7', '61.5', ' *16.0', '43.1', '33.2', ' *49.2'], ['75.6', '92.8', '61', '76.5', '80.2', '60.9', '13', '41.3', '31.3', '47.7'], ['67.8', '92.9', ' *64.7', '75.1', '79.9', '49.4', '2.1', '35.3', '32.2', '43.8']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['sentence-level + structural']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>unit identification || B</th> <th>unit identification || I</th> <th>unit identification || O</th> <th>unit identification || macro</th> <th>unit classification || V</th> <th>unit classif...
Table 4
table_4
D19-1653
5
emnlp2019
Results: Overall Performance . Table 4 shows that our proposed sentence-level BLC with structural information performs best regarding macro F1 in either boundary identification or unit classification tasks. The presence of structural features provides an excellent boost in classifying unit types. For the post-level BLC...
[2, 1, 2, 1]
['Results: Overall Performance .', 'Table 4 shows that our proposed sentence-level BLC with structural information performs best regarding macro F1 in either boundary identification or unit classification tasks.', 'The presence of structural features provides an excellent boost in classifying unit types.', 'For the pos...
[None, ['sentence-level + structural', ' macro', ' unit identification', 'unit classification'], None, ['post-level']]
1
D19-1659table_1
Results on the Yelp and Amazon test sets.
2
[['Model', 'Shen et al. (2017)'], ['Model', 'Fu et al. (2018)'], ['Model', 'Li et al. (2018)'], ['Model', 'This work'], ['Model', 'w/o Lsentiment'], ['Model', 'w/o Lcontent'], ['Model', 'w/o Lalignment'], ['Model', 'only Lsentiment']]
2
[['Yelp', 'Acc'], ['Yelp', 'BLEU'], ['Amazon', 'Acc'], ['Amazon', 'BLEU']]
[['74.5', '6.79', '74.4', '1.57'], ['46.8', '11.24', '70.3', '7.87'], ['88.3', '12.61', '53.4', '27.12'], ['88.5', '12.13', '53.8', '15.95'], ['3.4', '24.06', '18.2', '42.65'], ['86.4', '10.08', '53.9', '14.77'], ['84.7', '11.94', '51.6', '16.51'], ['85.4', '10.05', '53.4', '14.76']]
column
['Acc', 'BLEU', 'Acc', 'BLEU']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yelp || Acc</th> <th>Yelp || BLEU</th> <th>Amazon || Acc</th> <th>Amazon || BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Shen et al. (2017)</td> <td>74.5</td> <td>6.79</td...
Table 1
table_1
D19-1659
4
emnlp2019
Table 1 shows the results of various models. Our model based on the unified objective of Eq. (4) offers better balanced results compared to its variants. When removing Lsentiment, our model degrades to an input copy-like method, resulting in low classification accuracies but the highest BLEU scores. When removing Lcont...
[1, 1, 1, 1, 1, 1, 1]
['Table 1 shows the results of various models.', 'Our model based on the unified objective of Eq. (4) offers better balanced results compared to its variants.', 'When removing Lsentiment, our model degrades to an input copy-like method, resulting in low classification accuracies but the highest BLEU scores.', 'When rem...
[None, ['This work'], ['w/o Lsentiment', 'Acc', 'BLEU'], ['w/o Lcontent', 'BLEU'], ['w/o Lalignment', 'Acc', 'BLEU', 'Yelp'], ['Amazon', 'Acc', 'BLEU'], ['only Lsentiment']]
1
D19-1665table_1
Overall results for each dataset and model.
1
[['FT-BR'], ['MTL'], ['FT-LP'], ['MTL-LP'], ['MTL-XLD'], ['Sobhani (Seq2Seq)']]
1
[['BBC'], ['ETC'], ['MFTC']]
[['39.72', '52.24', '51.19'], ['48.57', '53.32', '53.97'], ['36.20', '53.57', '55.11'], ['55.60', '55.37', '62.98'], ['51.33', '52.22', '60.94'], ['NA', '54.81', 'NA']]
column
['accuracy', 'accuracy', 'accuracy']
['MTL-LP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BBC</th> <th>ETC</th> <th>MFTC</th> </tr> </thead> <tbody> <tr> <td>FT-BR</td> <td>39.72</td> <td>52.24</td> <td>51.19</td> </tr> <tr> <td>MTL</td> <td>48.57</td...
Table 1
table_1
D19-1665
4
emnlp2019
In Table 1 we report the test set results for all models. The results for the MFTC dataset are averaged across the six discourse domains. Overall, MTL-LP is the best performing multilabel classification method across all the datasets. MTL-LP is also better than the best performing model Seq2Seq reported in Sobhani et al...
[1, 2, 1, 1, 1, 2, 2]
['In Table 1 we report the test set results for all models.', 'The results for the MFTC dataset are averaged across the six discourse domains.', 'Overall, MTL-LP is the best performing multilabel classification method across all the datasets.', 'MTL-LP is also better than the best performing model Seq2Seq reported in So...
[None, ['MFTC'], ['MTL-LP'], ['Sobhani (Seq2Seq)', 'MTL-LP'], ['MTL-XLD', 'BBC', 'MFTC', 'MTL', 'ETC'], ['BBC', 'MFTC'], ['BBC']]
1
D19-1667table_3
Results on total term prediction(%).
2
[['Model', 'CNN'], ['Model', 'RNN'], ['Model', 'RCNN'], ['Model', 'DGN']]
1
[['S'], ['EM'], ['Acc@0.1'], ['Acc@0.2']]
[['67.24', '8.41', '16.96', '35.58'], ['67.27', '8.04', '16.79', '35.11'], ['69.56', '8.54', '17.57', '35.75'], ['75.74', '8.64', '19.32', '40.43']]
column
['S', 'EM', 'Acc@0.1', 'Acc@0.2']
['DGN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S</th> <th>EM</th> <th>Acc@0.1</th> <th>Acc@0.2</th> </tr> </thead> <tbody> <tr> <td>Model || CNN</td> <td>67.24</td> <td>8.41</td> <td>16.96</td> <td>35.58</td> </...
Table 3
table_3
D19-1667
4
emnlp2019
Table 3 presents the results of the total term prediction. Although our method is not directly trained to make the final prediction, the performance of our model surpasses all baselines, which confirms that the breakdown charge-based analysis can indeed help the total prison term prediction.
[1, 1]
['Table 3 presents the results of the total term prediction.', 'Although our method is not directly trained to make the final prediction, the performance of our model surpasses all baselines, which confirms that the breakdown charge-based analysis can indeed help the total prison term prediction.']
[None, ['DGN']]
1
P16-1111table_2
Results of parsing abstract structure
1
[['BACKGROUND'], ['OBJECTIVE'], ['DATA'], ['DESIGN'], ['METHOD'], ['RESULT'], ['CONCLUSION'], ['ALL']]
1
[['Precision'], ['Recall'], ['F-measure'], ['Accuracy']]
[['74.6', '77.2', '75.8', '-'], ['85.2', '81.8', '83.5', '-'], ['82.6', '76.8', '79.6', '-'], ['68', '64.8', '66.3', '-'], ['80.4', '80.1', '80.2', '-'], ['90.8', '93.3', '92', '-'], ['93.8', '92', '92.9', '-'], ['-', '-', '-', '86.6']]
column
['Precision', 'Recall', 'F-measure', 'Accuracy']
['BACKGROUND', 'OBJECTIVE', 'DATA', 'DESIGN', 'METHOD', 'RESULT', 'CONCLUSION']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F-measure</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>BACKGROUND</td> <td>74.6</td> <td>77.2</td> <td>75.8</td> <td>-</td...
Table 2
table_2
P16-1111
6
acl2016
Table 2 shows the results obtained on testing the final classifier system on the Test subset of SL. We obtain an overall high accuracy of 86.6% at the sentence level. While RESULT and CONCLUSION obtained F-measures above 90%, OBJECTIVE and METHOD reported reasonable F-measures above 80%. DESIGN obtained the lowest prec...
[1, 1, 1, 1, 0]
['Table 2 shows the results obtained on testing the final classifier system on the Test subset of SL.', 'We obtain an overall high accuracy of 86.6% at the sentence level.', 'While RESULT and CONCLUSION obtained F-measures above 90%, OBJECTIVE and METHOD reported reasonable F-measures above 80%.', 'DESIGN obtained the ...
[None, ['ALL', 'Accuracy'], ['RESULT', 'CONCLUSION', 'OBJECTIVE', 'METHOD', 'F-measure'], ['DESIGN', 'Precision', 'Recall', 'F-measure'], None]
1
P16-1111table_3
Results on classifying trajectories
2
[['System', 'Random'], ['System', 'Majority'], ['System', 'LR'], ['System', 'LR - LD-R']]
1
[['ALL'], ['BIO'], ['PHY'], ['CHM'], ['NEU']]
[['50.3', '47.2', '47.8', '50.9', '51.2'], ['56.1', '56.3', '81.6', '74.3', '56.6'], ['74.2', '81', '83.3', '81.9', '74.8'], ['71.3', '77.7', '81.6', '73.1', '70.5']]
column
['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Accuracy']
['LR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ALL</th> <th>BIO</th> <th>PHY</th> <th>CHM</th> <th>NEU</th> </tr> </thead> <tbody> <tr> <td>System || Random</td> <td>50.3</td> <td>47.2</td> <td>47.8</td> <td>5...
Table 3
table_3
P16-1111
8
acl2016
Table 3 shows the performance of our model on this task. As expected a topic's label distribution over its entire life-time is very informative with respect to classifying the topic as growing or declining. We achieve a significant improvement over the baselines on the full dataset (32.3% relative improvement over majo...
[1, 1, 1, 2]
['Table 3 shows the performance of our model on this task.', "As expected a topic's label distribution over its entire life-time is very informative with respect to classifying the topic as growing or declining.", 'We achieve a significant improvement over the baselines on the full dataset (32.3% relative improvement o...
[None, ['LR'], ['LR', 'ALL', 'Majority'], ['LR']]
1
P16-1111table_4
Results on predicting trajectory
2
[['System', 'LD-% + LD-delta'], ['System', 'LD-% only'], ['System', 'LD-delta only']]
1
[['Accuracy on ALL']]
[['72.1'], ['71'], ['60.4']]
column
['Accuracy on ALL']
['LD-% + LD-delta']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy on ALL</th> </tr> </thead> <tbody> <tr> <td>System || LD-% + LD-delta</td> <td>72.1</td> </tr> <tr> <td>System || LD-% only</td> <td>71</td> </tr> <tr> <td>Syste...
Table 4
table_4
P16-1111
8
acl2016
Table 4 shows the performance of our model on this task. (The baseline performances are the same as in the classification task). These results show that we can accurately predict whether a topic will grow or decline using only a small amount of data. Moreover, we see that both percentage and delta features are necessar...
[1, 2, 1, 1]
['Table 4 shows the performance of our model on this task.', '(The baseline performances are the same as in the classification task).', 'These results show that we can accurately predict whether a topic will grow or decline using only a small amount of data.', 'Moreover, we see that both percentage and delta features a...
[None, None, ['Accuracy on ALL'], ['LD-% + LD-delta', 'LD-% only']]
1
P16-1112table_1
Performance of the CRF and alternative neural network structures on the public FCE dataset for token-level error detection in learner writing.
1
[['CRF'], ['CNN'], ['Deep CNN'], ['Bi-RNN'], ['Deep Bi-RNN'], ['Bi-LSTM'], ['Deep Bi-LSTM']]
2
[['Development', 'P'], ['Development', 'R'], ['Development', 'F0.5'], ['Test', 'predicted'], ['Test', 'correct'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F0.5']]
[['62.2', '13.6', '36.3', '914', '516', '56.5', '8.2', '25.9'], ['52.4', '24.9', '42.9', '3518', '1620', '46', '25.7', '39.8'], ['48.4', '26.2', '41.4', '3992', '1651', '41.4', '26.2', '37.1'], ['63.9', '18', '42.3', '2333', '1196', '51.3', '19', '38.2'], ['60.3', '17.6', '40.6', '2543', '1255', '49.4', '19.9', '38.1']...
column
['P', 'R', 'F0.5', 'predicted', 'correct', 'P', 'R', 'F0.5']
['CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Development || P</th> <th>Development || R</th> <th>Development || F0.5</th> <th>Test || predicted</th> <th>Test || correct</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F...
Table 1
table_1
P16-1112
5
acl2016
Table 1 contains results for experiments comparing different composition architectures on the task of error detection. The CRF has the lowest F0.5 score compared to any of the neural models. It memorises frequent error sequences with high precision, but does not generalise sufficiently, resulting in low recall. The abi...
[1, 1, 1, 2]
['Table 1 contains results for experiments comparing different composition architectures on the task of error detection.', 'The CRF has the lowest F0.5 score compared to any of the neural models.', 'It memorises frequent error sequences with high precision, but does not generalise sufficiently, resulting in low recall....
[None, ['CRF', 'F0.5'], ['CRF', 'P', 'R'], None]
1
P16-1112table_2
Results on the public FCE test set when incrementally providing more training data to the error detection model.
2
[['Training Data', 'FCE-public'], ['Training Data', '+NUCLE A4'], ['Training Data', '+IELTS'], ['Training Data', '+FCE'], ['Training Data', '+CPE'], ['Training Data', '+CAE']]
2
[['Dev', 'F0.5'], ['Test', 'F0.5']]
[['46', '41.1'], ['39', '41'], ['45.6', '50.7'], ['57.2', '61.1'], ['59', '62.1'], ['60.7', '64.3']]
column
['F0.5', 'F0.5']
['Training Data']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || F0.5</th> <th>Test || F0.5</th> </tr> </thead> <tbody> <tr> <td>Training Data || FCE-public</td> <td>46</td> <td>41.1</td> </tr> <tr> <td>Training Data || +NUCLE A4</td>...
Table 2
table_2
P16-1112
6
acl2016
Table 2 contains results obtained by incrementally adding training data to the Bi-LSTM model. We found that incorporating the NUCLE dataset does not improve performance over using only the FCE-public dataset, which is likely due to the two corpora containing texts with different domains and writing styles. The texts in...
[1, 1, 2, 1]
['Table 2 contains results obtained by incrementally adding training data to the Bi-LSTM model.', 'We found that incorporating the NUCLE dataset does not improve performance over using only the FCE-public dataset, which is likely due to the two corpora containing texts with different domains and writing styles.', 'The ...
[['Training Data'], ['+NUCLE A4', 'FCE-public'], ['FCE-public', '+NUCLE A4'], ['Training Data']]
1
P16-1113table_7
Results (in percentage) on the CoNLL2009 test sets for Chinese, German and Spanish.
2
[['Chinese', 'PathLSTM'], ['Chinese', 'Bjorkelund et al. (2009)'], ['Chinese', 'Zhao et al. (2009)'], ['German', 'PathLSTM'], ['German', 'Bjorkelund et al. (2009)'], ['German', 'Che et al. (2009)'], ['Spanish', 'Zhao et al. (2009)'], ['Spanish', 'PathLSTM'], ['Spanish', 'Bjorkelund et al. (2009)']]
1
[['P'], ['R'], ['F1']]
[['83.2', '75.9', '79.4'], ['82.4', '75.1', '78.6'], ['80.4', '75.2', '77.7'], ['81.8', '78.5', '80.1'], ['81.2', '78.3', '79.7'], ['82.1', '75.4', '78.6'], ['83.1', '78', '80.5'], ['83.2', '77.4', '80.2'], ['78.9', '74.3', '76.5']]
column
['P', 'R', 'F1']
['PathLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Chinese || PathLSTM</td> <td>83.2</td> <td>75.9</td> <td>79.4</td> </tr> <tr> <td>Chinese || Bjorkelund ...
Table 7
table_7
P16-1113
8
acl2016
The results, summarized in Table 7, indicate that PathLSTM performs better than the system by Bjorkelund et al. (2009) in all cases. For German and Chinese, PathLSTM achieves the best overall F1-scores of 80.1% and 79.4%, respectively.
[1, 1]
['The results, summarized in Table 7, indicate that PathLSTM performs better than the system by Bjorkelund et al. (2009) in all cases.', 'For German and Chinese, PathLSTM achieves the best overall F1-scores of 80.1% and 79.4%, respectively.']
[['PathLSTM', 'Bjorkelund et al. (2009)'], ['PathLSTM', 'Chinese', 'German', 'F1']]
1
P16-1116table_1
Overall performance with gold-standard entities, timex, and values, the candidate arguments are annotated in ACE 2005. “ET” means the pattern balancing event type classifier, “Regu” means the regularization method
2
[['Method', 'JET'], ['Method', 'Cross-Event'], ['Method', 'Cross-Entity'], ['Method', 'Joint'], ['Method', 'DMCNN'], ['Method', 'RBPB(JET)'], ['Method', 'RBPB(JET) + ET'], ['Method', 'RBPB(JET) + Regu'], ['Method', 'RBPB(JET) + ET + Regu']]
2
[['Trigger Classification', 'P'], ['Trigger Classification', 'R'], ['Trigger Classification', 'F1'], ['Argument Identification', 'P'], ['Argument Identification', 'R'], ['Argument Identification', 'F1'], ['argument Role', 'P'], ['argument Role', 'R'], ['argument Role', 'F1']]
[['67.6', '53.5', '59.7', '46.5', '37.2', '41.3', '41', '32.8', '36.5'], ['68.7', '68.9', '68.8', '50.9', '49.7', '50.3', '45.1', '44.1', '44.6'], ['72.9', '64.3', '68.3', '53.4', '52.9', '53.1', '51.6', '45.5', '48.3'], ['73.7', '62.3', '67.5', '69.8', '47.9', '56.8', '64.7', '44.4', '52.7'], ['75.6', '63.6', '69.1', ...
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['RBPB(JET)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Classification || P</th> <th>Trigger Classification || R</th> <th>Trigger Classification || F1</th> <th>Argument Identification || P</th> <th>Argument Identification || R</th> <th>Ar...
Table 1
table_1
P16-1116
7
acl2016
Table 1 shows the overall performance on the blind test set. We compare our results with the JET baseline as well as the Cross-Event, CrossEntity, and joint methods. When adding the event type classifier, in the line titled “+ ET”, we see a significant increase in the three measures over the JET baseline in recall. Alt...
[1, 1, 1, 1, 2, 1, 1, 1, 1]
['Table 1 shows the overall performance on the blind test set.', 'We compare our results with the JET baseline as well as the Cross-Event, CrossEntity, and joint methods.', 'When adding the event type classifier, in the line titled “+ ET”, we see a significant increase in the three measures over the JET baseline in rec...
[None, ['Cross-Event', 'Cross-Entity', 'Joint'], ['JET', 'RBPB(JET) + ET', 'R'], ['RBPB(JET)', 'Trigger Classification', 'P', 'F1'], ['JET', 'RBPB(JET)'], ['RBPB(JET) + Regu'], ['Cross-Event', 'Cross-Entity', 'Joint', 'DMCNN', 'RBPB(JET) + Regu', 'F1'], ['RBPB(JET)'], ['Trigger Classification', 'RBPB(JET)', 'Cross-Even...
1
P16-1116table_2
Overall performance with predicted entities, timex, and values, the candidate arguments are extracted by JET. “ET” is the pattern balancing event type classifier, “Regu” is the regularization method
2
[['Method', 'JET'], ['Method', 'Cross-Document'], ['Method', 'Joint'], ['Method', 'RBPB(JET)'], ['Method', 'RBPB(JET) + ET'], ['Method', 'RBPB(JET) + Regu'], ['Method', 'RBPB(JET) + ET + Regu']]
2
[['Trigger', 'F1'], ['Arg id', 'F1'], ['Arg id+cl', 'F1']]
[['59.7', '42.5', '36.6'], ['67.3', '46.2', '42.6'], ['65.6', '-', '41.8'], ['60.4', '44.3', '37.1'], ['66', '47.8', '39.7'], ['64.8', '54.6', '42'], ['67.8', '55.4', '43.8']]
column
['F1', 'F1', 'F1']
['RBPB(JET)', 'RBPB(JET) + ET', 'RBPB(JET) + Regu', 'RBPB(JET) + ET + Regu']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger || F1</th> <th>Arg id || F1</th> <th>Arg id+cl || F1</th> </tr> </thead> <tbody> <tr> <td>Method || JET</td> <td>59.7</td> <td>42.5</td> <td>36.6</td> </tr> <tr> ...
Table 2
table_2
P16-1116
8
acl2016
We test the performance with argument candidates automatically extracted by JET in Table 2, our approach “+ ET” again significantly outperforms the JET baseline. Remarkably, our result is comparable with the Joint model although we only use lexical features. The line titled + Regu in Table 2 represents the performance ...
[1, 1, 1, 1, 1, 1]
['We test the performance with argument candidates automatically extracted by JET in Table 2, our approach “+ ET” again significantly outperforms the JET baseline.', 'Remarkably, our result is comparable with the Joint model although we only use lexical features.', 'The line titled + Regu in Table 2 represents the perf...
[['JET', 'RBPB(JET) + ET'], ['RBPB(JET) + ET', 'Joint'], ['RBPB(JET) + Regu'], ['RBPB(JET) + Regu', 'Trigger', 'F1', 'Arg id', 'Arg id+cl', 'JET', 'Cross-Document', 'Joint', 'RBPB(JET) + ET'], ['RBPB(JET)'], ['RBPB(JET)', 'RBPB(JET) + ET', 'RBPB(JET) + Regu', 'RBPB(JET) + ET + Regu', 'Trigger', 'F1', 'Cross-Document', ...
1
P16-1118table_4
System performance on test data (* indicates statistical significance)
2
[['System', 'Lucene'], ['System', 'EDITS'], ['System', 'TIE'], ['System', 'ENT']]
2
[['Newswire', 'Precision'], ['Newswire', 'Recall'], ['Newswire', 'F-score'], ['Clinical', 'Precision'], ['Clinical', 'Recall'], ['Clinical', 'F-score']]
[['0.47', '0.48', '0.47*', '0.16', '0.22', '0.19'], ['0.22', '0.57', '0.32', '0.23', '0.21', '0.20'], ['0.66', '0.21', '0.31', '0.43', '0.01', '0.02'], ['0.77', '0.26', '0.39', '0.42', '0.15', '0.23*']]
column
['Precision', 'Recall', 'F-score', 'Precision', 'Recall', 'F-score']
['ENT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Newswire || Precision</th> <th>Newswire || Recall</th> <th>Newswire || F-score</th> <th>Clinical || Precision</th> <th>Clinical || Recall</th> <th>Clinical || F-score</th> </tr> </thead>...
Table 4
table_4
P16-1118
6
acl2016
Table 4 summarizes the system performance on newswire and clinical data. We observe that systems that did well on RTE datasets, were mediocre on the clinical dataset. We did not, however, put any effort into adaption of TIE and EDITS to the clinical data. So the mediocre performance on clinical is understandable. It is...
[1, 1, 2, 2, 1, 2, 2, 2, 1]
['Table 4 summarizes the system performance on newswire and clinical data.', 'We observe that systems that did well on RTE datasets, were mediocre on the clinical dataset.', 'We did not, however, put any effort into adaption of TIE and EDITS to the clinical data.', 'So the mediocre performance on clinical is understand...
[['Newswire', 'Clinical'], ['Clinical'], ['TIE', 'EDITS', 'Clinical'], ['Clinical'], ['ENT', 'Newswire', 'Clinical'], None, ['F-score'], ['EDITS', 'TIE', 'ENT', 'F-score'], ['System', 'Clinical', 'Newswire']]
1
P16-1120table_3
Performance of Translation Extraction
2
[['Method', 'Cue(BiLDA)'], ['Method', 'Cue(BiSTM)'], ['Method', 'Cue(BiSTM+TS)'], ['Method', 'Liu(BiLDA)'], ['Method', 'Liu(BiSTM)'], ['Method', 'Liu(BiSTM+TS)']]
2
[['ACC1', 'K=100'], ['ACC1', 'K=400'], ['ACC1', 'K=2000'], ['ACC10', 'K=100'], ['ACC10', 'K=400'], ['ACC10', 'K=2000']]
[['0.024', '0.056', '0.101', '0.093', '0.170', '0.281'], ['0.055', '0.112', '0.184', '0.218', '0.286', '0.410'], ['0.052', '0.107', '0.176', '0.196', '0.274', '0.398'], ['0.206', '0.345', '0.426', '0.463', '0.550', '0.603'], ['0.287', '0.414', '0.479', '0.531', '0.625', '0.671'], ['0.283', '0.406', '0.467', '0.536', '0...
column
['ACC1', 'ACC1', 'ACC1', 'ACC10', 'ACC10', 'ACC10']
['Cue(BiSTM+TS)', 'Liu(BiSTM+TS)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ACC1 || K=100</th> <th>ACC1 || K=400</th> <th>ACC1 || K=2000</th> <th>ACC10 || K=100</th> <th>ACC10 || K=400</th> <th>ACC10 || K=2000</th> </tr> </thead> <tbody> <tr> <td>Method...
Table 3
table_3
P16-1120
7
acl2016
We measured the performance of translation extraction with top N accuracy (ACCN), the number of test words whose top N translation candidates contain a correct translation over the total number of test words (7,930). Table 3 summarizes ACC1 and ACC10 for each model. As can be seen, Cue/Liu(BiSTM) and Cue/Liu(BiSTM+TS) ...
[2, 1, 1, 1]
['We measured the performance of translation extraction with top N accuracy (ACCN), the number of test words whose top N translation candidates contain a correct translation over the total number of test words (7,930).', 'Table 3 summarizes ACC1 and ACC10 for each model.', 'As can be seen, Cue/Liu(BiSTM) and Cue/Liu(Bi...
[['ACC1', 'ACC10'], ['ACC1', 'ACC10'], ['Cue(BiSTM)', 'Liu(BiSTM)', 'Cue(BiSTM+TS)', 'Liu(BiSTM+TS)', 'Cue(BiLDA)', 'Liu(BiLDA)'], ['Cue(BiSTM+TS)', 'Liu(BiSTM+TS)']]
1
P16-1123table_3
Comparison with results published in the literature, where ‘∗’ refers to models from Nguyen and Grishman (2015).
3
[['Classifier', 'Manually Engineered Methods', 'SVM (Rink and Harabagiu 2010)'], ['Classifier', 'Dependency Methods', 'RNN (Socher et al. 2012)'], ['Classifier', 'Dependency Methods', 'MVRNN (Socher et al. 2012)'], ['Classifier', 'Dependency Methods', 'FCM (Yu et al. 2014)'], ['Classifier', 'Dependency Methods', 'Hybri...
1
[['F1']]
[['82.2'], ['77.6'], ['82.4'], ['83'], ['83.4'], ['83.7'], ['85.8'], ['84.5'], ['82.7'], ['84.1'], ['83.6'], ['85.6'], ['83.4'], ['84.1'], ['84.1'], ['87.5'], ['88']]
column
['F1']
['Our Architectures']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Classifier || Manually Engineered Methods || SVM (Rink and Harabagiu 2010)</td> <td>82.2</td> </tr> <tr> <td>Classifier || Dependency Methods || R...
Table 3
table_3
P16-1123
7
acl2016
Table 3 provides a detailed comparison of our Multi-Level Attention CNN model with previous approaches. We observe that our novel attention-based architecture achieves new state-of-the-art results on this relation classification dataset. Att Input-CNN relies only on the primal attention at the input level, performing s...
[1, 1, 2, 1, 1]
['Table 3 provides a detailed comparison of our Multi-Level Attention CNN model with previous approaches.', 'We observe that our novel attention-based architecture achieves new state-of-the-art results on this relation classification dataset.', 'Att Input-CNN relies only on the primal attention at the input level, perf...
[None, ['Our Architectures'], ['Att-Input-CNN'], ['Att-Input-CNN', 'F1', 'SVM (Rink and Harabagiu 2010)', 'CR-CNN (dos Santos et al. 2015)', 'DRNNs (Xu et al. 2016)'], ['Att-Pooling-CNN', 'F1']]
1
P16-1123table_4
Comparison between the main model and variants.
2
[['Classifier', 'Att-Input-CNN (Main)'], ['Classifier', 'Att-Input-CNN (Variant-1)'], ['Classifier', 'Att-Input-CNN (Variant-2)']]
1
[['F1']]
[['87.5'], ['87.2'], ['87.3']]
column
['F1']
['Att-Input-CNN (Main)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Classifier || Att-Input-CNN (Main)</td> <td>87.5</td> </tr> <tr> <td>Classifier || Att-Input-CNN (Variant-1)</td> <td>87.2</td> </tr> <...
Table 4
table_4
P16-1123
7
acl2016
Table 4 provides the experimental results for the two variants of the model given by Eqs.(7) and (8) in Section 3.3. Our main model outperforms the other variants on this dataset, although the variants may still prove useful when applied to other tasks.
[1, 1]
['Table 4 provides the experimental results for the two variants of the model given by Eqs.(7) and (8) in Section 3.3.', 'Our main model outperforms the other variants on this dataset, although the variants may still prove useful when applied to other tasks.']
[None, ['Att-Input-CNN (Main)', 'Att-Input-CNN (Variant-1)', 'Att-Input-CNN (Variant-2)']]
1
P16-1126table_3
Precision/Recall/F1 for the three models. The three starred categories resulted from the decomposition of the original Other category, which is excluded here. Categories are ordered in this table in descending order by frequency in the dataset.
2
[['Category', 'First Party Collection/Use'], ['Category', 'Third Party Sharing/Collection'], ['Category', 'User Choice/Control'], ['Category', 'Introductory/Generic*'], ['Category', 'Data Security'], ['Category', 'Internat’l and Specific Audiences'], ['Category', 'Privacy Contact Information*'], ['Category', 'User Acce...
2
[['LR', 'P'], ['LR', 'R'], ['LR', 'F'], ['SVM', 'P'], ['SVM', 'R'], ['SVM', 'F'], ['HMM', 'P'], ['HMM', 'R'], ['HMM', 'F']]
[['0.73', '0.67', '0.7', '0.76', '0.73', '0.75', '0.69', '0.76', '0.72'], ['0.64', '0.63', '0.63', '0.67', '0.73', '0.7', '0.63', '0.61', '0.62'], ['0.45', '0.62', '0.52', '0.65', '0.58', '0.61', '0.47', '0.33', '0.39'], ['0.51', '0.5', '0.5', '0.58', '0.49', '0.53', '0.54', '0.49', '0.51'], ['0.48', '0.75', '0.59', '0...
column
['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F']
['HMM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LR || P</th> <th>LR || R</th> <th>LR || F</th> <th>SVM || P</th> <th>SVM || R</th> <th>SVM || F</th> <th>HMM || P</th> <th>HMM || R</th> <th>HMM || F</th> </tr> </thead> ...
Table 3
table_3
P16-1126
9
acl2016
We split the set of 115 policies into subsets of 75 for training and 40 for testing. The number of clusters in the HMM approach8 is set to 100 and the results are shown in Table 3 as means across 10 runs. The standard deviations for these performance figures are generally between 0.01 and 0.05; the one exception is Do ...
[2, 1, 2, 1]
['We split the set of 115 policies into subsets of 75 for training and 40 for testing.', 'The number of clusters in the HMM approach8 is set to 100 and the results are shown in Table 3 as means across 10 runs.', 'The standard deviations for these performance figures are generally between 0.01 and 0.05; the one exceptio...
[None, ['HMM'], None, ['HMM', 'SVM', 'LR', 'F']]
1
P16-1127table_2
Test results over different domains on SPO dataset. The numbers reported correspond to the proportion of cases in which the predicted LF is interpretable against the KB and returns the correct answer. LFP = Logical Form Prediction, CFP = Canonical Form Prediction, DSP = Derivation Sequence Prediction, DSP-C = Derivatio...
1
[['SPO'], ['LFP'], ['CFP'], ['DSP'], ['DSP-C'], ['DSP-CL']]
1
[['Basketball'], ['Social'], ['Publication'], ['Blocks'], ['Calendar'], ['Housing'], ['Restaurants'], ['Avg']]
[['46.3', '48.2', '59', '41.9', '74.4', '54', '75.9', '57.1'], ['73.1', '70.2', '72', '55.4', '71.4', '61.9', '76.5', '68.6'], ['80.3', '79.5', '70.2', '54.1', '73.2', '63.5', '71.1', '70.3'], ['71.6', '67.5', '64', '53.9', '64.3', '55', '76.8', '64.7'], ['80.5', '80', '75.8', '55.6', '75', '61.9', '80.1', '72.7'], ['8...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['LFP', 'CFP', 'DSP', 'DSP-C', 'DSP-CL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Basketball</th> <th>Social</th> <th>Publication</th> <th>Blocks</th> <th>Calendar</th> <th>Housing</th> <th>Restaurants</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <...
Table 2
table_2
P16-1127
8
acl2016
Results on test data Table 2 shows the test results of SPO and our different systems over the seven domains. It can be seen that all of our sequence-based systems are performing better than SPO by a large margin on these tests. When averaging over the seven domains, our ‘worst’ system DSP scores at 64.7% compared to SP...
[1, 1, 1, 2, 1]
['Results on test data Table 2 shows the test results of SPO and our different systems over the seven domains.', 'It can be seen that all of our sequence-based systems are performing better than SPO by a large margin on these tests.', 'When averaging over the seven domains, our ‘worst’ system DSP scores at 64.7% compar...
[['SPO', 'LFP', 'CFP', 'DSP', 'DSP-C', 'DSP-CL'], ['LFP', 'CFP', 'DSP', 'DSP-C', 'DSP-CL', 'SPO'], ['DSP', 'SPO'], ['DSP'], ['LFP', 'CFP', 'DSP']]
1
P16-1129table_3
Results of feature validation
2
[['Method', 'RF'], ['Method', 'RF - w/o novel'], ['Method', 'RF - w/o trad.']]
1
[['R-1'], ['R-2'], ['R-SU4']]
[['0.38559', '0.11887', '0.14907'], ['0.37297', '0.10964', '0.14021'], ['0.36314', '0.0991', '0.13102']]
column
['R-1', 'R-2', 'R-SU4']
['RF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-SU4</th> </tr> </thead> <tbody> <tr> <td>Method || RF</td> <td>0.38559</td> <td>0.11887</td> <td>0.14907</td> </tr> <tr> <td>Method || RF ...
Table 3
table_3
P16-1129
7
acl2016
Different groups of features may play different roles in the LTR models. In order to validate the impact of both the traditional features and the novel task-specific features, we conduct experiments with different combinations by removing each group of features respectively. Table 3 shows the results, with w/o denotes ...
[2, 2, 1, 1]
['Different groups of features may play different roles in the LTR models.', 'In order to validate the impact of both the traditional features and the novel task-specific features, we conduct experiments with different combinations by removing each group of features respectively.', 'Table 3 shows the results, with w/o ...
[None, None, None, ['RF']]
1
P16-1130table_2
Translation results. The bold numbers stand for the best systems.
1
[['Base'], ['MERS'], ['CSRS'], ['MERS-MINI'], ['CSRS-MINI']]
1
[['ED'], ['EF'], ['EC'], ['EJ']]
[['15', '26.76', '29.42', '37.1'], ['15.62', '27.33', '29.75', '37.76'], ['16.15', '28.05', '30.12', '37.83'], ['15.77', '28.13', '30.53', '38.14'], ['16.49', '28.3', '31.63', '38.32']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['CSRS', 'MERS', 'CSRS-MINI', 'MERS-MINI']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ED</th> <th>EF</th> <th>EC</th> <th>EJ</th> </tr> </thead> <tbody> <tr> <td>Base</td> <td>15</td> <td>26.76</td> <td>29.42</td> <td>37.1</td> </tr> <tr> <td...
Table 2
table_2
P16-1130
6
acl2016
Table 2 shows the translation results using bootstrap resampling (Koehn, 2004). Base stands for the baseline system without any. MERS, CSRS, MERS-MINI and CSRS-MINI means the outputs of the baseline system were reranked using features from the MERS, CSRS, MERS-MINI and CSRS-MINI models respectively. Generally, the CSRS...
[1, 2, 2, 1]
['Table 2 shows the translation results using bootstrap resampling (Koehn, 2004).', 'Base stands for the baseline system without any.', 'MERS, CSRS, MERS-MINI and CSRS-MINI means the outputs of the baseline system were reranked using features from the MERS, CSRS, MERS-MINI and CSRS-MINI models respectively.', 'Generall...
[None, ['Base'], ['MERS', 'CSRS', 'MERS-MINI', 'CSRS-MINI'], ['CSRS', 'MERS', 'CSRS-MINI', 'MERS-MINI']]
1
P16-1131table_2
Comparisons of results on the test sets.
3
[['Methods', 'Graph-NN:proposed', 'o3-adding'], ['Methods', 'Graph-NN:proposed', 'o3-perceptron'], ['Methods', 'Graph-NN:others', 'Pei et al. (2015)'], ['Methods', 'Graph-NN:others', 'Fonseca and Aluı́sio (2015)'], ['Methods', 'Graph-NN:others', 'Zhang and Zhao (2015)'], ['Methods', 'Graph-Linear', 'Koo and Collins (20...
2
[['PTB-Y&M', 'UAS'], ['PTB-Y&M', 'LAS'], ['PTB-Y&M', 'CM'], ['PTB-SD', 'UAS'], ['PTB-SD', 'LAS'], ['PTB-SD', 'CM'], ['PTB-LTH', 'UAS'], ['PTB-LTH', 'LAS'], ['PTB-LTH', 'CM'], ['CTB', 'UAS'], ['CTB', 'LAS'], ['CTB', 'CM']]
[['93.20', '92.12', '48.92', '93.42', '91.29', '50.37', '93.14', '90.07', '43.38', '87.55', '86.19', '35.65'], ['93.31', '92.23', '50.00', '93.42', '91.26', '49.92', '93.12', '89.53', '43.83', '87.65', '86.17', '36.07'], ['93.29', '92.13', '–', '–', '–', '–', '–', '–', '–', '–', '–', '–'], ['–', '–', '–', '–', '–', '–'...
column
['UAS', 'LAS', 'CM', 'UAS', 'LAS', 'CM', 'UAS', 'LAS', 'CM', 'UAS', 'LAS', 'CM']
['Graph-NN:proposed']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PTB-Y&amp;M || UAS</th> <th>PTB-Y&amp;M || LAS</th> <th>PTB-Y&amp;M || CM</th> <th>PTB-SD || UAS</th> <th>PTB-SD || LAS</th> <th>PTB-SD || CM</th> <th>PTB-LTH || UAS</th> <th>PTB-L...
Table 2
table_2
P16-1131
8
acl2016
We show the results of two of the best proposed parsers: third-order adding (o3-adding) and third-order perceptron (o3-perceptron) methods, and compare with the reported results of some previous work in Table 2. We compare with three categories of models: other Graph-based NN (neural network) models, traditional Graph-...
[1, 1, 2, 2, 1, 2, 1]
['We show the results of two of the best proposed parsers: third-order adding (o3-adding) and third-order perceptron (o3-perceptron) methods, and compare with the reported results of some previous work in Table 2.', 'We compare with three categories of models: other Graph-based NN (neural network) models, traditional G...
[['Graph-NN:proposed', 'o3-adding', 'o3-perceptron'], ['Graph-NN:others', 'Graph-Linear', 'Transition-NN'], None, None, ['Graph-NN:proposed'], ['Graph-NN:proposed', 'Graph-Linear'], ['Graph-NN:proposed']]
1
P16-1134table_3
Results of grSemi-CRF with external information, measured in F1 score. None = no external information, Emb = Senna embeddings, Brown = Brown clusters, Gaz = gazetteers and All = Emb + Brown + Gaz. NYT and RCV1 in the parenthesis denote the corpus used to generate Brown clusters. “–” means no results. Notice that gazett...
2
[['Input Features', 'None'], ['Input Features', 'Brown(NYT)'], ['Input Features', 'Brown(RCV1)']]
1
[['CONLL 2000'], ['CONLL 2003']]
[['93.92', '84.66'], ['94.18', '86.57'], ['94.05', '88.22']]
column
['F1', 'F1']
['Brown(NYT)', 'Brown(RCV1)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CONLL 2000</th> <th>CONLL 2003</th> </tr> </thead> <tbody> <tr> <td>Input Features || None</td> <td>93.92</td> <td>84.66</td> </tr> <tr> <td>Input Features || Brown(NYT)</td> ...
Table 3
table_3
P16-1134
7
acl2016
As Table 3 shows, external information improve the performance of grSemi-CRFs for both tasks. Compared to text chunking, we can find out that external information plays an extremely important role in NER, which coincides with the general idea that NER is a knowledge-intensive task (Ratinov and Roth, 2009). Another inte...
[1, 1, 1, 2, 2]
['As Table 3 shows, external information improve the performance of grSemi-CRFs for both tasks.', 'Compared to text chunking, we can find out that external information plays an extremely important role in NER, which coincides with the general idea that NER is a knowledge-intensive task (Ratinov and Roth, 2009).', 'Anot...
[['Input Features'], ['Input Features'], ['Brown(NYT)', 'CONLL 2000', 'Brown(RCV1)', 'CONLL 2003'], ['CONLL 2000', 'CONLL 2003'], ['CONLL 2000', 'CONLL 2003']]
1
P16-1134table_4
F1 scores of grSemi-CRF with scalar or vectorial gating coefficients.
2
[['Gating Coefficients', 'Scalars'], ['Gating Coefficients', 'Vectors']]
1
[['CONLL 2000'], ['CONLL 2003']]
[['94.47', '89.27'], ['95.01', '89.44']]
column
['F1', 'F1']
['Gating Coefficients']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CONLL 2000</th> <th>CONLL 2003</th> </tr> </thead> <tbody> <tr> <td>Gating Coefficients || Scalars</td> <td>94.47</td> <td>89.27</td> </tr> <tr> <td>Gating Coefficients || Vect...
Table 4
table_4
P16-1134
8
acl2016
4.4.2 Impact of Vectorial Gating Coefficients. As Table 4 shows, a grSemi-CRF using vectorial gating coefficients (i.e., Eq. (7)) performs better than that using scalar gating coefficients (i.e., Eq. (6)), which provides evidences for the theoretical intuition that vectorial gating coefficients can make a detailed mode...
[2, 1]
['4.4.2 Impact of Vectorial Gating Coefficients.', 'As Table 4 shows, a grSemi-CRF using vectorial gating coefficients (i.e., Eq. (7)) performs better than that using scalar gating coefficients (i.e., Eq. (6)), which provides evidences for the theoretical intuition that vectorial gating coefficients can make a detailed...
[None, ['Gating Coefficients', 'Scalars', 'Vectors']]
1
P16-1137table_4
Loss function runtime comparison (seconds per epoch) of the DNN models.
1
[['random'], ['mix'], ['max']]
2
[['DNN AVG', 'CE'], ['DNN AVG', 'hinge'], ['DNN LSTM', 'CE'], ['DNN LSTM', 'hinge']]
[['124', '230', '710', '783'], ['20755', '21045', '25928', '26380'], ['39338', '41867', '49583', '49427']]
column
['runtime', 'runtime', 'runtime', 'runtime']
['random', 'mix', 'max']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DNN AVG || CE</th> <th>DNN AVG || hinge</th> <th>DNN LSTM || CE</th> <th>DNN LSTM || hinge</th> </tr> </thead> <tbody> <tr> <td>random</td> <td>124</td> <td>230</td> <td>71...
Table 4
table_4
P16-1137
7
acl2016
Table 4 shows a runtime comparison of the losses and sampling strategies. We find random sampling to be orders of magnitude faster than the others while also performing the best.
[1, 1]
['Table 4 shows a runtime comparison of the losses and sampling strategies.', 'We find random sampling to be orders of magnitude faster than the others while also performing the best.']
[None, ['random']]
1
P16-1140table_4
Comparison of original and shuffled character-based word representation on decoding POS tag.
2
[['Lan.', 'Russian'], ['Lan.', 'Slovenian']]
1
[['Raw'], ['Shuf.']]
[['0.906', '0.671'], ['0.8', '0.653']]
column
['correlation', 'correlation']
['Raw', 'Shuf.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Raw</th> <th>Shuf.</th> </tr> </thead> <tbody> <tr> <td>Lan. || Russian</td> <td>0.906</td> <td>0.671</td> </tr> <tr> <td>Lan. || Slovenian</td> <td>0.8</td> <td>0.65...
Table 4
table_4
P16-1140
7
acl2016
To prove that word form could provide informative and explicit cues for grammatical functions, we train another shuffled character-based word representation, which means that the autoencoder inputs shuffled letters and outputs the shuffled letters again. We use the hidden layer of the shuffled autoencoder as the repre...
[2, 2, 1]
['To prove that word form could provide informative and explicit cues for grammatical functions, we train another shuffled character-based word representation, which means that the autoencoder inputs shuffled letters and outputs the shuffled letters again.', 'We use the hidden layer of the shuffled autoencoder as the ...
[['Shuf.'], ['Shuf.'], ['Raw', 'Shuf.']]
1
P16-1140table_5
Comparison of morpho-phonological knowledge transfer on different language pairs. The reconstruction accuracy is correlated with the overlapping proportion of grapheme patterns between source language and target language.
2
[['Target Language', 'Bigram type overlap.'], ['Target Language', 'Bigram token overlap.'], ['Target Language', 'Trigram type overlap.'], ['Target Language', 'Trigram token overlap.']]
3
[['Source Language', 'Arabic', 'fa'], ['Source Language', 'Arabic', 'ud'], ['Source Language', 'Finnish', 'en'], ['Source Language', 'Finnish', 'shuf en'], ['Source Language', 'Finnish', 'rand']]
[['0.176', '0.761', '0.891', '0.864', '0.648'], ['0.689', '0.881', '0.999', '0.993', '0.65'], ['0.523', '0.522', '0.665', '0.449', '0.078'], ['0.526', '0.585', '0.978', '0.796', '0.078']]
column
['correlation', 'correlation', 'correlation', 'correlation', 'correlation']
['Bigram token overlap.', 'Trigram token overlap.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Source Language || Arabic || fa</th> <th>Source Language || Arabic || ud</th> <th>Source Language || Finnish || en</th> <th>Source Language || Finnish || shuf en</th> <th>Source Language || Finni...
Table 5
table_5
P16-1140
7
acl2016
To explain the behaviour of AE, we calculate the correlation between the bigram character frequency in the words of the training language (e.g. Finnish) and the bigram character frequency in the words of the testing language (e.g. English). Table 5 reveals that phonological knowledge can be transferred if two languages ...
[2, 1, 1]
['To explain the behaviour of AE, we calculate the correlation between the bigram character frequency in the words of the training language (e.g. Finnish) and the bigram character frequency in the words of the testing language (e.g. English).', 'Table 5 reveals that phonological knowledge can be transferred if two langu...
[['Bigram token overlap.', 'Trigram token overlap.'], ['Bigram token overlap.', 'Trigram token overlap.'], ['Finnish', 'en', 'shuf en']]
1
P16-1148table_4
Emotion classification results (one vs. all for each emotion and 6 way for ALL) using our models compared to others.
2
[['#Emotion', '#anger'], ['#Emotion', '#disgust'], ['#Emotion', '#fear'], ['#Emotion', '#joy'], ['#Emotion', '#sadness'], ['#Emotion', '#surprise'], ['#Emotion', 'ALL']]
1
[['Wang (2012)'], ['Roberts (2012)'], ['Qadir (2013)'], ['Mohammad (2014)'], ['This work']]
[['0.72', '0.64', '0.44', '0.28', '0.80'], ['–', '0.67', '–', '0.19', '0.92'], ['0.44', '0.74', '0.54', '0.51', '0.77'], ['0.72', '0.68', '0.59', '0.62', '0.79'], ['0.65', '0.69', '0.46', '0.39', '0.62'], ['0.14', '0.61', '–', '0.45', '0.64'], ['–', '0.67', '0.53', '0.49', '0.78']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Wang (2012)</th> <th>Roberts (2012)</th> <th>Qadir (2013)</th> <th>Mohammad (2014)</th> <th>This work</th> </tr> </thead> <tbody> <tr> <td>#Emotion || #anger</td> <td>0.72</td> ...
Table 4
table_4
P16-1148
5
acl2016
We demonstrate our emotion model prediction quality using 10-fold c.v. on our hashtag emotion dataset and compare it to other existing datasets in Table 4. Our results significantly outperform the existing approaches and are comparable with the state-of-the-art system for Twitter sentiment classification (Mohammad et a...
[1, 1]
['We demonstrate our emotion model prediction quality using 10-fold c.v. on our hashtag emotion dataset and compare it to other existing datasets in Table 4.', 'Our results significantly outperform the existing approaches and are comparable with the state-of-the-art system for Twitter sentiment classification (Mohammad...
[None, ['This work', 'Wang (2012)', 'Roberts (2012)', 'Qadir (2013)', 'Mohammad (2014)']]
1
P16-1150table_4
Correlation results on UKPConvArgRank.
1
[['Pearson’s r'], ['Spearman’s ρ']]
1
[['SVM'], ['BLSTM']]
[['.351', '.270'], ['.402', '.354']]
row
['Pearson’s r', 'Spearman’s ρ']
['SVM', 'BLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SVM</th> <th>BLSTM</th> </tr> </thead> <tbody> <tr> <td>Pearson’s r</td> <td>.351</td> <td>.270</td> </tr> <tr> <td>Spearman’s ρ</td> <td>.402</td> <td>.354</td> <...
Table 4
table_4
P16-1150
8
acl2016
Without any modifications, we use the same SVM and features as described in Section 4.1. Regarding the BLSTM, we only replace the output layer with a linear activation function and optimize mean absolute error loss. Table 4 shows that SVM outperforms BLSTM.
[2, 2, 1]
['Without any modifications, we use the same SVM and features as described in Section 4.1.', 'Regarding the BLSTM, we only replace the output layer with a linear activation function and optimize mean absolute error loss.', 'Table 4 shows that SVM outperforms BLSTM.']
[['SVM'], ['BLSTM'], ['SVM', 'BLSTM']]
1
P16-1154table_3
Testing performance of LCSTS, where “RNN” is canonical Enc-Dec, and “RNN context” its attentive variant.
2
[['Models', 'RNN (Hu et al. 2015) +C'], ['Models', 'RNN (Hu et al. 2015) +W'], ['Models', 'RNN context (Hu et al. 2015) +C'], ['Models', 'RNN context (Hu et al. 2015) +W'], ['Models', 'COPYNET +C'], ['Models', 'COPYNET +W']]
1
[['R-1'], ['R-2'], ['R-L']]
[['21.5', '8.9', '18.6'], ['17.7', '8.5', '15.8'], ['29.9', '17.4', '27.2'], ['26.8', '16.1', '24.1'], ['34.4', '21.6', '31.3'], ['35', '22.3', '32']]
column
['R-1', 'R-2', 'R-L']
['COPYNET +C', 'COPYNET +W']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Models || RNN (Hu et al. 2015) +C</td> <td>21.5</td> <td>8.9</td> <td>18.6</td> </tr> <tr> <td>Mode...
Table 3
table_3
P16-1154
6
acl2016
It is clear from Table 3 that COPYNET beats the competitor models by a big margin. Hu et al. (2015) reports that the performance of a word-based model is inferior to a character-based one. One possible explanation is that a word-based model, even with a much larger vocabulary (50000 words in Hu et al. (2015)), still has...
[1, 2, 2, 2]
['It is clear from Table 3 that COPYNET beats the competitor models by a big margin.', 'Hu et al. (2015) reports that the performance of a word-based model is inferior to a character-based one.', 'One possible explanation is that a word-based model, even with a much larger vocabulary (50000 words in Hu et al. (2015)), s...
[['COPYNET +C', 'COPYNET +W'], None, None, ['COPYNET +C', 'COPYNET +W']]
1
P16-1159table_5
Subjective evaluation of MLE and MRT on Chinese-English translation.
1
[['evaluator 1'], ['evaluator 2']]
1
[['MLE < MRT'], ['MLE = MRT'], ['MLE > MRT']]
[['54%', '24%', '22%'], ['53%', '22%', '25%']]
row
['percentage', 'percentage']
['MLE < MRT', 'MLE = MRT', 'MLE > MRT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MLE &lt; MRT</th> <th>MLE = MRT</th> <th>MLE &gt; MRT</th> </tr> </thead> <tbody> <tr> <td>evaluator 1</td> <td>54%</td> <td>24%</td> <td>22%</td> </tr> <tr> <td>eval...
Table 5
table_5
P16-1159
7
acl2016
Table 5 shows the results of subjective evaluation. The two human evaluators made close judgements: around 54% of MLE translations are worse than MRT, 23% are equal, and 23% are better.
[1, 1]
['Table 5 shows the results of subjective evaluation.', 'The two human evaluators made close judgements: around 54% of MLE translations are worse than MRT, 23% are equal, and 23% are better.']
[None, ['MLE < MRT', 'MLE = MRT', 'MLE > MRT']]
1
P16-1159table_7
Comparison with previous work on English-French translation. The BLEU scores are casesensitive. “PosUnk” denotes Luong et al. (2015b)’s technique of handling rare words.
6
[['Existing end-to-end NMT systems', 'Bahdanau et al. (2015)', 'Architecture', 'gated RNN with search', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Jean et al. (2015)', 'Architecture', 'gated RNN with search', 'Training', 'MLE'], ['Existing end-to-end NMT systems', 'Jean et al. (2015)', 'Architecture', 'ga...
1
[['Vocab'], ['BLEU']]
[['30000', '28.45'], ['30000', '29.97'], ['30000', '33.08'], ['40000', '29.50'], ['40000', '31.80'], ['40000', '30.40'], ['40000', '32.70'], ['80000', '30.59'], ['30000', '29.88'], ['30000', '31.30'], ['30000', '34.23']]
column
['Vocab', 'BLEU']
['this work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocab</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Existing end-to-end NMT systems || Bahdanau et al. (2015) || Architecture || gated RNN with search || Training || MLE</td> <td>30000</t...
Table 7
table_7
P16-1159
8
acl2016
Table 7 shows the results on English-French translation. We list existing end-to-end NMT systems that are comparable to our system. All these systems use the same subset of the WMT 2014 training corpus and adopt MLE as the training criterion. They differ in network architectures and vocabulary sizes. Our RNNSEARCH-MLE ...
[1, 2, 2, 1, 1, 1, 2]
['Table 7 shows the results on English-French translation.', 'We list existing end-to-end NMT systems that are comparable to our system.', 'All these systems use the same subset of the WMT 2014 training corpus and adopt MLE as the training criterion.', 'They differ in network architectures and vocabulary sizes.', 'Our ...
[None, None, None, ['Vocab'], ['gated RNN with search', 'Training', 'MLE', 'Jean et al. (2015)', 'BLEU'], ['gated RNN with search', 'Training', 'MRT', 'BLEU', 'Vocab', 'Luong et al. (2015b)'], ['this work']]
1
P16-1161table_2
BLEU scores obtained on the WMT14 test set. We report the performance of the baseline, the source-context model and the full model.
2
[['data size', 'baseline'], ['data size', '+source'], ['data size', '+target']]
1
[['small'], ['medium'], ['full']]
[['10.7', '15.2', '16.7'], ['10.7', '16', '17.3'], ['11.2', '16.4', '17.5']]
column
['BLEU', 'BLEU', 'BLEU']
['+target', '+source']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>small</th> <th>medium</th> <th>full</th> </tr> </thead> <tbody> <tr> <td>data size || baseline</td> <td>10.7</td> <td>15.2</td> <td>16.7</td> </tr> <tr> <td>data size...
Table 2
table_2
P16-1161
10
acl2016
Table 2 shows the obtained results. Statistically significant differences (alpha=0.01) are marked in bold. The source-context model does not help in the small data setting but brings a substantial improvement of 0.7-0.8 BLEU points for the medium and full data settings, which is an encouraging result. Target-side conte...
[1, 2, 1, 1, 1, 1, 1]
['Table 2 shows the obtained results.', 'Statistically significant differences (alpha=0.01) are marked in bold.', 'The source-context model does not help in the small data setting but brings a substantial improvement of 0.7-0.8 BLEU points for the medium and full data settings, which is an encouraging result.', 'Target...
[None, None, ['+source', 'small', 'medium', 'full'], ['+target', 'small'], ['+target', 'full'], ['+target', '+source'], ['+target', '+source']]
1
P16-1168table_2
Evaluation Metrics
1
[['monolingual'], ['alternate'], ['transfer']]
1
[['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['ROUGE-L'], ['CIDEr-D']]
[['0.715', '0.573', '0.468', '0.379', '0.616', '0.58'], ['0.709', '0.565', '0.46', '0.37', '0.611', '0.568'], ['0.717', '0.574', '0.469', '0.38', '0.619', '0.625']]
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'ROUGE-L', 'CIDEr-D']
['transfer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>ROUGE-L</th> <th>CIDEr-D</th> </tr> </thead> <tbody> <tr> <td>monolingual</td> <td>0.715</td> <td>0.573...
Table 2
table_2
P16-1168
8
acl2016
Table 2 shows the evaluation metrics for various settings of cross-lingual transfer learning. All values were calculated for Japanese captions generated for test set images. Our proposed model is labeled “transfer”. As you can see, it outperformed the other two models for every metric. In particular, the CIDEr-D score ...
[1, 2, 2, 1, 1, 2, 1]
['Table 2 shows the evaluation metrics for various settings of cross-lingual transfer learning.', 'All values were calculated for Japanese captions generated for test set images.', 'Our proposed model is labeled “transfer”.', 'As you can see, it outperformed the other two models for every metric.', 'In particular, the ...
[None, None, ['transfer'], ['transfer'], ['transfer', 'CIDEr-D', 'monolingual'], ['alternate'], ['alternate', 'monolingual']]
1
P16-1170table_3
Image captioning results
1
[['Bing'], ['MS COCO']]
1
[['BLEU'], ['METEOR']]
[['0.101', '0.151'], ['0.291', '0.247']]
column
['BLEU', 'METEOR']
['Bing']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Bing</td> <td>0.101</td> <td>0.151</td> </tr> <tr> <td>MS COCO</td> <td>0.291</td> <td>0.247</td> </tr> ...
Table 3
table_3
P16-1170
9
acl2016
Table 3 shows the results of testing the state-of-the-art MSR captioning system on the CaptionsBing-5000 dataset as compared to the MS COCO dataset, measured by the standard BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) metrics. The wide gap in the results further confirms that indeed the VQGBing...
[1, 1, 2]
['Table 3 shows the results of testing the state-of-the-art MSR captioning system on the CaptionsBing-5000 dataset as compared to the MS COCO dataset, measured by the standard BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) metrics.', 'The wide gap in the results further confirms that indeed the V Q...
[['MS COCO', 'Bing', 'BLEU', 'METEOR'], ['Bing'], None]
1
P16-1173table_5
Parsing performance on learner English.
2
[['Parser', 'Petrov (2010)'], ['Parser', 'Stanford'], ['Parser', 'Charniak-Johnson']]
1
[['R'], ['P'], ['F'], ['CMR']]
[['0.863', '0.865', '-', '0.358'], ['0.812', '0.832', '0.822', '0.398'], ['0.845', '0.865', '0.855', '0.465']]
column
['R', 'P', 'F', 'CMR']
['Stanford', 'Charniak-Johnson']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R</th> <th>P</th> <th>F</th> <th>CMR</th> </tr> </thead> <tbody> <tr> <td>Parser || Petrov (2010)</td> <td>0.863</td> <td>0.865</td> <td>-</td> <td>0.358</td> </tr>...
Table 5
table_5
P16-1173
8
acl2016
Table 5 shows the results. To our surprise, both parsers perform very well on the learner corpora despite the fact that they contain a number of grammatical errors and also syntactic tags that are not defined in PTB-II. Their performance is comparable to, or even better than, that on the Penn Treebank (reported in Petro...
[1, 1, 1]
['Table 5 shows the results.', 'To our surprise, both parsers perform very well on the learner corpora despite the fact that they contain a number of grammatical errors and also syntactic tags that are not defined in PTB-II.', 'Their performance is comparable to, or even better than, that on the Penn Treebank (reported ...
[None, ['Stanford', 'Charniak-Johnson'], ['Stanford', 'Charniak-Johnson', 'Petrov (2010)']]
1
P16-1178table_6
Average performance across all ten folds for the GL model and for different feature sets.
2
[['Aspect', 'Fluency'], ['Aspect', 'Conciseness'], ['Aspect', 'Completeness'], ['Aspect', 'Referencing'], ['Aspect', 'Descriptiveness'], ['Aspect', 'Novelty'], ['Aspect', 'Richness'], ['Aspect', 'Attractiveness'], ['Aspect', 'Formality'], ['Aspect', 'Popularity'], ['Aspect', 'Technicality'], ['Aspect', 'Subjectivity'],...
1
[['BoW'], ['Shallow'], ['BaselineM']]
[['1.1571', '1.1181', '1.1462'], ['1.2622', '1.1968', '1.2456'], ['0.8408', '0.7945', '0.813'], ['0.7047', '0.6613', '0.7048'], ['0.926', '0.873', '0.9073'], ['0.7994', '0.7607', '0.7797'], ['0.9866', '0.9454', '0.9568'], ['0.7048', '0.6702', '0.6907'], ['0.7025', '0.6691', '0.692'], ['0.8329', '0.7825', '0.825'], ['0....
column
['RMSE', 'RMSE', 'RMSE']
['BoW', 'Shallow']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BoW</th> <th>Shallow</th> <th>BaselineM</th> </tr> </thead> <tbody> <tr> <td>Aspect || Fluency</td> <td>1.1571</td> <td>1.1181</td> <td>1.1462</td> </tr> <tr> <td>Asp...
Table 6
table_6
P16-1178
8
acl2016
As a reference for future research with the proposed corpus, we trained GLM regression models to predict each aspect individually. Table 6 presents the RMSE for each aspect, for two different sets of features: a standard BoW and the shallow features described previously, as well as the BaselineM. Despite the simplicity ...
[2, 1, 2, 1]
['As a reference for future research with the proposed corpus, we trained GLM regression models to predict each aspect individually.', 'Table 6 presents the RMSE for each aspect, for two different sets of features: a standard BoW and the shallow features described previously, as well as the BaselineM.', 'Despite the sim...
[None, ['BoW', 'Shallow', 'BaselineM'], None, ['BoW', 'BaselineM', 'Shallow']]
1
P16-1181table_3
Sentence boundary detection results (F1) on test sets.
1
[['MARMOT'], ['NOSYNTAX'], ['JOINT']]
1
[['WSJ'], ['Switchboard'], ['WSJ*']]
[['97.64', '71.87', '53.02'], ['98.21', '76.31†', '55.15'], ['98.21', '76.65†', '65.34†‡']]
column
['F1', 'F1', 'F1']
['JOINT', 'NOSYNTAX', 'MARMOT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSJ</th> <th>Switchboard</th> <th>WSJ*</th> </tr> </thead> <tbody> <tr> <td>MARMOT</td> <td>97.64</td> <td>71.87</td> <td>53.02</td> </tr> <tr> <td>NOSYNTAX</td> ...
Table 3
table_3
P16-1181
8
acl2016
Table 3 gives the performance of the sentence boundary detectors on test sets. On WSJ all systems are close to 98 and this high number once again affirms that the task of segmenting newspaper-quality text does not leave much space for improvement. Although the parsing models outperform MARMOT, the improvements in F1 ar...
[1, 1, 2, 2, 2, 1, 1, 1, 1]
['Table 3 gives the performance of the sentence boundary detectors on test sets.', 'On WSJ all systems are close to 98 and this high number once again affirms that the task of segmenting newspaper-quality text does not leave much space for improvement.', 'Although the parsing models outperform MARMOT, the improvements ...
[None, ['WSJ'], ['MARMOT'], ['WSJ*'], ['NOSYNTAX', 'MARMOT'], ['JOINT', 'NOSYNTAX', 'MARMOT'], ['MARMOT', 'Switchboard'], ['NOSYNTAX', 'Switchboard'], ['JOINT', 'Switchboard']]
1
P16-1188table_2
Results for RST Discourse Treebank (Carlson et al., 2001). Differences between our system and the Tree Knapsack system of Yoshida et al. (2014) are not statistically significant, reflecting the high variance in this small (20 document) test set.
1
[['First k words'], ['Tree Knapsack'], ['Full']]
1
[['ROUGE-1'], ['ROUGE-2']]
[['23.5', '8.3'], ['25.1', '8.7'], ['26.3', '8']]
column
['ROUGE-1', 'ROUGE-2']
['Full']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> </tr> </thead> <tbody> <tr> <td>First k words</td> <td>23.5</td> <td>8.3</td> </tr> <tr> <td>Tree Knapsack</td> <td>25.1</td> <td>8.7</t...
Table 2
table_2
P16-1188
9
acl2016
Table 2 shows the results on the RST corpus. Our system is roughly comparable to Tree Knapsack here, and we note that none of the differences in the table are statistically significant. We also observed significant variation between multiple runs on this corpus, with scores changing by 1-2 ROUGE points for slightly dif...
[1, 1, 1]
['Table 2 shows the results on the RST corpus.', 'Our system is roughly comparable to Tree Knapsack here, and we note that none of the differences in the table are statistically significant.', 'We also observed significant variation between multiple runs on this corpus, with scores changing by 1-2 ROUGE points for slig...
[None, ['Full', 'Tree Knapsack'], ['Full', 'Tree Knapsack', 'First k words']]
1
P16-1191table_6
Weighted F-score performance on supersense prediction for the development set and two test sets provided by Johannsen et al. (2014). Our system performs comparably to state-of-the-art systems. † For the system of Ciaramita et al., the publicly available reimplementation of Heilman was used.
3
[['System/Data:', 'Baseline and upper bound', 'Most frequent sense'], ['System/Data:', 'Baseline and upper bound', 'Inter-annotator agreement'], ['System/Data:', 'SemCor-trained systems', '(Ciaramita and Altun 2006)†'], ['System/Data:', 'SemCor-trained systems', 'Searn (Johannsen et al. 2014)'], ['System/Data:', '...
1
[['Tw-R-dev'], ['Tw-R-eval'], ['Tw-J-eval']]
[['47.54', '44.98', '38.65'], ['-', '69.15', '61.15'], ['48.96', '45.03', '39.65'], ['56.59', '50.89', '40.50'], ['57.14', '50.98', '41.84'], ['54.47', '50.30', '35.61'], ['67.72', '57.14', '42.42'], ['60.66', '51.40', '41.60'], ['61.12', '57.16', '41.97']]
column
['F-score', 'F-score', 'F-score']
['Ours Semcor', 'Ours Twitter']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Tw-R-dev</th> <th>Tw-R-eval</th> <th>Tw-J-eval</th> </tr> </thead> <tbody> <tr> <td>System/Data: || Baseline and upper bound || Most frequent sense</td> <td>47.54</td> <td>44.98</td>...
Table 6
table_6
P16-1191
6
acl2016
5.2 Supersense Prediction. We evaluate our system on the same Twitter dataset with provided training and development (Tw-R-dev) set and two test sets: Tw-R-eval, reported by Johannsen et al. as RITTER, and Tw-J-eval, reported by Johannsen et al. as INHOUSE. Our results are shown in table 6 and compared to results repor...
[2, 2, 1, 1]
['5.2 Supersense Prediction.', 'We evaluate our system on the same Twitter dataset with provided training and development (Tw-R-dev) set and two test sets: Tw-R-eval, reported by Johannsen et al. as RITTER, and Tw-J-eval, reported by Johannsen et al. as INHOUSE.', 'Our results are shown in table 6 and compared to resul...
[None, ['Tw-R-dev', 'Tw-R-eval', 'Tw-J-eval'], ['Ours Semcor', 'Ours Twitter', 'HMM (Johannsen et al. 2014)', '(Ciaramita and Altun 2006)†', 'Most frequent sense'], ['Ours Semcor', 'Ours Twitter']]
1
P16-1195table_3
Performance on EVAL for the GEN task. Performance on DEV is indicated in parentheses.
3
[['C#', 'Model', 'IR'], ['C#', 'Model', 'MOSES'], ['C#', 'Model', 'SUM-NN'], ['C#', 'Model', 'CODE-NN'], ['SQL', 'Model', 'IR'], ['SQL', 'Model', 'MOSES'], ['SQL', 'Model', 'SUM-NN'], ['SQL', 'Model', 'CODE-NN']]
1
[['METEOR'], ['BLEU-4']]
[['7.9 (6.1)', '13.7 (12.6)'], ['9.1 (9.7)', '11.6 (11.5)'], ['10.6 (10.3)', '19.3 (18.2)'], ['12.3 (13.4)', '20.5 (20.4)'], ['6.3 (8.0)', '13.5 (13.0)'], ['8.3 (9.7)', '15.4 (15.9)'], ['6.4 (8.7)', '13.3 (14.2)'], ['10.9 (14.0)', '18.4 (17.0)']]
column
['METEOR', 'BLEU-4']
['CODE-NN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>C# || Model || IR</td> <td>7.9 (6.1)</td> <td>13.7 (12.6)</td> </tr> <tr> <td>C# || Model || MOSES</td> <td>9.1...
Table 3
table_3
P16-1195
7
acl2016
Table 3 shows automatic evaluation metrics for our model and baselines. CODE-NN outperforms all the other methods in terms of METEOR and BLEU-4 score. We attribute this to its ability to perform better content selection, focusing on the more salient parts of the code by using its attention mechanism jointly with its LS...
[1, 1, 2, 1, 2, 2]
['Table 3 shows automatic evaluation metrics for our model and baselines.', 'CODE-NN outperforms all the other methods in terms of METEOR and BLEU-4 score.', 'We attribute this to its ability to perform better content selection, focusing on the more salient parts of the code by using its attention mechanism jointly wit...
[None, ['CODE-NN', 'BLEU-4', 'METEOR'], ['CODE-NN'], ['CODE-NN', 'C#', 'SQL'], ['C#', 'SQL'], ['SQL']]
1
P16-1201table_6
Automatic evaluations of events from FN.
2
[['Training Corpus', 'ACE-ANN-FN'], ['Training Corpus', 'ACE-SF-FN'], ['Training Corpus', 'ACE-RF-FN'], ['Training Corpus', 'ACE-SL-FN'], ['Training Corpus', 'ACE-PSL-FN']]
1
[['Pre'], ['Rec'], ['F1']]
[['77.2', '63.5', '69.7'], ['73.2', '64.1', '68.4'], ['72.6', '63.9', '68.0'], ['77.5', '64.3', '70.3'], ['77.6', '65.2', '70.7']]
column
['Pre', 'Rec', 'F1']
['ACE-PSL-FN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pre</th> <th>Rec</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Training Corpus || ACE-ANN-FN</td> <td>77.2</td> <td>63.5</td> <td>69.7</td> </tr> <tr> <td>Training...
Table 6
table_6
P16-1201
7
acl2016
Table 6 presents the results where we measure precision, recall and F1. Compared with ACE-ANN-FN, events from SF and RF hurt the performance. As analyzed in previous section, SF and RF yield quite a few false events, which dramatically hurt the accuracy. Moreover, ACE-SL-FN obtains a score of 70.3% in F1 measure, which...
[1, 1, 2, 1, 2, 1]
['Table 6 presents the results where we measure precision, recall and F1.', 'Compared with ACE-ANN-FN, events from SF and RF hurt the performance.', 'As analyzed in previous section, SF and RF yield quite a few false events, which dramatically hurt the accuracy.', 'Moreover, ACE-SL-FN obtains a score of 70.3% in F1 mea...
[['Pre', 'Rec', 'F1'], ['ACE-ANN-FN', 'ACE-SF-FN', 'ACE-RF-FN'], ['ACE-SF-FN', 'ACE-RF-FN'], ['ACE-SL-FN', 'ACE-ANN-FN'], None, ['ACE-PSL-FN']]
1
P16-1206table_2
Model evaluation with standard metric and our new metric. Models vary in the amount of training data and feature types.
3
[['Data', 'PKU', 'Corpus'], ['Data', 'PKU', 'Corpus'], ['Data', 'PKU', 'Corpus'], ['Data', 'PKU', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'MSR', 'Corpus'], ['Data', 'NCC', 'Corpus'], ['Data', 'NCC', 'Corpus'], ['Data', 'NCC', 'Corpus'], ['Data', 'NCC', 'Corpu...
1
[['Size'], ['p'], ['r'], ['f1'], ['pb'], ['rb'], ['fb']]
[['20%', '90.04', '89.9', '89.97', '45.22', '43.37', '44.28'], ['50%', '92.87', '91.58', '92.22', '54.24', '49.12', '51.55'], ['80%', '94.07', '92.21', '93.13', '61.8', '54.74', '58.05'], ['100%', '94.03', '92.91', '93.47', '64.22', '59.16', '61.59'], ['20%', '92.93', '92.58', '92.76', '45.76', '44.13', '44.93'], ['50%...
column
['Size', 'p', 'r', 'f1', 'pb', 'rb', 'fb']
['pb', 'rb', 'fb']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Size</th> <th>p</th> <th>r</th> <th>f1</th> <th>pb</th> <th>rb</th> <th>fb</th> </tr> </thead> <tbody> <tr> <td>Data || PKU || Corpus</td> <td>20%</td> <td>90.04<...
Table 2
table_2
P16-1206
8
acl2016
Table 2 shows the different evaluation results with standard metric and our balanced metric. We can see that the proposed evaluation metric generally gives lower and more distinguishable score, compared with the standard metric.
[1, 1]
['Table 2 shows the different evaluation results with standard metric and our balanced metric.', 'We can see that the proposed evaluation metric generally gives lower and more distinguishable score, compared with the standard metric.']
[None, ['pb', 'rb', 'fb']]
1
P16-1209table_3
Performance of various models using 25 dimensional CE features, A:Disease name recognition, B: Disease classification task
3
[['Task A', 'Model', 'NN+CE'], ['Task A', 'Model', 'Bi-RNN+CE'], ['Task A', 'Model', 'Bi-GRU+CE'], ['Task A', 'Model', 'Bi-LSTM+CE'], ['Task B', 'Model', 'NN+CE'], ['Task B', 'Model', 'Bi-RNN+CE'], ['Task B', 'Model', 'Bi-GRU+CE'], ['Task B', 'Model', 'Bi-LSTM+CE']]
2
[['Validation Set', 'Precision'], ['Validation Set', 'Recall'], ['Validation Set', 'F1 Score'], ['Test Set', 'Precision'], ['Test Set', 'Recall'], ['Test Set', 'F1 Score']]
[['76.98', '75.80', '76.39', '78.51', '72.75', '75.52'], ['71.96', '74.90', '73.40', '74.14', '72.12', '73.11'], ['76.28', '74.14', '75.19', '76.03', '69.81', '72.79'], ['81.52', '72.86', '76.94', '76.98', '75.80', '76.39'], ['67.27', '53.45', '59.57', '67.90', '49.95', '57.56'], ['61.34', '56.32', '58.72', '60.32', '5...
column
['Precision', 'Recall', 'F1 Score', 'Precision', 'Recall', 'F1 Score']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Validation Set || Precision</th> <th>Validation Set || Recall</th> <th>Validation Set || F1 Score</th> <th>Test Set || Precision</th> <th>Test Set || Recall</th> <th>Test Set || F1 Score</th...
Table 3
table_3
P16-1209
6
acl2016
Table 3 shows the results obtained by different RNN models with only character level word embedding features. For the task A (Disease name recognition) Bi-LSTM and NN models gave competitive performance on the test set, while Bi-RNN and Bi-GRU did not perform so well. On the other hand for the task B, there is 2.08% −...
[1, 1, 1, 1, 2, 2, 2]
['Table 3 shows the results obtained by different RNN models with only character level word embedding features.', 'For the task A (Disease name recognition) Bi-LSTM and NN models gave competitive performance on the test set, while Bi-RNN and Bi-GRU did not perform so well.', 'On the other hand for the task B, there is ...
[None, ['Task A', 'Bi-LSTM+CE', 'NN+CE'], ['Task B', 'Bi-RNN+CE', 'NN+CE'], ['Task B', 'Bi-LSTM+CE', 'NN+CE'], ['Task B', 'Task A'], ['Bi-RNN+CE'], None]
1
P16-1218table_3
Comparison with previous state-of-the-art models on Penn-YM, Penn-SD and CTB5.
2
[['Method', '(Zhang and Nivre 2011)'], ['Method', '(Bernd Bohnet 2012)'], ['Method', '(Zhang and McDonald 2014)'], ['Method', '(Dyer et al. 2015)'], ['Method', '(Weiss et al. 2015)'], ['Method', 'Our basic model + segment']]
2
[['Penn-YM', 'UAS'], ['Penn-YM', 'LAS'], ['Penn-SD', 'UAS'], ['Penn-SD', 'LAS'], ['CTB5', 'UAS'], ['CTB5', 'LAS']]
[['92.9', '91.8', '-', '-', '86.0', '84.4'], ['93.39', '92.38', '-', '-', '87.5', '85.9'], ['93.57', '92.48', '93.01', '90.64', '87.96', '86.34'], ['-', '-', '93.1', '90.9', '87.2', '85.7'], ['-', '-', '93.99', '92.05', '-', '-'], ['93.51', '92.45', '94.08', '91.82', '87.55', '86.23']]
column
['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS']
['Our basic model + segment']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Penn-YM || UAS</th> <th>Penn-YM || LAS</th> <th>Penn-SD || UAS</th> <th>Penn-SD || LAS</th> <th>CTB5 || UAS</th> <th>CTB5 || LAS</th> </tr> </thead> <tbody> <tr> <td>Method || (...
Table 3
table_3
P16-1218
7
acl2016
Table 3 lists the performances of our model as well as previous state-of-the-art systems on Penn-YM, Penn-SD and CTB5. We compare to conventional state-of-the-art graph-based model (Zhang and McDonald, 2014), conventional state-of-the-art transition-based model using beam search (Zhang and Nivre, 2011), transition-bas...
[1, 2, 1, 1, 1]
['Table 3 lists the performances of our model as well as previous state-of-the-art systems on Penn-YM, Penn-SD and CTB5.', 'We compare to conventional state-of-the-art graph-based model (Zhang and McDonald, 2014), conventional state-of-the-art transition-based model using beam search (Zhang and Nivre, 2011), transitio...
[['Penn-YM', 'Penn-SD', 'CTB5'], ['(Zhang and McDonald 2014)', '(Zhang and Nivre 2011)', '(Bernd Bohnet 2012)', '(Dyer et al. 2015)', '(Weiss et al. 2015)'], ['Our basic model + segment', 'Penn-YM', 'Penn-SD', 'CTB5'], ['Our basic model + segment', '(Zhang and McDonald 2014)', 'Penn-YM', 'CTB5', 'Penn-SD'], ['Our basic...
1
P16-1218table_4
Model performance of different way to learn segment embeddings.
2
[['Method', 'Average'], ['Method', 'LSTM-Minus']]
1
[['Penn-YM'], ['Penn-SD'], ['CTB5']]
[['93.23', '93.83', '87.24'], ['93.51', '94.08', '87.55']]
column
['UAS', 'UAS', 'UAS']
['LSTM-Minus']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Penn-YM</th> <th>Penn-SD</th> <th>CTB5</th> </tr> </thead> <tbody> <tr> <td>Method || Average</td> <td>93.23</td> <td>93.83</td> <td>87.24</td> </tr> <tr> <td>Method ...
Table 4
table_4
P16-1218
7
acl2016
To make the comparison as fair as possible, we let the two models have almost the same number of parameters. Table 4 lists the UAS of the two methods on the test set. As we can see, LSTM-Minus shows better performance because our method further incorporates more sentence-level information into our model.
[2, 1, 1]
['To make the comparison as fair as possible, we let the two models have almost the same number of parameters.', 'Table 4 lists the UAS of the two methods on the test set.', 'As we can see, LSTM-Minus shows better performance because our method further incorporates more sentence-level information into our model.']
[None, ['Method'], ['LSTM-Minus']]
1
P16-1220table_1
Results on the test set.
3
[['Method', 'Previous work', 'Berant et al. (2013)'], ['Method', 'Previous work', 'Yao and Van Durme (2014)'], ['Method', 'Previous work', 'Xu et al. (2014)'], ['Method', 'Previous work', 'Berant and Liang (2014)'], ['Method', 'Previous work', 'Bao et al. (2014)'], ['Method', 'Previous work', 'Bordes et al. (2014)'], [...
1
[['average F1']]
[['35.7'], ['33.0'], ['39.1'], ['39.9'], ['37.5'], ['39.2'], ['40.8'], ['44.3'], ['49.4'], ['49.7'], ['50.3'], ['52.5'], ['44.1'], ['47.1'], ['47.0'], ['53.3']]
column
['average F1']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>average F1</th> </tr> </thead> <tbody> <tr> <td>Method || Previous work || Berant et al. (2013)</td> <td>35.7</td> </tr> <tr> <td>Method || Previous work || Yao and Van Durme (2014)</td>...
Table 1
table_1
P16-1220
6
acl2016
5.3.3 Impact of the Inference on Unstructured Data. As shown in Table 1, when structured inference is augmented with the unstructured inference, we see an improvement of 2.9% (from 44.1% to 47.0%). And when Structured + Joint uses unstructured inference, the performance boosts by 6.2% (from 47.1% to 53.3%) achieving a ...
[2, 1, 1, 2]
['5.3.3 Impact of the Inference on Unstructured Data.', 'As shown in Table 1, when structured inference is augmented with the unstructured inference, we see an improvement of 2.9% (from 44.1% to 47.0%).', 'And when Structured + Joint uses unstructured inference, the performance boosts by 6.2% (from 47.1% to 53.3%) achi...
[None, ['Structured', 'Structured + Unstructured'], ['Structured + Joint + Unstructured'], None]
1
P16-1222table_2
Overall performance comparison against baselines.
2
[['Algorithm', 'Standard SMT (Koehn et al., 2003)'], ['Algorithm', 'Couplet SMT (Jiang and Zhou, 2008)'], ['Algorithm', 'LSTM-RNN (Sutskever et al., 2014)'], ['Algorithm', 'iPoet (Yan et al., 2013)'], ['Algorithm', 'Poetry SMT (He et al., 2012)'], ['Algorithm', 'RNNPG (Zhang and Lapata, 2014)'], ['Algorithm', 'Neural C...
1
[['Perplexity'], ['BLEU'], ['Human Evaluation (Syntactic)'], ['Human Evaluation (Semantic)'], ['Human Evaluation (Overall)']]
[['128', '21.68', '0.563', '0.248', '0.811'], ['97', '28.71', '0.916', '0.503', '1.419'], ['85', '24.23', '0.648', '0.233', '0.881'], ['143', '13.77', '0.228', '0.435', '0.663'], ['121', '23.11', '0.802', '0.516', '1.318'], ['99', '25.83', '0.853', '0.6', '1.453'], ['68', '32.62', '0.925', '0.631', '1.556']]
column
['Perplexity', 'BLEU', 'Human Evaluation (Syntactic)', 'Human Evaluation (Semantic)', 'Human Evaluation (Overall)']
['Neural Couplet Machine (NCM)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Perplexity</th> <th>BLEU</th> <th>Human Evaluation (Syntactic)</th> <th>Human Evaluation (Semantic)</th> <th>Human Evaluation (Overall)</th> </tr> </thead> <tbody> <tr> <td>Algorithm...
Table 2
table_2
P16-1222
8
acl2016
5.4 Performance. In Table 2 we show the overall performance of our proposed NCM system compared with strong competing methods as described above. We see that, for perplexity, BLEU and human judgments, our system outperforms other baseline models.
[2, 1, 1]
['5.4 Performance.', 'In Table 2 we show the overall performance of our proposed NCM system compared with strong competing methods as described above.', 'We see that, for perplexity, BLEU and human judgments, our system outperforms other baseline models.']
[None, ['Neural Couplet Machine (NCM)', 'Perplexity', 'BLEU', 'Human Evaluation (Syntactic)', 'Human Evaluation (Semantic)', 'Human Evaluation (Overall)'], ['Neural Couplet Machine (NCM)']]
1
P16-1223table_2
Accuracy of all models on the CNN and Daily Mail datasets. Results marked † are from (Hermann et al., 2015) and results marked ‡ are from (Hill et al., 2016). Classifier and Neural net denote our entity-centric classifier and neural network systems respectively. The numbers marked with ∗ indicate that the results are f...
2
[['Model', 'Frame-semantic model†'], ['Model', 'Word distance model†'], ['Model', 'Deep LSTM Reader†'], ['Model', 'Attentive Reader†'], ['Model', 'Impatient Reader†'], ['Model', 'MemNNs (window memory)‡'], ['Model', 'MemNNs (window memory+self-sup.)‡'], ['Model', 'MemNNs (ensemble)‡'], ['Model', 'Ours: Classifier'], ['...
2
[['CNN', 'Dev'], ['CNN', 'Test'], ['Daily Mail', 'Dev'], ['Daily Mail', 'Test']]
[['36.3', '40.2', '35.5', '35.5'], ['50.5', '50.9', '56.4', '55.5'], ['55.0', '57.0', '63.3', '62.2'], ['61.6', '63.0', '70.5', '69.0'], ['61.8', '63.8', '69.0', '68.0'], ['58.0', '60.6', 'N/A', 'N/A'], ['63.4', '66.8', 'N/A', 'N/A'], ['66.2∗', '69.4∗', 'N/A', 'N/A'], ['67.1', '67.9', '69.1', '68.3'], ['72.4', '72.4', ...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours: Classifier', 'Ours: Neural net']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN || Dev</th> <th>CNN || Test</th> <th>Daily Mail || Dev</th> <th>Daily Mail || Test</th> </tr> </thead> <tbody> <tr> <td>Model || Frame-semantic model†</td> <td>36.3</td> <td...
Table 2
table_2
P16-1223
6
acl2016
Table 2 presents our main results. The conventional feature-based classifier obtains 67.9% accuracy on the CNN test set. Not only does this significantly outperform any of the symbolic approaches reported in (Hermann et al., 2015), it also outperforms all the neural network systems from their paper and the best single-...
[1, 1, 1, 2, 1, 2]
['Table 2 presents our main results.', 'The conventional feature-based classifier obtains 67.9% accuracy on the CNN test set.', 'Not only does this significantly outperform any of the symbolic approaches reported in (Hermann et al., 2015), it also outperforms all the neural network systems from their paper and the best...
[None, ['Ours: Classifier', 'CNN', 'Test'], ['Ours: Classifier', 'CNN', 'Test'], None, ['Ours: Neural net', 'CNN', 'Test', 'Daily Mail'], None]
1
P16-1226table_4
Performance scores of our method compared to the path-based baselines and the state-of-the-art distributional methods for hypernymy detection, on both variations of the dataset – with lexical and random split to train / test / validation.
3
[['method', 'Path-based', 'Snow'], ['method', 'Path-based', 'Snow + Gen'], ['method', 'Path-based', 'HypeNET Path-based']]
2
[['random split', 'precision'], ['random split', 'recall'], ['random split', 'F1'], ['lexical split', 'precision'], ['lexical split', 'recall'], ['lexical split', 'F1']]
[['0.843', '0.452', '0.589', '0.760', '0.438', '0.556'], ['0.852', '0.561', '0.676', '0.759', '0.530', '0.624'], ['0.811', '0.716', '0.761', '0.691', '0.632', '0.660']]
column
['precision', 'recall', 'F1', 'precision', 'recall', 'F1']
['HypeNET Path-based']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>random split || precision</th> <th>random split || recall</th> <th>random split || F1</th> <th>lexical split || precision</th> <th>lexical split || recall</th> <th>lexical split || F1</th> ...
Table 4
table_4
P16-1226
7
acl2016
Table 4 displays performance scores of HypeNET and the baselines. HypeNET Path-based is our path-based recurrent neural network model (Section 3.1). Comparing the path-based methods shows that generalizing paths improves recall while maintaining similar levels of precision, reassessing the behavior found in Nakashole e...
[1, 2, 1, 1, 2]
['Table 4 displays performance scores of HypeNET and the baselines.', 'HypeNET Path-based is our path-based recurrent neural network model (Section 3.1).', 'Comparing the path-based methods shows that generalizing paths improves recall while maintaining similar levels of precision, reassessing the behavior found in Nak...
[None, ['HypeNET Path-based'], ['HypeNET Path-based'], ['HypeNET Path-based'], None]
1
P16-1228table_2
Performance of different rule integration methods on SST2. 1) CNN is the base network; 2) “-but-clause” takes the clause after “but” as input; 3) “-‘2-reg” imposes a regularization term γkσθ(S) − σθ(Y )k2 to the CNN objective, with the strength γ selected on dev set; 4) “-project” projects the trained base CNN to the r...
2
[['Model', 'CNN (Kim, 2014)'], ['Model', '-but-clause'], ['Model', '-l2-reg'], ['Model', '-project'], ['Model', '-opt-project'], ['Model', '-pipeline'], ['Model', '-Rule-p'], ['Model', '-Rule-q']]
1
[['Accuracy (%)']]
[['87.2'], ['87.3'], ['87.5'], ['87.9'], ['88.3'], ['87.9'], ['88.8'], ['89.3']]
column
['Accuracy (%)']
['-Rule-p', '-Rule-q']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || CNN (Kim, 2014)</td> <td>87.2</td> </tr> <tr> <td>Model || -but-clause</td> <td>87.3</td> </tr> <tr> <td>Model ...
Table 2
table_2
P16-1228
8
acl2016
To further investigate the effectiveness of our framework in integrating structured rule knowledge, we compare with an extensive array of other possible integration approaches. Table 2 lists these methods and their performance on the SST2 task. We see that: 1) Although all methods lead to different degrees of improveme...
[2, 1, 1, 1, 2, 1, 2, 1]
['To further investigate the effectiveness of our framework in integrating structured rule knowledge, we compare with an extensive array of other possible integration approaches.', 'Table 2 lists these methods and their performance on the SST2 task.', 'We see that: 1) Although all methods lead to different degrees of i...
[None, None, ['-Rule-p', '-Rule-q'], ['-Rule-p', '-Rule-q', '-pipeline'], ['-Rule-p', '-Rule-q'], ['-Rule-p', '-project', '-opt-project'], ['-Rule-p'], ['-opt-project', '-but-clause']]
1
P16-1230table_2
Statistical evaluation of the prediction of the on-line GP systems with respect to Subj rating.
2
[['Subj', 'Fail'], ['Subj', 'Suc.'], ['Subj', 'Total']]
1
[['Prec.'], ['Recall'], ['F-measure'], ['Number']]
[['1.00', '0.52', '0.68', '204'], ['0.95', '1.00', '0.97', '1892'], ['0.96', '0.95', '0.95', '2096']]
column
['Prec.', 'Recall', 'F-measure', 'Number']
['Subj']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Recall</th> <th>F-measure</th> <th>Number</th> </tr> </thead> <tbody> <tr> <td>Subj || Fail</td> <td>1.00</td> <td>0.52</td> <td>0.68</td> <td>204</td> ...
Table 1
table_1
P16-1230
9
acl2016
Here we investigate further the accuracy of the model in predicting the subjective success rate. An evaluation of the on-line GP reward model between 1 and 850 training dialogues is presented in Table 2. Since three reward models were learnt each with 850 dialogues, there were a total of 2550 training dialogues. Of the...
[1, 1, 2, 2, 1, 1, 1, 1, 1]
['Here we investigate further the accuracy of the model in predicting the subjective success rate.', 'An evaluation of the on-line GP reward model between 1 and 850 training dialogues is presented in Table 2.', 'Since three reward models were learnt each with 850 dialogues, there were a total of 2550 training dialogues...
[None, None, None, ['Total', 'Number'], ['Total', 'Number'], ['Fail', 'Suc.', 'Number'], ['Recall', 'Fail', 'Suc.'], ['Prec.', 'Fail', 'Suc.'], ['Suc.', 'F-measure']]
1
P16-1231table_1
Final POS tagging test set results on English WSJ and Treebank Union as well as CoNLL’09. We also show the performance of our pre-trained open source model, “Parsey McParseface.”
2
[['Method', 'Linear CRF'], ['Method', 'Ling et al. (2015)'], ['Method', 'Our Local (B=1)'], ['Method', 'Our Local (B=8)'], ['Method', 'Our Global (B=8)'], ['Method', 'Parsey McParseface']]
2
[['WSJ', 'En'], ['News', 'En-Union'], ['Web', 'En-Union'], ['QTB', 'En-Union'], ['CoNLL ’09', 'Ca'], ['CoNLL 09', 'Ch'], ['CoNLL 09', 'Cz'], ['CoNLL 09', 'En'], ['CoNLL 09', 'Ge'], ['CoNLL 09', 'Ja'], ['CoNLL 09', 'Sp'], ['Avg', '-']]
[['97.17', '97.60', '94.58', '96.04', '98.81', '94.45', '98.90', '97.50', '97.14', '97.90', '98.79', '97.17'], ['97.78', '97.44', '94.03', '96.18', '98.77', '94.38', '99.00', '97.60', '97.84', '97.06', '98.71', '97.16'], ['97.44', '97.66', '94.46', '96.59', '98.91', '94.56', '98.96', '97.36', '97.35', '98.02', '98.88',...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Our Local (B=1)', 'Our Local (B=8)', 'Our Global (B=8)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSJ || En</th> <th>News || En-Union</th> <th>Web || En-Union</th> <th>QTB || En-Union</th> <th>CoNLL ’09 || Ca</th> <th>CoNLL 09 || Ch</th> <th>CoNLL 09 || Cz</th> <th>CoNLL 09 |...
Table 1
table_1
P16-1231
5
acl2016
In Table 1 we compare our model to a linear CRF and to the compositional character-to-word LSTM model of Ling et al. (2015). The CRF is a first-order linear model with exact inference and the same emission features as our model. It additionally has transition features of the word, cluster and character n-gram up t...
[1, 2, 2, 2, 1, 1, 1, 2]
['In Table 1 we compare our model to a linear CRF and to the compositional character-to-word LSTM model of Ling et al. (2015).', 'The CRF is a first-order linear model with exact inference and the same emission features as our model.', 'It additionally has transition features of the word, cluster and character n-g...
[['Our Local (B=1)', 'Our Local (B=8)', 'Our Global (B=8)', 'Linear CRF', 'Ling et al. (2015)'], ['Linear CRF'], ['Linear CRF'], ['Ling et al. (2015)'], ['Our Local (B=1)', 'Our Local (B=8)', 'Avg'], ['Our Local (B=8)'], ['CoNLL 09', 'Our Global (B=8)'], ['Our Global (B=8)']]
1
P16-1231table_4
Sentence compression results on News data. Automatic refers to application of the same automatic extraction rules used to generate the News training corpus.
2
[['Method', 'Filippova et al. (2015)'], ['Method', 'Automatic'], ['Method', 'Our Local (B=1)'], ['Method', 'Our Local (B=8)'], ['Method', 'Our Global (B=8)']]
2
[['Generated corpus', 'A'], ['Generated corpus', 'F1'], ['Human eval', 'readability'], ['Human eval', 'informativeness']]
[['35.36', '82.83', '4.66', '4.03'], ['-', '-', '4.31', '3.77'], ['30.51', '78.72', '4.58', '4.03'], ['31.19', '75.69', '-', '-'], ['35.16', '81.41', '4.67', '4.07']]
column
['A', 'F1', 'readability', 'informativeness']
['Our Local (B=1)', 'Our Local (B=8)', 'Our Global (B=8)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Generated corpus || A</th> <th>Generated corpus || F1</th> <th>Human eval || readability</th> <th>Human eval || informativeness</th> </tr> </thead> <tbody> <tr> <td>Method || Filippova et...
Table 4
table_4
P16-1231
7
acl2016
Table 4 shows our sentence compression results. Our globally normalized model again significantly outperforms the local model. Beam search with a locally normalized model suffers from severe label bias issues that we discuss on a concrete example in Section 5. We also compare to the sentence compression system from Fil...
[1, 1, 2, 2, 1, 2]
['Table 4 shows our sentence compression results.', 'Our globally normalized model again significantly outperforms the local model.', 'Beam search with a locally normalized model suffers from severe label bias issues that we discuss on a concrete example in Section 5.', 'We also compare to the sentence compression syst...
[None, ['Our Global (B=8)', 'Our Local (B=1)', 'Our Local (B=8)'], ['Our Local (B=1)', 'Our Local (B=8)'], ['Filippova et al. (2015)'], ['Our Global (B=8)', 'Generated corpus', 'Human eval'], None]
1
P16-2002table_1
Performance comparison between different embeddings style.
1
[['alarm'], ['apps'], ['calendar'], ['communication'], ['finance'], ['flights'], ['games'], ['hotel'], ['livemovie'], ['livetv'], ['movies'], ['music'], ['mystuff'], ['note'], ['ondevice'], ['places'], ['reminder'], ['sports'], ['timer'], ['travel'], ['tv'], ['weather'], ['Average']]
1
[['w/o Embed'], ['6B-50d'], ['840B-300d'], ['SENT'], ['SENT+']]
[['97.25', '97.68', '97.5', '97.68', '97.74'], ['89.16', '91.07', '92.52', '94.24', '94.3'], ['91.34', '92.43', '92.32', '92.53', '92.43'], ['99.1', '99.13', '99.08', '99.08', '99.12'], ['90.44', '90.84', '90.72', '90.76', '90.82'], ['94.19', '92.99', '93.99', '94.59', '94.59'], ['90.16', '91.79', '92.09', '93.08', '92...
row
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['SENT+']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>w/o Embed</th> <th>6B-50d</th> <th>840B-300d</th> <th>SENT</th> <th>SENT+</th> </tr> </thead> <tbody> <tr> <td>alarm</td> <td>97.25</td> <td>97.68</td> <td>97.5</td> ...
Table 1
table_1
P16-2002
4
acl2016
3.6 Results of Intent Classification Task. Table 1 shows the performance of intent classification across domains. For the baseline, SVM without embedding (w/o Embed) achieved 91.99% accuracy, which is already very competitive. However, the models with word embedding trained on 6 billion tokens (6B-50d) and 840 billion ...
[2, 1, 1, 1, 2, 2, 1, 1]
['3.6 Results of Intent Classification Task.', 'Table 1 shows the performance of intent classification across domains.', 'For the baseline, SVM without embedding (w/o Embed) achieved 91.99% accuracy, which is already very competitive.', 'However, the models with word embedding trained on 6 billion tokens (6B-50d) and 8...
[None, None, ['w/o Embed', 'Average'], ['6B-50d', '840B-300d', 'Average'], None, None, ['SENT', 'w/o Embed', 'Average'], ['SENT+', 'Average']]
1
P16-2002table_2
Performance for selected domains as the number of unlabeled data increases.
1
[['apps'], ['music'], ['tv']]
1
[['0'], ['10%'], ['20%'], ['30%'], ['40%'], ['50%'], ['60%'], ['70%'], ['80%'], ['90%'], ['100%']]
[['89.16', '89.83', '90.04', '90.26', '90.88', '91.9', '92.41', '92.41', '92.95', '93.72', '94.3'], ['87.87', '89.12', '89.61', '90.4', '90.83', '91.26', '91.31', '91.33', '91.38', '91.33', '91.33'], ['91.42', '92.28', '92.83', '93.61', '93.96', '94.67', '94.91', '95.12', '95.34', '95.44', '95.47']]
row
['accuracy', 'accuracy', 'accuracy']
['0', '10%', '20%', '30%', '40%', '50%', '60%', '70%', '80%', '90%', '100%']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>0</th> <th>10%</th> <th>20%</th> <th>30%</th> <th>40%</th> <th>50%</th> <th>60%</th> <th>70%</th> <th>80%</th> <th>90%</th> <th>100%</th> </tr> </thead> <tbody>...
Table 2
table_2
P16-2002
5
acl2016
As in Table 2, we also measured the performance of our method (SENT+) as a function of the percentage of unlabeled data we used from the total unlabeled sentences. The overall trend is clear: as more sentences are added to the data for inducing sentence representations, the test performance improves because of both be...
[1, 1, 1]
['As in Table 2, we also measured the performance of our method (SENT+) as a function of the percentage of unlabeled data we used from the total unlabeled sentences.', 'The overall trend is clear: as more sentences are added to the data for inducing sentence representations, the test performance improves because of bo...
[None, ['0', '10%', '20%', '30%', '40%', '50%', '60%', '70%', '80%', '90%', '100%'], ['100%']]
1
P16-2006table_2
Development and test set results for shiftreduce dependency parser on Penn Treebank using only (s1, s0, q0) positional features.
2
[['Parser', 'C & M 2014'], ['Parser', 'Dyer et al. 2015'], ['Parser', 'Weiss et al. 2015'], ['Parser', '+ Percept./Beam'], ['Parser', 'Bi-LSTM'], ['Parser', '2-Layer Bi-LSTM']]
2
[['Dev', 'UAS'], ['Dev', 'LAS'], ['Test', 'UAS'], ['Test', 'LAS']]
[['92.0', '89.7', '91.8', '89.6'], ['93.2', '90.9', '93.1', '90.9'], ['-', '-', '93.19', '91.18'], ['-', '-', '93.99', '92.05'], ['93.31', '91.01', '93.21', '91.16'], ['93.67', '91.48', '93.42', '91.36']]
column
['UAS', 'LAS', 'UAS', 'LAS']
['Bi-LSTM', '2-Layer Bi-LSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || UAS</th> <th>Dev || LAS</th> <th>Test || UAS</th> <th>Test || LAS</th> </tr> </thead> <tbody> <tr> <td>Parser || C &amp; M 2014</td> <td>92.0</td> <td>89.7</td> <td>...
Table 2
table_2
P16-2006
4
acl2016
Table 2 shows results for English Penn Treebank using Stanford dependencies. Despite the minimally designed feature representation, relatively few training iterations, and lack of precomputed embeddings, the parser performed on par with state-of-the-art incremental dependency parsers, and slightly outperformed the stat...
[1, 1]
['Table 2 shows results for English Penn Treebank using Stanford dependencies.', 'Despite the minimally designed feature representation, relatively few training iterations, and lack of precomputed embeddings, the parser performed on par with state-of-the-art incremental dependency parsers, and slightly outperformed the...
[None, ['Bi-LSTM', '2-Layer Bi-LSTM']]
1
P16-2011table_3
Results on Chinese event detection.
2
[['Model', 'MaxEnt'], ['Model', 'Rich-C'], ['Model', 'HNN']]
2
[['Trigger Identification', 'P'], ['Trigger Identification', 'R'], ['Trigger Identification', 'F'], ['Trigger Classification', 'P'], ['Trigger Classification', 'R'], ['Trigger Classification', 'F']]
[['50.0', '77.0', '60.6', '47.5', '73.1', '57.6'], ['62.2', '71.9', '66.7', '58.9', '68.1', '63.2'], ['74.2', '63.1', '68.2', '77.1', '53.1', '63.0']]
column
['P', 'R', 'F', 'P', 'R', 'F']
['HNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Identification || P</th> <th>Trigger Identification || R</th> <th>Trigger Identification || F</th> <th>Trigger Classification || P</th> <th>Trigger Classification || R</th> <th>Trigg...
Table 3
table_3
P16-2011
5
acl2016
Table 3 shows the comparison results between our model and the state-of-the-art methods (Li et al., 2013; Chen et al., 2012). MaxEnt (Li et al., 2013) is a pipeline model, which employs human-designed lexical and syntactic features. Rich-C is developed by Chen et al. (2012), which also incorporates Chinese-specific fea...
[1, 2, 2, 1]
['Table 3 shows the comparison results between our model and the state-of-the-art methods (Li et al., 2013; Chen et al., 2012).', 'MaxEnt (Li et al., 2013) is a pipeline model, which employs human-designed lexical and syntactic features.', 'Rich-C is developed by Chen et al. (2012), which also incorporates Chinese-spec...
[None, ['MaxEnt'], ['Rich-C'], ['HNN', 'Trigger Identification', 'F']]
1
P16-2018table_3
Per category performance.
2
[['Category', 'brand'], ['Category', 'model'], ['Category', 'product'], ['Category', 'product family'], ['Category', 'Overall']]
2
[['CRF', 'P (%)'], ['CRF', 'R (%)'], ['CRF', 'F1'], ['SEARN', 'P (%)'], ['SEARN', 'R (%)'], ['SEARN', 'F1'], ['STRUCTPERCEPTRON', 'P (%)'], ['STRUCTPERCEPTRON', 'R (%)'], ['STRUCTPERCEPTRON', 'F1'], ['LSTM-CRF', 'P (%)'], ['LSTM-CRF', 'R (%)'], ['LSTM-CRF', 'F1']]
[['91.79', '87.93', '89.82', '89.3', '89.3', '89.3', '93.99', '91.20', '92.57', '95.15', '92.29', '93.70'], ['86.28', '80.71', '83.40', '80.7', '78.9', '79.8', '85.56', '80.89', '83.16', '87.25', '85.90', '86.57'], ['87.85', '88.16', '88.00', '83.4', '85.0', '84.2', '87.90', '87.92', '87.91', '91.94', '90.98', '91.46']...
column
['P (%)', 'R (%)', 'F1', 'P (%)', 'R (%)', 'F1', 'P (%)', 'R (%)', 'F1', 'P (%)', 'R (%)', 'F1']
['LSTM-CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CRF || P (%)</th> <th>CRF || R (%)</th> <th>CRF || F1</th> <th>SEARN || P (%)</th> <th>SEARN || R (%)</th> <th>SEARN || F1</th> <th>STRUCTPERCEPTRON || P (%)</th> <th>STRUCTPERCEPT...
Table 3
table_3
P16-2018
4
acl2016
Table 3 shows the performance of the algorithms with the manually designed features against the automatically induced ones with LSTM-CRF. We show the performance of each individual product entity category. Compared to all models and settings, LSTM-CRF reaches the best performance of 90.92 F1 score. The most challenging...
[1, 1, 1, 1]
['Table 3 shows the performance of the algorithms with the manually designed features against the automatically induced ones with LSTM-CRF.', 'We show the performance of each individual product entity category.', 'Compared to all models and settings, LSTM-CRF reaches the best performance of 90.92 F1 score.', 'The most ...
[['LSTM-CRF'], None, ['LSTM-CRF', 'F1'], ['product family', 'model']]
1
P17-1003table_4
Evaluation of the programs with the highest F1 score in the beam (abest
2
[['Settings', 'No curriculum'], ['Settings', 'Curriculum']]
1
[['Prec.'], ['Rec.'], ['F1'], ['Acc.']]
[['79.1', '91.1', '78.5', '67.2'], ['88.6', '96.1', '89.5', '79.8']]
column
['Prec.', 'Rec.', 'F1', 'Acc.']
['Settings']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> <th>Acc.</th> </tr> </thead> <tbody> <tr> <td>Settings || No curriculum</td> <td>79.1</td> <td>91.1</td> <td>78.5</td> <td>67.2</t...
Table 4
table_4
P17-1003
8
acl2017
We compare the performance of the best programs found with and without curriculum learning in Table 4. We find that the best programs found with curriculum learning are substantially better than those found without curriculum learning on every metric.
[1, 1]
['We compare the performance of the best programs found with and without curriculum learning in Table 4.', 'We find that the best programs found with curriculum learning are substantially better than those found without curriculum learning on every metric.']
[['Curriculum', 'No curriculum'], ['Curriculum', 'No curriculum']]
1
P17-1005table_4
GRAPHQUESTIONS results. Numbers for comparison systems are from Su et al. (2016).
2
[['Models', 'SEMPRE (Berant et al. 2013)'], ['Models', 'PARASEMPRE (Berant and Liang 2014)'], ['Models', 'JACANA (Yao and Van Durme 2014)'], ['Models', 'Neural Baseline'], ['Models', 'SCANNER']]
1
[['F1']]
[['10.80'], ['12.79'], ['5.08'], ['16.24'], ['17.02']]
column
['F1']
['SCANNER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Models || SEMPRE (Berant et al. 2013)</td> <td>10.80</td> </tr> <tr> <td>Models || PARASEMPRE (Berant and Liang 2014)</td> <td>12.79</td> ...
Table 4
table_4
P17-1005
7
acl2017
Finally, Table 4 presents our results on GRAPHQUESTIONS. We report F1 for SCANNER,the neural baseline model, and three symbolic systems presented in Su et al. (2016). SCANNER achieves a new state of the art on this dataset with a gain of 4.23 F1 points over the best previously reported model.
[1, 1, 1]
['Finally, Table 4 presents our results on GRAPHQUESTIONS.', 'We report F1 for SCANNER,the neural baseline model, and three symbolic systems presented in Su et al. (2016).', 'SCANNER achieves a new state of the art on this dataset with a gain of 4.23 F1 points over the best previously reported model.']
[None, ['F1', 'SCANNER', 'SEMPRE (Berant et al. 2013)', 'PARASEMPRE (Berant and Liang 2014)', 'JACANA (Yao and Van Durme 2014)'], ['SCANNER', 'PARASEMPRE (Berant and Liang 2014)', 'F1']]
1
P17-1005table_6
SPADES results.
2
[['models', 'Unsupervised CCG (Bisk et al. 2016)'], ['models', 'Semi-supervised CCG (Bisk et al. 2016)'], ['models', 'Neural baseline'], ['models', 'Supervised CCG (Bisk et al. 2016)'], ['models', 'Rule-based system (Bisk et al. 2016)'], ['models', 'SCANNER']]
1
[['F1']]
[['24.8'], ['28.4'], ['28.6'], ['30.9'], ['31.4'], ['31.5']]
column
['F1']
['SCANNER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>models || Unsupervised CCG (Bisk et al. 2016)</td> <td>24.8</td> </tr> <tr> <td>models || Semi-supervised CCG (Bisk et al. 2016)</td> <td>28....
Table 6
table_6
P17-1005
7
acl2017
Table 6 reports SCANNER's performance on SPADES. For all Freebase related datasets we use average F1 (Berant et al., 2013) as our evaluation metric. Previous work on this dataset has used a semantic parsing framework similar to ours where natural language is converted to an intermediate syntactic representation and the...
[1, 1, 2, 2, 1, 2, 1]
["Table 6 reports SCANNER's performance on SPADES.", 'For all Freebase related datasets we use average F1 (Berant et al., 2013) as our evaluation metric.', 'Previous work on this dataset has used a semantic parsing framework similar to ours where natural language is converted to an intermediate syntactic representation...
[None, ['F1'], None, ['Unsupervised CCG (Bisk et al. 2016)', 'Semi-supervised CCG (Bisk et al. 2016)', 'Supervised CCG (Bisk et al. 2016)', 'Rule-based system (Bisk et al. 2016)'], ['SCANNER', 'Unsupervised CCG (Bisk et al. 2016)', 'Semi-supervised CCG (Bisk et al. 2016)', 'Supervised CCG (Bisk et al. 2016)', 'Rule-bas...
1
P17-1005table_8
Evaluation of predicates induced by SCANNER against EASYCCG. We report F1(%) across datasets. For SPADES, we also provide a breakdown for various utterance types.
3
[['Dataset', 'SPADES', '-'], ['Dataset', 'linguistic constructions of spades', 'conj (1422)'], ['Dataset', 'linguistic constructions of spades', 'control (132)'], ['Dataset', 'linguistic constructions of spades', 'pp (3489)'], ['Dataset', 'linguistic constructions of spades', 'subord (76)'], ['Dataset', 'WEBQUESTIONS',...
1
[['SCANNER'], ['Baseline']]
[['51.2', '45.5'], ['56.1', '66.4'], ['28.3', '40.5'], ['46.2', '23.1'], ['37.9', '52.9'], ['42.1', '25.5'], ['11.9', '15.3']]
column
['F1', 'F1']
['SCANNER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SCANNER</th> <th>Baseline</th> </tr> </thead> <tbody> <tr> <td>Dataset || SPADES || -</td> <td>51.2</td> <td>45.5</td> </tr> <tr> <td>Dataset || linguistic constructions of spa...
Table 8
table_8
P17-1005
8
acl2017
As shown in Table 8, on SPADES and WEBQUESTIONS, the predicates learned by our model match the output of EASY CCG more closely than the heuristic baseline. But for GRAPHQUESTIONS which contains more compositional questions, the mismatch is higher. However, since the key idea of our model is to capture salien...
[1, 1, 2, 1, 1]
['As shown in Table 8, on SPADES and WEBQUESTIONS, the predicates learned by our model match the output of EASY CCG more closely than the heuristic baseline.', 'But for GRAPHQUESTIONS which contains more compositional questions, the mismatch is higher.', 'However, since the key idea of our model is to capture...
[['SPADES', 'WEBQUESTIONS', 'SCANNER', 'Baseline'], ['GRAPHQUESTIONS', 'SCANNER', 'Baseline'], ['SCANNER'], ['linguistic constructions of spades', 'conj (1422)', 'control (132)', 'pp (3489)', 'subord (76)'], ['linguistic constructions of spades']]
1
P17-1009table_2
Results of all three tasks on the KBP 2016 evaluation sets. The KBP2016 results are those achieved by the best-performing coreference resolver in the official KBP 2016 evaluation. ∆ is the performance difference between the JOINT model and the corresponding INDEP. model. All results are expressed in terms of F-score.
2
[['English', 'KBP2016'], ['English', 'INDEP.'], ['English', 'JOINT'], ['English', 'delta over INDEP.']]
1
[['MUC'], ['B3'], ['CEAFe'], ['BLANC'], ['AVG-F'], ['Trigger'], ['Anaphoricity']]
[['26.37', '37.49', '34.21', '22.25', '30.08', '46.99', '-'], ['22.71', '40.72', '39', '22.71', '31.28', '48.82', '27.35'], ['27.41', '40.9', '39', '25', '33.08', '49.3', '31.94'], ['4.7', '0.18', '0', '2.29', '1.8', '0.48', '4.59']]
column
['MUC', 'B3', 'CEAFe', 'BLANC', 'AVG-F', 'Trigger', 'Anaphoricity']
['JOINT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC</th> <th>B3</th> <th>CEAFe</th> <th>BLANC</th> <th>AVG-F</th> <th>Trigger</th> <th>Anaphoricity</th> </tr> </thead> <tbody> <tr> <td>English || KBP2016</td> <td>26...
Table 2
table_2
P17-1009
7
acl2017
Results are shown in Table 2 where performance on all three tasks (event coreference, trigger detection, and anaphoricity determination) is expressed in terms of F-score. Table 2 shows the results on the English evaluation set. Specifically, row 1 shows the performance of the best event coreference system participating...
[1, 1, 1, 2, 2, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1]
['Results are shown in Table 2 where performance on all three tasks (event coreference, trigger detection, and anaphoricity determination) is expressed in terms of F-score.', 'Table 2 shows the results on the English evaluation set.', 'Specifically, row 1 shows the performance of the best event coreference system parti...
[['AVG-F', 'Trigger', 'Anaphoricity'], ['English'], ['KBP2016'], ['KBP2016'], ['KBP2016'], ['KBP2016'], ['KBP2016', 'AVG-F', 'Trigger'], ['INDEP.'], ['INDEP.'], ['INDEP.', 'KBP2016', 'AVG-F', 'Trigger'], ['JOINT'], ['delta over INDEP.'], ['delta over INDEP.', 'Trigger', 'Anaphoricity'], ['JOINT', 'KBP2016', 'AVG-F', 'T...
1
P17-1009table_3
Results of model ablations on the KBP 2016 evaluation sets. Each row of ablation results is obtained by either adding one type of interaction factor to the INDEP. model or deleting one type of interaction factor from the JOINT model. For each column, the results are expressed in terms of changes to the INDEP. model’s F...
1
[['INDEP.'], ['INDEP.+CorefTrigger'], ['INDEP.+CorefAnaph'], ['INDEP.+TriggerAnaph'], ['JOINT-CorefTrigger'], ['JOINT-CorefAnaph'], ['JOINT-TriggerAnaph'], ['JOINT']]
2
[['English', 'Coref'], ['English', 'Trigger'], ['English', 'Anaph'], ['Chinese', 'Coref'], ['Chinese', 'Trigger'], ['Chinese', 'Anaph']]
[['31.28', '48.82', '27.35', '25.84', '39.82', '19.31'], ['0.39', '0.42', '-0.05', '0.95', '0.56', '-0.37'], ['0.4', '-0.08', '3.45', '0.37', '0.04', '-0.11'], ['0.11', '0.38', '1.35', '0.14', '0.52', '0.02'], ['0.56', '-0.06', '4.41', '0.19', '0.35', '3.34'], ['0.63', '0.66', '1.46', '1.5', '0.88', '0.28'], ['1.89', '...
row
['F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score', 'delta F-score']
['JOINT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>English || Coref</th> <th>English || Trigger</th> <th>English || Anaph</th> <th>Chinese || Coref</th> <th>Chinese || Trigger</th> <th>Chinese || Anaph</th> </tr> </thead> <tbody> <tr...
Table 3
table_3
P17-1009
8
acl2017
Table 3 shows the results on the English and Chinese datasets when we add each type of joint factors to the independent model and remove each type of joint factors from the full joint model. The results of each task are expressed in terms of changes to the corresponding independent model’s F-score. Among the three ty...
[1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 3 shows the results on the English and Chinese datasets when we add each type of joint factors to the independent model and remove each type of joint factors from the full joint model.', 'The results of each task are expressed in terms of changes to the corresponding independent model’s F-score.', 'Among the ...
[['English', 'Chinese'], None, ['INDEP.+CorefTrigger', 'JOINT'], ['INDEP.+CorefTrigger', 'Trigger'], ['JOINT', 'Anaph', 'JOINT-CorefTrigger'], ['INDEP.+CorefAnaph', 'Coref', 'Anaph'], ['JOINT', 'Coref', 'Anaph'], ['INDEP.+CorefAnaph', 'Trigger'], ['INDEP.+TriggerAnaph', 'Trigger', 'Anaph'], ['JOINT', 'Anaph', 'Chinese'...
1
P17-1011table_6
Evaluation results of AES on three datasets. Basic: the basic feature sets; mode: discourse mode features.
1
[['SVR-Basic'], ['SVR-Basic+mode'], ['BLRR-Basic'], ['BLRR-Basic+mode']]
2
[['QWK Score', 'Prompt 1'], ['QWK Score', 'Prompt 2'], ['QWK Score', 'Prompt 3']]
[['0.554', '0.468', '0.457'], ['0.6', '0.501', '0.481'], ['0.683', '0.557', '0.513'], ['0.696', '0.565', '0.527']]
column
['QWK Score', 'QWK Score', 'QWK Score']
['SVR-Basic', 'SVR-Basic+mode', 'BLRR-Basic', 'BLRR-Basic+mode']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>QWK Score || Prompt 1</th> <th>QWK Score || Prompt 2</th> <th>QWK Score || Prompt 3</th> </tr> </thead> <tbody> <tr> <td>SVR-Basic</td> <td>0.554</td> <td>0.468</td> <td>0.457</...
Table 6
table_6
P17-1011
8
acl2017
Table 6 shows the evaluation results of AES on three datasets. We can see that the BLRR algorithm performs better than the SVR algorithm. No matter which algorithm is adopted, adding discourse mode features makes positive contributions to AES compared with using basic feature sets. The trends are consistent ove...
[1, 1, 1, 1]
['Table 6 shows the evaluation results of AES on three datasets.', 'We can see that the BLRR algorithm performs better than the SVR algorithm.', 'No matter which algorithm is adopted, adding discourse mode features makes positive contributions to AES compared with using basic feature sets.', 'The trends are con...
[None, ['BLRR-Basic', 'SVR-Basic'], ['SVR-Basic', 'SVR-Basic+mode', 'BLRR-Basic', 'BLRR-Basic+mode'], ['Prompt 1', 'Prompt 2', 'Prompt 3']]
1
P17-1012table_1
Accuracy of encoders with position features (wrd+pos) and without (wrd) in terms of BLEU and perplexity (PPL) on IWSLT’14 German to English translation; results include unknown word replacement. Deep Convolutional 6/3 is the only multi-layer configuration; more layers for the LSTMs did not improve accuracy on this data...
2
[['System/Encoder', 'Phrase-based'], ['System/Encoder', 'LSTM'], ['System/Encoder', 'BiLSTM'], ['System/Encoder', 'Pooling'], ['System/Encoder', 'Convolutional'], ['System/Encoder', 'Deep Convolutional 6/3']]
2
[['BLEU', 'wrd+pos'], ['BLEU', 'wrd'], ['PPL', 'wrd+pos']]
[['-', '28.4', '-'], ['27.4', '27.3', '10.8'], ['29.7', '29.8', '9.9'], ['26.1', '19.7', '11'], ['29.9', '20.1', '9.1'], ['30.4', '25.2', '8.9']]
column
['BLEU', 'BLEU', 'PPL']
['wrd+pos']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || wrd+pos</th> <th>BLEU || wrd</th> <th>PPL || wrd+pos</th> </tr> </thead> <tbody> <tr> <td>System/Encoder || Phrase-based</td> <td>-</td> <td>28.4</td> <td>-</td> </tr...
Table 1
table_1
P17-1012
5
acl2017
Table 1 shows that a single-layer convolutional model with position embeddings (Convolutional) can outperform both a uni-directional LSTM encoder (LSTM) and a bi-directional LSTM encoder (BiLSTM). Next, we increase the depth of the convolutional encoder. We choose a good setting by independently varying the numb...
[1, 2, 2, 1, 2, 1, 1, 1, 2, 2]
['Table 1 shows that a single-layer convolutional model with position embeddings (Convolutional) can outperform both a uni-directional LSTM encoder (LSTM) and a bi-directional LSTM encoder (BiLSTM).', 'Next, we increase the depth of the convolutional encoder.', 'We choose a good setting by independently varying ...
[['wrd+pos', 'Convolutional', 'LSTM', 'BiLSTM'], None, None, ['wrd+pos', 'Deep Convolutional 6/3', 'BiLSTM', 'BLEU'], ['Convolutional'], ['wrd+pos', 'Convolutional', 'BiLSTM', 'BLEU'], ['wrd+pos', 'Pooling', 'LSTM', 'BiLSTM', 'BLEU'], ['wrd', 'Convolutional', 'wrd+pos'], ['Pooling', 'Convolutional'], ['wrd+pos']]
1
P17-1012table_2
Accuracy on three WMT tasks, including results published in previous work. For deep convolutional encoders, we include the number of layers in CNN-a and CNN-c, respectively.
3
[['(Sennrich et al. 2016a)', 'Encoder', 'BiGRU'], ['Single-layer decoder', 'Encoder', 'BiLSTM'], ['Single-layer decoder', 'Encoder', 'Convolutional'], ['Single-layer decoder', 'Encoder', 'Deep Convolutional 8/4']]
1
[['Vocabulary Size'], ['BLEU']]
[['90K', '28.1'], ['80K', '27.5'], ['80K', '27.1'], ['80K', '27.8']]
column
['Vocabulary Size', 'BLEU']
['Deep Convolutional 8/4']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>(Sennrich et al. 2016a) || Encoder || BiGRU</td> <td>90K</td> <td>28.1</td> </tr> <tr> <td>Single-layer decod...
Table 2
table_2
P17-1012
6
acl2017
The results (Table 2) show that a deep convolutional encoder can perform competitively to the state of the art on this dataset (Sennrich et al., 2016a). Our bi-directional LSTM encoder baseline is 0.6 BLEU lower than the state of the art but uses only 512 hidden units compared to 1024. A single-layer convolutional encod...
[1, 1, 1, 1]
['The results (Table 2) show that a deep convolutional encoder can perform competitively to the state of the art on this dataset (Sennrich et al., 2016a).', 'Our bi-directional LSTM encoder baseline is 0.6 BLEU lower than the state of the art but uses only 512 hidden units compared to 1024.', 'A single-layer convolution...
[['Deep Convolutional 8/4', '(Sennrich et al. 2016a)'], ['Encoder', 'BiLSTM', '(Sennrich et al. 2016a)', 'BLEU'], ['Single-layer decoder', 'Encoder', 'Convolutional', 'BLEU'], ['Deep Convolutional 8/4', 'BLEU', '(Sennrich et al. 2016a)', 'Convolutional', 'BiLSTM']]
1
P17-1014table_3
BLEU results for AMR Generation. *Model has been trained on a previous release of the corpus (LDC2014T12).
2
[['Model', 'GIGA-20M'], ['Model', 'GIGA-2M'], ['Model', 'GIGA-200k'], ['Model', 'AMR-ONLY'], ['Model', 'PBMT* (Pourdamghani et al. 2016)'], ['Model', 'TSP (Song et al. 2016)'], ['Model', 'TREETOSTR (Flanigan et al. 2016)']]
1
[['Dev'], ['Test']]
[['33.1', '33.8'], ['31.8', '32.3'], ['27.2', '27.4'], ['21.7', '22'], ['27.2', '26.9'], ['21.1', '22.4'], ['23', '23']]
column
['BLEU', 'BLEU']
['GIGA-20M', 'GIGA-2M', 'GIGA-200k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || GIGA-20M</td> <td>33.1</td> <td>33.8</td> </tr> <tr> <td>Model || GIGA-2M</td> <td>31.8</td> <td>32.3<...
Table 3
table_3
P17-1014
6
acl2017
Table 3 summarizes our AMR generation results on the development and test set. We outperform all previous state-of-the-art systems by the first round of self-training and further improve with the next rounds. Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU points. ...
[1, 1, 1, 2, 2]
['Table 3 summarizes our AMR generation results on the development and test set.', 'We outperform all previous state-of-the-art systems by the first round of self-training and further improve with the next rounds.', 'Our final model trained on GIGA-20M outperforms TSP and TREETOSTR trained on LDC2015E86, by over 9 BLEU ...
[None, ['GIGA-20M', 'GIGA-2M', 'GIGA-200k'], ['GIGA-20M', 'TSP (Song et al. 2016)', 'TREETOSTR (Flanigan et al. 2016)'], ['GIGA-20M'], None]
1
P17-1051table_5
Performance of MORSE against Morfessor on the non-canonical version of SD17
1
[['Morfessor'], ['MORSE'], ['MORSE-CV']]
1
[['P'], ['R'], ['F1']]
[['65.95', '51.13', '57.60'], ['75.35', '83.60', '79.26'], ['84.6', '78.36', '81.29']]
column
['P', 'R', 'F1']
['MORSE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Morfessor</td> <td>65.95</td> <td>51.13</td> <td>57.60</td> </tr> <tr> <td>MORSE</td> <td>75.35</td...
Table 5
table_5
P17-1051
7
acl2017
Based on the results in Table 5, we make the following observations. Comparing MORSE-CV to MORSE reflects the fundamental difference between SD17 and MC datasets. Comparing MORSE-CV to Morfessor, we observe a significant jump in performance (an increase of 24%).
[1, 1, 1]
['Based on the results in Table 5, we make the following observations.', 'Comparing MORSE-CV to MORSE reflects the fundamental difference between SD17 and MC datasets.', 'Comparing MORSE-CV to Morfessor, we observe a significant jump in performance (an increase of 24%).']
[None, ['MORSE-CV', 'MORSE'], ['MORSE-CV', 'Morfessor']]
1