| Field | Type | Range / classes |
|---|---|---|
| table_id_paper | stringlengths | 15–15 |
| caption | stringlengths | 14–1.88k |
| row_header_level | int32 | 1–9 |
| row_headers | large_stringlengths | 15–1.75k |
| column_header_level | int32 | 1–6 |
| column_headers | large_stringlengths | 7–1.01k |
| contents | large_stringlengths | 18–2.36k |
| metrics_loc | stringclasses | 2 values |
| metrics_type | large_stringlengths | 5–532 |
| target_entity | large_stringlengths | 2–330 |
| table_html_clean | large_stringlengths | 274–7.88k |
| table_name | stringclasses | 9 values |
| table_id | stringclasses | 9 values |
| paper_id | stringlengths | 8–8 |
| page_no | int32 | 1–13 |
| dir | stringclasses | 8 values |
| description | large_stringlengths | 103–3.8k |
| class_sentence | stringlengths | 3–120 |
| sentences | large_stringlengths | 110–3.92k |
| header_mention | stringlengths | 12–1.8k |
| valid | int32 | 0–1 |
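The list-valued fields above (`row_headers`, `column_headers`, `contents`, `metrics_type`, `target_entity`, `class_sentence`, `sentences`, `header_mention`) are stored as Python-literal strings in the dump. A minimal sketch of decoding them with `ast.literal_eval`, under the assumption that the values are valid Python literals (the example values are taken from the first record below):

```python
import ast

# One record as it appears in the dump: the list-valued fields are
# Python-literal strings, not parsed lists.
record = {
    "table_id_paper": "D16-1007table_2",
    "row_headers": "[['Position Feature', 'plain text PF'], "
                   "['Position Feature', 'TPF1'], "
                   "['Position Feature', 'TPF2']]",
    "column_headers": "[['F1']]",
    "contents": "[['83.21'], ['83.99'], ['83.90']]",
}

def decode(rec):
    """Parse the stringified list fields into real Python lists."""
    out = dict(rec)
    for key in ("row_headers", "column_headers", "contents"):
        out[key] = ast.literal_eval(rec[key])
    return out

r = decode(record)
# Each entry of `contents` is one table row, aligned with `row_headers`.
assert len(r["row_headers"]) == len(r["contents"])
```

`ast.literal_eval` (rather than `eval`) is used because it only accepts literals, so it also handles the `None` entries seen in `header_mention` without executing arbitrary code.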
table_id_paper: D16-1007table_2
caption: Comparison of different position features.
row_header_level: 2
row_headers: [['Position Feature', 'plain text PF'], ['Position Feature', 'TPF1'], ['Position Feature', 'TPF2']]
column_header_level: 1
column_headers: [['F1']]
contents: [['83.21'], ['83.99'], ['83.90']]
metrics_loc: column
metrics_type: ['F1']
target_entity: ['Position Feature']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Position Feature || plain text PF</td> <td>83.21</td> </tr> <tr> <td>Position Feature || TPF1</td> <td>83.99</td> </tr> <tr> <td>P...
table_name: Table 2
table_id: table_2
paper_id: D16-1007
page_no: 8
dir: emnlp2016
description: Table 2 summarizes the performances of proposed model when different position features are exploited. To concentrate on studying the effect of position features, we do not involve lexical features in this section. As the table shows, the position feature on plain text is still effective in our model and we accredit its...
class_sentence: [1, 2, 1, 1, 1, 2, 2, 0, 0]
sentences: ['Table 2 summarizes the performances of proposed model when different position features are exploited.', 'To concentrate on studying the effect of position features, we do not involve lexical features in this section.', 'As the table shows, the position feature on plain text is still effective in our model and we accr...
header_mention: [None, None, ['plain text PF', 'TPF1', 'TPF2'], ['TPF1', 'TPF2'], ['TPF1', 'TPF2'], None, ['TPF1', 'TPF2'], None, None]
valid: 1
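The `table_html_clean` field flattens hierarchical headers by joining each header path with `' || '`. Assuming that convention holds across records, a sketch rebuilding labeled rows from the `row_headers`, `column_headers`, and `contents` of the record above:

```python
# Data from record D16-1007table_2 above (already decoded into lists).
row_headers = [['Position Feature', 'plain text PF'],
               ['Position Feature', 'TPF1'],
               ['Position Feature', 'TPF2']]
column_headers = [['F1']]
contents = [['83.21'], ['83.99'], ['83.90']]

# Join each multi-level header path with ' || ', as table_html_clean does.
col_labels = [' || '.join(path) for path in column_headers]

# Pair every row label with a {column label: cell} mapping.
rows = [(' || '.join(path), dict(zip(col_labels, cells)))
        for path, cells in zip(row_headers, contents)]
```

This reproduces the cell layout of the cleaned HTML (e.g. the first row label is `Position Feature || plain text PF` with an `F1` cell of `83.21`) without any HTML parsing.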
table_id_paper: D16-1010table_3
caption: Pearson correlation values between human and model preferences for each construction and the verb-bias score; training on raw frequencies and 2 constructions. All correlations significant with p-value < 0.001, except the one value with *. Best result for each row is marked in boldface.
row_header_level: 1
row_headers: [['DO'], ['PD'], ['DO-PD']]
column_header_level: 2
column_headers: [['AB (Connectionist)', '-'], ['BFS (Bayesian)', 'Level 1'], ['BFS (Bayesian)', 'Level 2']]
contents: [['0.06*', '0.23', '0.25'], ['0.33', '0.38', '0.32'], ['0.39', '0.53', '0.59']]
metrics_loc: column
metrics_type: ['correlation', 'correlation', 'correlation']
target_entity: ['AB (Connectionist)', 'BFS (Bayesian)']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AB (Connectionist) || -</th> <th>BFS (Bayesian) || Level 1</th> <th>BFS (Bayesian) || Level 2</th> </tr> </thead> <tbody> <tr> <td>DO</td> <td>[0.06]</td> <td>0.23</td> <td>0.25...
table_name: Table 3
table_id: table_3
paper_id: D16-1010
page_no: 8
dir: emnlp2016
description: Table 3 presents the correlation results for the two models’ preferences for each construction and the verb bias score. The AB model does not correlate with the judgments for the DO. However, the model produces significant positive correlations with the PD judgments and with the verb bias score. The BFS model, on the...
class_sentence: [1, 1, 1, 1, 1]
sentences: ['Table 3 presents the correlation results for the two models’ preferences for each construction and the verb bias score.', 'The AB model does not correlate with the judgments for the DO.', 'However, the model produces significant positive correlations with the PD judgments and with the verb bias score.', 'The BFS mo...
header_mention: [['AB (Connectionist)', 'BFS (Bayesian)'], ['AB (Connectionist)', 'DO'], ['AB (Connectionist)', 'PD'], ['DO', 'PD', 'DO-PD', 'AB (Connectionist)', 'BFS (Bayesian)'], ['Level 2', 'BFS (Bayesian)']]
valid: 1
table_id_paper: D16-1011table_4
caption: Comparison between rationale models (middle and bottom rows) and the baselines using full title or body (top row).
row_header_level: 1
row_headers: [['Full title'], ['Full body'], ['Independent'], ['Independent'], ['Dependent'], ['Dependent']]
column_header_level: 1
column_headers: [['MAP (dev)'], ['MAP (test)'], ['% words']]
contents: [['56.5', '60.0', '10.1'], ['54.2', '53.0', '89.9'], ['55.7', '53.6', '9.7'], ['56.3', '52.6', '19.7'], ['56.1', '54.6', '11.6'], ['56.5', '55.6', '32.8']]
metrics_loc: column
metrics_type: ['MAP (dev)', 'MAP (test)', '% words']
target_entity: ['Independent', 'Dependent']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP (dev)</th> <th>MAP (test)</th> <th>% words</th> </tr> </thead> <tbody> <tr> <td>Full title</td> <td>56.5</td> <td>60.0</td> <td>10.1</td> </tr> <tr> <td>Full body...
table_name: Table 4
table_id: table_4
paper_id: D16-1011
page_no: 8
dir: emnlp2016
description: Results. Table 4 presents the results of our rationale model. We explore a range of hyper-parameter values. We include two runs for each version. The first one achieves the highest MAP on the development set, The second run is selected to compare the models when they use roughly 10% of question text (7 words on average...
class_sentence: [2, 1, 2, 2, 1, 2, 1, 1]
sentences: ['Results.', 'Table 4 presents the results of our rationale model.', 'We explore a range of hyper-parameter values.', 'We include two runs for each version.', 'The first one achieves the highest MAP on the development set, The second run is selected to compare the models when they use roughly 10% of question text (7 wo...
header_mention: [None, None, None, None, ['Independent', 'Dependent', 'MAP (dev)'], None, ['Dependent', 'MAP (dev)', 'Full title'], ['Independent', 'Dependent', 'Full title', 'Full body']]
valid: 1
table_id_paper: D16-1018table_2
caption: Spearman’s rank correlation results on the SCWS dataset
row_header_level: 4
row_headers: [['Model', 'Huang', 'Similarity Metrics', 'AvgSim'], ['Model', 'Huang', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Chen', 'Similarity Metrics', 'AvgSim'], ['Model', 'Chen', 'Similarity Metrics', 'AvgSimC'], ['Model', 'Neelakantan', 'Similarity Metrics', 'AvgSim'], ['Model', 'Neelakantan', 'Similarity Metrics', 'AvgSi...
column_header_level: 1
column_headers: [['ρ × 100']]
contents: [['62.8'], ['65.7'], ['66.2'], ['68.9'], ['67.2'], ['69.2'], ['69.7'], ['63.6'], ['65.4'], ['61.2'], ['64.3'], ['65.6'], ['64.9'], ['66.1']]
metrics_loc: column
metrics_type: ['correlation']
target_entity: ['Ours + CBOW', 'Ours + Skip-gram']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ρ × 100</th> </tr> </thead> <tbody> <tr> <td>Model || Huang || Similarity Metrics || AvgSim</td> <td>62.8</td> </tr> <tr> <td>Model || Huang || Similarity Metrics || AvgSimC</td> <t...
table_name: Table 2
table_id: table_2
paper_id: D16-1018
page_no: 7
dir: emnlp2016
description: Table 2 shows the results of our contextdependent sense embedding models on the SCWS dataset. In this table, ρ refers to the Spearman’s rank correlation and a higher value of ρ indicates better performance. The baseline performances are from Huang et al. (2012), Chen et al. (2014), Neelakantan et al. (2014), Li and Jur...
class_sentence: [1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2]
sentences: ['Table 2 shows the results of our contextdependent sense embedding models on the SCWS dataset.', 'In this table, ρ refers to the Spearman’s rank correlation and a higher value of ρ indicates better performance.', 'The baseline performances are from Huang et al. (2012), Chen et al. (2014), Neelakantan et al. (2014), Li...
header_mention: [None, None, ['Huang', 'Chen', 'Neelakantan', 'Li', 'Tian', 'Bartunov'], ['Ours + CBOW', 'Ours + Skip-gram'], None, ['AvgSim', 'AvgSimC'], ['Model_M', 'Model_W', 'HardSim', 'SoftSim'], ['Model'], ['Model'], ['Model'], None, None, None, None]
valid: 1
table_id_paper: D16-1021table_4
caption: Examples of attention weights in different hops for aspect level sentiment classification. The model only uses content attention. The hop columns show the weights of context words in each hop, indicated by values and gray color. This example shows the results of sentence “great food but the service was dreadful!” with ...
row_header_level: 1
row_headers: [['great'], ['food'], ['but'], ['the'], ['was'], ['dreadful'], ['!']]
column_header_level: 2
column_headers: [['hop 1', 'service'], ['hop 1', 'food'], ['hop 2', 'service'], ['hop 2', 'food'], ['hop 3', 'service'], ['hop 3', 'food'], ['hop 4', 'service'], ['hop 4', 'food'], ['hop 5', 'service'], ['hop 5', 'food']]
contents: [['0.20', '0.22', '0.15', '0.12', '0.14', '0.14', '0.13', '0.12', '0.23', '0.20'], ['0.11', '0.21', '0.07', '0.11', '0.08', '0.10', '0.12', '0.11', '0.06', '0.12'], ['0.20', '0.03', '0.10', '0.11', '0.10', '0.08', '0.12', '0.11', '0.13', '0.06'], ['0.03', '0.11', '0.07', '0.11', '0.08', '0.08', '0.12', '0.11', '0.06', ...
metrics_loc: column
metrics_type: ['weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights', 'weights']
target_entity: ['service', 'food']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>hop 1 || service</th> <th>hop 1 || food</th> <th>hop 2 || service</th> <th>hop 2 || food</th> <th>hop 3 || service</th> <th>hop 3 || food</th> <th>hop 4 || service</th> <th>hop 4 |...
table_name: Table 4
table_id: table_4
paper_id: D16-1021
page_no: 7
dir: emnlp2016
description: From Table 4, we can find that in the first hop the context words “great”, “but” and “dreadful” contribute equally to the aspect “service”. While after the second hop, the weight of “dreadful” increases and finally the model correctly predict the polarity towards “service” as negative. This case shows the effects of mu...
class_sentence: [1, 1, 1, 1, 1, 2]
sentences: ['From Table 4, we can find that in the first hop the context words “great”, “but” and “dreadful” contribute equally to the aspect “service”.', 'While after the second hop, the weight of “dreadful” increases and finally the model correctly predict the polarity towards “service” as negative.', 'This case shows the effec...
header_mention: [['great', 'but', 'dreadful', 'service'], None, ['dreadful', 'service'], ['dreadful', 'food'], ['food'], None]
valid: 1
table_id_paper: D16-1025table_2
caption: Overall results on the HE Set: BLEU, computed against the original reference translation, and TER, computed with respect to the targeted post-edit (HTER) and multiple postedits (mTER).
row_header_level: 2
row_headers: [['system', 'PBSY'], ['system', 'HPB'], ['system', 'SPB'], ['system', 'NMT']]
column_header_level: 1
column_headers: [['BLEU'], ['HTER'], ['mTER']]
contents: [['25.3', '28.0', '21.8'], ['24.6', '29.9', '23.4'], ['25.8', '29.0', '22.7'], ['31.1*', '21.1*', '16.2*']]
metrics_loc: column
metrics_type: ['BLEU', 'HTER', 'mTER']
target_entity: ['NMT']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>HTER</th> <th>mTER</th> </tr> </thead> <tbody> <tr> <td>system || PBSY</td> <td>25.3</td> <td>28.0</td> <td>21.8</td> </tr> <tr> <td>system || HPB</td> ...
table_name: Table 2
table_id: table_2
paper_id: D16-1025
page_no: 4
dir: emnlp2016
description: 4 Overall Translation Quality. Table 2 presents overall system results according to HTER and mTER, as well as BLEU computed against the original TED Talks reference translation. We can see that NMT clearly outperforms all other approaches both in terms of BLEU and TER scores. Focusing on mTER results, the gain obtained...
class_sentence: [2, 1, 1, 1, 1, 2, 0, 0]
sentences: ['4 Overall Translation Quality.', 'Table 2 presents overall system results according to HTER and mTER, as well as BLEU computed against the original TED Talks reference translation.', 'We can see that NMT clearly outperforms all other approaches both in terms of BLEU and TER scores.', 'Focusing on mTER results, the ga...
header_mention: [None, ['system', 'HTER', 'mTER', 'BLEU'], ['NMT', 'BLEU', 'HTER', 'mTER'], ['mTER', 'NMT', 'PBSY'], ['mTER', 'HTER'], ['HTER', 'mTER'], None, None]
valid: 1
table_id_paper: D16-1025table_4
caption: Word reordering evaluation in terms of shift operations in HTER calculation and of KRS. For each system, the number of generated words, the number of shift errors and their corresponding percentages are reported.
row_header_level: 2
row_headers: [['system', 'PBSY'], ['system', 'HPB'], ['system', 'SPB'], ['system', 'NMT']]
column_header_level: 1
column_headers: [['#words'], ['#shifts'], ['%shifts'], ['KRS']]
contents: [['11517', '354', '3.1', '84.6'], ['11417', '415', '3.6', '84.3'], ['11420', '398', '3.5', '84.5'], ['11284', '173', '1.5*', '88.3*']]
metrics_loc: column
metrics_type: ['#words', '#shifts', '%shifts', 'KRS']
target_entity: ['NMT']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#words</th> <th>#shifts</th> <th>%shifts</th> <th>KRS</th> </tr> </thead> <tbody> <tr> <td>system || PBSY</td> <td>11517</td> <td>354</td> <td>3.1</td> <td>84.6</td> ...
table_name: Table 4
table_id: table_4
paper_id: D16-1025
page_no: 7
dir: emnlp2016
description: 5.3 Word order errors. To analyse reordering errors, we start by focusing on shift operations identified by the HTER metrics. The first three columns of Table 4 show, respectively: (i) the number of words generated by each system (ii) the number of shifts required to align each system output to the corresponding post-e...
class_sentence: [2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 1]
sentences: ['5.3 Word order errors.', 'To analyse reordering errors, we start by focusing on shift operations identified by the HTER metrics.', 'The first three columns of Table 4 show, respectively: (i) the number of words generated by each system (ii) the number of shifts required to align each system output to the correspondin...
header_mention: [None, None, ['#words', '#shifts', '%shifts'], ['NMT', 'system', '#shifts', '%shifts'], ['NMT', 'PBSY'], None, None, ['KRS'], ['KRS'], ['KRS'], ['KRS', 'NMT'], ['KRS', 'NMT', 'system']]
valid: 1
table_id_paper: D16-1032table_2
caption: Human evaluation results on the generated and true recipes. Scores range in [1, 5].
row_header_level: 2
row_headers: [['Model', 'Attention'], ['Model', 'EncDec'], ['Model', 'NN'], ['Model', 'NN-Swap'], ['Model', 'Checklist'], ['Model', 'Checklist+'], ['Model', 'Truth']]
column_header_level: 1
column_headers: [['Syntax'], ['Ingredient use'], ['Follows goal']]
contents: [['4.47', '3.02', '3.47'], ['4.58', '3.29', '3.61'], ['4.22', '3.02', '3.36'], ['4.11', '3.51', '3.78'], ['4.58', '3.80', '3.94'], ['4.39', '3.95', '4.10'], ['4.39', '4.03', '4.34']]
metrics_loc: column
metrics_type: ['Syntax', 'Ingridient use', 'Follows goal']
target_entity: ['Checklist', 'Checklist+']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntax</th> <th>Ingredient use</th> <th>Follows goal</th> </tr> </thead> <tbody> <tr> <td>Model || Attention</td> <td>4.47</td> <td>3.02</td> <td>3.47</td> </tr> <tr> ...
table_name: Table 2
table_id: table_2
paper_id: D16-1032
page_no: 8
dir: emnlp2016
description: Table 2 shows the averaged scores over the responses. The checklist models outperform all baselines in generating recipes that follow the provided agenda closely and accomplish the desired goal, where NN in particular often generates the wrong dish. Perhaps surprisingly, both the Attention and EncDec baselines and the ...
class_sentence: [1, 1, 1, 2]
sentences: ['Table 2 shows the averaged scores over the responses.', 'The checklist models outperform all baselines in generating recipes that follow the provided agenda closely and accomplish the desired goal, where NN in particular often generates the wrong dish.', 'Perhaps surprisingly, both the Attention and EncDec baselines ...
header_mention: [None, ['Checklist', 'Checklist+', 'NN', 'Model'], ['Attention', 'EncDec', 'Checklist'], None]
valid: 1
table_id_paper: D16-1035table_4
caption: Performance comparison with other state-of-the-art systems on RST-DT.
row_header_level: 2
row_headers: [['System', 'Joty et al. (2013)'], ['System', 'Ji and Eisenstein. (2014)'], ['System', 'Feng and Hirst. (2014)'], ['System', 'Li et al. (2014a)'], ['System', 'Li et al. (2014b)'], ['System', 'Heilman and Sagae. (2015)'], ['System', 'Ours'], ['System', 'Human']]
column_header_level: 1
column_headers: [['S'], ['N'], ['R']]
contents: [['82.7', '68.4', '55.7'], ['82.1', '71.1', '61.6'], ['85.7', '71.0', '58.2'], ['84.0', '70.8', '58.6'], ['83.4', '73.8', '57.8'], ['83.5', '68.1', '55.1'], ['85.8', '71.1', '58.9'], ['88.7', '77.7', '65.8']]
metrics_loc: column
metrics_type: ['S', 'N', 'R']
target_entity: ['Ours']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S</th> <th>N</th> <th>R</th> </tr> </thead> <tbody> <tr> <td>System || Joty et al. (2013)</td> <td>82.7</td> <td>68.4</td> <td>55.7</td> </tr> <tr> <td>System || Ji a...
table_name: Table 4
table_id: table_4
paper_id: D16-1035
page_no: 8
dir: emnlp2016
description: Table 4 shows the performance for our system and those systems. Our system achieves the best result in span and relatively lower performance in nucleus and relation identification comparing with the corresponding best results but still better than most systems. No system achieves the best result on all three metrics. T...
class_sentence: [1, 1, 1, 0, 0]
sentences: ['Table 4 shows the performance for our system and those systems.', 'Our system achieves the best result in span and relatively lower performance in nucleus and relation identification comparing with the corresponding best results but still better than most systems.', 'No system achieves the best result on all three me...
header_mention: [None, ['Ours', 'System'], ['System', 'S', 'N', 'R'], None, None]
valid: 1
table_id_paper: D16-1038table_7
caption: Domain Transfer Results. We conduct the evaluation on TAC-KBP corpus with the split of newswire (NW) and discussion form (DF) documents. Here, we choose MSEP-EMD and MSEP-CorefESA+AUG+KNOW as the MSEP approach for event detection and co-reference respectively. We use SSED and SupervisedBase as the supervised modules fo...
row_header_level: 3
row_headers: [['Event Detection', 'In Domain', 'Train NW Test NW'], ['Event Detection', 'Out of Domain', 'Train DF Test NW'], ['Event Detection', 'In Domain', 'Train DF Test DF'], ['Event Detection', 'Out of Domain', 'Train NW Test DF'], ['Event Co-reference', 'In Domain', 'Train NW Test NW'], ['Event Co-reference', 'Out of Domain'...
column_header_level: 1
column_headers: [['MSEP'], ['Supervised']]
contents: [['58.5', '63.7'], ['55.1', '54.8'], ['57.9', '62.6'], ['52.8', '52.3'], ['73.2', '73.6'], ['71', '70.1'], ['68.6', '68.9'], ['67.9', '67']]
metrics_loc: column
metrics_type: ['F1', 'F1']
target_entity: ['MSEP']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MSEP</th> <th>Supervised</th> </tr> </thead> <tbody> <tr> <td>Event Detection || In Domain || Train NW Test NW</td> <td>58.5</td> <td>63.7</td> </tr> <tr> <td>Event Detection |...
table_name: Table 7
table_id: table_7
paper_id: D16-1038
page_no: 9
dir: emnlp2016
description: 4.7 Domain Transfer Evaluation. To demonstrate the superiority of the adaptation capabilities of the proposed MSEP system, we test its performance on new domains and compare with the supervised system. TAC-KBP corpus contains two genres: newswire (NW) and discussion forum (DF), and they have roughly equal number of doc...
class_sentence: [2, 2, 2, 1, 1, 1, 2]
sentences: ['4.7 Domain Transfer Evaluation.', 'To demonstrate the superiority of the adaptation capabilities of the proposed MSEP system, we test its performance on new domains and compare with the supervised system.', 'TAC-KBP corpus contains two genres: newswire (NW) and discussion forum (DF), and they have roughly equal numbe...
header_mention: [None, ['MSEP', 'Supervised'], None, ['Train NW Test DF'], ['MSEP'], ['MSEP', 'Supervised', 'Out of Domain', 'Event Detection', 'Event Co-reference'], None]
valid: 1
table_id_paper: D16-1039table_2
caption: Performance results for the BLESS and ENTAILMENT datasets.
row_header_level: 4
row_headers: [['Model', 'SVM+Yu', 'Dataset', 'BLESS'], ['Model', 'SVM+Word2Vecshort', 'Dataset', 'BLESS'], ['Model', 'SVM+Word2Vec', 'Dataset', 'BLESS'], ['Model', 'SVM+Ourshort', 'Dataset', 'BLESS'], ['Model', 'SVM+Our', 'Dataset', 'BLESS'], ['Model', 'SVM+Yu', 'Dataset', 'ENTAIL'], ['Model', 'SVM+Word2Vecshort', 'Dataset', 'ENTAI...
column_header_level: 1
column_headers: [['Accuracy']]
contents: [['90.4%'], ['83.8%'], ['84.0%'], ['91.1%'], ['93.6%'], ['87.5%'], ['82.8%'], ['83.3%'], ['88.2%'], ['91.7%']]
metrics_loc: column
metrics_type: ['accuracy']
target_entity: ['SVM+Our']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || SVM+Yu || Dataset || BLESS</td> <td>90.4%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Dataset || BLESS</td> <td>83.8%</t...
table_name: Table 2
table_id: table_2
paper_id: D16-1039
page_no: 7
dir: emnlp2016
description: Table 2 shows the performance of the three supervised models in Experiment 1. Our approach achieves significantly better performance than Yu’s method and Word2Vec method in terms of accuracy (t-test, p-value < 0.05) for both BLESS and ENTAILMENT datasets. Specifically, our approach improves the average accuracy by 4% c...
class_sentence: [1, 1, 1, 1, 1, 2, 1, 2, 2]
sentences: ['Table 2 shows the performance of the three supervised models in Experiment 1.', 'Our approach achieves significantly better performance than Yu’s method and Word2Vec method in terms of accuracy (t-test, p-value < 0.05) for both BLESS and ENTAILMENT datasets.', 'Specifically, our approach improves the average accuracy...
header_mention: [None, ['SVM+Ourshort', 'SVM+Our', 'BLESS', 'ENTAIL', 'Accuracy'], ['SVM+Ourshort', 'SVM+Our', 'SVM+Yu', 'SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Ourshort', 'SVM+Our', 'SVM+Yu'], ['SVM+Yu'], ['SVM+Ourshort', 'SVM+Our'], ['SVM+Word2Vecshort', 'SVM+Word2Vec'], ['SVM+Word2Vecshort...
valid: 1
table_id_paper: D16-1039table_3
caption: Performance results for the general domain datasets when using one domain for training and another domain for testing.
row_header_level: 6
row_headers: [['Model', 'SVM+Yu', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Word2Vecshort', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Word2Vec', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Ourshort', 'Training', 'BLESS', 'Testing', 'ENTAIL'], ['Model', 'SVM+Our', 'Training', 'BLESS', 'T...
column_header_level: 1
column_headers: [['Accuracy']]
contents: [['83.7%'], ['76.5%'], ['77.1%'], ['85.8%'], ['89.4%'], ['87.1%'], ['78.0%'], ['78.9%'], ['87.1%'], ['90.6%']]
metrics_loc: column
metrics_type: ['accuracy']
target_entity: ['SVM+Our']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || SVM+Yu || Training || BLESS || Testing || ENTAIL</td> <td>83.7%</td> </tr> <tr> <td>Model || SVM+Word2Vecshort || Training || BLESS...
table_name: Table 3
table_id: table_3
paper_id: D16-1039
page_no: 7
dir: emnlp2016
description: Experiment 2. This experiment aims to evaluate the generalization capability of our extracted term embeddings. In the experiment, we train the classifier on the BLESS dataset, test it on the ENTAILMENT dataset and vice versa. Similarly, we exclude from the training set any pair of terms that has one term appearing in t...
class_sentence: [2, 2, 2, 2, 1, 1]
sentences: ['Experiment 2.', 'This experiment aims to evaluate the generalization capability of our extracted term embeddings.', 'In the experiment, we train the classifier on the BLESS dataset, test it on the ENTAILMENT dataset and vice versa.', 'Similarly, we exclude from the training set any pair of terms that has one term app...
header_mention: [None, None, ['BLESS', 'ENTAIL'], None, ['SVM+Our', 'Model'], ['SVM+Our']]
valid: 1
table_id_paper: D16-1043table_5
caption: Performance on common coverage subsets of the datasets (MEN* and SimLex*).
row_header_level: 3
row_headers: [['Source', 'Wikipedia', 'Text'], ['Source', 'Google', 'Visual'], ['Source', 'Google', 'MM'], ['Source', 'Bing', 'Visual'], ['Source', 'Bing', 'MM'], ['Source', 'Flickr', 'Visual'], ['Source', 'Flickr', 'MM'], ['Source', 'ImageNet', 'Visual'], ['Source', 'ImageNet', 'MM'], ['Source', 'ESPGame', 'Visual'], ['Source', 'E...
column_header_level: 6
column_headers: [['Arch.', 'AlexNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'AlexNet', 'Agg.', 'Mean', 'Type/Eval', 'MEN'], ['Arch.', 'AlexNet', 'Agg.', 'Max', 'Type/Eval', 'SL'], ['Arch.', 'AlexNet', 'Agg.', 'Max', 'Type/Eval', 'MEN'], ['Arch.', 'GoogLeNet', 'Agg.', 'Mean', 'Type/Eval', 'SL'], ['Arch.', 'GoogLeNet', 'Agg.', '...
contents: [['0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654', '0.248', '0.654'], ['0.406', '0.549', '0.402', '0.552', '0.420', '0.570', '0.434', '0.579', '0.430', '0.576', '0.406', '0.560'], ['0.366', '0.691', '0.344', '0.693', '0.366', '0.701', '0.342', '0.699', '0.378', '0.701', '0.341',...
metrics_loc: column
metrics_type: ['similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity']
target_entity: ['VGGNet']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Arch. || AlexNet || Agg. || Mean || Type/Eval || SL</th> <th>Arch. || AlexNet || Agg. || Mean || Type/Eval || MEN</th> <th>Arch. || AlexNet || Agg. || Max || Type/Eval || SL</th> <th>Arch. || AlexNet ...
table_name: Table 5
table_id: table_5
paper_id: D16-1043
page_no: 6
dir: emnlp2016
description: 5.2 Common subset comparison. Table 5 shows the results on the common subset of the evaluation datasets, where all word pairs have images in each of the data sources. First, note the same patterns as before: multi-modal representations perform better than linguistic ones. Even for the poorly performing ESP Game dataset...
class_sentence: [2, 1, 1, 1, 1, 1, 1, 1]
sentences: ['5.2 Common subset comparison.', 'Table 5 shows the results on the common subset of the evaluation datasets, where all word pairs have images in each of the data sources.', 'First, note the same patterns as before: multi-modal representations perform better than linguistic ones.', 'Even for the poorly performing ESP G...
header_mention: [None, None, None, ['ESPGame', 'VGGNet', 'SL', 'MEN'], ['Google', 'Bing', 'Flickr', 'ImageNet', 'ESPGame'], None, None, ['VGGNet', 'ImageNet', 'Google', 'Bing', 'Flickr']]
valid: 1
table_id_paper: D16-1044table_1
caption: Comparison of multimodal pooling methods. Models are trained on the VQA train split and tested on test-dev.
row_header_level: 2
row_headers: [['Method', 'Element-wise Sum'], ['Method', 'Concatenation'], ['Method', 'Concatenation + FC'], ['Method', 'Concatenation + FC + FC'], ['Method', 'Element-wise Product'], ['Method', 'Element-wise Product + FC'], ['Method', 'Element-wise Product + FC + FC'], ['Method', 'MCB (2048 × 2048 → 16K)'], ['Method', 'Full Biline...
column_header_level: 1
column_headers: [['Accuracy']]
contents: [['56.50'], ['57.49'], ['58.40'], ['57.10'], ['58.57'], ['56.44'], ['57.88'], ['59.83'], ['58.46'], ['58.69'], ['55.97'], ['57.05'], ['58.36'], ['62.50']]
metrics_loc: column
metrics_type: ['accuracy']
target_entity: ['MCB (2048 × 2048 → 16K)', 'MCB (128 × 128 → 4K)', 'MCB (d = 16K) with VGG-19', 'MCB (d = 16K) with Attention']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Method || Element-wise Sum</td> <td>56.50</td> </tr> <tr> <td>Method || Concatenation</td> <td>57.49</td> </tr> <tr> <td>Met...
table_name: Table 1
table_id: table_1
paper_id: D16-1044
page_no: 6
dir: emnlp2016
description: 4.3 Ablation Results. We compare the performance of non-bilinear and bilinear pooling methods in Table 1. We see that MCB pooling outperforms all non-bilinear pooling methods, such as eltwise sum, concatenation, and eltwise product. One could argue that the compact bilinear method simply has more parameters than the no...
class_sentence: [2, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 1]
sentences: ['4.3 Ablation Results.', 'We compare the performance of non-bilinear and bilinear pooling methods in Table 1.', 'We see that MCB pooling outperforms all non-bilinear pooling methods, such as eltwise sum, concatenation, and eltwise product.', 'One could argue that the compact bilinear method simply has more parameters ...
header_mention: [None, None, ['MCB (2048 × 2048 → 16K)', 'MCB (128 × 128 → 4K)', 'MCB (d = 16K) with VGG-19', 'MCB (d = 16K) with Attention', 'Method'], None, None, ['MCB (2048 × 2048 → 16K)'], ['MCB (2048 × 2048 → 16K)'], ['MCB (2048 × 2048 → 16K)'], None, ['MCB (d = 16K) with VGG-19'], ['MCB (d = 16K) with Attention'], ['MCB (d = 16...
valid: 1
table_id_paper: D16-1045table_1
caption: Overall Synthetic Data Results. Aand Bdenote an aggressive and a balanced approaches, respectively. Acc. (std) is the average and the standard deviation of the accuracy across 10 test sets. # Wins is the number of test sets on which the SWVP algorithm outperforms CSP. Gener. is the number of times the best β hyper-para...
row_header_level: 2
row_headers: [['Model', 'B-WM'], ['Model', 'B-WMR'], ['Model', 'A-WM'], ['Model', 'A-WMR'], ['Model', 'CSP']]
column_header_level: 2
column_headers: [['simple(++), learnable(+++)', 'Acc. (std)'], ['simple(++), learnable(+++)', '# Wins'], ['simple(++), learnable(+++)', 'Gener.'], ['simple(++), learnable(++)', 'Acc. (std)'], ['simple(++), learnable(++)', '# Wins'], ['simple(++), learnable(++)', 'Gener.'], ['simple(+), learnable(+)', 'Acc. (std)'], ['simple(+), learna...
contents: [['75.47(3.05)', '9/10', '10/10', '63.18 (1.32)', '9/10', '10/10', '28.48 (1.9)', '5/10', '10/10'], ['75.96 (2.42)', '8/10', '10/10', '63.02 (2.49)', '9/10', '10/10', '24.31 (5.2)', '4/10', '10/10'], ['74.18 (2.16)', '7/10', '10/10', '61.65 (2.30)', '9/10', '10/10', '30.45 (1.0)', '6/10', '10/10'], ['75.17 (3.07)', '7/...
metrics_loc: column
metrics_type: ['Acc. (std)', '# Wins', 'Gener.', 'Acc. (std)', '# Wins', 'Gener.', 'Acc. (std)', '# Wins', 'Gener.']
target_entity: ['B-WM', 'B-WMR', 'A-WM', 'A-WMR']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>simple(++), learnable(+++) || Acc. (std)</th> <th>simple(++), learnable(+++) || # Wins</th> <th>simple(++), learnable(+++) || Gener.</th> <th>simple(++), learnable(++) || Acc. (std)</th> <th>simp...
table_name: Table 1
table_id: table_1
paper_id: D16-1045
page_no: 8
dir: emnlp2016
description: Synthetic Data. Table 1 presents our results. In all three setups an SWVP algorithm is superior. Averaged accuracy differences between the best performing algorithms and CSP are: 3.72 (B-WMR, (simple(++), learnable(+++))), 5.29 (B-WM, (simple(++), learnable(++))) and 5.18 (A-WM, (simple(+), learnable(+))). In all setup...
class_sentence: [2, 1, 1, 1, 1, 1, 1]
sentences: ['Synthetic Data.', 'Table 1 presents our results.', 'In all three setups an SWVP algorithm is superior.', 'Averaged accuracy differences between the best performing algorithms and CSP are: 3.72 (B-WMR, (simple(++), learnable(+++))), 5.29 (B-WM, (simple(++), learnable(++))) and 5.18 (A-WM, (simple(+), learnable(+))).',...
header_mention: [None, None, ['B-WM', 'B-WMR', 'A-WM', 'A-WMR'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR', 'CSP'], ['CSP'], ['B-WM', 'B-WMR', 'A-WM', 'A-WMR', 'CSP']]
valid: 1
table_id_paper: D16-1048table_2
caption: The performance of cross-lingually similarized Chinese dependency grammars with different configurations.
row_header_level: 2
row_headers: [['Grammar', 'baseline'], ['Grammar', 'proj : fixed'], ['Grammar', 'proj : proj'], ['Grammar', 'proj : nonproj'], ['Grammar', 'nonproj : fixed'], ['Grammar', 'nonproj : proj'], ['Grammar', 'nonproj : nonproj']]
column_header_level: 1
column_headers: [['Similarity (%)'], ['Dep. P (%)'], ['Ada. P (%)'], ['BLEU-4 (%)']]
contents: [['34.2', '84.5', '84.5', '24.6'], ['46.3', '54.1', '82.3', '25.8 (+1.2)'], ['63.2', '72.2', '84.6', '26.1 (+1.5)'], ['64.3', '74.6', '84.7', '26.2 (+1.6)'], ['48.4', '56.1', '82.6', '20.1 (−4.5)'], ['63.6', '71.4', '84.4', '22.9 (−1.7)'], ['64.1', '73.9', '84.9', '20.7 (−3.9)']]
metrics_loc: column
metrics_type: ['Similarity (%)', 'Dep. P (%)', 'Ada. P (%)', 'BLEU-4 (%)']
target_entity: ['Grammar']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Similarity (%)</th> <th>Dep. P (%)</th> <th>Ada. P (%)</th> <th>BLEU-4 (%)</th> </tr> </thead> <tbody> <tr> <td>Grammar || baseline</td> <td>34.2</td> <td>84.5</td> <td>84....
table_name: Table 2
table_id: table_2
paper_id: D16-1048
page_no: 8
dir: emnlp2016
description: 5.2.2 Selection of Searching Modes. With the hyper-parameters given by the developing procedures, cross-lingual similarization is conducted on the whole FBIS dataset. All the searching mode configurations are tried and 6 pairs of grammars are generated. For each of the 6 Chinese dependency grammars, we also give the th...
class_sentence: [0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 1, 1, 2, 0]
sentences: ['5.2.2 Selection of Searching Modes.', 'With the hyper-parameters given by the developing procedures, cross-lingual similarization is conducted on the whole FBIS dataset.', 'All the searching mode configurations are tried and 6 pairs of grammars are generated.', 'For each of the 6 Chinese dependency grammars, we also ...
header_mention: [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
valid: 1
table_id_paper: D16-1048table_3
caption: The performance of the cross-lingually similarized grammar on dependency tree-based translation, compared with related work.
row_header_level: 2
row_headers: [['System', '(Liu et al. 2006)'], ['System', '(Chiang 2007)'], ['System', '(Xie et al. 2011)'], ['System', 'Original Grammar'], ['System', 'Similarized Grammar']]
column_header_level: 1
column_headers: [['NIST 04'], ['NIST 05']]
contents: [['34.55', '31.94'], ['35.29', '33.22'], ['35.82', '33.62'], ['35.44', '33.08'], ['36.78', '35.12']]
metrics_loc: column
metrics_type: ['BLEU', 'BLEU']
target_entity: ['Original Grammar', 'Similarized Grammar']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NIST 04</th> <th>NIST 05</th> </tr> </thead> <tbody> <tr> <td>System || (Liu et al. 2006)</td> <td>34.55</td> <td>31.94</td> </tr> <tr> <td>System || (Chiang 2007)</td> <t...
table_name: Table 3
table_id: table_3
paper_id: D16-1048
page_no: 8
dir: emnlp2016
description: Table 3 shows the performance of the crosslingually similarized grammar on dependency treebased translation, compared with previous work (Xie et al., 2011). We also give the performance of constituency tree-based translation (Liu et al., 2006) and formal syntax-based translation (Chiang, 2007). The original grammar per...
class_sentence: [1, 2, 1, 1, 2]
sentences: ['Table 3 shows the performance of the crosslingually similarized grammar on dependency treebased translation, compared with previous work (Xie et al., 2011).', 'We also give the performance of constituency tree-based translation (Liu et al., 2006) and formal syntax-based translation (Chiang, 2007).', 'The original gra...
header_mention: [['Similarized Grammar', 'Original Grammar', '(Xie et al. 2011)'], ['(Liu et al. 2006)', '(Chiang 2007)'], ['Original Grammar', '(Xie et al. 2011)'], ['Similarized Grammar', 'Original Grammar', '(Xie et al. 2011)'], None]
valid: 1
D16-1050table_1
BLEU scores on the NIST Chinese-English translation task. AVG = average BLEU scores on test sets. We highlight the best results in bold for each test set. “↑/⇑”: significantly better than Moses (p < 0.05/p < 0.01); “+/++”: significantly better than GroundHog (p < 0.05/p < 0.01);
2
[['System', 'Moses'], ['System', 'GroundHog'], ['System', 'VNMT w/o KL'], ['System', 'VNMT']]
1
[['MT05'], ['MT02'], ['MT03'], ['MT04'], ['MT06'], ['MT08'], ['AVG']]
[['33.68', '34.19', '34.39', '35.34', '29.20', '22.94', '31.21'], ['31.38', '33.32', '32.59', '35.05', '29.80', '22.82', '30.72'], ['31.40', '33.50', '32.92', '34.95', '28.74', '22.07', '30.44'], ['32.25', '34.50++', '33.78++', '36.72⇑++', '30.92⇑++', '24.41↑++', '32.07']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['VNMT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT05</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT06</th> <th>MT08</th> <th>AVG</th> </tr> </thead> <tbody> <tr> <td>System || Moses</td> <td>33.68</td> <...
Table 1
table_1
D16-1050
6
emnlp2016
Table 1 summarizes the BLEU scores of different systems on the Chinese-English translation tasks. Clearly VNMT significantly improves translation quality in terms of BLEU on most cases, and obtains the best average results that gain 0.86 and 1.35 BLEU points over Moses and GroundHog respectively. Besides, without the K...
[1, 1, 1, 2]
['Table 1 summarizes the BLEU scores of different systems on the Chinese-English translation tasks.', 'Clearly VNMT significantly improves translation quality in terms of BLEU on most cases, and obtains the best average results that gain 0.86 and 1.35 BLEU points over Moses and GroundHog respectively.', 'Besides, witho...
[None, ['VNMT', 'Moses', 'GroundHog'], ['VNMT w/o KL', 'GroundHog'], None]
1
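The description of the record above quotes average-BLEU gains of 0.86 and 1.35 for VNMT over Moses and GroundHog. A minimal sketch recomputing the AVG column from the per-set scores in `contents`; the exclusion of the dev set MT05 from the average is our inference from the caption ("AVG = average BLEU scores on test sets"), not something the record states explicitly.

```python
# Recompute the AVG column of D16-1050 Table 1 from the per-test-set BLEU
# scores (MT02/03/04/06/08; MT05 is the dev set and assumed excluded).
scores = {
    "Moses":     [34.19, 34.39, 35.34, 29.20, 22.94],
    "GroundHog": [33.32, 32.59, 35.05, 29.80, 22.82],
    "VNMT":      [34.50, 33.78, 36.72, 30.92, 24.41],
}

def avg_bleu(vals):
    # plain arithmetic mean, rounded to 2 decimals as printed in the table
    return round(sum(vals) / len(vals), 2)

avg = {name: avg_bleu(v) for name, v in scores.items()}
gain_over_moses = round(avg["VNMT"] - avg["Moses"], 2)
gain_over_groundhog = round(avg["VNMT"] - avg["GroundHog"], 2)
```

The recomputed averages (32.07, 31.21, 30.72) match the table's AVG column, which supports the MT05-as-dev reading.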
D16-1051table_1
Alignment quality results for IBM2-HMM (2H) and its convex relaxation (2HC) using either HMM-style dynamic programming or “Joint” decoding. The first and last columns above are for the GIZA++ HMM initialized either with IBM Model 1 or Model 1 followed by Model 2. FA above refers to the improved IBM Model 2 (FastAlign) ...
2
[['Iteration', '1'], ['Iteration', '2'], ['Iteration', '3'], ['Iteration', '4'], ['Iteration', '5'], ['Iteration', '6'], ['Iteration', '7'], ['Iteration', '8'], ['Iteration', '9'], ['Iteration', '10']]
5
[['Training', '2H', 'Decoding', 'HMM', 'AER'], ['Training', '2H', 'Decoding', 'HMM', 'F-Measure'], ['Training', '2H', 'Decoding', 'Joint', 'AER'], ['Training', '2H', 'Decoding', 'Joint', 'F-Measure'], ['Training', '2HC', 'Decoding', 'HMM', 'AER'], ['Training', '2HC', 'Decoding', 'HMM', 'F-Measure'], ['Training', '2HC',...
[['0.0956', '0.7829', '0.1076', '0.7797', '0.1538', '0.7199', '0.1814', '0.6914', '0.5406', '0.2951', '0.1761', '0.7219'], ['0.0884', '0.7854', '0.0943', '0.7805', '0.1093', '0.7594', '0.1343', '0.733', '0.1625', '0.7111', '0.0873', '0.8039'], ['0.0844', '0.7899', '0.0916', '0.7806', '0.1023', '0.7651', '0.1234', '0.74...
column
['AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure', 'AER', 'F-Measure']
['HMM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training || 15210H || Decoding || HMM || AER</th> <th>Training || 15210H || Decoding || HMM || F-Measure</th> <th>Training || 15210H || Decoding || Joint || AER</th> <th>Training || 15210H || Decoding...
Table 1
table_1
D16-1051
9
emnlp2016
Table 1 shows the alignment summary statistics for the 447 sentences present in the Hansard test data. We present alignment quality scores for the FastAlign IBM Model 2, the GIZA++ HMM, and our model and its relaxation using either the “HMM” or “Joint” decoding. First, we note that in deciding the decoding st...
[1, 2, 1, 2, 1, 1]
['Table 1 shows the alignment summary statistics for the 447 sentences present in the Hansard test data.', 'We present alignment quality scores for the FastAlign IBM Model 2, the GIZA++ HMM, and our model and its relaxation using either the “HMM” or “Joint” decoding.', 'First, we note that in deciding the dec...
[None, ['Training', 'HMM', 'Joint', 'Decoding'], ['2H', 'HMM', 'Joint'], ['HMM'], ['HMM', '2H', '2HC'], ['2H', 'IBM2', 'AER', 'HMM', 'F-Measure', 'FA']]
1
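The AER and F-Measure columns in the record above are the standard alignment metrics over sure (S) and possible (P) gold links, with S a subset of P. A sketch with invented toy alignments; the F-measure here scores against sure links only, one common convention, which may differ in detail from the paper's setup.

```python
# Alignment Error Rate: AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|),
# where A is the predicted alignment, S the sure and P the possible gold links.
def aer(A, S, P):
    A, S, P = set(A), set(S), set(P)
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

# F-measure against the sure links (one common convention).
def f_measure(A, S):
    A, S = set(A), set(S)
    precision = len(A & S) / len(A)
    recall = len(A & S) / len(S)
    return 2 * precision * recall / (precision + recall)

# toy (source_index, target_index) link pairs
S = {(1, 1), (2, 2)}
P = S | {(3, 2)}              # sure links are always also possible
A = {(1, 1), (2, 2), (3, 3)}  # one spurious predicted link
```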
D16-1062table_3
Comparison of Fleiss’ κ scores with scores from SNLI quality control sentence pairs.
2
[['Fleiss’κ', 'Contradiction'], ['Fleiss’κ', 'Entailment'], ['Fleiss’κ', 'Neutral'], ['Fleiss’κ', 'Overall']]
1
[['4GS'], ['5GS'], ['Bowman et al. 2015']]
[['0.37', '0.59', '0.77'], ['0.48', '0.63', '0.72'], ['0.41', '0.54', '0.6'], ['0.43', '0.6', '0.7']]
column
['Fleiss’κ', 'Fleiss’κ', 'Fleiss’κ']
['4GS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>4GS</th> <th>5GS</th> <th>Bowman et al. 2015</th> </tr> </thead> <tbody> <tr> <td>Fleiss’κ || Contradiction</td> <td>0.37</td> <td>0.59</td> <td>0.77</td> </tr> <tr> ...
Table 3
table_3
D16-1062
6
emnlp2016
Table 3 shows that the level of agreement as measured by the Fleiss’κ score is much lower when the number of annotators is increased, particularly for the 4GS set of sentence pairs, as compared to scores noted in Bowman et al. (2015). The decrease in agreement is particularly large with regard to contradiction. This co...
[1, 1, 2, 2, 2]
['Table 3 shows that the level of agreement as measured by the Fleiss’κ score is much lower when the number of annotators is increased, particularly for the 4GS set of sentence pairs, as compared to scores noted in Bowman et al. (2015).', 'The decrease in agreement is particularly large with regard to contradiction.', ...
[['Fleiss’κ', '4GS', 'Bowman et al. 2015'], ['Contradiction'], None, None, None]
1
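Fleiss' κ, the agreement statistic this record compares across 4GS/5GS and Bowman et al. (2015), is computable from an items × categories count matrix. A minimal sketch with toy counts (five raters per item, as in a 5GS-style panel):

```python
# Fleiss' kappa: chance-corrected agreement for a fixed-size rater panel.
# ratings[i][c] = number of annotators who assigned category c to item i.
def fleiss_kappa(ratings):
    n_items = len(ratings)
    n_raters = sum(ratings[0])  # same panel size for every item
    total = n_items * n_raters
    # observed agreement: mean per-item pairwise agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # expected (chance) agreement from the marginal category proportions
    p_e = sum(
        (sum(row[c] for row in ratings) / total) ** 2
        for c in range(len(ratings[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

perfect = [[5, 0, 0], [0, 5, 0], [5, 0, 0]]  # unanimous panels -> kappa = 1
mixed = [[3, 2, 0], [0, 5, 0]]               # partial agreement
```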
D16-1062table_5
Theta scores and area under curve percentiles for LSTM trained on SNLI and tested on GSIRT . We also report the accuracy for the same LSTM tested on all SNLI quality control items (see Section 3.1). All performance is based on binary classification for each label.
4
[['Item', 'Set', '5GS', 'Entailment'], ['Item', 'Set', '5GS', 'Contradiction'], ['Item', 'Set', '5GS', 'Neutral'], ['Item', 'Set', '4GS', 'Contradiction'], ['Item', 'Set', '4GS', 'Neutral']]
1
[['Theta Score'], ['Percentile'], ['Test Acc.']]
[['-0.133', '44.83%', '96.5%'], ['1.539', '93.82%', '87.9%'], ['0.423', '66.28%', '88%'], ['1.777', '96.25%', '78.9%'], ['0.441', '67%', '83%']]
column
['Theta Score', 'Percentile', 'Test Acc.']
['4GS', '5GS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Theta Score</th> <th>Percentile</th> <th>Test Acc.</th> </tr> </thead> <tbody> <tr> <td>Item || Set || 5GS || Entailment</td> <td>-0.133</td> <td>44.83%</td> <td>96.5%</td> <...
Table 5
table_5
D16-1062
8
emnlp2016
The theta scores from IRT in Table 5 show that, compared to AMT users, the system performed well above average for contradiction items, and around the average for entailment and neutral items. For both the neutral and contradiction items, the theta scores are similar across the 4...
[1, 1, 2, 2, 2]
['The theta scores from IRT in Table 5 show that, compared to AMT users, the system performed well above average for contradiction items, and around the average for entailment and neutral items.', 'For both the neutral and contradiction items, the theta scores are similar across ...
[['Theta Score', 'Contradiction', 'Entailment', 'Neutral'], ['Neutral', 'Contradiction', 'Theta Score', '4GS', '5GS', 'Test Acc.'], None, ['Theta Score'], ['Theta Score', 'Test Acc.']]
1
D16-1063table_2
Performance of different rho functions on Text8 dataset with 17M tokens.
2
[['Task', 'Similarity'], ['Task', 'Analogy']]
2
[['Robi', '-'], ['ρ0', 'off'], ['ρ0', 'on'], ['ρ1', 'off'], ['ρ1', 'on'], ['ρ2', 'off'], ['ρ2', 'on'], ['ρ3', 'off'], ['ρ3', 'on']]
[['41.2', '69.0', '71.0', '66.7', '70.4', '66.8', '70.8', '68.1', '68.0'], ['22.7', '24.9', '31.9', '34.3', '44.5', '32.3', '40.4', '33.6', '42.9']]
column
['Robi', 'ρ0', 'ρ0', 'ρ1', 'ρ1', 'ρ2', 'ρ2', 'ρ3', 'ρ3']
['Similarity', 'Analogy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Robi || -</th> <th>ρ0 || off</th> <th>ρ0 || on</th> <th>ρ1 || off</th> <th>ρ1 || on</th> <th>ρ2 || off</th> <th>ρ2 || on</th> <th>ρ3 || off</th> <th>ρ3 || on</th> </tr> </...
Table 2
table_2
D16-1063
7
emnlp2016
It can be seen from Table 2 that adding the weight rw,c improves performance in all the cases, especially on the word analogy task. Among the four ρ functions, ρ0 performs the best on the word similarity task but suffers notably on the analogy task, while ρ1 = log performs the best overall. Given these observations, wh...
[1, 1, 2]
['It can be seen from Table 2 that adding the weight rw,c improves performance in all the cases, especially on the word analogy task.', 'Among the four ρ functions, ρ0 performs the best on the word similarity task but suffers notably on the analogy task, while ρ1 = log performs the best overall.', 'Given these observat...
[['Analogy', 'Similarity'], ['ρ0', 'ρ1', 'Similarity', 'Analogy'], ['ρ1']]
1
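The Analogy row in the D16-1063 record is the standard word-analogy benchmark; a common scoring rule for it is 3CosAdd, which answers a:b :: c:? with the vocabulary word closest in cosine to vec(b) - vec(a) + vec(c). A toy sketch with invented two-dimensional vectors, not the paper's trained embeddings:

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

# 3CosAdd: argmax over the vocabulary of cosine(vec(w), b - a + c),
# with the three query words excluded from the candidates.
def analogy(emb, a, b, c):
    target = [tb - ta + tc for ta, tb, tc in zip(emb[a], emb[b], emb[c])]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

# invented toy space: axis 0 = royalty, axis 1 = gender
emb = {
    "king":  [1.0, 1.0],
    "queen": [1.0, -1.0],
    "man":   [0.0, 1.0],
    "woman": [0.0, -1.0],
}
```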
D16-1065table_3
Comparison between our joint approaches and the pipelined counterparts.
4
[['Dataset', 'LDC2013E117', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2013E117', 'System', 'System 1'], ['Dataset', 'LDC2013E117', 'System', 'System 2'], ['Dataset', 'LDC2014T12', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2014T12', 'System', 'System 1'], ['Dataset', 'LDC2014T12', 'System', 'System 2']]
1
[['P'], ['R'], ['F1']]
[['0.67', '0.58', '0.62'], ['0.72', '0.65', '0.68'], ['0.73', '0.69', '0.71'], ['0.68', '0.59', '0.63'], ['0.74', '0.63', '0.68'], ['0.73', '0.68', '0.71']]
column
['P', 'R', 'F1']
['System 1', 'System 2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2013E117 || System || JAMR(fixed)</td> <td>0.67</td> <td>0.58</td> <td>0.62</td> </tr> <tr> ...
Table 3
table_3
D16-1065
8
emnlp2016
4.4 Joint Model vs. Pipelined Model. In this section, we compare the overall performance of our joint model to the pipelined model, JAMR. To give a fair comparison, we first implemented system 1 only using the same features (i.e., features 1–4 in Table 1) as JAMR for concept fragments. Table 3 gives the results on the...
[2, 2, 2, 1, 1, 2, 2, 1]
['4.4 Joint Model vs. Pipelined Model.', 'In this section, we compare the overall performance of our joint model to the pipelined model, JAMR.', 'To give a fair comparison, we first implemented system 1 only using the same features (i.e., features 1–4 in Table 1) as JAMR for concept fragments.', 'Table 3 gives the res...
[None, None, None, None, ['F1', 'System 1', 'JAMR (fixed)'], ['System 2'], None, ['System 2']]
1
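The F1 column in the record above is the harmonic mean of the printed P and R (up to 2-decimal rounding). A quick sketch verifying that for the LDC2013E117 rows:

```python
# F1 as the harmonic mean of precision and recall.
def f1(p, r):
    return 2 * p * r / (p + r)

# (P, R, F1) rows for LDC2013E117, copied from the record's contents
rows = [
    (0.67, 0.58, 0.62),  # JAMR (fixed)
    (0.72, 0.65, 0.68),  # System 1
    (0.73, 0.69, 0.71),  # System 2
]
consistent = all(round(f1(p, r), 2) == f for p, r, f in rows)
```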
D16-1065table_4
Final results of various methods.
4
[['Dataset', 'LDC2013E117', 'System', 'CAMR*'], ['Dataset', 'LDC2013E117', 'System', 'CAMR'], ['Dataset', 'LDC2013E117', 'System', 'Our approach'], ['Dataset', 'LDC2014T12', 'System', 'CAMR*'], ['Dataset', 'LDC2014T12', 'System', 'CAMR'], ['Dataset', 'LDC2014T12', 'System', 'CCG-based'], ['Dataset', 'LDC2014T12', 'Syst...
1
[['P'], ['R'], ['F1']]
[['.69', '.67', '.68'], ['.71', '.69', '.70'], ['.73', '.69', '.71'], ['.70', '.66', '.68'], ['.72', '.67', '.70'], ['.67', '.66', '.66'], ['.73', '.68', '.71']]
column
['P', 'R', 'F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2013E117 || System || CAMR*</td> <td>.69</td> <td>.67</td> <td>.68</td> </tr> <tr> <td>Dat...
Table 4
table_4
D16-1065
8
emnlp2016
We give a comparison between our approach and other state-of-the-art AMR parsers, including CCG-based parser (Artzi et al., 2015) and dependency-based parser (Wang et al., 2015b). For comparison purposes, we give two results from two different versions of dependency-based AMR parser: CAMR* and CAMR. Compared to the latte...
[2, 2, 2, 1]
['We give a comparison between our approach and other state-of-the-art AMR parsers, including CCG-based parser (Artzi et al., 2015) and dependency-based parser (Wang et al., 2015b).', 'For comparison purposes, we give two results from two different versions of dependency-based AMR parser: CAMR* and CAMR.', 'Compared to ...
[None, ['CAMR*', 'CAMR'], None, ['Our approach', 'System']]
1
D16-1065table_5
Final results on the full LDC2014T12 dataset.
4
[['Dataset', 'LDC2014T12', 'System', 'JAMR (fixed)'], ['Dataset', 'LDC2014T12', 'System', 'CAMR*'], ['Dataset', 'LDC2014T12', 'System', 'CAMR'], ['Dataset', 'LDC2014T12', 'System', 'SMBT-based'], ['Dataset', 'LDC2014T12', 'System', 'Our approach']]
1
[['P'], ['R'], ['F1']]
[['.64', '.53', '.58'], ['.68', '.60', '.64'], ['.70', '.62', '.66'], ['-', '-', '.67'], ['.70', '.62', '.66']]
column
['P', 'R', 'F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || LDC2014T12 || System || JAMR (fixed)</td> <td>.64</td> <td>.53</td> <td>.58</td> </tr> <tr> <...
Table 5
table_5
D16-1065
8
emnlp2016
We also evaluate our parser on the full LDC2014T12 dataset. We use the training/development/test split recommended in the release: 10,312 sentences for training, 1368 sentences for development and 1371 sentences for testing. For comparison, we include the results of JAMR, CAMR*, CAMR and SMBT-based parser (Pust et al.,...
[2, 2, 2, 1, 1]
['We also evaluate our parser on the full LDC2014T12 dataset.', 'We use the training/development/test split recommended in the release: 10,312 sentences for training, 1368 sentences for development and 1371 sentences for testing.', 'For comparison, we include the results of JAMR, CAMR*, CAMR and SMBT-based parser (Pust...
[['LDC2014T12'], None, ['JAMR (fixed)', 'CAMR*', 'CAMR', 'SMBT-based', 'Our approach'], ['Our approach', 'CAMR*', 'CAMR'], ['Our approach', 'SMBT-based']]
1
D16-1068table_2
Per language UAS for the fully supervised setup. Model names are as in Table 1, ‘e’ stands for ensemble. Best results for each language and parsing model order are highlighted in bold.
2
[['language', 'swedish'], ['language', 'bulgarian'], ['language', 'chinese'], ['language', 'czech'], ['language', 'dutch'], ['language', 'japanese'], ['language', 'catalan'], ['language', 'english']]
2
[['First Order', 'TurboParser'], ['First Order', 'BGI-PP'], ['First Order', 'BGI-PP+i+b'], ['First Order', 'BGI-PP+i+b+e'], ['Second Order', 'TurboParser'], ['Second Order', 'BGI-PP'], ['Second Order', 'BGI-PP+i+b'], ['Second Order', 'BGI-PP+i+b+e']]
[['87.12', '86.35', '86.93', '87.12', '88.65', '86.14', '87.85', '89.29'], ['90.66', '90.22', '90.42', '90.66', '92.43', '89.73', '91.50', '92.58'], ['84.88', '83.89', '84.17', '84.17', '86.53', '81.33', '85.18', '86.59'], ['83.53', '83.46', '83.44', '83.44', '86.35', '84.91', '86.26', '87.50'], ['88.48', '88.56', '88....
column
['UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS']
['BGI-PP+i+b', 'BGI-PP+i+b+e']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>First Order || TurboParser</th> <th>First Order || BGI-PP</th> <th>First Order || BGI-PP+i+b</th> <th>First Order || BGI-PP+i+b+e</th> <th>Second Order || TurboParser</th> <th>Second Order |...
Table 2
table_2
D16-1068
8
emnlp2016
Table 2 complements our results, providing UAS values for each of the 8 languages participating in this setup. The UAS differences between BGI-PP+i+b and the TurboParser are (+0.24)-(-0.71) in first order parsing and (+0.18)-(-2.46) in second order parsing. In the latter case, combining these two models (BGI-PP+i+b+e) ...
[1, 1, 1]
['Table 2 complements our results, providing UAS values for each of the 8 languages participating in this setup.', 'The UAS differences between BGI-PP+i+b and the TurboParser are (+0.24)-(-0.71) in first order parsing and (+0.18)-(-2.46) in second order parsing.', 'In the latter case, combining these two models (BGI-PP...
[['language', 'First Order', 'Second Order'], ['BGI-PP+i+b', 'TurboParser', 'First Order', 'Second Order'], ['BGI-PP+i+b+e', 'TurboParser', 'language']]
1
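UAS, the metric reported throughout the record above, is simply the fraction of tokens that receive the correct head. A minimal sketch with an invented four-token sentence:

```python
# Unlabeled Attachment Score: percentage of tokens whose predicted head
# index equals the gold head index (0 denotes the artificial root).
def uas(gold_heads, pred_heads):
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return 100.0 * correct / len(gold_heads)

gold_heads = [2, 0, 2, 3]  # token i (1-based) attaches to gold_heads[i-1]
pred_heads = [2, 0, 2, 2]  # one wrong attachment
```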
D16-1071table_3
Word relation results. MRR per language and POS type for all models. unfiltered is the unfiltered nearest neighbor search space; filtered is the nearest neighbor search space that contains only one POS. ‡ (resp. †): significantly worse than LAMB (sign test, p < .01, resp. p < .05). Best unfiltered/filtered result per r...
4
[['lang', 'cz', 'POS', 'a'], ['lang', 'cz', 'POS', 'n'], ['lang', 'cz', 'POS', 'v'], ['lang', 'cz', 'POS', 'all'], ['lang', 'de', 'POS', 'a'], ['lang', 'de', 'POS', 'n'], ['lang', 'de', 'POS', 'v'], ['lang', 'de', 'POS', 'all'], ['lang', 'en', 'POS', 'a'], ['lang', 'en', 'POS', 'n'], ['lang', 'en', 'POS', 'v'], ['lang'...
3
[['unfiltered', 'form', 'real'], ['unfiltered', 'form', 'opt'], ['unfiltered', 'form', 'sum'], ['unfiltered', 'STEM', 'real'], ['unfiltered', 'STEM', 'opt'], ['unfiltered', 'STEM', 'sum'], ['unfiltered', '-', 'LAMB'], ['filtered', 'form', 'real'], ['filtered', 'form', 'opt'], ['filtered', 'form', 'sum'], ['filtered', '...
[['0.03', '0.04', '0.05', '0.02', '0.05', '0.05', '0.06', '0.03‡', '0.05†', '0.07', '0.04†', '0.08', '0.08', '0.09'], ['0.15‡', '0.21‡', '0.24‡', '0.18‡', '0.27‡', '0.26‡', '0.30', '0.17‡', '0.23‡', '0.26‡', '0.20‡', '0.29‡', '0.28‡', '0.32'], ['0.07‡', '0.13‡', '0.16†', '0.08‡', '0.14‡', '0.16‡', '0.18', '0.09‡', '0.1...
column
['MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR']
['LAMB']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>unfiltered || form || real</th> <th>unfiltered || form || opt</th> <th>unfiltered || form || sum</th> <th>unfiltered || STEM || real</th> <th>unfiltered || STEM || opt</th> <th>unfiltered ||...
Table 3
table_3
D16-1071
7
emnlp2016
Results. The MRR results in the left half of Table 3 (“unfiltered”) show that for all languages and for all POS, form real has the worst performance among the form models. This comes as no surprise since this model barely knows anything about word forms and lemmata. The form opt model improves these results based o...
[2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['Results.', 'The MRR results in the left half of Table 3 (“unfiltered”) show that for all languages and for all POS, form real has the worst performance among the form models.', 'This comes as no surprise since this model barely knows anything about word forms and lemmata.', 'The form opt model improves these resu...
[None, ['lang', 'POS', 'unfiltered'], None, ['form'], ['form', 'sum', 'opt'], ['lang'], ['form', 'sum', 'de'], ['de'], ['STEM'], ['lang', 'POS', 'STEM', 'sum', 'opt'], ['STEM'], ['es'], ['STEM'], ['LAMB', 'lang', 'POS'], ['LAMB'], ['cz'], ['LAMB', 'form', 'sum', 'opt', 'de'], ['LAMB']]
1
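MRR, the score filling this record's table, averages the reciprocal rank of the first correct item over all queries, contributing 0 when nothing relevant is retrieved. A sketch with invented rankings:

```python
# Mean Reciprocal Rank over a set of queries.
def mrr(rankings, gold):
    total = 0.0
    for query, ranked in rankings.items():
        for rank, item in enumerate(ranked, start=1):
            if item in gold[query]:
                total += 1.0 / rank
                break  # only the first correct hit counts
    return total / len(rankings)

rankings = {"q1": ["a", "b", "c"], "q2": ["x", "y", "z"], "q3": ["u", "v"]}
gold = {"q1": {"a"}, "q2": {"z"}, "q3": {"w"}}  # q3's answer is never found
```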
D16-1071table_5
Polarity classification results. Bold is best per language and column.
4
[['lang', 'cz', 'features', 'Brychcin et al. (2013)'], ['lang', 'cz', 'features', 'form'], ['lang', 'cz', 'features', 'STEM'], ['lang', 'cz', 'features', 'LAMB'], ['lang', 'en', 'features', 'Hagen et al. (2015)'], ['lang', 'en', 'features', 'form'], ['lang', 'en', 'features', 'STEM'], ['lang', 'en', 'features', 'LAMB']...
1
[['acc'], ['F1']]
[['-', '81.53'], ['80.86', '80.75'], ['81.51', '81.39'], ['81.21', '81.09'], ['-', '64.84'], ['66.78', '62.21'], ['66.95', '62.06'], ['67.49', '63.01']]
column
['acc', 'F1']
['LAMB']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>acc</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>lang || cz || features || Brychcin et al. (2013)</td> <td>-</td> <td>81.53</td> </tr> <tr> <td>lang || cz || features || fo...
Table 5
table_5
D16-1071
8
emnlp2016
Results. Table 5 lists the 10-fold cross-validation results (accuracy and macro F1) on the CSFD dataset. LAMB/STEM results are consistently better than form results. In our analysis, we found the following example for the benefit of normalization: “popis a název zajímavý a film je taková filmařská prasárna...
[2, 1, 1, 2, 2, 2, 2, 1, 1, 2, 1]
['Results.', 'Table 5 lists the 10-fold cross-validation results (accuracy and macro F1) on the CSFD dataset.', 'LAMB/STEM results are consistently better than form results.', 'In our analysis, we found the following example for the benefit of normalization: “popis a název zajímavý a film je taková filmařská prasárna...
[None, ['acc', 'F1'], ['LAMB', 'STEM'], None, ['LAMB'], ['form', 'LAMB'], ['form', 'LAMB'], ['LAMB', 'form', 'STEM'], ['LAMB', 'en'], None, ['Hagen et al. (2015)']]
1
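The polarity record above reports accuracy and macro-averaged F1. A sketch of both metrics over a multi-class label list (toy labels, not the CSFD data):

```python
# Accuracy and macro-F1 for aligned gold/predicted label lists.
def accuracy(gold, pred):
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    labels = sorted(set(gold) | set(pred))
    f1s = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)  # unweighted mean over classes

gold = ["pos", "pos", "neg", "neg"]
pred = ["pos", "neg", "neg", "neg"]
```

Macro averaging weights each class equally, which is why accuracy and macro F1 can diverge on skewed label distributions like the en column of the record.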
D16-1072table_2
POS tagging performance of online and offline pruning with different r and λ on CTB5 and PD.
5
[['Online Pruning', 'r', '2', 'λ', '0.98'], ['Online Pruning', 'r', '4', 'λ', '0.98'], ['Online Pruning', 'r', '8', 'λ', '0.98'], ['Online Pruning', 'r', '16', 'λ', '0.98'], ['Online Pruning', 'r', '8', 'λ', '0.90'], ['Online Pruning', 'r', '8', 'λ', '0.95'], ['Online Pruning', 'r', '8', 'λ', '0.99'], ['Online Pruning'...
2
[['Accuracy (%)', 'CTB5-dev'], ['Accuracy (%)', 'PD-dev'], ['#Tags (pruned)', 'CTB-side'], ['#Tags (pruned)', 'PD-side']]
[['94.25', '95.03', '2.0', '2.0'], ['95.06', '95.66', '3.9', '4.0'], ['95.14', '95.83', '6.3', '7.4'], ['95.12', '95.81', '7.8', '14.1'], ['95.15', '95.79', '3.7', '6.3'], ['95.13', '95.82', '5.1', '7.1'], ['95.15', '95.74', '7.4', '7.9'], ['95.15', '95.76', '8.0', '8.0'], ['94.95', '96.05', '4.1', '5.1'], ['95.15', '9...
column
['Accuracy (%)', 'Accuracy (%)', '#Tags (pruned)', '#Tags (pruned)']
['Online Pruning', 'Offline Pruning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-dev</th> <th>Accuracy (%) || PD-dev</th> <th>#Tags (pruned) || CTB-side</th> <th>#Tags (pruned) || PD-side</th> </tr> </thead> <tbody> <tr> <td>Online Pruning || r ||...
Table 2
table_2
D16-1072
5
emnlp2016
5 Experiments on POS Tagging. 5.1 Parameter Tuning. For both online and offline pruning, we need to decide the maximum number of single-side tag candidates r and the accumulative probability threshold λ for further truncating the candidates. Table 2 shows the tagging accuracies and the averaged numbers of single-side t...
[2, 2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
['5 Experiments on POS Tagging.', '5.1 Parameter Tuning.', 'For both online and offline pruning, we need to decide the maximum number of single-side tag candidates r and the accumulative probability threshold λ for further truncating the candidates.', 'Table 2 shows the tagging accuracies and the averaged numbers of si...
[None, None, ['Online Pruning', 'Offline Pruning', 'λ', 'r'], None, ['Online Pruning'], ['λ', 'r', 'CTB5-dev', 'PD-dev'], ['r'], ['λ', 'r'], ['λ'], ['r', 'λ'], ['r', 'λ'], ['λ'], ['r', 'λ', 'CTB5-dev'], ['r', 'λ'], ['r', 'λ', 'CTB-side', 'PD-side']]
1
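The record above tunes two pruning hyper-parameters: keep at most the r most probable single-side tag candidates, truncated further once their cumulative probability reaches λ. A sketch of that rule under our reading of the description; the function name and toy distribution are ours, not the paper's code:

```python
# Prune a tag distribution to at most r candidates, stopping early once the
# kept probability mass reaches the threshold lam (the paper's lambda).
def prune_tags(tag_probs, r, lam):
    ranked = sorted(tag_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for tag, prob in ranked[:r]:
        kept.append(tag)
        mass += prob
        if mass >= lam:
            break
    return kept

probs = {"NN": 0.60, "VV": 0.30, "JJ": 0.06, "AD": 0.04}
```

With r=8 and λ=0.95 this keeps three tags; tightening r to 2 caps the candidate list regardless of λ, mirroring how the table varies one knob at a time.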
D16-1072table_3
POS tagging performance of difference approaches on CTB5 and PD.
1
[['Coupled (Offline)'], ['Coupled (Online)'], ['Coupled (No Prune)'], ['Coupled (Relaxed)'], ['Guide-feature'], ['Baseline'], ['Li et al. (2012b)']]
2
[['Accuracy (%)', 'CTB5-test'], ['Accuracy (%)', 'PD-test'], ['Speed', 'Toks/Sec']]
[['94.83', '95.90', '246'], ['94.74', '95.95', '365'], ['94.58', '95.79', '3'], ['94.63', '95.87', '127'], ['94.35', '95.63', '584'], ['94.07', '95.82', '1573'], ['94.60', '—', '—']]
column
['Accuracy (%)', 'Accuracy (%)', 'Speed']
['Coupled (Offline)', 'Coupled (Online)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-test</th> <th>Accuracy (%) || PD-test</th> <th>Speed || Toks/Sec</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>94.83</td> <td>95.90</td> <...
Table 3
table_3
D16-1072
6
emnlp2016
5.2 Main Results. Table 3 summarizes the accuracies on the test data and the tagging speed during the test phase. “Coupled (No Prune)” refers to the coupled model with complete mapping in Li et al. (2015), which maps each one-side tag to all the other-side tags. “Coupled (Relaxed)” refers to the coupled model with relaxed...
[0, 1, 2, 2, 2, 1, 1, 1]
['5.2 Main Results.', 'Table 3 summarizes the accuracies on the test data and the tagging speed during the test phase.', '“Coupled (No Prune)” refers to the coupled model with complete mapping in Li et al. (2015), which maps each one-side tag to all the other-side tags.', '“Coupled (Relaxed)” refers to the coupled model w...
[None, None, ['Coupled (No Prune)'], ['Coupled (Relaxed)'], ['Li et al. (2012b)'], ['Coupled (Offline)', 'Coupled (Online)'], ['Coupled (Offline)', 'CTB5-test'], ['PD-test']]
1
D16-1072table_4
WS&POS tagging performance of online and offline pruning with different r and λ on CTB5 and PD.
5
[['Online Pruning', 'r', '8', 'λ', '1.00'], ['Online Pruning', 'r', '16', 'λ', '0.95'], ['Online Pruning', 'r', '16', 'λ', '0.99'], ['Online Pruning', 'r', '16', 'λ', '1.00'], ['Offline Pruning', 'r', '16', 'λ', '0.99']]
2
[['Accuracy (%)', 'CTB5-dev'], ['Accuracy (%)', 'PD-dev'], ['#Tags (pruned)', 'CTB-side'], ['#Tags (pruned)', 'PD-side']]
[['90.41', '89.91', '8.0', '8.0'], ['90.65', '90.22', '15.9', '16.0'], ['90.77', '90.49', '16.0', '16.0'], ['90.79', '90.49', '16.0', '16.0'], ['91.64', '91.92', '2.5', '3.5']]
column
['Accuracy (%)', 'Accuracy (%)', '#Tags (pruned)', '#Tags (pruned)']
['Online Pruning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || CTB5-dev</th> <th>Accuracy (%) || PD-dev</th> <th>#Tags (pruned) || CTB-side</th> <th>#Tags (pruned) || PD-side</th> </tr> </thead> <tbody> <tr> <td>Online Pruning || r ||...
Table 4
table_4
D16-1072
6
emnlp2016
Table 4 shows results for tuning r and λ. From the results, we can see that in the online pruning method, λ seems useless and r becomes the only threshold for pruning unlikely single-side tags. The accuracies are much inferior to those from the offline pruning approach. We believe that the accuracies can be further imp...
[1, 1, 1, 2, 1]
['Table 4 shows results for tuning r and λ.', 'From the results, we can see that in the online pruning method, λ seems useless and r becomes the only threshold for pruning unlikely single-side tags.', 'The accuracies are much inferior to those from the offline pruning approach.', 'We believe that the accuracies can be ...
[None, ['Online Pruning', 'λ'], ['Online Pruning', 'Offline Pruning'], ['r'], ['r', 'λ']]
1
D16-1072table_5
WS&POS tagging performance of difference approaches on CTB5 and PD.
1
[['Coupled (Offline)'], ['Coupled (Online)'], ['Guide-feature'], ['Baseline']]
2
[['F (%) on CTB5-test', 'Only WS'], ['F (%) on CTB5-test', 'Joint WS&POS'], ['F (%) on PD-test', 'Only WS'], ['F (%) on PD-test', 'Joint WS&POS'], ['Speed (Char/Sec)', '-']]
[['95.55', '90.58', '96.12', '92.44', '115'], ['94.94', '89.58', '95.60', '91.56', '26'], ['95.07', '89.79', '95.66', '91.61', '27'], ['94.88', '89.49', '96.28', '92.47', '119']]
column
['F (%) on CTB5-test', 'F (%) on CTB5-test', 'F (%) on PD-test', 'F (%) on PD-test', 'Speed (Char/Sec)']
['Coupled (Offline)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P/R/F (%) on CTB5-test || Only WS</th> <th>P/R/F (%) on CTB5-test || Joint WS&amp;POS</th> <th>P/R/F (%) on PD-test || Only WS</th> <th>P/R/F (%) on PD-test || Joint WS&amp;POS</th> <th>Speed || ...
Table 5
table_5
D16-1072
7
emnlp2016
6.2 Main Results. Table 5 summarizes the accuracies on the test data and the tagging speed (characters per second) during the test phase. “Coupled (No Prune)” is not tried due to the prohibitive tag set size in joint WS&POS tagging, and “Coupled (Relaxed)” is also skipped since it seems impossible to manually design re...
[2, 1, 2, 1, 1, 2]
['6.2 Main Results.', 'Table 5 summarizes the accuracies on the test data and the tagging speed (characters per second) during the test phase.', '“Coupled (No Prune)” is not tried due to the prohibitive tag set size in joint WS&POS tagging, and “Coupled (Relaxed)” is also skipped since it seems impossible to manually d...
[None, None, None, ['Speed (Char/Sec)'], ['F (%) on CTB5-test', 'Coupled (Offline)'], None]
1
D16-1072table_6
WS&POS tagging performance of difference approaches on CTB5X and PD.
1
[['Coupled (Offline)'], ['Guide-feature'], ['Baseline'], ['Sun and Wan (2012)'], ['Jiang et al. (2009)']]
2
[['F (%) on CTB5X-test', 'Only WS'], ['F (%) on CTB5X-test', 'Joint WS&POS']]
[['98.01', '94.39'], ['97.96', '94.06'], ['97.37', '93.23'], ['—', '94.36'], ['98.23', '94.03']]
column
['F (%) on CTB5X-test', 'F (%) on CTB5X-test']
['Coupled (Offline)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F (%) on CTB5X-test || Only WS</th> <th>F (%) on CTB5X-test || Joint WS&amp;POS</th> </tr> </thead> <tbody> <tr> <td>Coupled (Offline)</td> <td>98.01</td> <td>94.39</td> </tr> <tr> ...
Table 6
table_6
D16-1072
8
emnlp2016
6.4 Comparison with Previous Work. In order to compare with previous work, we also run our models on CTB5X and PD, where CTB5X adopts a different data split of CTB5 and is widely used in previous research on joint WS&POS tagging (Jiang et al., 2009; Sun and Wan, 2012). CTB5X-dev/test only contain 352/348 sentences resp...
[2, 2, 2, 1, 1, 1, 2]
['6.4 Comparison with Previous Work.', 'In order to compare with previous work, we also run our models on CTB5X and PD, where CTB5X adopts a different data split of CTB5 and is widely used in previous research on joint WS&POS tagging (Jiang et al., 2009; Sun and Wan, 2012).', 'CTB5X-dev/test only contain 352/348 senten...
[None, None, None, ['F (%) on CTB5X-test'], ['Coupled (Offline)', 'Guide-feature', 'Baseline', 'F (%) on CTB5X-test'], ['Jiang et al. (2009)', 'F (%) on CTB5X-test', 'Coupled (Offline)'], ['Sun and Wan (2012)']]
1
D16-1075table_3
Performance of various approaches on stream summarization on five topics.
1
[['Random'], ['NB'], ['B-HAC'], ['TaHBM'], ['Ge et al. (2015b)'], ['BINet-NodeRank'], ['BINet-AreaRank']]
2
[['sports', 'P@50'], ['sports', 'P@100'], ['politics', 'P@50'], ['politics', 'P@100'], ['disaster', 'P@50'], ['disaster', 'P@100'], ['military', 'P@50'], ['military', 'P@100'], ['comprehensive', 'P@50'], ['comprehensive', 'P@100']]
[['0.02', '0.08', '0', '0', '0.02', '0.04', '0', '0', '0.02', '0.03'], ['0.08', '0.12', '0.18', '0.19', '0.42', '0.36', '0.18', '0.17', '0.38', '0.31'], ['0.10', '0.13', '0.30', '0.26', '0.50', '0.47', '0.30', '0.22', '0.36', '0.32'], ['0.18', '0.15', '0.30', '0.29', '0.50', '0.43', '0.46', '0.36', '0.38', '0.33'], ['0...
column
['P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100', 'P@50', 'P@100']
['BINet-NodeRank', 'BINet-AreaRank']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>sports || P@50</th> <th>sports || P@100</th> <th>politics || P@50</th> <th>politics || P@100</th> <th>disaster || P@50</th> <th>disaster || P@100</th> <th>military || P@50</th> <th...
Table 3
table_3
D16-1075
7
emnlp2016
The results are shown in Table 3. It can be clearly observed that BINet-based approaches outperform baselines and perform comparably to the state-of-the-art model on generating the summaries on most topics: AreaRank achieves a significant improvement over the state-of-the-art model on sports and disasters, and perform...
[1, 1, 1, 1, 1, 1, 2, 2, 2]
['The results are shown in Table 3.', 'It can be clearly observed that BINet-based approaches outperform baselines and perform comparably to the state-of-the-art model on generating the summaries on most topics: AreaRank achieves a significant improvement over the state-of-the-art model on sports and disasters, and pe...
[None, ['BINet-NodeRank', 'BINet-AreaRank'], ['sports', 'politics', 'disaster', 'military', 'comprehensive'], ['BINet-AreaRank'], ['sports', 'politics'], ['BINet-AreaRank'], None, ['BINet-AreaRank'], ['BINet-NodeRank', 'BINet-AreaRank']]
1
D16-1078table_2
The performances on the Abstracts sub-corpus.
3
[['Speculation', 'Systems', 'Baseline'], ['Speculation', 'Systems', 'CNN_C'], ['Speculation', 'Systems', 'CNN_D'], ['Negation', 'Systems', 'Baseline'], ['Negation', 'Systems', 'CNN_C'], ['Negation', 'Systems', 'CNN_D']]
1
[['P (%)'], ['R (%)'], ['F1'], ['PCLB (%)'], ['PCRB (%)'], ['PCS (%)']]
[['94.71', '90.54', '92.56', '84.81', '85.11', '72.47'], ['95.95', '95.19', '95.56', '93.16', '91.50', '85.75'], ['92.25', '94.98', '93.55', '86.39', '84.50', '74.43'], ['85.46', '72.95', '78.63', '84.00', '58.29', '46.42'], ['85.10', '92.74', '89.64', '81.04', '87.73', '70.86'], ['89.49', '90.54', '89.91', '91.91', '8...
column
['P (%)', 'R (%)', 'F1', 'PCLB (%)', 'PCRB (%)', 'PCS (%)']
['CNN_C', 'CNN_D']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P (%)</th> <th>R (%)</th> <th>F1</th> <th>PCLB (%)</th> <th>PCRB (%)</th> <th>PCS (%)</th> </tr> </thead> <tbody> <tr> <td>Speculation || Systems || Baseline</td> <td>94.71...
Table 2
table_2
D16-1078
7
emnlp2016
4.3 Experimental Results on Abstracts. Table 2 summarizes the performances of scope detection on Abstracts. In Table 2, CNN_C and CNN_D refer the CNN-based model with constituency paths and dependency paths, respectively (the same below). It shows that our CNN-based models (both CNN_C and CNN_D) can achieve better perf...
[2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 0]
['4.3 Experimental Results on Abstracts.', 'Table 2 summarizes the performances of scope detection on Abstracts.', 'In Table 2, CNN_C and CNN_D refer the CNN-based model with constituency paths and dependency paths, respectively (the same below).', 'It shows that our CNN-based models (both CNN_C and CNN_D) can achieve ...
[None, None, ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'Baseline'], ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'Baseline'], ['CNN_C', 'CNN_D'], ['CNN_C', 'CNN_D', 'PCRB (%)'], ['PCS (%)'], None, None, ['Negation', 'CNN_C', 'CNN_D'], ['Negation', 'CNN_C', 'CNN_D'], None]
1
D16-1078table_4
Comparison of our CNN-based model with the state-of-the-art systems.
3
[['Spe', 'System', 'Morante (2009a)'], ['Spe', 'System', 'Özgür (2009)'], ['Spe', 'System', 'Velldal (2012)'], ['Spe', 'System', 'Zou (2013)'], ['Spe', 'System', 'Ours'], ['Neg', 'System', 'Morante (2008)'], ['Neg', 'System', 'Morante (2009b)'], ['Neg', 'System', 'Li (2010)'], ['Neg', 'System', 'Velldal (2012)'], ['Neg...
1
[['Abstracts'], ['Cli'], ['Papers']]
[['77.13', '60.59', '47.94'], ['79.89', 'N/A', '61.13'], ['79.56', '78.69', '75.15'], ['84.21', '72.92', '67.24'], ['85.75', '73.92', '59.82'], ['57.33', 'N/A', 'N/A'], ['73.36', '87.27', '50.26'], ['81.84', '89.79', '64.02'], ['74.35', '90.74', '70.21'], ['76.90', '85.31', '61.19'], ['77.14', '89.66', '55.32']]
column
['PCS', 'PCS', 'PCS']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Abstracts</th> <th>Cli</th> <th>Papers</th> </tr> </thead> <tbody> <tr> <td>Spe || System || Morante (2009a)</td> <td>77.13</td> <td>60.59</td> <td>47.94</td> </tr> <tr> ...
Table 4
table_4
D16-1078
9
emnlp2016
Table 4 compares our CNN-based models with the state-of-the-art systems. It shows that our CNNbased models can achieve higher PCSs (+1.54%) than those of the state-of-the-art systems for speculation scope detection and the second highest PCS for negation scope detection on Abstracts, and can ...
[1, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 0]
['Table 4 compares our CNN-based models with the state-of-the-art systems.', 'It shows that our CNNbased models can achieve higher PCSs (+1.54%) than those of the state-of-the-art systems for speculation scope detection and the second highest PCS for negation scope detection on Abstracts, and...
[['System'], ['Ours', 'Abstracts', 'Cli'], ['Abstracts', 'Cli'], ['Ours', 'System'], ['Cli'], ['Ours', 'Abstracts'], None, ['Li (2010)'], ['Velldal (2012)'], None, ['Ours'], None]
1
D16-1080table_4
Effects of embedding on performance. WEU, WENU, REU and RENU represent word embedding update, word embedding without update, random embedding update and random embedding without update respectively.
1
[['WEU'], ['WENU'], ['REU'], ['RENU']]
1
[['P'], ['R'], ['F1']]
[['80.74%', '81.19%', '80.97%'], ['74.10%', '69.30%', '71.62%'], ['79.01%', '79.75%', '79.38%'], ['78.16%', '64.55%', '70.70%']]
column
['P', 'R', 'F1']
['WEU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>WEU</td> <td>80.74%</td> <td>81.19%</td> <td>80.97%</td> </tr> <tr> <td>WENU</td> <td>74.10%</td> ...
Table 4
table_4
D16-1080
7
emnlp2016
Table 4 lists the effects of word embedding. We can see that the performance when updating the word embedding is better than when not updating, and the performance of word embedding is a little better than random word embedding. The main reason is that the vocabulary size is 147,377, but the number of words from tweets...
[1, 1, 2, 2, 2]
['Table 4 lists the effects of word embedding.', 'We can see that the performance when updating the word embedding is better than when not updating, and the performance of word embedding is a little better than random word embedding.', 'The main reason is that the vocabulary size is 147,377, but the number of words fro...
[None, ['WEU', 'WENU', 'REU', 'RENU'], None, None, None]
1
D16-1083table_3
Classification results across the behavioral features (BF), the reviewer embeddings (RE) , product embeddings (PE) and bigram of the review texts. Training uses balanced data (50:50). Testing uses two class distributions (C.D.): 50:50 (balanced) and Natural Distribution (N.D.). Improvements of our method are statistica...
3
[['Method', 'SPEAGLE+(80%)', '50.50.00'], ['Method', 'SPEAGLE+(80%)', 'N.D.'], ['Method', 'Mukherjee_BF', '50.50.00'], ['Method', 'Mukherjee_BF', 'N.D.'], ['Method', 'Mukherjee_BF+Bigram', '50.50.00'], ['Method', 'Mukherjee_BF+Bigram', 'N.D.'], ['Method', 'Ours_RE', '50.50.00'], ['Method', 'Ours_RE', 'N.D.'], ['Method'...
2
[['P', 'Hotel'], ['P', 'Restaurant'], ['R', 'Hotel'], ['R', 'Restaurant'], ['F1', 'Hotel'], ['F1', 'Restaurant'], ['A', 'Hotel'], ['A', 'Restaurant']]
[['75.7', '80.5', '83', '83.2', '79.1', '81.8', '81', '82.5'], ['26.5', '50.1', '56', '70.5', '36', '58.6', '80.4', '82'], ['82.4', '82.8', '85.2', '88.5', '83.7', '85.6', '83.8', '83.3'], ['41.4', '48.2', '84.6', '87.9', '55.6', '62.3', '82.4', '78.6'], ['82.8', '84.5', '86.9', '87.8', '84.8', '86.1', '85.1', '86.5'],...
column
['P', 'P', 'R', 'R', 'F1', 'F1', 'A', 'A']
['Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P || Hotel</th> <th>P || Restaurant</th> <th>R || Hotel</th> <th>R || Restaurant</th> <th>F1 || Hotel</th> <th>F1 || Restaurant</th> <th>A || Hotel</th> <th>A || Restaurant</th> ...
Table 3
table_3
D16-1083
7
emnlp2016
The compared results are shown in Table 3. We utilize our learnt embeddings of reviewers (Ours RE), both of reviewers’ embeddings and products’ embeddings (Ours RE+PE), respectively. Moreover, to perform fair comparison, like Mukherjee et al. (2013b), we add representations of the review text in classifier (Ours RE+PE+...
[1, 2, 2, 1, 1, 1, 2]
['The compared results are shown in Table 3.', 'We utilize our learnt embeddings of reviewers (Ours RE), both of reviewers’ embeddings and products’ embeddings (Ours RE+PE), respectively.', 'Moreover, to perform fair comparison, like Mukherjee et al. (2013b), we add representations of the review text in classifier (Our...
[None, ['Ours_RE', 'Ours_RE+PE'], ['Ours_RE+PE+Bigram'], ['Hotel', 'Restaurant', 'Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram'], ['Ours_RE+PE', 'Ours_RE+PE+Bigram'], ['Hotel', 'Restaurant', 'Ours_RE', 'Ours_RE+PE', 'Ours_RE+PE+Bigram'], None]
1
D16-1083table_4
SVM 5-fold CV classification results by dropping relations from our method utilizing RE+PE+Bigram. Both training and testing use balanced data (50:50). Differences in classification metrics for each dropped relation are statistically significant with p<0.01 based on paired t-test. of reviewers (RE) learnt by the tensor...
2
[['Dropped Relation', '1'], ['Dropped Relation', '2'], ['Dropped Relation', '3'], ['Dropped Relation', '4'], ['Dropped Relation', '5'], ['Dropped Relation', '6'], ['Dropped Relation', '7'], ['Dropped Relation', '8'], ['Dropped Relation', '9'], ['Dropped Relation', '10'], ['Dropped Relation', '11']]
2
[['Hotel', 'F1'], ['Hotel', 'A'], ['Restaurant', 'F1'], ['Restaurant', 'A']]
[['-2.1', '-2.0', '-2.0', '-3.1'], ['-2.3', '-2.1', '-1.9', '-2.9'], ['-3.9', '-4.0', '-4.0', '-6.3'], ['-3.7', '-3.5', '-3.6', '-5.5'], ['-3.5', '-3.6', '-2.8', '-4.5'], ['-2.5', '-2.5', '-3.4', '-5.2'], ['-3.2', '-3.2', '-3.3', '-5.0'], ['-2.8', '-2.6', '-3.0', '-4.6'], ['-4.0', '-3.7', '-3.7', '-5.4'], ['-2.2', '-2....
column
['F1', 'A', 'F1', 'A']
['Dropped Relation']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hotel || F1</th> <th>Hotel || A</th> <th>Restaurant || F1</th> <th>Restaurant || A</th> </tr> </thead> <tbody> <tr> <td>Dropped Relation || 1</td> <td>-2.1</td> <td>-2.0</td> ...
Table 4
table_4
D16-1083
7
emnlp2016
3.5 The Effects of Different Relations. We also drop relations of our method with a graceful degradation. Table 4 shows the performances of our method utilizing BF+PE+Bigram for hotel and restaurant domains. We found that dropping Relations 1, 2 and 10 results in a relatively gentle reduction (about 2.2%) in F1-score. ...
[2, 2, 1, 1, 2, 2, 1, 2]
['3.5 The Effects of Different Relations.', 'We also drop relations of our method with a graceful degradation.', 'Table 4 shows the performances of our method utilizing BF+PE+Bigram for hotel and restaurant domains.', 'We found that dropping Relations 1, 2 and 10 results in a relatively gentle reduction (about 2.2%) in...
[None, None, ['Hotel', 'Restaurant'], ['F1', '1', '2', '10'], None, None, None, None]
1
D16-1084table_4
Results for the unseen target stance detection development setup using BiCond, with single vs. separate embedding matrices for tweet and target and different initialisations
6
[['EmbIni', 'Random', 'NumMatr', 'Sing', 'Stance', 'FAVOR'], ['EmbIni', 'Random', 'NumMatr', 'Sing', 'Stance', 'AGAINST'], ['EmbIni', 'Random', 'NumMatr', 'Sing', 'Stance', 'Macro'], ['EmbIni', 'Random', 'NumMatr', 'Sep', 'Stance', 'FAVOR'], ['EmbIni', 'Random', 'NumMatr', 'Sep', 'Stance', 'AGAINST'], ['EmbIni', 'Rando...
1
[['P'], ['R'], ['F1']]
[['0.1982', '0.3846', '0.2616'], ['0.6263', '0.5929', '0.6092'], ['-', '-', '0.4354'], ['0.2278', '0.5043', '0.3138'], ['0.6706', '0.4300', '0.5240'], ['-', '-', '0.4189'], ['0.6000', '0.0513', '0.0945'], ['0.5761', '0.9440', '0.7155'], ['-', '-', '0.4050'], ['0.1429', '0.0342', '0.0552'], ['0.5707', '0.9033', '0.6995'...
column
['P', 'R', 'F1']
['PreFixed', 'PreCont']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>EmbIni || Random || NumMatr || Sing || Stance || FAVOR</td> <td>0.1982</td> <td>0.3846</td> <td>0.2616</td> </tr...
Table 4
table_4
D16-1084
6
emnlp2016
Pre-Training. Table 4 shows the effect of unsupervised pre-training of word embeddings with a word2vec skip-gram model, and furthermore, the results of sharing of these representations between the tweets and targets, on the development set. The first set of results is with a uniformly Random embedding initialisation in...
[2, 1, 1, 2, 1, 2, 2, 2]
['Pre-Training.', 'Table 4 shows the effect of unsupervised pre-training of word embeddings with a word2vec skip-gram model, and furthermore, the results of sharing of these representations between the tweets and targets, on the development set.', 'The first set of results is with a uniformly Random embedding initialis...
[None, None, ['Random'], ['PreFixed', 'PreCont'], ['Random', 'PreCont', 'PreFixed'], ['Sing', 'Sep'], ['Sing', 'Sep'], None]
1
D16-1084table_7
Stance Detection test results, compared against the state of the art. SVM-ngrams-comb and Majority baseline are reported in Mohammad et al. (2016), pkudblab in Wei et al. (2016), LitisMind in Zarrella and Marsh (2016), INF-UFRGS in Dias and Becker (2016)
4
[['Method', 'SVM-ngrams-comb (Unseen Target)', 'Stance', 'FAVOR'], ['Method', 'SVM-ngrams-comb (Unseen Target)', 'Stance', 'AGAINST'], ['Method', 'SVM-ngrams-comb (Unseen Target)', 'Stance', 'Macro'], ['Method', 'Majority baseline (Unseen Target)', 'Stance', 'FAVOR'], ['Method', 'Majority baseline (Unseen Target)', 'St...
1
[['F1']]
[['0.1842'], ['0.3845'], ['0.2843'], ['0.0'], ['0.5944'], ['0.2972'], ['0.3902'], ['0.5899'], ['0.4901']]
column
['F1']
['BiCond (Unseen Target)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || SVM-ngrams-comb (Unseen Target) || Stance || FAVOR</td> <td>0.1842</td> </tr> <tr> <td>Method || SVM-ngrams-comb (Unseen Target) || Stan...
Table 7
table_7
D16-1084
8
emnlp2016
Table 7 shows all our results, including those using the unseen target setup, compared against the state-of-the-art on the stance detection corpus. Table 7 further lists baselines reported by Mohammad et al. (2016), namely a majority class baseline (Majority baseline), and a method using 1 to 3-gram bag-of-word and cha...
[1, 1, 1, 2, 1, 1, 1]
['Table 7 shows all our results, including those using the unseen target setup, compared against the state-of-the-art on the stance detection corpus.', 'Table 7 further lists baselines reported by Mohammad et al. (2016), namely a majority class baseline (Majority baseline), and a method using 1 to 3-gram bag-of-word an...
[None, ['Majority baseline (Unseen Target)', 'SVM-ngrams-comb (Unseen Target)'], ['SVM-ngrams-comb (Unseen Target)', 'Majority baseline (Unseen Target)'], None, ['BiCond (Unseen Target)'], ['BiCond (Unseen Target)'], ['BiCond (Unseen Target)', 'Stance']]
1
D16-1088table_4
Event Recognition Performance Before/After Incorporating Subevents
1
[['(Huang and Riloff 2013)'], ['+Subevents']]
1
[['Recall'], ['Precision'], ['F1-score']]
[['71', '88', '79'], ['81', '83', '82']]
column
['Recall', 'Precision', 'F1-score']
['+Subevents']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Recall</th> <th>Precision</th> <th>F1-score</th> </tr> </thead> <tbody> <tr> <td>(Huang and Riloff 2013)</td> <td>71</td> <td>88</td> <td>79</td> </tr> <tr> <td>+Sube...
Table 4
table_4
D16-1088
5
emnlp2016
4 Evaluation. We show that our acquired subevent phrases are useful to discover articles that describe the main event and therefore improve event detection performance. For direct comparisons, we tested our subevents using the same test data and the same evaluation setting as the previous multi-faceted event recognitio...
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1]
['4 Evaluation.', 'We show that our acquired subevent phrases are useful to discover articles that describe the main event and therefore improve event detection performance.', 'For direct comparisons, we tested our subevents using the same test data and the same evaluation setting as the previous multi-faceted event re...
[None, None, ['(Huang and Riloff 2013)'], None, None, ['(Huang and Riloff 2013)'], ['+Subevents', '(Huang and Riloff 2013)'], None, ['(Huang and Riloff 2013)'], ['+Subevents'], ['F1-score', '+Subevents', 'Precision']]
1
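The Recall/Precision/F1-score columns in the record above are related by the standard harmonic-mean definition of F1. A minimal sketch in plain Python (function name is illustrative, not from the source):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as reported in the F1-score column."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The '+Subevents' row reports Precision=83, Recall=81; the rounded harmonic mean
# matches its F1-score field of 82.
print(round(f1_score(83, 81)))  # -> 82
```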
D16-1089table_1
Zero-shot recognition results on AWA (% accuracy).
2
[['Dataset', 'AWA']]
2
[['Vector space models', 'LinReg'], ['Vector space models', 'NLinReg'], ['Vector space models', 'CME'], ['Vector space models', 'ES-ZSL'], ['Ours', 'Gaussian']]
[['44.0', '48.4', '43.1', '58.2', '65.4']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours', 'Gaussian']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vector space models || LinReg</th> <th>Vector space models || NLinReg</th> <th>Vector space models || CME</th> <th>Vector space models || ES-ZSL</th> <th>Ours || Gaussian</th> </tr> </thead> ...
Table 1
table_1
D16-1089
4
emnlp2016
3.2 Results. Table 1 compares our results on the AWA benchmark against alternatives using the same visual features, and word vectors trained on the same corpus. We observe that: (i) Our Gaussian-embedding obtains the best performance overall. (ii) Our method outperforms CME which shares an objective function and optimi...
[2, 1, 1, 1, 1]
['3.2 Results.', 'Table 1 compares our results on the AWA benchmark against alternatives using the same visual features, and word vectors trained on the same corpus.', 'We observe that: (i) Our Gaussian-embedding obtains the best performance overall.', '(ii) Our method outperforms CME which shares an objective function...
[None, ['Ours', 'Gaussian', 'Vector space models'], ['Ours', 'Gaussian'], ['Ours', 'Gaussian', 'Vector space models'], ['Ours', 'Gaussian', 'Vector space models']]
1
D16-1096table_1
Single system results in terms of (TER-BLEU)/2 (the lower the better) on 5 million Chinese to English training set. NMT results are on a large vocabulary (300k) and with UNK replaced. UGRU : updating with a GRU; USub: updating as a subtraction; UGRU + USub: combination of two methods (do not share coverage embedding ve...
3
[['single system', '-', 'Tree-to-string'], ['single system', '-', 'LVNMT'], ['single system', 'Ours', 'UGRU'], ['single system', 'Ours', 'USub'], ['single system', 'Ours', 'UGRU+USub'], ['single system', 'Ours', '+Obj.']]
3
[['MT06', '-', 'BP'], ['MT06', '-', 'BLEU'], ['MT06', '-', 'T-B'], ['MT08', 'News', 'BP'], ['MT08', 'News', 'BLEU'], ['MT08', 'News', 'T-B'], ['MT08', 'Web', 'BP'], ['MT08', 'Web', 'BLEU'], ['MT08', 'Web', 'T-B'], ['avg.', '-', 'T-B']]
[['0.95', '34.93', '9.45', '0.94', '31.12', '12.90', '0.90', '23.45', '17.72', '13.36'], ['0.96', '34.53', '12.25', '0.93', '28.86', '17.40', '0.97', '26.78', '17.57', '15.74'], ['0.92', '35.59', '10.71', '0.89', '30.18', '15.33', '0.97', '27.48', '16.67', '14.24'], ['0.91', '35.90', '10.29', '0.88', '30.49', '15.23', ...
column
['BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'T-B']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06 || - || BP</th> <th>MT06 || - || BLEU</th> <th>MT06 || - || T-B</th> <th>MT08 || News || BP</th> <th>MT08 || News || BLEU</th> <th>MT08 || News || T-B</th> <th>MT08 || Web || BP</t...
Table 1
table_1
D16-1096
5
emnlp2016
5.2 Translation Results. Table 1 shows the results of all systems on 5 million training set. The traditional syntax-based system achieves 9.45, 12.90, and 17.72 on MT06, MT08 News, and MT08 Web sets respectively, and 13.36 on average in terms of (TERBLEU)/2. The large vocabulary NMT (LVNMT), our baseline, achieves an a...
[0, 1, 1, 1, 2, 1, 1, 1, 2]
['5.2 Translation Results.', 'Table 1 shows the results of all systems on 5 million training set.', 'The traditional syntax-based system achieves 9.45, 12.90, and 17.72 on MT06, MT08 News, and MT08 Web sets respectively, and 13.36 on average in terms of (TERBLEU)/2.', 'The large vocabulary NMT (LVNMT), our baseline, ac...
[None, None, ['MT06', 'T-B', 'MT08', 'News', 'Web', 'avg.'], ['LVNMT', 'avg.', 'T-B'], ['UGRU', 'USub', 'UGRU+USub', '+Obj.'], ['UGRU', 'LVNMT', 'avg.'], ['UGRU+USub', 'avg.', 'LVNMT'], ['LVNMT'], ['UGRU+USub']]
1
D16-1096table_2
Single system results in terms of (TER-BLEU)/2 on the 11 million set. NMT results are on a large vocabulary (500k) and with UNK replaced. Due to the time limitation, we only have the results of the UGRU system.
2
[['single system', 'Tree-to-string'], ['single system', 'LVNMT'], ['single system', 'UGRU']]
3
[['MT06', '-', 'BP'], ['MT06', '-', 'T-B'], ['MT08', 'News', 'BP'], ['MT08', 'News', 'T-B'], ['MT08', 'Web', 'BP'], ['MT08', 'Web', 'T-B'], ['avg.', '-', 'T-B']]
[['0.90', '8.70', '0.84', '12.65', '0.84', '17.00', '12.78'], ['0.96', '9.78', '0.94', '14.15', '0.97', '15.89', '13.27'], ['0.97', '8.62', '0.95', '12.79', '0.97', '15.34', '12.31']]
column
['BP', 'T-B', 'BP', 'T-B', 'BP', 'T-B', 'T-B']
['UGRU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06 || - || BP</th> <th>MT06 || - || T-B</th> <th>MT08 || News || BP</th> <th>MT08 || News || T-B</th> <th>MT08 || Web || BP</th> <th>MT08 || Web || T-B</th> <th>avg. || - || T-B</th> ...
Table 2
table_2
D16-1096
5
emnlp2016
Table 2 shows the results of 11 million systems, LVNMT achieves an average (TER-BLEU)/2 of 13.27, which is about 2.5 points better than 5 million LVNMT. The result of our UGRU coverage model gives almost 1 point gain over LVNMT. Those results suggest that the more training data we use, the stronger the baseline system ...
[1, 1, 2, 2]
['Table 2 shows the results of 11 million systems, LVNMT achieves an average (TER-BLEU)/2 of 13.27, which is about 2.5 points better than 5 million LVNMT.', 'The result of our UGRU coverage model gives almost 1 point gain over LVNMT.', 'Those results suggest that the more training data we use, the stronger the baseline...
[['LVNMT', 'avg.'], ['UGRU', 'LVNMT'], None, None]
1
D16-1099table_2
Average results for DSMs over four different frequency ranges for the items in the TOEFL, ESL, SL, MEN, and RW tests. All DSMs are trained on the 1 billion words data.
2
[['DSM', 'CO'], ['DSM', 'PPMI'], ['DSM', 'TSVD'], ['DSM', 'ISVD'], ['DSM', 'RI'], ['DSM', 'SGNS'], ['DSM', 'CBOW']]
1
[['HIGH'], ['MEDIUM'], ['LOW'], ['MIXED']]
[['32.61 (↑62.5,↓04.6)', '35.77 (↑66.6,↓21.2)', '12.57 (↑35.7,↓00.0)', '27.14 (↑56.6,↓07.9)'], ['55.51 (↑75.3,↓28.0)', '57.83 (↑88.8,↓18.7)', '25.84 (↑50.0,↓00.0)', '47.73 (↑83.3,↓27.1)'], ['50.52 (↑70.9,↓23.2)', '54.75 (↑77.9,↓24.1)', '17.85 (↑50.0,↓00.0)', '41.08 (↑56.6,↓19.6)'], ['63.31 (↑87.5,↓36.5)', '69.25 (↑88.8...
column
['correlation', 'correlation', 'correlation', 'correlation']
['DSM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HIGH</th> <th>MEDIUM</th> <th>LOW</th> <th>MIXED</th> </tr> </thead> <tbody> <tr> <td>DSM || CO</td> <td>32.61 (↑62.5,↓04.6)</td> <td>35.77 (↑66.6,↓21.2)</td> <td>12.57 (↑3...
Table 2
table_2
D16-1099
5
emnlp2016
Table 2 (next side) shows the average results over the different frequency ranges for the various DSMs trained on the 1 billion-word ukWaC data. We also include the highest and lowest individual test scores (signified by ↑ and ↓), in order to get an idea about the consistency of the results. As can be seen in the table...
[1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2]
['Table 2 (next side) shows the average results over the different frequency ranges for the various DSMs trained on the 1 billion-word ukWaC data.', 'We also include the highest and lowest individual test scores (signified by ↑ and ↓), in order to get an idea about the consistency of the results.', 'As can be seen in t...
[None, None, ['ISVD', 'MEDIUM', 'MIXED'], ['SGNS', 'CBOW', 'HIGH', 'LOW'], ['CBOW', 'SGNS'], None, ['PPMI', 'TSVD', 'RI', 'MEDIUM', 'LOW'], ['CO', 'HIGH', 'MEDIUM', 'LOW', 'MIXED'], ['CO', 'PPMI', 'TSVD', 'ISVD', 'MEDIUM', 'HIGH'], ['LOW'], ['LOW'], ['CBOW', 'PPMI', 'RI'], ['CBOW', 'PPMI'], ['ISVD', 'CO'], ['ISVD'], ['...
1
D16-1102table_2
Estimated precision and recall for Tamil, Bengali and Malayalam before and after non-expert curation. We list state-of-the-art results for German and Hindi for comparison.
4
[['LANG.', 'Bengali PROJECTED', 'Match', 'partial'], ['LANG.', 'Bengali PROJECTED', 'Match', 'exact'], ['LANG.', 'Bengali CURATED', 'Match', 'partial'], ['LANG.', 'Bengali CURATED', 'Match', 'exact'], ['LANG.', 'Malayalam PROJECTED', 'Match', 'partial'], ['LANG.', 'Malayalam PROJECTED', 'Match', 'exact'], ['LANG.', 'Ma...
2
[['PRED.', 'P'], ['ARGUMENT', 'P'], ['ARGUMENT', 'R'], ['ARGUMENT', 'F1'], ['ARGUMENT', '%Agree']]
[['1.0', '0.84', '0.68', '0.75', '0.67'], ['1.0', '0.83', '0.68', '0.75', '0.67'], ['1.0', '0.88', '0.69', '0.78', '0.67'], ['1.0', '0.87', '0.69', '0.77', '0.67'], ['0.99', '0.87', '0.65', '0.75', '0.65'], ['0.99', '0.79', '0.63', '0.7', '0.65'], ['0.99', '0.92', '0.69', '0.78', '0.65'], ['0.99', '0.84', '0.67', '0.74...
column
['P', 'P', 'R', 'F1', '%Agree']
['LANG.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PRED. || P</th> <th>ARGUMENT || P</th> <th>ARGUMENT || R</th> <th>ARGUMENT || F1</th> <th>ARGUMENT || %Agree</th> </tr> </thead> <tbody> <tr> <td>LANG. || Bengali PROJECTED || Match ...
Table 2
table_2
D16-1102
4
emnlp2016
4.2 Results. The evaluation results are listed in Table 2. For comparison, we include evaluation results reported for three high-resource languages: German and Chinese, representing average high-resource results, as well as Hindi, a below-average outlier. We make the following observations: Lower annotation projection ...
[2, 1, 2, 1, 1, 1, 1]
['4.2 Results.', 'The evaluation results are listed in Table 2.', 'For comparison, we include evaluation results reported for three high-resource languages: German and Chinese, representing average high-resource results, as well as Hindi, a below-average outlier.', 'We make the following observations: Lower annotation ...
[None, None, ['German (Akbik et al. 2015)', 'Chinese (Akbik et al. 2015)', 'Hindi (Akbik et al. 2015)'], None, ['Bengali PROJECTED', 'Malayalam PROJECTED', 'Tamil PROJECTED', 'German (Akbik et al. 2015)'], ['Bengali PROJECTED', 'Malayalam PROJECTED', 'Hindi (Akbik et al. 2015)'], None]
1
D16-1104table_2
Performance of unigrams versus our similarity-based features using embeddings from Word2Vec
3
[['Features', 'Baseline', 'Unigrams'], ['Features', 'Baseline', 'S'], ['Features', 'Baseline', 'WS'], ['Features', 'Baseline', 'Both']]
1
[['P'], ['R'], ['F']]
[['67.2', '78.8', '72.53'], ['64.6', '75.2', '69.49'], ['67.6', '51.2', '58.26'], ['67', '52.8', '59.05']]
column
['P', 'R', 'F']
['Features']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Features || Baseline || Unigrams</td> <td>67.2</td> <td>78.8</td> <td>72.53</td> </tr> <tr> <td>Features ...
Table 2
table_2
D16-1104
3
emnlp2016
6 Results. Table 2 shows performance of sarcasm detection when our word embedding-based features are used on their own i.e, not as augmented features. The embedding in this case is Word2Vec. The four rows show baseline sets of features: unigrams, unweighted similarity using word embeddings (S), weighted similarity usin...
[2, 1, 2, 2, 1, 2]
['6 Results.', 'Table 2 shows performance of sarcasm detection when our word embedding-based features are used on their own i.e, not as augmented features.', 'The embedding in this case is Word2Vec.', 'The four rows show baseline sets of features: unigrams, unweighted similarity using word embeddings (S), weighted simi...
[None, None, None, ['Features'], ['Unigrams', 'F', 'S', 'WS'], ['S', 'WS']]
1
D16-1104table_3
Performance obtained on augmenting word embedding features to features from four prior works, for four word embeddings; L: Liebrecht et al. (2013), G: González-Ibáñez et al. (2011a), B: Buschmeier et al. (2014), J: Joshi et al. (2015)
1
[['L'], ['+S'], ['+WS'], ['+S+WS'], ['G'], ['+S'], ['+WS'], ['+S+WS'], ['B'], ['+S'], ['+WS'], ['+S+WS'], ['J'], ['+S'], ['+WS'], ['+S+WS']]
2
[['LSA', 'P'], ['LSA', 'R'], ['LSA', 'F'], ['GloVe', 'P'], ['GloVe', 'R'], ['GloVe', 'F'], ['Dependency Weights', 'P'], ['Dependency Weights', 'R'], ['Dependency Weights', 'F'], ['Word2Vec', 'P'], ['Word2Vec', 'R'], ['Word2Vec', 'F']]
[['73', '79', '75.8', '73', '79', '75.8', '73', '79', '75.8', '73', '79', '75.8'], ['81.8', '78.2', '79.95', '81.8', '79.2', '80.47', '81.8', '78.8', '80.27', '80.4', '80', '80.2'], ['76.2', '79.8', '77.9', '76.2', '79.6', '77.86', '81.4', '80.8', '81.09', '80.8', '78.6', '79.68'], ['77.6', '79.8', '78.68', '74', '79.4...
column
['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F']
['LSA', 'GloVe', 'Dependency Weights', 'Word2Vec']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LSA || P</th> <th>LSA || R</th> <th>LSA || F</th> <th>GloVe || P</th> <th>GloVe || R</th> <th>GloVe || F</th> <th>Dependency Weights || P</th> <th>Dependency Weights || R</th> ...
Table 3
table_3
D16-1104
4
emnlp2016
Table 3 shows results for four kinds of word embeddings. All entries in the tables are higher than the simple unigrams baseline, i.e., F-score for each of the four is higher than unigrams - highlighting that these are better features for sarcasm detection than simple unigrams. Values in bold indicate the best F-score f...
[1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2]
['Table 3 shows results for four kinds of word embeddings.', 'All entries in the tables are higher than the simple unigrams baseline, i.e., F-score for each of the four is higher than unigrams - highlighting that these are better features for sarcasm detection than simple unigrams.', 'Values in bold indicate the best F...
[None, ['F'], ['F'], ['L', 'F', 'Word2Vec'], ['P'], ['G', 'Word2Vec'], ['B', 'Word2Vec', 'F'], ['P', 'R'], ['J', 'Word2Vec', '+S'], ['Word2Vec'], ['LSA', 'GloVe', 'Dependency Weights', 'Word2Vec'], ['L'], ['LSA', 'GloVe', 'Dependency Weights', 'Word2Vec'], None]
1
D16-1108table_4
Spearman rank correlation of thread ˜si,j with karma scores. (*) indicates statistical significance (p < 0.05). thread level, and in Table 5 for the user level. On the thread level, the hyb-500.30 style model consistently finds positive, statistically significant, correlation between the post’s stylistic similarity sco...
2
[['subreddit', 'askmen'], ['subreddit', 'askscience'], ['subreddit', 'askwomen'], ['subreddit', 'atheism'], ['subreddit', 'chgmyvw'], ['subreddit', 'fitness'], ['subreddit', 'politics'], ['subreddit', 'worldnews']]
1
[['hyb-500.30'], ['word only'], ['topic-100']]
[['0.392*', '0.222*', '0.055'], ['0.321*', '-0.110', '-0.166*'], ['0.501*', '0.388*', '0.005'], ['0.137*', '-0.229*', '-0.251'], ['0.167*', '-0.121*', '-0.306*'], ['0.130*', '0.017', '-0.313*'], ['0.533*', '0.341*', '0.011'], ['0.374*', '0.148*', '-0.277*']]
column
['correlation', 'correlation', 'correlation']
['hyb-500.30']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>hyb-500.30</th> <th>word only</th> <th>topic-100</th> </tr> </thead> <tbody> <tr> <td>subreddit || askmen</td> <td>0.392*</td> <td>0.222*</td> <td>0.055</td> </tr> <tr> ...
Table 4
table_4
D16-1108
5
emnlp2016
We compute a normalized community similarity score s˜i,j = si,j − si,m, where si,m is the corresponding score from the subreddit merged others. The correlation between s˜i,j and community feedback is reported for three models in Table 4 for the thread level, and in Table 5 for the user level. On the thread level, the h...
[2, 1, 1, 1, 1, 0, 0, 0]
['We compute a normalized community similarity score s˜i,j = si,j − si,m, where si,m is the corresponding score from the subreddit merged others.', 'The correlation between s˜i,j and community feedback is reported for three models in Table 4 for the thread level, and in Table 5 for the user level.', 'On the thread leve...
[None, None, ['hyb-500.30'], ['hyb-500.30'], None, None, None, None]
1
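The starred values in the record above are Spearman rank correlations of thread similarity scores with karma. For tie-free data the coefficient reduces to the closed form ρ = 1 − 6·Σd²/(n(n²−1)), where d is the per-item rank difference. A small self-contained sketch under that assumption (no tied ranks; names are illustrative):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation via the tie-free closed form
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(xs)
    rank = lambda vs: {v: i for i, v in enumerate(sorted(vs), start=1)}
    rx, ry = rank(xs), rank(ys)
    d2 = sum((rx[x] - ry[y]) ** 2 for x, y in zip(xs, ys))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# A perfectly monotone relationship gives rho = 1.0; a reversed one gives -1.0.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # -> 1.0
```

For data with ties, the usual implementation correlates average ranks instead (e.g. `scipy.stats.spearmanr`), which this sketch does not handle.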
D16-1122table_2
Event detection performance (nDCG; higher is better) using thirty-nine well-known events that took place between 1973 and 1978. Capsule outperforms all four baseline methods.
2
[['Method', 'Capsule (this paper)'], ['Method', 'term-count deviation + tf-idf (equation (7))'], ['Method', 'term-count deviation (equation (6))'], ['Method', 'random'], ['Method', '“event-only” Capsule (this paper)']]
1
[['nDCG']]
[['0.693'], ['0.652'], ['0.642'], ['0.557'], ['0.426']]
column
['nDCG']
['Capsule (this paper)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>nDCG</th> </tr> </thead> <tbody> <tr> <td>Method || Capsule (this paper)</td> <td>0.693</td> </tr> <tr> <td>Method || term-count deviation + tf-idf (equation (7))</td> <td>0.652</td...
Table 2
table_2
D16-1122
8
emnlp2016
Specifically, we used each method to construct a ranked list of time intervals. Then, for each method, we computed the discounted cumulative gain (DCG), which, in this context, is equivalent to computing ∑_{e=1}^{39} 1 / log(rank(e; L_T^method)) (9), where L_T^method is the method’s ranked list of time intervals and rank(e;...
[0, 0, 0, 1]
['Specifically, we used each method to construct a ranked list of time intervals.', 'Then, for each method, we computed the discounted cumulative gain (DCG), which, in this context, is equivalent to computing ∑_{e=1}^{39} 1 / log(rank(e; L_T^method)) (9), where L_T^method is the method’s ranked list of time intervals ...
[None, None, None, ['Capsule (this paper)', 'Method']]
1
D16-1129table_2
Results of multi-label classification from Experiment 1. Hamming-loss and One-Error are shown for two systems – Bidirectional LSTM and Bidirectional LSTM with Convolution and Attention.
2
[['Debate', 'Ban plastic water bottles?'], ['Debate', 'Christianity or Atheism'], ['Debate', 'Evolution vs. Creation'], ['Debate', 'Firefox vs. Internet Explorer'], ['Debate', 'Gay marriage: right or wrong?'], ['Debate', 'Should parents use spanking?'], ['Debate', 'If your spouse committed murder...'], ['Debate', 'Indi...
2
[['BLSTM', 'H-loss'], ['BLSTM', 'one-E'], ['BLSTM/CNN/ATT', 'H-loss'], ['BLSTM/CNN/ATT', 'one-E']]
[['0.092', '0.283', '0.090', '0.305'], ['0.105', '0.212', '0.105', '0.218'], ['0.093', '0.196', '0.094', '0.234'], ['0.080', '0.312', '0.078', '0.345'], ['0.095', '0.243', '0.094', '0.270'], ['0.082', '0.312', '0.083', '0.344'], ['0.094', '0.297', '0.094', '0.272'], ['0.088', '0.294', '0.086', '0.322'], ['0.086', '0.36...
column
['H-loss', 'one-E', 'H-loss', 'one-E']
['BLSTM', 'BLSTM/CNN/ATT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLSTM || H-loss</th> <th>BLSTM || one-E</th> <th>BLSTM/CNN/ATT || H-loss</th> <th>BLSTM/CNN/ATT || one-E</th> </tr> </thead> <tbody> <tr> <td>Debate || Ban plastic water bottles?</td> ...
Table 2
table_2
D16-1129
6
emnlp2016
Results from Table 2 do not show significant differences between the two models. Putting the one-error numbers into human performance context can be done only indirectly, as the data validation presented in Section 3.4 had a different set-up. Here we can see that the error rate of the most confident predicted label is a...
[1, 2, 1]
['Results from Table 2 do not show significant differences between the two models.', 'Putting the one-error numbers into human performance context can be done only indirectly, as the data validation presented in Section 3.4 had a different set-up.', 'Here we can see that the error rate of the most confident predicted l...
[['BLSTM', 'BLSTM/CNN/ATT'], None, ['BLSTM', 'BLSTM/CNN/ATT']]
1
D16-1132table_2
Results of intra-sentential subject zero anaphora resolution
3
[['Method', 'Ouchi et al. (ACL2015)', '-'], ['Method', 'Iida et al. (EMNLP2015)', '-'], ['Method', 'single column CNN (w/ position vec.)', '-'], ['Method', 'MCNN', 'BASE'], ['Method', 'MCNN', 'BASE+SURFSEQ'], ['Method', 'MCNN', 'BASE+DEPTREE'], ['Method', 'MCNN', 'BASE+SURFSEQ+DEPTREE'], ['Method', 'MCNN', 'BASE+SURFSE...
1
[['#cols.'], ['Recall'], ['Precision'], ['F-score'], ['Avg.P']]
[['—', '0.539', '0.612', '0.573', '0.670'], ['—', '0.484', '0.357', '0.411', '—'], ['1', '0.365', '0.524', '0.430', '0.540'], ['1', '0.446', '0.394', '0.419', '0.448'], ['4', '0.458', '0.597', '0.518', '0.679'], ['5', '0.339', '0.688', '0.454', '0.690'], ['8', '0.417', '0.695', '0.521', '0.730'], ['7', '0.459', '0.631'...
column
['#cols.', 'Recall', 'Precision', 'F-score', 'Avg.P']
['MCNN', 'BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#cols.</th> <th>Recall</th> <th>Precision</th> <th>F-score</th> <th>Avg.P</th> </tr> </thead> <tbody> <tr> <td>Method || Ouchi et al. (ACL2015) || -</td> <td>—</td> <td>0.5...
Table 2
table_2
D16-1132
8
emnlp2016
The results in Table 2 show that our method using all the column sets achieved the best average precision among the combination of column sets that include at least the BASE column set. This suggests that all of the clues introduced by our four column sets are effective for performance improvement. Table 2 also demonst...
[1, 1, 1, 1, 1, 2, 0, 1, 2]
['The results in Table 2 show that our method using all the column sets achieved the best average precision among the combination of column sets that include at least the BASE column set.', 'This suggests that all of the clues introduced by our four column sets are effective for performance improvement.', 'Table 2 also...
[['MCNN'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)', 'Precision', 'Ouchi et al. (ACL2015)'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)', 'F-score', 'Iida et al. (EMNLP2015)', 'single column CNN (w/ position vec.)'], ['BASE+SURFSEQ+DEPTREE+PREDCONTEXT (Proposed)...
1
D16-1136table_4
Spearman’s rank correlation for monolingual similarity measurement on 3 datasets WS-de (353 pairs), WS-en (353 pairs) and RW-en (2034 pairs). We compare against 5 baseline crosslingual word embeddings. The best CLWE performance is bold. For reference, we add the monolingual CBOW with and without embeddings combination,...
3
[['Model', 'Baselines', 'Klementiev et al. (2012)'], ['Model', 'Baselines', 'Chandar A P et al. (2014)'], ['Model', 'Baselines', 'Hermann and Blunsom (2014)'], ['Model', 'Baselines', 'Luong et al. (2015)'], ['Model', 'Baselines', 'Gouws and Sogaard (2015)'], ['Model', 'Mono', 'CBOW'], ['Model', 'Mono', '+combine'], ['M...
1
[['WS-de'], ['WS-en'], ['RW-en']]
[['23.8', '13.2', '7.3'], ['34.6', '39.8', '20.5'], ['28.3', '19.8', '13.6'], ['47.4', '49.3', '25.3'], ['67.4', '71.8', '31.0'], ['62.2', '70.3', '42.7'], ['65.8', '74.1', '43.1'], ['-', '81.0', '-'], ['-', '74.8', '48.3'], ['59.3', '68.6', '38.1'], ['71.1', '76.2', '44.0']]
column
['correlation', 'correlation', 'correlation']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WS-de</th> <th>WS-en</th> <th>RW-en</th> </tr> </thead> <tbody> <tr> <td>Model || Baselines || Klementiev et al. (2012)</td> <td>23.8</td> <td>13.2</td> <td>7.3</td> </tr> ...
Table 4
table_4
D16-1136
7
emnlp2016
We train the model as described in §4, which is the combine embeddings setting from Table 3. Since the evaluation involves de and en word similarity, we train the CLWE for en-de pair. Table 4 shows the performance of our combined model compared with several baselines. Our combined model out-performed both Luong et al. ...
[0, 2, 1, 1]
['We train the model as described in §4, which is the combine embeddings setting from Table 3.', 'Since the evaluation involves de and en word similarity, we train the CLWE for en-de pair.', 'Table 4 shows the performance of our combined model compared with several baselines.', 'Our combined model out-performed both Lu...
[None, None, ['Model'], ['Ours', '+combine', 'Luong et al. (2015)', 'Gouws and Sogaard (2015)']]
1
D16-1136table_6
CLDC performance for both en → de and de → en direction for many CLWE. The MT baseline uses phrase-based statistical machine translation to translate the source language to target language (Klementiev et al., 2012). The best scores are bold.
2
[['Model', 'MT baseline'], ['Model', 'Klementiev et al. (2012)'], ['Model', 'Gouws et al. (2015)'], ['Model', 'Kociský et al. (2014)'], ['Model', 'Chandar A P et al. (2014)'], ['Model', 'Hermann and Blunsom (2014)'], ['Model', 'Luong et al. (2015)'], ['Model', 'Our model']]
1
[['en → de'], ['de → en']]
[['68.1', '67.4'], ['77.6', '71.1'], ['86.5', '75.0'], ['83.1', '75.4'], ['91.8', '74.2'], ['86.4', '74.7'], ['88.4', '80.3'], ['86.3', '76.8']]
column
['accuracy', 'accuracy']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en → de</th> <th>de → en</th> </tr> </thead> <tbody> <tr> <td>Model || MT baseline</td> <td>68.1</td> <td>67.4</td> </tr> <tr> <td>Model || Klementiev et al. (2012)</td> <...
Table 6
table_6
D16-1136
8
emnlp2016
Table 6 shows the CLDC results for various CLWE. Despite its simplicity, our model achieves competitive performance. Note that aside from our model, all other models in Table 6 use a large bitext (Europarl) which may not exist for many low-resource languages, limiting their applicability.
[1, 1, 2]
['Table 6 shows the CLDC results for various CLWE.', 'Despite its simplicity, our model achieves competitive performance.', 'Note that aside from our model, all other models in Table 6 use a large bitext (Europarl) which may not exist for many low-resource languages, limiting their applicability.']
[None, ['Our model'], ['Model']]
1
D16-1138table_3
Average accuracy over all the morphological inflection datasets. The baseline results for Seq2Seq variants are taken from (Faruqui et al., 2016).
2
[['Model', 'Seq2Seq'], ['Model', 'Seq2Seq w/ Attention'], ['Model', 'Adapted-seq2seq (FTND16)'], ['Model', 'uniSSNT+'], ['Model', 'biSSNT+']]
1
[['Avg. accuracy']]
[['79.08'], ['95.64'], ['96.20'], ['87.85'], ['95.32']]
column
['Avg. accuracy']
['biSSNT+']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq</td> <td>79.08</td> </tr> <tr> <td>Model || Seq2Seq w/ Attention</td> <td>95.64</td> </tr> <tr> <td>Mo...
Table 3
table_3
D16-1138
7
emnlp2016
Table 3 gives the average accuracy of the uniSSNT+, biSSNT+, vanilla encoder-decoder, and attention-based models. The model with the best previous average result — denoted as adapted-seq2seq (FTND16) (Faruqui et al., 2016) — is also included for comparison. Our biSSNT+ model outperforms the vanilla encoder-decoder by a ...
[1, 2, 1, 2, 2]
['Table 3 gives the average accuracy of the uniSSNT+, biSSNT+, vanilla encoder-decoder, and attention-based models.', 'The model with the best previous average result — denoted as adapted-seq2seq (FTND16) (Faruqui et al., 2016) — is also included for comparison.', 'Our biSSNT+ model outperforms the vanilla encoder-decod...
[['Seq2Seq', 'Seq2Seq w/ Attention', 'uniSSNT+', 'biSSNT+', 'Avg. accuracy'], ['Adapted-seq2seq (FTND16)'], ['biSSNT+', 'Seq2Seq'], None, ['uniSSNT+', 'biSSNT+', 'Seq2Seq']]
1
D16-1144table_4
Study of typing performance on the three datasets.
2
[['Typing Method', 'CLPL (Cour et al. 2011)'], ['Typing Method', 'PL-SVM (Nguyen and Caruana 2008)'], ['Typing Method', 'FIGER (Ling and Weld 2012)'], ['Typing Method', 'FIGER-Min (Gillick et al. 2014)'], ['Typing Method', 'HYENA (Yosef et al. 2012)'], ['Typing Method', 'HYENA-Min'], ['Typing Method', 'ClusType (Ren et...
2
[['Wiki', 'Acc'], ['Wiki', 'Ma-F1'], ['Wiki', 'Mi-F1'], ['OntoNotes', 'Acc'], ['OntoNotes', 'Ma-F1'], ['OntoNotes', 'Mi-F1'], ['BBN', 'Acc'], ['BBN', 'Ma-F1'], ['BBN', 'Mi-F1']]
[['0.162', '0.431', '0.411', '0.201', '0.347', '0.358', '0.438', '0.603', '0.536'], ['0.428', '0.613', '0.571', '0.225', '0.455', '0.437', '0.465', '0.648', '0.582'], ['0.474', '0.692', '0.655', '0.369', '0.578', '0.516', '0.467', '0.672', '0.612'], ['0.453', '0.691', '0.631', '0.373', '0.570', '0.509', '0.444', '0.671...
column
['Acc', 'Ma-F1', 'Mi-F1', 'Acc', 'Ma-F1', 'Mi-F1', 'Acc', 'Ma-F1', 'Mi-F1']
['AFET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Wiki || Acc</th> <th>Wiki || Ma-F1</th> <th>Wiki || Mi-F1</th> <th>OntoNotes || Acc</th> <th>OntoNotes || Ma-F1</th> <th>OntoNotes || Mi-F1</th> <th>BBN || Acc</th> <th>BBN || Ma-F...
Table 4
table_4
D16-1144
8
emnlp2016
Table 4 shows the results of AFET and its variants. Comparison with the other typing methods. AFET outperforms both FIGER and HYENA systems, demonstrating the predictive power of the learned embeddings, and the effectiveness of modeling type correlation information and noisy candidate types. We also observe that prunin...
[1, 2, 1, 1, 1]
['Table 4 shows the results of AFET and its variants.', 'Comparison with the other typing methods.', 'AFET outperforms both FIGER and HYENA systems, demonstrating the predictive power of the learned embeddings, and the effectiveness of modeling type correlation information and noisy candidate types.', 'We also observe ...
[['AFET-NoCo', 'AFET-NoPa', 'AFET-CoH', 'AFET'], ['Typing Method'], ['AFET', 'FIGER (Ling and Weld 2012)', 'HYENA (Yosef et al. 2012)'], None, ['ClusType (Ren et al. 2015)', 'FIGER (Ling and Weld 2012)', 'HYENA (Yosef et al. 2012)']]
1
D16-1147table_4
Breakdown of test results (% hits@1) on WIKIMOVIES for Key-Value Memory Networks using different knowledge representations.
2
[['Question Type', 'Writer to Movie'], ['Question Type', 'Tag to Movie'], ['Question Type', 'Movie to Year'], ['Question Type', 'Movie to Writer'], ['Question Type', 'Movie to Tags'], ['Question Type', 'Movie to Language'], ['Question Type', 'Movie to IMDb Votes'], ['Question Type', 'Movie to IMDb Rating'], ['Question ...
1
[['KB'], ['IE'], ['Doc']]
[['97', '72', '91'], ['85', '35', '49'], ['95', '75', '89'], ['95', '61', '64'], ['94', '47', '48'], ['96', '62', '84'], ['92', '92', '92'], ['94', '75', '92'], ['97', '84', '86'], ['93', '76', '79'], ['91', '64', '64'], ['90', '78', '91'], ['93', '66', '83']]
column
['hits@1', 'hits@1', 'hits@1']
['KB', 'IE', 'Doc']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>KB</th> <th>IE</th> <th>Doc</th> </tr> </thead> <tbody> <tr> <td>Question Type || Writer to Movie</td> <td>97</td> <td>72</td> <td>91</td> </tr> <tr> <td>Question Typ...
Table 4
table_4
D16-1147
7
emnlp2016
A breakdown by question type comparing the different data sources for KVMemNNs is given in Table 4. IE loses out especially to Doc (and KB) on Writer, Director and Actor to Movie, perhaps because coreference is difficult in these cases – although it has other losses elsewhere too. Note that only 56% of subject-object p...
[1, 1, 2, 1]
['A breakdown by question type comparing the different data sources for KVMemNNs is given in Table 4.', 'IE loses out especially to Doc (and KB) on Writer, Director and Actor to Movie, perhaps because coreference is difficult in these cases – although it has other losses elsewhere too.', 'Note that only 56% of subject-...
[None, ['IE', 'Doc', 'KB', 'Writer to Movie', 'Director to Movie', 'Actor to Movie'], ['IE', 'KB'], ['Doc', 'KB', 'Tag to Movie', 'Movie to Writer', 'Movie to Actors']]
1
D16-1149table_3
Convergence t-values of paired t-tests comparing team-level partner differences (TDiff_p) of first 3, 5, 7 minutes vs. last 3, 5, 7 minutes, respectively, and of first vs. second game half, for each game. Positive t-values indicate convergence (i.e., that partner differences in the second interval are smaller than in ...
2
[['Feature', 'Pitch-min'], ['Feature', 'Pitch-max'], ['Feature', 'Pitch-mean'], ['Feature', 'Pitch-sd'], ['Feature', 'Intensity-mean'], ['Feature', 'Intensity-min'], ['Feature', 'Intensity-max'], ['Feature', 'Shimmer-local'], ['Feature', 'Jitter-local']]
2
[['First vs. last 3 minutes', 'Game1'], ['First vs. last 3 minutes', 'Game2'], ['First vs. last 5 minutes', 'Game1'], ['First vs. last 5 minutes', 'Game2'], ['First vs. last 7 minutes', 'Game1'], ['First vs. last 7 minutes', 'Game2'], ['First vs. second half', 'Game1'], ['First vs. second half', 'Game2']]
[['2.474*', '-0.709', '1.487', '-1.299', '1.359', '-1.622', '0.329', '-0.884'], ['4.947*', '1.260', '1.892', '-0.468', '1.348', '-0.424', '0.457', '0.627'], ['-2.687*', '0.109', '-2.900*', '0.417', '-2.965*', '-0.361', '-1.905', '-0.266'], ['1.364', '0.409', '1.919', '0.591', '1.807', '0.576', '1.271', '0.089'], ['-0.2...
column
['t-value', 't-value', 't-value', 't-value', 't-value', 't-value', 't-value', 't-value']
['Game1', 'Game2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>First vs. last 3 minutes || Game1</th> <th>First vs. last 3 minutes || Game2</th> <th>First vs. last 5 minutes || Game1</th> <th>First vs. last 5 minutes || Game2</th> <th>First vs. last 7 minute...
Table 3
table_3
D16-1149
7
emnlp2016
The convergence results are shown in Table 3 for four different temporal comparison intervals. Comparison of the significant game 1 results shows that teams entrained on pitch min, pitch max, shimmer, and jitter in at least one of the intervals. Both shimmer and jitter converged for all choices of temporal units. For p...
[1, 1, 1, 2, 1, 1, 1, 2]
['The convergence results are shown in Table 3 for four different temporal comparison intervals.', 'Comparison of the significant game 1 results shows that teams entrained on pitch min, pitch max, shimmer, and jitter in at least one of the intervals.', 'Both shimmer and jitter converged for all choices of temporal unit...
[None, ['Game1', 'Pitch-min', 'Pitch-max', 'Shimmer-local', 'Jitter-local'], ['Shimmer-local', 'Jitter-local'], ['Pitch-min', 'Pitch-max', 'Pitch-mean', 'Pitch-sd'], ['Game1', 'Pitch-mean'], ['Game1', 'Feature'], ['Game2', 'Feature', 'Intensity-mean', 'Intensity-min'], ['Feature']]
1
D16-1150table_1
Results of the Regression Analysis
1
[['Agreeableness'], ['Conscientiousness'], ['Extroversion'], ['Neurotisim'], ['Openness'], ['Conservation'], ['Hedonism'], ['Openness to change'], ['Self-enhancement'], ['Self-transcendence']]
1
[['Safety'], ['Fuel'], ['Quality'], ['Style'], ['Price'], ['Luxury'], ['Perf'], ['Durab']]
[['0.39', '-0.52', '-0.53', '0.54', '0.81', '0.004', '-0.62', '-0.27'], ['-1.75', '-0.31', '0.80', '0.29', '-0.01', '0.27', '0.83', '-0.12'], ['0.69', '-0.71', '0.008', '-0.25', '-0.37', '0.48', '-0.07', '0.224'], ['1.08', '-0.01', '-0.46', '-0.11', '-0.32', '-0.07', '0.18', '-0.28'], ['1.59', '-0.05', '0.01', '-0.99',...
column
['regression', 'regression', 'regression', 'regression', 'regression', 'regression', 'regression', 'regression']
['Safety', 'Fuel', 'Quality', 'Style', 'Price', 'Luxury', 'Perf', 'Durab']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Safety</th> <th>Fuel</th> <th>Quality</th> <th>Style</th> <th>Price</th> <th>Luxury</th> <th>Perf</th> <th>Durab</th> </tr> </thead> <tbody> <tr> <td>Agreeableness</td...
Table 1
table_1
D16-1150
6
emnlp2016
In our first study, we employed regression analysis to identify significant correlations between personal traits and aspect ranks. Specifically, we trained eight linear regression models, one for each of the eight car aspects. The dependent variable in each model is the rank of an aspect (from 1 to 8) and the independe...
[2, 2, 2, 2, 2, 1, 1, 2, 1, 2, 1, 2, 1]
['In our first study, we employed regression analysis to identify significant correlations between personal traits and aspect ranks.', 'Specifically, we trained eight linear regression models, one for each of the eight car aspects.', 'The dependent variable in each model is the rank of an aspect (from 1 to 8) and the i...
[None, None, None, None, None, None, ['Luxury', 'Self-enhancement'], None, ['Safety', 'Conservation'], None, ['Self-transcendence', 'Fuel', 'Style'], None, ['Price', 'Conservation', 'Safety', 'Conscientiousness', 'Openness to change', 'Perf']]
1
D16-1151table_9
Performance of different feature groups for alignment.
3
[['Feature', 'Entailment score only', '-'], ['Feature', 'Entailment score only', '+Lexical'], ['Feature', 'Entailment score only', '+Syntactic'], ['Feature', 'Entailment score only', '+Sentence']]
1
[['P'], ['R'], ['F1']]
[['39.55', '14.59', '21.32'], ['50.75', '26.02', '34.40'], ['62.31', '31.47', '41.82'], ['62.33', '31.41', '41.53']]
column
['P', 'R', 'F1']
['Feature']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Feature || Entailment score only || -</td> <td>39.55</td> <td>14.59</td> <td>21.32</td> </tr> <tr> <td>F...
Table 9
table_9
D16-1151
8
emnlp2016
Table 9 shows an ablation of the alignment classifier features. Entailment of arguments is the most informative feature for argument alignment. Adding lexical and syntactic context compatibilities adds significant boosts in precision and recall. Knowing that the arguments are retrieved by the same query pattern (senten...
[1, 1, 1, 1, 2]
['Table 9 shows an ablation of the alignment classifier features.', 'Entailment of arguments is the most informative feature for argument alignment.', 'Adding lexical and syntactic context compatibilities adds significant boosts in precision and recall.', 'Knowing that the arguments are retrieved by the same query patt...
[None, ['Entailment score only'], ['+Lexical', '+Syntactic', 'P', 'R'], ['+Sentence'], ['+Sentence']]
1
D16-1152table_4
Evaluation results on the NEEL-test and TACL datasets for different systems. The best results are in bold.
3
[['System', 'Our approach', 'NTEL-nonstruct'], ['System', 'Our approach', 'NTEL'], ['System', 'Our approach', 'NTEL user-entity'], ['System', 'Our approach', 'NTEL mention-entity'], ['System', 'Our approach', 'NTEL user-entity mention-entity'], ['System', 'Best published results', 'S-MART']]
2
[['NEEL-test', 'P'], ['NEEL-test', 'R'], ['NEEL-test', 'F1'], ['TACL', 'P'], ['TACL', 'R'], ['TACL', 'F1'], ['-', 'Avg. F1']]
[['80.0', '68.0', '73.5', '64.7', '62.3', '63.5', '68.5'], ['82.8', '69.3', '75.4', '68.0', '66.0', '67.0', '71.2'], ['82.3', '71.8', '76.7', '66.9', '68.7', '67.8', '72.2'], ['80.2', '75.8', '77.9', '66.9', '69.3', '68.1', '73.0'], ['81.9', '75.6', '78.6', '69.0', '69.0', '69.0', '73.8'], ['80.2', '75.4', '77.7', '60....
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'Avg. F1']
['Our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NEEL-test || P</th> <th>NEEL-test || R</th> <th>NEEL-test || F1</th> <th>TACL || P</th> <th>TACL || R</th> <th>TACL || F1</th> <th>Avg. F1 || -</th> </tr> </thead> <tbody> <tr> ...
Table 4
table_4
D16-1152
8
emnlp2016
Table 4 summarizes the empirical findings for our approach and S-MART (Yang and Chang, 2015) on the tweet entity linking task. For the systems with user-entity bilinear function, we report results obtained from embeddings trained on RETWEET+ in Table 4, and other results are available in Table 5. The best hyper-paramet...
[1, 1, 2, 2, 2, 1, 1, 2, 1, 1]
['Table 4 summarizes the empirical findings for our approach and S-MART (Yang and Chang, 2015) on the tweet entity linking task.', 'For the systems with user-entity bilinear function, we report results obtained from embeddings trained on RETWEET+ in Table 4, and other results are available in Table 5.', 'The best hyper...
[['System'], None, None, None, None, ['NTEL-nonstruct', 'NTEL', 'F1'], ['NTEL', 'S-MART'], None, ['NTEL', 'Avg. F1'], ['R', 'P']]
1
D16-1153table_6
NIST evaluations for Uyghur. * indicates transfer from Uzbek and Turkish
2
[['Model', 'Lample et al. (2016)'], ['Model', 'Our best transfer model']]
1
[['F1']]
[['37.1'], ['51.2']]
column
['F1']
['Our best transfer model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Lample et al. (2016)</td> <td>37.1</td> </tr> <tr> <td>Model || Our best transfer model</td> <td>51.2</td> </tr> </tbody></table...
Table 6
table_6
D16-1153
7
emnlp2016
Table 6 documents NIST evaluation results on an unseen Uyghur test set (with gold annotations) for the best transfer model configuration jointly trained on Turkish and Uzbek gold annotations and Uyghur training annotations produced by a non-speaker linguist (non-gold). Since Uyghur lacks helpful type-level orthographic ...
[1, 2, 1, 2]
['Table 6 documents NIST evaluation results on an unseen Uyghur test set (with gold annotations) for the best transfer model configuration jointly trained on Turkish and Uzbek gold annotations and Uyghur training annotations produced by a non-speaker linguist (non-gold).', "Since Uyghur lacks helpful type-level orthogra...
[None, None, ['F1', 'Model'], None]
1
D16-1154table_3
LMs performance on the LTCB test set.
1
[['KN'], ['KN+cache'], ['FFNN [M*200]-600-600-80k'], ['FOFE [M*200]-600-600-80k'], ['RNN [600]-R600-80k'], ['LSTM [200]-R600-80k'], ['LSTM [200]-R600-R600-80k'], ['LSRC [200]-R600-80k'], ['LSRC [200]-R600-600-80k']]
1
[['Model'], ['Model'], ['Model']]
[['239', '156', '132'], ['188', '127', '109'], ['235', '150', '114'], ['112', '107', '100'], ['85', '85', '85'], ['66', '66', '66'], ['61', '61', '61'], ['63', '63', '63'], ['59', '59', '59']]
column
['perplexity', 'perplexity', 'perplexity']
['LSRC [200]-R600-80k', 'LSRC [200]-R600-600-80k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model</th> <th>Model</th> <th>Model</th> <th>NoP</th> </tr> </thead> <tbody> <tr> <td>Context Size M=N-1</td> <td>1</td> <td>2</td> <td>4</td> <td>4</td> </tr> <...
Table 3
table_3
D16-1154
8
emnlp2016
The results shown in Table 3 generally confirm the conclusions we drew from the PTB experiments above. In particular, we can see that the proposed LSRC model largely outperforms all other models. In particular, LSRC clearly outperforms LSTM with a negligible increase in the number of parameters (resulting from the addi...
[1, 1, 1, 1]
['The results shown in Table 3 generally confirm the conclusions we drew from the PTB experiments above.', 'In particular, we can see that the proposed LSRC model largely outperforms all other models.', 'In particular, LSRC clearly outperforms LSTM with a negligible increase in the number of parameters (resulting from ...
[None, ['LSRC [200]-R600-80k', 'LSRC [200]-R600-600-80k', 'FFNN [M*200]-600-600-80k', 'FOFE [M*200]-600-600-80k', 'RNN [600]-R600-80k', 'LSTM [200]-R600-80k', 'LSTM [200]-R600-R600-80k'], ['LSRC [200]-R600-80k', 'LSRC [200]-R600-600-80k', 'LSTM [200]-R600-80k', 'LSTM [200]-R600-R600-80k'], ['LSRC [200]-R600-80k', 'LSRC...
1
D16-1156table_2
Results on our subset of the PASCAL-50S and PASCAL-Context-50S datasets. We are able to significantly outperform the Stanford Parser and make small improvements over DeepLab-CRF for PASCAL-50S.
1
[['DeepLab-CRF'], ['Stanford Parser'], ['Average'], ['Domain Adaptation'], ['Ours CASCADE'], ['Ours MEDIATOR'], ['oracle']]
2
[['PASCAL-50S', 'Instance-Level Jaccard Index'], ['PASCAL-50S', 'PPAR Acc.'], ['PASCAL-50S', 'Average'], ['PASCAL-Context-50S', 'Instance-Level Jaccard Index'], ['PASCAL-Context-50S', 'PPAR Acc.'], ['PASCAL-Context-50S', 'Average']]
[['66.83', '-', '-', '43.94', '-', '-'], ['-', '62.42', '-', '-', '50.75', '-'], ['-', '-', '64.63', '-', '-', '47.345'], ['-', '72.08', '-', '-', '58.32', '-'], ['67.56', '75.00', '71.28', '43.94', '63.58', '53.76'], ['67.58', '80.33', '73.96', '43.94', '63.58', '53.76'], ['69.96', '96.50', '83.23', '49.21', '75.75', ...
column
['Instance-Level Jaccard Index', 'PPAR Acc.', 'Average', 'Instance-Level Jaccard Index', 'PPAR Acc.', 'Average']
['Ours CASCADE', 'Ours MEDIATOR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PASCAL-50S || Instance-Level Jaccard Index</th> <th>PASCAL-50S || PPAR Acc.</th> <th>PASCAL-50S || Average</th> <th>PASCAL-Context-50S || Instance-Level.1 Jaccard Index</th> <th>PASCAL-Context-50...
Table 2
table_2
D16-1156
8
emnlp2016
We present our results in Table 2. Our approach significantly outperforms the Stanford Parser (De Marneffe et al., 2006) by 17.91% (28.69% relative) for PASCAL-50S, and 12.83% (25.28% relative) for PASCAL-Context-50S. We also make small improvements over DeepLab-CRF (Chen et al., 2015) in the case of PASCAL-50S. To meas...
[1, 1, 1, 2, 2, 2]
['We present our results in Table 2.', 'Our approach significantly outperforms the Stanford Parser (De Marneffe et al., 2006) by 17.91% (28.69% relative) for PASCAL-50S, and 12.83% (25.28% relative) for PASCAL-Context-50S.', 'We also make small improvements over DeepLab-CRF (Chen et al., 2015) in the case of PASCAL-50S....
[None, ['Ours CASCADE', 'Ours MEDIATOR', 'Stanford Parser', 'PASCAL-50S', 'PASCAL-Context-50S'], ['DeepLab-CRF', 'PASCAL-50S'], None, None, ['Ours CASCADE', 'Ours MEDIATOR', 'PPAR Acc.']]
1
D16-1157table_4
Results on part-of-speech tagging.
2
[['Model', 'charCNN'], ['Model', 'charLSTM'], ['Model', 'CHARAGRAM'], ['Model', 'CHARAGRAM (2-layer)']]
1
[['Accuracy (%)']]
[['97.02'], ['96.90'], ['96.99'], ['97.10']]
column
['Accuracy (%)']
['CHARAGRAM (2-layer)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || charCNN</td> <td>97.02</td> </tr> <tr> <td>Model || charLSTM</td> <td>96.90</td> </tr> <tr> <td>Model || CHARAG...
Table 4
table_4
D16-1157
6
emnlp2016
The results are shown in Table 4. Performance is similar across models. We found that adding a second fully-connected 150 dimensional layer to the CHARAGRAM model improved results slightly.
[1, 1, 1]
['The results are shown in Table 4.', 'Performance is similar across models.', 'We found that adding a second fully-connected 150 dimensional layer to the CHARAGRAM model improved results slightly.']
[None, ['Accuracy (%)', 'Model'], ['CHARAGRAM (2-layer)', 'Accuracy (%)']]
1
D16-1160table_1
Translation results (BLEU score) for different translation methods. For our methods exploring the source-side monolingual data, we investigate the performance change as we choose different scales of monolingual data (e.g. from top 25% to 100% according to the word coverage of the monolingual sentence in source language...
2
[['Method', 'Moses'], ['Method', 'RNNSearch'], ['Method', 'RNNSearch-Mono-SL (25%)'], ['Method', 'RNNSearch-Mono-SL (50%)'], ['Method', 'RNNSearch-Mono-SL (75%)'], ['Method', 'RNNSearch-Mono-SL (100%)'], ['Method', 'RNNSearch-Mono-MTL (25%)'], ['Method', 'RNNSearch-Mono-MTL (50%)'], ['Method', 'RNNSearch-Mono-MTL (75%)...
1
[['MT03'], ['MT04'], ['MT05'], ['MT06']]
[['30.30', '31.04', '28.19', '30.04'], ['28.38', '30.85', '26.78', '29.27'], ['29.65', '31.92', '28.65', '29.86'], ['32.43', '33.16', '30.43', '32.35'], ['30.24', '31.18', '29.33', '28.82'], ['29.97', '30.78', '26.45', '28.06'], ['31.68', '32.51', '29.8', '31.29'], ['33.38', '34.3', '31.57', '33.4'], ['31.69', '32.83',...
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['RNNSearch-Mono-Autoencoder (50%)', 'RNNSearch-Mono-Autoencoder (100%)', 'RNNSearch-Mono-MTL (25%)', 'RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (75%)', 'RNNSearch-Mono-MTL (100%)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> </tr> </thead> <tbody> <tr> <td>Method || Moses</td> <td>30.30</td> <td>31.04</td> <td>28.19</td> <td>30.04</td> ...
Table 1
table_1
D16-1160
6
emnlp2016
Table 1 reports the translation quality for different methods. Comparing the first two lines in Table 1, it is obvious that the NMT method RNNSearch performs much worse than the SMT model Moses on Chinese-to-English translation. The gap is as large as approximately 2.0 BLEU points (28.38 vs. 30.30). We speculate that t...
[1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1]
['Table 1 reports the translation quality for different methods.', 'Comparing the first two lines in Table 1, it is obvious that the NMT method RNNSearch performs much worse than the SMT model Moses on Chinese-to-English translation.', 'The gap is as large as approximately 2.0 BLEU points (28.38 vs. 30.30).', 'We specu...
[None, ['Method', 'RNNSearch', 'Moses'], ['Method', 'RNNSearch', 'Moses'], None, None, ['RNNSearch-Mono-SL (25%)', 'RNNSearch-Mono-SL (50%)', 'RNNSearch-Mono-SL (75%)', 'RNNSearch-Mono-SL (100%)'], ['RNNSearch-Mono-SL (25%)', 'RNNSearch-Mono-SL (50%)', 'RNNSearch-Mono-SL (75%)', 'RNNSearch-Mono-SL (100%)', 'RNNSearch']...
1
D16-1160table_2
Translation results (BLEU score) for different translation methods in large-scale training data.
2
[['Method', 'RNNSearch'], ['Method', 'RNNSearch-Mono-MTL (50%)'], ['Method', 'RNNSearch-Mono-MTL (100%)']]
1
[['MT03'], ['MT04'], ['MT05'], ['MT06']]
[['35.18', '36.20', '33.21', '32.86'], ['36.32', '37.51', '35.08', '34.26'], ['35.75', '36.74', '34.23', '33.52']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (100%)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> </tr> </thead> <tbody> <tr> <td>Method || RNNSearch</td> <td>35.18</td> <td>36.20</td> <td>33.21</td> <td>32.86</td...
Table 2
table_2
D16-1160
8
emnlp2016
A natural question arises that is the source-side monolingual data still very helpful when we have much more bilingual training data. We conduct the large-scale experiments using our proposed multitask framework RNNSearch-Mono-MTL. Table 2 reports the results. We can see from the table that closely related source-side ...
[0, 2, 1, 1, 1, 1, 2, 2]
['A natural question arises that is the source-side monolingual data still very helpful when we have much more bilingual training data.', 'We conduct the large-scale experiments using our proposed multitask framework RNNSearch-Mono-MTL.', 'Table 2 reports the results.', 'We can see from the table that closely related s...
[None, None, None, ['RNNSearch-Mono-MTL (50%)', 'MT03', 'MT04', 'MT05', 'MT06'], None, ['RNNSearch-Mono-MTL (50%)', 'RNNSearch-Mono-MTL (100%)', 'MT03', 'MT04', 'MT05', 'MT06'], None, None]
1
D16-1161table_4
Best results in restricted setting with added unrestricted language model for original (2014) and extended (2014-10) CoNLL test set (trained with public data only).
3
[['System', 'Baseline', '-'], ['System', 'Baseline', '+CCLM'], ['System', 'Best dense', '-'], ['System', 'Best dense', '+CCLM'], ['System', 'Best parse', '-'], ['System', 'Best parse', '+CCLM']]
2
[['2014', 'Prec.'], ['2014', 'Recall'], ['2014', 'M2'], ['2015', 'Prec.'], ['2015', 'Recall'], ['2015', 'M2']]
[['48.97', '26.03', '41.63', '69.29', '31.35', '55.78'], ['58.91', '25.05', '46.37', '77.17', '29.38', '58.23'], ['50.94', '26.21', '42.85', '71.21', '31.70', '57.00'], ['59.98', '28.17', '48.93', '79.98', '32.76', '62.08'], ['57.99', '25.11', '45.95', '76.61', '29.74', '58.25'], ['61.27', '27.98', '49.49', '80.93', '3...
column
['Prec.', 'Recall', 'M2', 'Prec.', 'Recall', 'M2']
['Best parse']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2014 || Prec.</th> <th>2014 || Recall</th> <th>2014 || M2</th> <th>2015 || Prec..1</th> <th>2015 || Recall</th> <th>2015 || M2</th> </tr> </thead> <tbody> <tr> <td>System || Bas...
Table 4
table_4
D16-1161
9
emnlp2016
Table 4 summarizes the best results reported in this paper for the CoNLL-2014 test set (column 2014) before and after adding the Common Crawl n-gram language model. The vanilla Moses baseline with the Common Crawl model can be seen as a new simple baseline for unrestricted settings and is ahead of any previously publis...
[1, 2, 1, 1]
['Table 4 summarizes the best results reported in this paper for the CoNLL-2014 test set (column 2014) before and after adding the Common Crawl n-gram language model.', 'The vanilla Moses baseline with the Common Crawl model can be seen as a new simple baseline for unrestricted settings and is ahead of any previously p...
[['2014'], ['Baseline'], ['M2', 'Best parse', '+CCLM'], ['Best parse']]
1
D16-1163table_3
Our transfer method applied to re-scoring output nbest lists from the SBMT system. The first row shows the SBMT performance with no re-scoring and the other 3 rows show the performance after re-scoring with the selected model. Note: the ‘LM’ row shows the results when an RNN LM trained on the large English corpus was u...
2
[['Re-scorer', 'None'], ['Re-scorer', 'NMT'], ['Re-scorer', 'Xfer'], ['Re-scorer', 'LM']]
2
[['SBMT Decoder', 'Hausa'], ['SBMT Decoder', 'Turkish'], ['SBMT Decoder', 'Uzbek'], ['SBMT Decoder', 'Urdu']]
[['23.7', '20.4', '17.9', '17.9'], ['24.5', '21.4', '19.5', '18.2'], ['24.8', '21.8', '19.5', '19.1'], ['23.6', '21.1', '17.9', '18.2']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['Re-scorer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SBMT Decoder || Hausa</th> <th>SBMT Decoder || Turkish</th> <th>SBMT Decoder || Uzbek</th> <th>SBMT Decoder || Urdu</th> </tr> </thead> <tbody> <tr> <td>Re-scorer || None</td> <td>23...
Table 3
table_3
D16-1163
3
emnlp2016
We also use the NMT model with transfer learning as a feature when re-scoring output n-best lists (n = 1000) from the SBMT system. Table 3 shows the results of re-scoring. We compare re-scoring with transfer NMT to re-scoring with baseline (i.e. non-transfer) NMT and to re-scoring with a neural language model. The neur...
[0, 1, 2, 2, 2, 2, 1]
['We also use the NMT model with transfer learning as a feature when re-scoring output n-best lists (n = 1000) from the SBMT system.', 'Table 3 shows the results of re-scoring.', 'We compare re-scoring with transfer NMT to re-scoring with baseline (i.e. non-transfer) NMT and to re-scoring with a neural language model.'...
[None, None, None, None, None, None, ['Xfer']]
1
D16-1165table_2
Results of the ablation study.
3
[['System', 'Full Network', '-'], ['System', 'Full Network', '- Lexical similarity'], ['System', 'Full Network', '- Domain-specific'], ['System', 'Full Network', '- Distributed rep.'], ['System', 'No hidden layer', '-']]
1
[['MAP'], ['AvgRec'], ['MRR'], ['∆MAP']]
[['54.51', '60.93', '62.94', '-'], ['45.89', '51.54', '53.29', '-8.62'], ['48.48', '50.46', '53.78', '-6.03'], ['51.17', '56.63', '56.91', '-3.34'], ['52.19', '58.23', '59.95', '-2.32']]
column
['MAP', 'AvgRec', 'MRR', '∆MAP']
['System']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>AvgRec</th> <th>MRR</th> <th>∆MAP</th> </tr> </thead> <tbody> <tr> <td>System || Full Network || -</td> <td>54.51</td> <td>60.93</td> <td>62.94</td> <td...
Table 2
table_2
D16-1165
7
emnlp2016
Table 2 shows the results of an ablation study when removing some groups of features. More specifically, we drop lexical similarities, domain-specific features, and the complex semantic-syntactic interactions modeled in the hidden layer between the embeddings and the domain-specific features. We can see that the lexica...
[1, 2, 1, 2, 0, 1, 1, 2, 1, 1, 2]
['Table 2 shows the results of an ablation study when removing some groups of features.', 'More specifically, we drop lexical similarities, domain-specific features, and the complex semantic-syntactic interactions modeled in the hidden layer between the embeddings and the domain-specific features.', 'We can see that th...
[None, None, ['- Lexical similarity', 'MAP'], None, None, ['MAP', '- Domain-specific'], ['MAP', '- Distributed rep.'], None, None, ['- Lexical similarity', '- Distributed rep.'], ['- Distributed rep.', '- Lexical similarity']]
1
D16-1168table_2
Human evaluation results on pairwise-comparisons between FULL and -SYN, and FULL and HUMAN, on STARtest and CARTOON datasets.
3
[['Model', 'FULL', '-'], ['Model', 'FULL', '-SYN'], ['Model', 'FULL', '-'], ['Model', 'FULL', '-SEM'], ['Model', 'FULL', '-'], ['Model', 'HUMAN', '-']]
1
[['STARtest'], ['CARTOON']]
[['65.0', '57.9'], ['35.0', '42.1'], ['68.8', '69.4'], ['31.2', '30.6'], ['17.9', '10.0'], ['82.1', '90.0']]
column
['pairwise-comparisons', 'pairwise-comparisons']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>STARtest</th> <th>CARTOON</th> </tr> </thead> <tbody> <tr> <td>Model || FULL || -</td> <td>65.0</td> <td>57.9</td> </tr> <tr> <td>Model || FULL || -SYN</td> <td>35.0</td> ...
Table 2
table_2
D16-1168
7
emnlp2016
However, a pairwise comparison between FULL and -SYN (Table 2) reveals that human subjects consistently prefer the output of FULL instead of -SYN both for STARtest and CARTOON. Table 2 also reports that HUMAN outperforms the output of the FULL model, and a pairwise comparison of FULL and -SEM which yields a result in l...
[1, 2]
['However, a pairwise comparison between FULL and -SYN (Table 2) reveals that human subjects consistently prefer the output of FULL instead of -SYN both for STARtest and CARTOON.', 'Table 2 also reports that HUMAN outperforms the output of the FULL model, and a pairwise comparison of FULL and -SEM which yields a result...
[['FULL', 'HUMAN', '-SYN', 'STARtest', 'CARTOON'], ['HUMAN', 'FULL', '-SEM', 'STARtest', 'CARTOON']]
1
D16-1168table_3
Human evaluation results for FULL, -SYN, -SEM and HUMAN on thematicity, coherence and solvability on STARtest.
3
[['Model', 'HUMAN', '-'], ['Model', 'FULL', '-'], ['Model', 'FULL', '-SYN'], ['Model', 'FULL', '-SEM']]
1
[['Thematicity'], ['Coherence'], ['Solvability']]
[['3.7', '3.175', '4.025'], ['3.7', '3.025', '3.9'], ['3.375', '3.075', '3.825'], ['3.325', '2.65', '3.7']]
column
['Thematicity', 'Coherence', 'Solvability']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Thematicity</th> <th>Coherence</th> <th>Solvability</th> </tr> </thead> <tbody> <tr> <td>Model || HUMAN || -</td> <td>3.7</td> <td>3.175</td> <td>4.025</td> </tr> <tr> ...
Table 3
table_3
D16-1168
7
emnlp2016
Table 3 shows the results of the detailed comparison of Thematicity, Coherence, and Solvability. This table clearly shows the strong contribution of the semantic component of our system. The specific contribution of the syntactic component is to pro duce overall more solvable and thematically satisfying problems, altho...
[1, 1, 1, 1]
['Table 3 shows the results of the detailed comparison of Thematicity, Coherence, and Solvability.', 'This table clearly shows the strong contribution of the semantic component of our system.', 'The specific contribution of the syntactic component is to pro duce overall more solvable and thematically satisfying problem...
[['Thematicity', 'Coherence', 'Solvability'], ['-SEM'], ['-SYN'], ['HUMAN', 'Model']]
1
D16-1173table_1
Classification performance on SST2. The top and second blocks use only sentence-level annotations for training, while the bottom block uses both sentence- and phrase-level annotations. We report the accuracy of both the regularized teacher model q and the student model p after distillation.
3
[['Model', 'sentences', 'CNN (Kim 2014)'], ['Model', 'sentences', 'CNN+REL q'], ['Model', 'sentences', 'CNN+REL p'], ['Model', 'sentences', 'CNN+REL+LEX q'], ['Model', 'sentences', 'CNN+REL+LEX p'], ['Model', 'sentences', 'MC-CNN (Kim 2014)'], ['Model', 'sentences', 'Tensor-CNN (Lei et al. 2015)'], ['Model', 'sentences...
1
[['Accuracy (%)']]
[['86.6'], ['87.8'], ['87.1'], ['88.0'], ['87.2'], ['86.8'], ['87.0'], ['87.1'], ['87.2'], ['88.0'], ['88.1'], ['89.2'], ['89.4']]
column
['Accuracy (%)']
['CNN+REL+LEX q', 'CNN+REL+LEX p']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || sentences || CNN (Kim 2014)</td> <td>86.6</td> </tr> <tr> <td>Model || sentences || CNN+REL q</td> <td>87.8</td> </tr> ...
Table 1
table_1
D16-1173
7
emnlp2016
Table 1 shows the classification performance on the SST2 dataset. From rows 1-3 we see that our proposed sentiment model that integrates the diverse set of knowledge (section 4) significantly outperforms the base CNN (Kim 2014). The improvement of the student network p validates the effectiveness of the iterative mutua...
[1, 1, 2, 2, 2, 1, 2, 1, 1, 0, 0, 0]
['Table 1 shows the classification performance on the SST2 dataset.', 'From rows 1-3 we see that our proposed sentiment model that integrates the diverse set of knowledge (section 4) significantly outperforms the base CNN (Kim 2014).', 'The improvement of the student network p validates the effectiveness of the iterati...
[None, ['CNN+REL q', 'CNN+REL p', 'CNN+REL+LEX q', 'CNN+REL+LEX p', 'CNN (Kim 2014)'], None, None, None, ['CNN+REL+LEX q', 'CNN+REL+LEX p'], ['CNN+But-q (Hu et al. 2016)'], ['CNN+REL+LEX q', 'CNN+REL+LEX p'], ['CNN (Kim 2014)'], None, None, None]
1
D16-1173table_2
Classification performance on the CR dataset. We report the average accuracy ± one standard deviation with 10-fold CV. The top block compares the base CNN (row 1) with the knowledge-enhanced CNNs by our framework.
3
[['Model', '1', 'CNN (Kim, 2014)'], ['Model', '2', 'CNN+REL'], ['Model', '3', 'CNN+REL+LEX'], ['Model', '4', 'MC-CNN (Kim, 2014)'], ['Model', '5', 'Bi-RNN (Lai et al. 2015)'], ['Model', '6', 'CRF-PR (Yang and Cardie, 2014)'], ['Model', '7', 'AdaSent (Zhao et al. 2015)']]
1
[['Accuracy (%)']]
[['84.1±0.2'], ['q: 85.0±0.2, p: 84.7±0.2'], ['q: 85.3±0.3, p: 85.0±0.2'], ['85.0'], ['82.6'], ['82.7'], ['86.3']]
column
['Accuracy (%)']
['CNN+REL+LEX']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || 1 || CNN (Kim, 2014)</td> <td>84.1±0.2</td> </tr> <tr> <td>Model || 2 || CNN+REL</td> <td>q: 85.0±0.2, p: 84.7±0.2</td> ...
Table 2
table_2
D16-1173
8
emnlp2016
Table 2 shows model performance on the CR dataset. Our model again surpasses the base network and several other competitive neural methods by a large margin. Though falling behind AdaSent (row 7) which has a more specialized and complex architecture than standard convolutional networks, the proposed framework indeed is...
[1, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0]
['Table 2 shows model performance on the CR dataset.', 'Our model again surpasses the base network and several other competitive neural methods by a large margin.', 'Though falling behind AdaSent (row 7) which has a more specialized and complex architecture than standard convolutional networks, the proposed framework i...
[None, ['CNN+REL+LEX', 'Model'], ['AdaSent (Zhao et al. 2015)'], None, None, None, None, None, None, None, None]
1
D16-1174table_5
Evaluation results on the word to sense similarity test dataset of the SemEval-14 task on Cross-Level Semantic Similarity, according to Pearson (r × 100) and Spearman (ρ × 100) correlations. We show results for four similarity computation strategies (see §3.3). The best results per strategy are shown in bold whereas th...
2
[['System', 'DECONF*'], ['System', 'Rothe and Schutze (2015)*'], ['System', 'Iacobacci et al. (2015)*'], ['System', 'Chen et al. (2014)*'], ['System', 'DECONF'], ['System', 'Pilehvar and Navigli (2015)'], ['System', 'Iacobacci et al. (2015)']]
2
[['MaxSim', 'r'], ['MaxSim', 'rho'], ['AvgSim', 'r'], ['AvgSim', 'rho'], ['S2W', 'r'], ['S2W', 'rho'], ['S2A', 'r'], ['S2A', 'rho']]
[['36.4', '37.6', '36.8', '38.8', '34.9', '35.6', '37.5', '39.3'], ['34.0', '33.8', '34.1', '33.6', '33.4', '32.0', '35.4', '34.9'], ['19.1', '21.5', '21.3', '24.2', '22.7', '21.7', '19.5', '21.1'], ['17.7', '18.0', '17.2', '16.8', '27.7', '26.7', '17.9', '18.8'], ['35.5', '36.4', '36.2', '38.0', '34.9', '35.6', '36.8'...
column
['r', 'rho', 'r', 'rho', 'r', 'rho', 'r', 'rho']
['DECONF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MaxSim || r</th> <th>MaxSim || rho</th> <th>AvgSim || r</th> <th>AvgSim || rho</th> <th>S2W || r</th> <th>S2W || rho</th> <th>S2A || r</th> <th>S2A || rho</th> </tr> </thead> ...
Table 5
table_5
D16-1174
8
emnlp2016
Table 5 shows the results on the word to sense dataset of the SemEval-2014 CLSS task, according to Pearson (r × 100) and Spearman (rho × 100) correlation scores and for the four strategies. As can be seen from the low overall performance, the task is a very challenging benchmark with many WordNet out-of-vocabulary or s...
[1, 2, 1, 1, 1, 1]
['Table 5 shows the results on the word to sense dataset of the SemEval-2014 CLSS task, according to Pearson (r × 100) and Spearman (rho × 100) correlation scores and for the four strategies.', 'As can be seen from the low overall performance, the task is a very challenging benchmark with many WordNet out-of-vocabulary...
[['r', 'rho'], None, ['DECONF'], ['S2A', 'DECONF'], ['Chen et al. (2014)*', 'S2W', 'Iacobacci et al. (2015)*'], ['S2W', 'S2A']]
1
D16-1175table_1
Example feature spaces for the lexemes white and clothes extracted from the dependency tree of Figure 1. Not all features are displayed for space reasons. Offsetting amod:shoes by amod results in an empty dependency path, leaving just the word co-occurrence :shoes as a feature.
5
[['white', 'Distributional Features', 'amod:shoes', 'Offset Features (by amod)', ':shoes'], ['clothes', 'Distributional Features', 'amod:clean', 'Offset Features (by amod)', '-'], ['white', 'Distributional Features', 'amod:dobj:bought', 'Offset Features (by amod)', 'dobj:bought'], ['clothes', 'Distributional Features',...
1
[['Co-occurrence Count']]
[['1'], ['1'], ['1'], ['1'], ['1'], ['1'], ['1'], ['1']]
column
['Co-occurrence Count']
[]
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Co-occurrence Count</th> </tr> </thead> <tbody> <tr> <td>white || Distributional Features || amod:shoes || Offset Features (by amod) || :shoes</td> <td>1</td> </tr> <tr> <td>clothes || D...
Table 1
table_1
D16-1175
4
emnlp2016
Table 1 shows a number of features extracted from the aligned dependency trees in Figure 1 and highlights that adjectives and nouns do not share many features if only first order dependencies would be considered. However through the inclusion of inverse and higher order dependency paths we can observe that the second o...
[1, 2, 2, 2, 2]
['Table 1 shows a number of features extracted from the aligned dependency trees in Figure 1 and highlights that adjectives and nouns do not share many features if only first order dependencies would be considered.', 'However through the inclusion of inverse and higher order dependency paths we can observe that the sec...
[None, None, None, None, None]
0
D16-1175table_3
Effect of the magnitude of the shift parameter k in SPPMI on the word similarity tasks. Boldface means best performance per dataset.
2
[['APTs', 'k = 1'], ['APTs', 'k = 5'], ['APTs', 'k = 10'], ['APTs', 'k = 40'], ['APTs', 'k = 100']]
2
[['MEN', 'without DI'], ['MEN', 'with DI'], ['SimLex-999', 'without DI'], ['SimLex-999', 'with DI'], ['WordSim-353 (rel)', 'without DI'], ['WordSim-353 (rel)', 'with DI'], ['WordSim-353 (sub)', 'without DI'], ['WordSim-353 (sub)', 'with DI']]
[['0.54', '0.52', '0.31', '0.30', '0.34', '0.27', '0.62', '0.60'], ['0.64', '0.65', '0.35', '0.36', '0.56', '0.51', '0.74', '0.73'], ['0.63', '0.66', '0.35', '0.36', '0.56', '0.55', '0.75', '0.74'], ['0.63', '0.68', '0.30', '0.32', '0.55', '0.61', '0.75', '0.76'], ['0.61', '0.67', '0.26', '0.29', '0.47', '0.60', '0.71'...
column
['similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity']
['APTs']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MEN || without DI</th> <th>MEN || with DI</th> <th>SimLex-999 || without DI</th> <th>SimLex-999 || with DI</th> <th>WordSim-353 (rel) || without DI</th> <th>WordSim-353 (rel) || with DI</th>...
Table 3
table_3
D16-1175
7
emnlp2016
Table 3 highlights the effect of the SPPMI shift parameter k, while keeping the number of neighbours fixed at 30 and using the static top n neighbour retrieval function. For the APT model, a value of k = 40 performs best (except for SimLex-999, where smaller shifts give better results), with a performance drop-off for ...
[1, 1, 2, 2, 2, 2]
['Table 3 highlights the effect of the SPPMI shift parameter k, while keeping the number of neighbours fixed at 30 and using the static top n neighbour retrieval function.', 'For the APT model, a value of k = 40 performs best (except for SimLex-999, where smaller shifts give better results), with a performance drop-off...
[None, ['k = 40', 'MEN', 'SimLex-999', 'WordSim-353 (rel)', 'WordSim-353 (sub)'], ['k = 1'], None, None, None]
1
D16-1175table_4
Neighbour retrieval function comparison. Boldface means best performance on a dataset per VSM type. *) With 3 significant figures, the density window approach (0.713) is slightly better than the baseline without DI (0.708), static top n (0.710) and WordNet (0.710).
2
[['APTs (k = 40)', 'MEN'], ['APTs (k = 40)', 'SimLex-999'], ['APTs (k = 40)', 'WordSim-353 (rel)'], ['APTs (k = 40)', 'WordSim-353 (sub)'], ['Untyped VSM (k = 1)', 'MEN*'], ['Untyped VSM (k = 1)', 'SimLex-999'], ['Untyped VSM (k = 1)', 'WordSim-353 (rel)'], ['Untyped VSM (k = 1)', 'WordSim-353 (sub)']]
1
[['No Distributional Inference'], ['Density Window'], ['Static Top n'], ['WordNet']]
[['0.63', '0.67', '0.68', '0.63'], ['0.3', '0.32', '0.32', '0.38'], ['0.55', '0.62', '0.61', '0.56'], ['0.75', '0.78', '0.76', '0.77'], ['0.71', '0.71', '0.71', '0.71'], ['0.3', '0.29', '0.3', '0.36'], ['0.6', '0.64', '0.64', '0.52'], ['0.7', '0.73', '0.72', '0.67']]
column
['similarity', 'similarity', 'similarity', 'similarity']
['APTs (k = 40)', 'Untyped VSM (k = 1)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No Distributional Inference</th> <th>Density Window</th> <th>Static Top n</th> <th>WordNet</th> </tr> </thead> <tbody> <tr> <td>APTs (k = 40) || MEN</td> <td>0.63</td> <td>0.67<...
Table 4
table_4
D16-1175
7
emnlp2016
Table 4 shows that distributional inference successfully infers missing information for both model types, resulting in improved performance over models without the use of DI on all datasets. The improvements are typically larger for the APT model, suggesting that it is missing more distributional knowledge in its eleme...
[1, 1, 1, 1, 2, 2]
['Table 4 shows that distributional inference successfully infers missing information for both model types, resulting in improved performance over models without the use of DI on all datasets.', 'The improvements are typically larger for the APT model, suggesting that it is missing more distributional knowledge in its ...
[None, ['APTs (k = 40)'], ['Density Window', 'Static Top n'], ['WordNet', 'SimLex-999'], ['WordNet', 'SimLex-999'], ['Untyped VSM (k = 1)', 'APTs (k = 40)']]
1
D16-1175table_6
Neighbour retrieval function. Underlined means best performance per phrase type, boldface means best average performance overall.
2
[['APTs', 'Adjective-Noun'], ['APTs', 'Noun-Noun'], ['APTs', 'Verb-Object'], ['APTs', 'Average']]
2
[['No Distributional Inference', 'intersection'], ['No Distributional Inference', 'union'], ['Density Window', 'intersection'], ['Density Window', 'union'], ['Static Top n', 'intersection'], ['Static Top n', 'union'], ['WordNet', 'intersection'], ['WordNet', 'union']]
[['0.10', '0.41', '0.31', '0.39', '0.25', '0.40', '0.12', '0.41'], ['0.18', '0.42', '0.34', '0.38', '0.37', '0.45', '0.24', '0.36'], ['0.17', '0.36', '0.36', '0.36', '0.34', '0.35', '0.25', '0.36'], ['0.15', '0.40', '0.34', '0.38', '0.32', '0.40', '0.20', '0.38']]
column
['similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity', 'similarity']
['APTs']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No Distributional Inference || intersection</th> <th>No Distributional Inference || union</th> <th>Density Window || intersection</th> <th>Density Window || union</th> <th>Static Top n || interse...
Table 6
table_6
D16-1175
9
emnlp2016
Table 6 shows that the static top n and density window neighbour retrieval functions perform very similar again. The density window retrieval function outperforms static top n for composition by intersection and vice versa for composition by union. The WordNet approach is competitive for composition by union, but under...
[1, 1, 1, 2, 1]
['Table 6 shows that the static top n and density window neighbour retrieval functions perform very similar again.', 'The density window retrieval function outperforms static top n for composition by intersection and vice versa for composition by union.', 'The WordNet approach is competitive for composition by union, b...
[['Density Window', 'Static Top n'], ['Density Window', 'Static Top n'], ['WordNet'], None, ['intersection', 'union']]
1
D16-1175table_7
Results for the Mitchell and Lapata (2010) dataset. Results in brackets denote the performance of the respective models without the use of distributional inference. Underlined means best within group, boldfaced means best overall.
2
[['Model', 'APT – union'], ['Model', 'APT – intersect'], ['Model', 'Untyped VSM – addition'], ['Model', 'Untyped VSM – multiplication'], ['Model', 'Mitchell and Lapata (2010) (untyped VSM & multiplication)'], ['Model', 'Blacoe and Lapata (2012) (untyped VSM & multiplication)'], ['Model', 'Hashimoto et al. (2014) (PAS-C...
1
[['Adjective-Noun'], ['Noun-Noun'], ['Verb-Object'], ['Average']]
[['0.45 (0.45)', '0.45 (0.43)', '0.38 (0.37)', '0.43 (0.42)'], ['0.50 (0.38)', '0.49 (0.44)', '0.43 (0.36)', '0.47 (0.39)'], ['0.46 (0.46)', '0.40 (0.41)', '0.38 (0.33)', '0.41 (0.40)'], ['0.46 (0.42)', '0.48 (0.45)', '0.40 (0.39)', '0.45 (0.42)'], ['0.46', '0.49', '0.37', '0.44'], ['0.48', '0.50', '0.35', '0.44'], ['0...
column
['similarity', 'similarity', 'similarity', 'similarity']
['APT – union', 'Untyped VSM – addition']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Adjective-Noun</th> <th>Noun-Noun</th> <th>Verb-Object</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>Model || APT – union</td> <td>0.45 (0.45)</td> <td>0.45 (0.43)</td> ...
Table 7
table_7
D16-1175
9
emnlp2016
Table 7 shows that composition by intersection with distributional inference considerably improves upon the best results for APT models without distributional inference and for untyped count-based models, and is competitive with the state-of-the-art neural network based models of Hashimoto et al. (2014) and Wieting et ...
[1, 1, 1, 2, 2, 2, 2]
['Table 7 shows that composition by intersection with distributional inference considerably improves upon the best results for APT models without distributional inference and for untyped count-based models, and is competitive with the state-of-the-art neural network based models of Hashimoto et al. (2014) and Wieting e...
[None, None, ['APT – union', 'Untyped VSM – addition'], None, None, None, None]
1
D16-1179table_2
VPE detection results (baseline F1, Machine Learning F1, ML F1 improvement) obtained with 5-fold cross validation.
2
[['Auxiliary', 'Do'], ['Auxiliary', 'Be'], ['Auxiliary', 'Have'], ['Auxiliary', 'Modal'], ['Auxiliary', 'To'], ['Auxiliary', 'So'], ['Auxiliary', 'ALL']]
1
[['Baseline'], ['ML'], ['Change']]
[['0.83', '0.89', '0.06'], ['0.34', '0.63', '0.29'], ['0.43', '0.75', '0.32'], ['0.8', '0.86', '0.06'], ['0.76', '0.79', '0.03'], ['0.67', '0.86', '0.19'], ['0.71', '0.82', '0.11']]
column
['F1', 'F1', 'F1']
['Change']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Baseline</th> <th>ML</th> <th>Change</th> </tr> </thead> <tbody> <tr> <td>Auxiliariy || Do</td> <td>0.83</td> <td>0.89</td> <td>0.06</td> </tr> <tr> <td>Auxiliariy ||...
Table 2
table_2
D16-1179
4
emnlp2016
Results. Using a standard logistic regression classifier, we achieve an 11% improvement in accuracy over the baseline approach, as can be seen in Table 2. The rule-based approach was insufficient for be and have VPE, where logistic regression provides the largest improvements. Although we improve upon the baseline by 2...
[2, 1, 1, 1, 2]
['Results.', 'Using a standard logistic regression classifier, we achieve an 11% improvement in accuracy over the baseline approach, as can be seen in Table 2.', 'The rule-based approach was insufficient for be and have VPE, where logistic regression provides the largest improvements.', 'Although we improve upon the ba...
[None, ['ALL', 'Change'], ['Be', 'Have'], ['Be'], None]
1
D16-1179table_3
Results (precision, recall, F1) for VPE detection using the train-test split proposed by Bos and Spenader (2011).
2
[['Test Set Results', 'Liu et al. (2016)'], ['Test Set Results', 'This work']]
1
[['P'], ['R'], ['F1']]
[['0.8022', '0.6134', '0.6952'], ['0.7574', '0.8655', '0.8078']]
column
['P', 'R', 'F1']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Test Set Results || Liu et al. (2016)</td> <td>0.8022</td> <td>0.6134</td> <td>0.6952</td> </tr> <tr> <t...
Table 3
table_3
D16-1179
4
emnlp2016
In Table 3, we compare our results to those achieved by Liu et al. (2016) when using WSJ sets 0-14 for training and sets 20-24 for testing. We improve on their overall accuracy by over 11%, due to the 25% improvement in recall achieved by our method. Our results show that oversampling the positive examples in the datas...
[1, 1, 2, 2, 2]
['In Table 3, we compare our results to those achieved by Liu et al. (2016) when using WSJ sets 0-14 for training and sets 20-24 for testing.', 'We improve on their overall accuracy by over 11%, due to the 25% improvement in recall achieved by our method.', 'Our results show that oversampling the positive examples in t...
[['Liu et al. (2016)', 'This work'], ['R', 'This work'], None, None, None]
1
D16-1179table_6
Feature ablation results (feature set excluded, precision, recall, F1) on VPE detection; obtained with 5-fold cross validation.
2
[['Excluded', 'Auxiliary'], ['Excluded', 'Lexical'], ['Excluded', 'Syntactic'], ['Excluded', 'NONE']]
1
[['P'], ['R'], ['F1']]
[['0.7982', '0.7611', '0.7781'], ['0.6937', '0.8408', '0.7582'], ['0.7404', '0.733', '0.7343'], ['0.8242', '0.812', '0.817']]
column
['P', 'R', 'F1']
['Syntactic']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Excluded || Auxiliary</td> <td>0.7982</td> <td>0.7611</td> <td>0.7781</td> </tr> <tr> <td>Excluded || Le...
Table 6
table_6
D16-1179
8
emnlp2016
Trigger Detection. In Table 6 we can see that the syntactic features were essential for obtaining the best results, as can be seen by the 8.3% improvement, from 73.4% to 81.7%, obtained from including these features. This shows that notions from theoretical linguistics can prove to be invaluable when approaching the pr...
[2, 1, 2]
['Trigger Detection.', 'In Table 6 we can see that the syntactic features were essential for obtaining the best results, as can be seen by the 8.3% improvement, from 73.4% to 81.7%, obtained from including these features.', 'This shows that notions from theoretical linguistics can prove to be invaluable when approachin...
[None, ['Syntactic', 'F1', 'NONE'], None]
1
D16-1179table_7
Feature ablation results (feature set excluded, precision, recall, F1) on antecedent identification; obtained with 5-fold cross validation.
2
[['Features Excluded', 'Alignment'], ['Features Excluded', 'NP Relation'], ['Features Excluded', 'Syntactic'], ['Features Excluded', 'Matching'], ['Features Excluded', 'NONE']]
1
[['Accuracy']]
[['0.6511'], ['0.6428'], ['0.5495'], ['0.6504'], ['0.6518']]
column
['Accuracy']
['Features Excluded']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Features Excluded || Alignment</td> <td>0.6511</td> </tr> <tr> <td>Features Excluded || NP Relation</td> <td>0.6428</td> </tr> <t...
Table 7
table_7
D16-1179
8
emnlp2016
Antecedent Identification. Table 7 presents the results from a feature ablation study on antecedent identification. The most striking observation is that the alignment features do not add any significant improvement in the results. This is either because there simply is not an inherent parallelism between the trigger ...
[2, 1, 1, 2, 1, 2]
['Antecedent Identification.', 'Table 7 presents the results from a feature ablation study on antecedent\r\nidentification.', 'The most striking observation is that the alignment features do not add any significant improvement in the results.', 'This is either because there simply is not an inherent parallelism between...
[None, None, ['Alignment', 'Accuracy'], None, ['Syntactic', 'Accuracy'], None]
1
D16-1181table_1
1-best supertagging results on both the dev and test sets. BLSTM is the baseline model without attention; BLSTM-local and -global are the two attention-based models.
2
[['Model', 'C&C'], ['Model', 'Xu et al. (2015)'], ['Model', 'Xu et al. (2016)'], ['Model', 'Lewis et al. (2016)'], ['Model', 'Vaswani et al. (2016)'], ['Model', 'Vaswani et al. (2016) +LM +beam'], ['Model', 'BLSTM'], ['Model', 'BLSTM-local'], ['Model', 'BLSTM-global']]
1
[['Dev'], ['Test']]
[['91.50', '92.02'], ['93.07', '93.00'], ['93.49', '93.52'], ['94.1', '94.3'], ['94.08', '-'], ['94.24', '94.50'], ['94.11', '94.29'], ['94.31', '94.46'], ['94.22', '94.42']]
column
['Accuracy', 'Accuracy']
['BLSTM', 'BLSTM-local', 'BLSTM-global']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || C&amp;C</td> <td>91.50</td> <td>92.02</td> </tr> <tr> <td>Model || Xu et al. (2015)</td> <td>93.07</td> ...
Table 1
table_1
D16-1181
7
emnlp2016
Table 1 summarizes 1-best supertagging results. Our baseline BLSTM model without attention achieves the same level of accuracy as Lewis et al. (2016) and the baseline BLSTM model of Vaswani et al. (2016). Compared with the latter, our hidden state size is 50% smaller (256 vs. 512). For training and testing the local at...
[1, 1, 2, 1, 1]
['Table 1 summarizes 1-best supertagging results.', 'Our baseline BLSTM model without attention achieves the same level of accuracy as Lewis et al. (2016) and the baseline BLSTM model of Vaswani et al. (2016).', 'Compared with the latter, our hidden state size is 50% smaller (256 vs. 512).', 'For training and testing t...
[None, ['BLSTM', 'Lewis et al. (2016)', 'Vaswani et al. (2016)'], None, ['BLSTM-local', 'Xu et al. (2016)', 'Vaswani et al. (2016)'], ['BLSTM-global', 'BLSTM-local']]
1
D16-1181table_4
Parsing results on the dev (Section 00) and test (Section 23) sets with 100% coverage, with all LSTM models using the BLSTM-local supertagging model. All experiments using auto POS. CAT (lexical category assignment accuracy). LSTM-greedy is the full greedy parser.
2
[['Model', 'C&C (normal-form)'], ['Model', 'C&C (dependency hybrid)'], ['Model', 'Zhang and Clark (2011)'], ['Model', 'Xu et al. (2014)'], ['Model', 'Ambati et al. (2016)'], ['Model', 'Xu et al. (2016)-greedy'], ['Model', 'Xu et al. (2016)-XF1'], ['Model', 'LSTM-greedy'], ['Model', 'LSTM-XF1'], ['Model', 'LSTM-XF1']]
2
[['Beam', '-'], ['Section 00', 'LP'], ['Section 00', 'LR'], ['Section 00', 'LF'], ['Section 00', 'CAT'], ['Section 23', 'LP'], ['Section 23', 'LR'], ['Section 23', 'LF'], ['Section 23', 'CAT']]
[['-', '85.18', '82.53', '83.83', '92.39', '85.45', '83.97', '84.70', '92.83'], ['-', '86.07', '82.77', '84.39', '92.57', '86.24', '84.17', '85.19', '93.0'], ['16', '87.15', '82.95', '85.0', '92.77', '87.43', '83.61', '85.48', '93.12'], ['128', '86.29', '84.09', '85.18', '92.75', '87.03', '85.08', '86.04', '93.1'], ['1...
column
['Beam', 'LP', 'LR', 'LF', 'CAT', 'LP', 'LR', 'LF', 'CAT']
['LSTM-greedy', 'LSTM-XF1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Beam || -</th> <th>Section 00 || LP</th> <th>Section 00 || LR</th> <th>Section 00 || LF</th> <th>Section 00 || CAT</th> <th>Section 23 || LP</th> <th>Section 23 || LR</th> <th>Sect...
Table 4
table_4
D16-1181
8
emnlp2016
The XF1 model. Table 4 also shows the results for the XF1 models (LSTM-XF1), which use all four types of embeddings. We used a beam size of 8, and a ? value of 0.06 for both training and testing (tuned on the dev set), and training took 12 epochs to converge (Fig. 4b), with an F1 of 87.45% on the dev set. Decoding the ...
[0, 1, 2, 1, 1]
['The XF1 model.', 'Table 4 also shows the results for the XF1 models (LSTM-XF1), which use all four types of embeddings.', 'We used a beam size of 8, and a ? value of 0.06 for both training and testing (tuned on the dev set), and training took 12 epochs to converge (Fig. 4b), with an F1 of 87.45% on the dev set.', 'De...
[None, ['LSTM-XF1'], ['Beam'], ['LSTM-greedy', 'LR', 'LF'], ['LSTM-greedy', 'LF', 'Xu et al. (2014)']]
1
D16-1182table_1
Parsers’ performance in terms of accuracy and robustness. The best result in each column is given in bold, and the worst result is in italics.
2
[['Parser', 'Malt'], ['Parser', 'Mate'], ['Parser', 'MST'], ['Parser', 'SNN'], ['Parser', 'SyntaxNet'], ['Parser', 'Turbo'], ['Parser', 'Tweebo'], ['Parser', 'Yara']]
3
[['Train on PTB §1-21', 'UAS', 'PTB §23'], ['Train on PTB §1-21', 'Robustness F1', 'ESL'], ['Train on PTB §1-21', 'Robustness F1', 'MT'], ['Train on Tweebanktrain', 'UAF1', 'Tweebanktest'], ['Train on Tweebanktrain', 'Robustness F1', 'ESL'], ['Train on Tweebanktrain', 'Robustness F1', 'MT']]
[['89.58', '93.05', '76.26', '77.48', '94.36', '80.66'], ['93.16', '93.24', '77.07', '76.26', '91.83', '75.74'], ['91.17', '92.80', '76.51', '73.99', '92.37', '77.71'], ['90.70', '93.15', '74.18', '53.4', '88.90', '71.54'], ['93.04', '93.24', '76.39', '75.75', '88.78', '81.87'], ['92.84', '93.72', '77.79', '79.42', '93...
column
['UAS', 'Robustness F1', 'Robustness F1', 'UAF1', 'Robustness F1', 'Robustness F1']
['Parser']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train on PTB §1-21 || UAS || PTB §23</th> <th>Train on PTB §1-21 || Robustness F1 || ESL</th> <th>Train on PTB §1-21 || Robustness F1 || MT</th> <th>Train on Tweebanktrain || UAF1 || Tweebanktest</th>...
Table 1
table_1
D16-1182
5
emnlp2016
The overall performances of all parsers are shown in Table 1. Note that the Tweebo Parser is not trained on the PTB because it is a specialization of the Turbo Parser, designed to parse tweets. Table 1 shows that, for both training conditions, the parser that has the best robustness score in the ESL domai...
[1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 1, 1]
['The overall performances of all parsers are shown in Table 1.', 'Note that the Tweebo Parser is not trained on the PTB because it is a specialization of the Turbo Parser, designed to parse tweets.', 'Table 1 shows that, for both training conditions, the parser that has the best robustness score in the E...
[['Parser'], None, ['Parser', 'ESL', 'MT', 'Robustness F1'], None, ['Malt', 'Train on Tweebanktrain', 'Train on PTB §1-21'], ['Train on Tweebanktrain'], ['SNN'], None, ['Parser', 'ESL'], ['Train on Tweebanktrain', 'ESL', 'Tweebo', 'Train on PTB §1-21'], None, ['Turbo', 'Train on PTB §1-21', 'Malt', 'Train on Tweebanktr...
1
D16-1183table_2
Test SMATCH results.
2
[['Parser', 'JAMR'], ['Parser', 'CKY (Artzi et al. 2015)'], ['Parser', 'Shift Reduce'], ['Parser', 'Wang et al. (2015a)']]
1
[['P'], ['R'], ['F']]
[['67.8', '59.2', '63.2'], ['66.8', '65.7', '66.3'], ['68.1', '64.2', '66.1'], ['72.0', '67.0', '70.0']]
column
['P', 'R', 'F']
['Shift Reduce']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Parser || JAMR</td> <td>67.8</td> <td>59.2</td> <td>63.2</td> </tr> <tr> <td>Parser || CKY (Artzi et al. ...
Table 2
table_2
D16-1183
8
emnlp2016
Table 2 shows the test results using our best performing model (ensemble with syntax features). We compare our approach to the CKY parser of Artzi et al. (2015) and JAMR (Flanigan et al., 2014). We also list the results of Wang et al. (2015b), who demonstrated the benefit of auxiliary analyzers and is the current stat...
[1, 2, 2, 1, 1]
['Table 2 shows the test results using our best performing model (ensemble with syntax features).', 'We compare our approach to the CKY parser of Artzi et al. (2015) and JAMR (Flanigan et al., 2014).', 'We also list the results of Wang et al. (2015b), who demonstrated the benefit of auxiliary analyzers and is the curr...
[None, ['CKY (Artzi et al. 2015)', 'JAMR', 'Shift Reduce'], ['Wang et al. (2015a)'], ['CKY (Artzi et al. 2015)', 'Shift Reduce'], ['Shift Reduce']]
1
D16-1184table_6
Parsing performance on web queries
2
[['System', 'Stanford'], ['System', 'MSTParser'], ['System', 'LSTMParser'], ['System', 'QueryParser + label refinement'], ['System', 'QueryParser + word2vec'], ['System', 'QueryParser + label refinement + word2vec']]
2
[['All (n=1000)', 'UAS'], ['All (n=1000)', 'LAS'], ['NoFunc (n=900)', 'UAS'], ['NoFunc (n=900)', 'LAS'], ['Func (n=100)', 'UAS'], ['Func (n=100)', 'LAS']]
[['0.694', '0.602', '0.670', '0.568', '0.834', '0.799'], ['0.699', '0.616', '0.683', '0.691', '0.799', '0.766'], ['0.700', '0.608', '0.679', '0.578', '0.827', '0.790'], ['0.829', '0.769', '0.824', '0.761', '0.858', '0.818'], ['0.843', '0.788', '0.843', '0.784', '0.838', '0.812'], ['0.862', '0.804', '0.858', '0.795', '0...
column
['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS']
['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>All (n=1000) || UAS</th> <th>All (n=1000) || LAS</th> <th>NoFunc (n=900) || UAS</th> <th>NoFunc (n=900) || LAS</th> <th>Func (n=100) || UAS</th> <th>Func (n=100) || LAS</th> </tr> </thea...
Table 6
table_6
D16-1184
9
emnlp2016
Table 6 shows the results. We use 3 versions of QueryParser. The first two use random word embeddings for initialization, and the first one does not use label refinement. From the results, it can be concluded that QueryParser consistently outperformed competitors on the query parsing task. Pretrained word2vec embeddings imp...
[1, 2, 1, 1, 1, 1, 1]
['Table 6 shows the results.', 'We use 3 versions of QueryParser.', 'The first two use random word embeddings for initialization, and the first one does not use label refinement.', 'From the results, it can be concluded that QueryParser consistently outperformed competitors on the query parsing task.', 'Pretrained word2vec ...
[None, ['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec'], ['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word2vec'], ['QueryParser + label refinement', 'QueryParser + word2vec', 'QueryParser + label refinement + word...
1
D16-1185table_2
Experimental results on different methods using descriptions. Contingency-based methods generally outperforms summarization-based methods.
1
[['ILP-Ext (Banerjee et al. 2015)'], ['ILP-Abs (Banerjee et al. 2015)'], ['Our approach TREM'], ['w/o SR'], ['w/o CC'], ['w/o SR&CC (summarization only)']]
1
[['ROUGE-1'], ['ROUGE-2'], ['ROUGE-SU4']]
[['0.308', '0.112', '0.091'], ['0.361', '0.158', '0.12'], ['0.405', '0.207', '0.148'], ['0.393', '0.189', '0.144'], ['0.383', '0.171', '0.132'], ['0.374', '0.168', '0.129']]
column
['ROUGE-1', 'ROUGE-2', 'ROUGE-SU4']
['Our approach TREM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-SU4</th> </tr> </thead> <tbody> <tr> <td>ILP-Ext (Banerjee et al. 2015)</td> <td>0.308</td> <td>0.112</td> <td>0.091</td> </tr> <tr...
Table 2
table_2
D16-1185
7
emnlp2016
Table 2 shows our experimental results comparing TREM and baseline models using descriptions. In general, contingency-based methods (TREM, TREM w/o SR and TREM w/o CC) outperform summarization-based methods. Our contingency assumptions are verified as adding CC and SR both improve TREM with summarization component only...
[1, 1, 1, 1, 2, 1, 1, 2]
['Table 2 shows our experimental results comparing TREM and baseline models using descriptions.', 'In general, contingency-based methods (TREM, TREM w/o SR and TREM w/o CC) outperform summarization-based methods.', 'Our contingency assumptions are verified as adding CC and SR both improve TREM with summarization compon...
[None, ['Our approach TREM', 'w/o SR', 'w/o CC'], None, ['Our approach TREM'], None, ['Our approach TREM', 'ILP-Ext (Banerjee et al. 2015)', 'ILP-Abs (Banerjee et al. 2015)', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4'], ['ILP-Ext (Banerjee et al. 2015)'], None]
1
D16-1187table_3
Spam and nonspam review detection results in the doctor, hotel, and restaurant review domains.
1
[['SMTL-LLR'], ['MTL-LR'], ['MTRL'], ['TSVM'], ['LR'], ['SVM'], ['PU']]
1
[['Doctor'], ['Hotel'], ['Restaurant'], ['Average']]
[['85.4%', '88.7%', '87.5%', '87.2%'], ['83.1%', '86.7%', '85.7%', '85.2%'], ['82.0%', '85.4%', '84.7%', '84.0%'], ['80.6%', '84.2%', '83.8%', '82.9%'], ['79.8%', '83.5%', '83.1%', '82.1%'], ['79.0%', '83.5%', '82.9%', '81.8%'], ['68.5%', '75.4%', '74.0%', '72.6%']]
column
['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy']
['SMTL-LLR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Doctor</th> <th>Hotel</th> <th>Restaurant</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>SMTL-LLR</td> <td>85.4%</td> <td>88.7%</td> <td>87.5%</td> <td>87.2%</t...
Table 3
table_3
D16-1187
8
emnlp2016
Table 3 reports the spam and nonspam review detection accuracy of our methods SMTL-LLR and MTL-LR against all other baseline methods. In terms of 5% significance level, the differences between SMTL-LLR and the baseline methods are considered to be statistically significant. Under symmetric multi-task learning setting, o...
[1, 1, 1, 1, 1, 1]
['Table 3 reports the spam and nonspam review detection accuracy of our methods SMTL-LLR and MTL-LR against all other baseline methods.', 'In terms of 5% significance level, the differences between SMTL-LLR and the baseline methods are considered to be statistically significant.', 'Under symmetric multi-task learning se...
[None, ['SMTL-LLR', 'MTL-LR', 'MTRL', 'TSVM', 'LR', 'SVM', 'PU'], ['SMTL-LLR', 'MTL-LR', 'MTRL', 'TSVM', 'LR', 'SVM', 'PU'], ['MTL-LR', 'LR', 'SVM', 'MTRL', 'Average'], ['SMTL-LLR', 'MTL-LR', 'Average', 'MTRL', 'TSVM'], ['PU']]
1
D16-1194table_5
Evaluation of annotators performance
2
[['Parameters', 'True-positive'], ['Parameters', 'True-negative'], ['Parameters', 'False-positive'], ['Parameters', 'False-negative'], ['Parameters', 'Precision'], ['Parameters', 'Recall'], ['Parameters', 'Accuracy'], ['Parameters', 'F-Measure']]
1
[['Expert 1'], ['Expert 2'], ['Expert 3']]
[['130', '99', '125'], ['161', '164', '166'], ['20', '51', '25'], ['14', '11', '9'], ['86.67', '66.00', '83.33'], ['90.27', '90.00', '93.28'], ['89.54', '80.92', '89.54'], ['88.43', '76.15', '88.03']]
column
['Kappa', 'Kappa', 'Kappa']
['Parameters']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Expert 1</th> <th>Expert 2</th> <th>Expert 3</th> </tr> </thead> <tbody> <tr> <td>Parameters || True-positive</td> <td>130</td> <td>99</td> <td>125</td> </tr> <tr> <t...
Table 5
table_5
D16-1194
7
emnlp2016
As Table 5 shows, the best performance of annotators is highlighted and regarded as the upper bound performance (UB) of the NLD task on our dataset. The state-of-the-art unsupervised PD system named STS (Islam and Inkpen, 2008), as well as the state-of-the-art supervised PD system named RAE (Socher et al., 2011), are u...
[1, 2, 2, 2, 2]
['As Table 5 shows, the best performance of annotators is highlighted and regarded as the upper bound performance (UB) of the NLD task on our dataset.', 'The state-of-the-art unsupervised PD system named STS (Islam and Inkpen, 2008), as well as the state-of-the-art supervised PD system named RAE (Socher et al., 2011), ...
[None, None, None, None, None]
0
D16-1194table_6
Evaluation of NLDS
2
[['Method', 'UB'], ['Method', 'STS'], ['Method', 'RAE'], ['Method', 'Uni-gram'], ['Method', 'Bi-gram'], ['Method', 'Tri-gram'], ['Method', 'POS'], ['Method', 'Lexical'], ['Method', 'Flickr'], ['Method', 'NLDS']]
1
[['R (%)'], ['P (%)'], ['A (%)'], ['F1 (%)']]
[['92.38', '86.67', '89.54', '88.43'], ['100.0', '46.15', '46.15', '63.16'], ['100.0', '46.4', '46.4', '63.39'], ['11.11', '35.29', '52.8', '16.9'], ['44.44', '61.54', '64.0', '51.61'], ['50.0', '62.79', '65.6', '55.67'], ['77.78', '72.77', '78.4', '76.52'], ['85.18', '59.74', '68.8', '70.23'], ['48.96', '94.0', '74.0'...
column
['R (%)', 'P (%)', 'A (%)', 'F1 (%)']
['NLDS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R (%)</th> <th>P (%)</th> <th>A (%)</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Method || UB</td> <td>92.38</td> <td>86.67</td> <td>89.54</td> <td>88.43</td> ...
Table 6
table_6
D16-1194
8
emnlp2016
To assess the importance of each feature utilized in the proposed framework, we performed a feature ablation study (Cohen and Howe, 1988) on N-gram, POS analysis, lexical analysis (GTM and WordNet), and Flickr, separately on the DStest dataset. The results are listed in Table 6. A series of cross-validation and Student...
[2, 1, 2, 1, 1]
['To assess the importance of each feature utilized in the proposed framework, we performed a feature ablation study (Cohen and Howe, 1988) on N-gram, POS analysis, lexical analysis (GTM and WordNet), and Flickr, separately on the DStest dataset.', 'The results are listed in Table 6.', 'A series of cross-validation and...
[['Uni-gram', 'Bi-gram', 'Tri-gram', 'POS', 'Lexical', 'Flickr'], None, ['NLDS', 'STS', 'RAE', 'UB'], ['NLDS', 'STS', 'RAE', 'UB'], ['NLDS']]
1
D16-1196table_3
Results ILCI corpus (% BLEU). The reported scores are: W: word-level, WX: word-level followed by transliteration of OOV words, M: morph-level, MX: morph-level followed by transliteration of OOV morphemes, C: character-level, O: orthographic syllable. The values marked in bold indicate the best scores for the language pa...
1
[['ben-hin'], ['pan-hin'], ['kok-mar'], ['mal-tam'], ['tel-mal'], ['hin-mal'], ['mal-hin']]
1
[['W'], ['WX'], ['M'], ['MX'], ['C'], ['O']]
[['31.23', '32.79', '32.17', '32.32', '27.95', '33.46'], ['68.96', '71.71', '71.29', '71.42', '71.26', '72.51'], ['21.39', '21.90', '22.81', '22.82', '19.83', '23.53'], ['6.52', '7.01', '7.61', '7.65', '4.50', '7.86'], ['6.62', '6.94', '7.86', '7.89', '6.00', '8.51'], ['8.49', '8.77', '9.23', '9.26', '6.28', '10.45'], ...
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['O']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>W</th> <th>WX</th> <th>M</th> <th>MX</th> <th>C</th> <th>O</th> </tr> </thead> <tbody> <tr> <td>ben-hin</td> <td>31.23</td> <td>32.79</td> <td>32.17</td> <td...
Table 3
table_3
D16-1196
4
emnlp2016
Comparison of Translation Units: Table 3 compares the BLEU scores for various translation systems. The orthographic syllable level system is clearly better than all other systems. It significantly outperforms the character-level system (by 46% on average). The system also outperforms two strong baselines which addre...
[1, 1, 1, 1, 1, 1, 1, 1, 1]
['Table 3 compares the BLEU scores for various translation systems.', 'The orthographic syllable level system is clearly better than all other systems.', 'It significantly outperforms the character-level system (by 46% on average).', 'The system also outperforms two strong baselines which address data sparsity: (a) ...
[None, None, None, None, None, None, None, None, None]
1
D16-1200table_3
Results for our system and other participants in the SemEval 2015 Task 4: TimeLine.
2
[['System', 'GPLSIUA 1'], ['System', 'GPLSIUA 2'], ['System', 'HeidelToul 1'], ['System', 'HeidelToul 2'], ['System', 'Our System Binary'], ['System', 'Our System Alignment']]
2
[['Airbus', 'F1'], ['GM', 'F1'], ['Stock', 'F1'], ['Total', 'P'], ['Total', 'R'], ['Total', 'F1']]
[['22.35', '19.28', '33.59', '21.73', '30.46', '25.36'], ['20.47', '16.17', '29.90', '20.08', '26.00', '22.66'], ['19.62', '7.25', '20.37', '20.11', '14.76', '17.03'], ['16.50', '10.82', '25.89', '13.58', '28.23', '18.34'], ['17.99', '20.97', '34.95', '25.97', '24.79', '25.37'], ['25.65', '26.64', '32.35', '29.05', '28...
column
['F1', 'F1', 'F1', 'P', 'R', 'F1']
['Our System Binary', 'Our System Alignment']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Airbus || F1</th> <th>GM || F1</th> <th>Stock || F1</th> <th>Total || P</th> <th>Total || R</th> <th>Total || F1</th> </tr> </thead> <tbody> <tr> <td>System || GPLSIUA 1</td> ...
Table 3
table_3
D16-1200
5
emnlp2016
In Table 3 we compare the binary classification model (Our System Binary) against the alignment model (Our System Alignment) and show that the latter outperforms the former by a margin of 3.2 points in F-score, achieving a micro F1-score of 28.58 across the three test corpora, thus confirming the benefits of joint infe...
[1, 1]
['In Table 3 we compare the binary classification model (Our System Binary) against the alignment model (Our System Alignment) and show that the latter outperforms the former by a margin of 3.2 points in F-score, achieving a micro F1-score of 28.58 across the three test corpora, thus confirming the benefits of joint in...
[['Our System Binary', 'Our System Alignment'], None]
1
D16-1204table_1
Youtube dataset: METEOR and BLEU@4 in %, and human ratings (1-5) on relevance and grammar. Best results in bold, * indicates significant over S2VT.
3
[['Model', 'S2VT', '-'], ['Model', 'Early Fusion', '-'], ['Model', 'Late Fusion', '-'], ['Model', 'Deep Fusion', '-'], ['Model', 'Glove', '-'], ['Model', 'Glove+Deep', '- Web Corpus'], ['Model', 'Glove+Deep', '- In-Domain'], ['Model', 'Ensemble', '-']]
1
[['METEOR'], ['B-4'], ['Relevance'], ['Grammar']]
[['29.2', '37.0', '2.06', '3.76'], ['29.6', '37.6', '-', '-'], ['29.4', '37.2', '-', '-'], ['29.6', '39.3', '-', '-'], ['30.0', '37.0', '-', '-'], ['30.3', '38.1', '2.12', '4.05*'], ['30.3', '38.8', '2.21*', '4.17*'], ['31.4', '42.1', '2.24*', '4.20*']]
column
['METEOR', 'B-4', 'Relevance', 'Grammar']
['Ensemble']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>B-4</th> <th>Relevance</th> <th>Grammar</th> </tr> </thead> <tbody> <tr> <td>Model || S2VT || -</td> <td>29.2</td> <td>37.0</td> <td>2.06</td> <td>3.76...
Table 1
table_1
D16-1204
4
emnlp2016
Comparison of the proposed techniques in Table 1 shows that Deep Fusion performs well on both METEOR and BLEU, incorporating Glove embeddings substantially increases METEOR, and combining them both does best. Our final model is an ensemble (weighted average) of the Glove, and the two Glove+Deep Fusion models trained on...
[1, 2, 2, 2, 2]
['Comparison of the proposed techniques in Table 1 shows that Deep Fusion performs well on both METEOR and BLEU, incorporating Glove embeddings substantially increases METEOR, and combining them both does best.', 'Our final model is an ensemble (weighted average) of the Glove, and the two Glove+Deep Fusion models train...
[['Deep Fusion', 'METEOR', 'B-4', 'Glove+Deep', 'Ensemble'], ['Ensemble'], None, ['Ensemble'], None]
1
D16-1207table_2
Accuracy under cross-domain evaluation; the best result for each dataset is indicated in bold.
2
[['Train/Test', 'Dropout (beta) = 0.3'], ['Train/Test', 'Dropout (beta) = 0.5'], ['Train/Test', 'Dropout (beta) = 0.7'], ['Train/Test', 'Robust Regularization (lambda) = 10^-3'], ['Train/Test', 'Robust Regularization (lambda) = 10^-2'], ['Train/Test', 'Robust Regularization (lambda) = 10^-1'], ['Train/Test', 'Robust Re...
2
[['MR/CR', '67.5'], ['CR/MR', '61.0']]
[['71.6', '62.2'], ['71.0', '62.1'], ['70.9', '62.0'], ['70.8', '61.6'], ['71.1', '62.5'], ['72.0', '62.2'], ['71.8', '62.3'], ['72.0', '62.4']]
column
['accuracy', 'accuracy']
['Robust Regularization (lambda) = 10^-3', 'Robust Regularization (lambda) = 10^-2', 'Robust Regularization (lambda) = 10^-1', 'Robust Regularization (lambda) = 1', 'Dropout + Robust beta = 0.5 lambda = 10^-2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR/CR || 67.5</th> <th>CR/MR || 61.0</th> </tr> </thead> <tbody> <tr> <td>Train/Test || Dropout (beta) = 0.3</td> <td>71.6</td> <td>62.2</td> </tr> <tr> <td>Train/Test || Dropo...
Table 2
table_2
D16-1207
5
emnlp2016
Table 2 presents the results of the cross-domain experiment, whereby we train a model on MR and test on CR, and vice versa, to measure the robustness of the different regularization methods in a more real-world setting. Once again, we see that our regularization method is superior to word-level dropout and the baseline...
[1, 1]
['Table 2 presents the results of the cross-domain experiment, whereby we train a model on MR and test on CR, and vice versa, to measure the robustness of the different regularization methods in a more real-world setting.', 'Once again, we see that our regularization method is superior to word-level dropout and the bas...
[None, ['Robust Regularization (lambda) = 10^-3', 'Robust Regularization (lambda) = 10^-2', 'Robust Regularization (lambda) = 10^-1', 'Robust Regularization (lambda) = 1', 'Dropout + Robust beta = 0.5 lambda = 10^-2']]
1
D16-1210table_2
Word alignment performance.
2
[['Method', 'HMM+none'], ['Method', 'HMM+sym'], ['Method', 'HMM+itg'], ['Method', 'IBM Model 4+none'], ['Method', 'IBM Model 4+sym'], ['Method', 'IBM Model 4+itg']]
2
[['Hansard Fr-En', 'F-measure'], ['Hansard Fr-En', 'AER'], ['KFTT Ja-En', 'F-measure'], ['KFTT Ja-En', 'AER'], ['BTEC Ja-En', 'F-measure'], ['BTEC Ja-En', 'AER']]
[['0.7900', '0.0646', '0.4623', '0.5377', '0.4425', '0.5575'], ['0.7923', '0.0597', '0.4678', '0.5322', '0.4534', '0.5466'], ['0.7869', '0.0629', '0.4690', '0.5310', '0.4499', '0.5501'], ['0.7780', '0.0775', '0.5379', '0.4621', '0.4454', '0.5546'], ['0.7800', '0.0693', '0.5545', '0.4455', '0.4761', '0.5239'], ['0.7791'...
column
['F-measure', 'AER', 'F-measure', 'AER', 'F-measure', 'AER']
['HMM+itg']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hansard Fr-En || F-measure</th> <th>Hansard Fr-En || AER</th> <th>KFTT Ja-En || F-measure</th> <th>KFTT Ja-En || AER</th> <th>BTEC Ja-En || F-measure</th> <th>BTEC Ja-En || AER</th> </tr>...
Table 2
table_2
D16-1210
4
emnlp2016
Table 2 shows the results of word alignment evaluations, where none denotes that the model has no constraint. In KFTT and BTEC Corpus, itg achieved significant improvement against sym and none on IBM Model 4 (p ≤ 0.05). However, in the Hansard Corpus, itg shows no improvement against sym. This indicates that capturing ...
[1, 1, 1, 2, 2, 2, 0]
['Table 2 shows the results of word alignment evaluations, where none denotes that the model has no constraint.', 'In KFTT and BTEC Corpus, itg achieved significant improvement against sym and none on IBM Model 4 (p ≤ 0.05).', 'However, in the Hansard Corpus, itg shows no improvement against sym.', 'This indicates that...
[None, ['HMM+itg', 'KFTT Ja-En', 'BTEC Ja-En', 'IBM Model 4+none', 'IBM Model 4+sym'], ['Hansard Fr-En', 'HMM+itg', 'IBM Model 4+itg', 'IBM Model 4+sym'], ['HMM+itg'], None, None, None]
1
D16-1220table_2
Performance on the proverb test data. ∗: significantly different from B with p < .001. #: significantly different from N with p < .001.
2
[['Features', 'B#'], ['Features', 'N*'], ['Features', 'N \\ s*'], ['Features', 'B ∪ N*']]
1
[['P'], ['R'], ['F']]
[['0.75', '0.70', '0.73'], ['0.86', '0.83', '0.85'], ['0.82', '0.87', '0.85'], ['0.87', '0.85', '0.86']]
column
['P', 'R', 'F']
['Features']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Features || B#</td> <td>0.75</td> <td>0.70</td> <td>0.73</td> </tr> <tr> <td>Features || N*</td> <td...
Table 2
table_2
D16-1220
4
emnlp2016
We then evaluated the best configuration from the cross-fold validation (N \ s) and the three feature sets B, N and B ∪ N on the held-out test data. The results of this experiment reported in Table 2 are similar to the cross-fold evaluation, and in this case the contribution of N features is even more accentuated. Inde...
[2, 1, 1, 2, 1, 1, 2, 2, 2]
['We then evaluated the best configuration from the cross-fold validation (N \\ s) and the three feature sets B, N and B ∪ N on the held-out test data.', 'The results of this experiment reported in Table 2 are similar to the cross-fold evaluation, and in this case the contribution of N features is even more accentuated...
[['B#', 'N*', 'N \\ s*', 'B ∪ N*'], None, ['F', 'N*', 'B ∪ N*'], None, ['N \\ s*', 'N*'], ['R', 'P', 'N \\ s*'], None, None, None]
1