paper | paper_id | table_caption | table_column_names | table_content_values | text |
|---|---|---|---|---|---|
A context sensitive real-time Spell Checker with language adaptability | 1910.11242v1 | TABLE IV: Synthetic Data Performance on three error generation algorithms | ['[BOLD] Language', '[BOLD] Random Character [BOLD] P@1', '[BOLD] Random Character [BOLD] P@10', '[BOLD] Characters Swap [BOLD] P@1', '[BOLD] Characters Swap [BOLD] P@10', '[BOLD] Character Bigrams [BOLD] P@1', '[BOLD] Character Bigrams [BOLD] P@10'] | [['Bengali', '91.243', '99.493', '82.580', '99.170', '93.694', '99.865'], ['Czech', '94.035', '99.264', '91.560', '99.154', '97.795', '99.909'], ['Danish', '84.605', '98.435', '71.805', '97.160', '90.103', '99.444'], ['Dutch', '85.332', '98.448', '72.800', '96.675', '91.159', '99.305'], ['English', '97.260', '99.897', ... | Table IV presents the system's performance on each error generation algorithm. We included only P@1 and P@10 to show the trend across all languages. "Random Character" and "Character Bigrams" include data for edit distances 1 and 2, whereas "Characters Swap" includes data for edit distance 2. |
A context sensitive real-time Spell Checker with language adaptability | 1910.11242v1 | TABLE I: Average Time taken by suggestion generation algorithms (Edit Distance = 2) (in milliseconds) | ['[BOLD] Token', '[BOLD] Trie', '[BOLD] DAWGs', '[BOLD] SDA'] | [['3', '170.50', '180.98', '112.31'], ['4', '175.04', '178.78', '52.97'], ['5', '220.44', '225.10', '25.44'], ['6', '254.57', '259.54', '7.44'], ['7', '287.19', '291.99', '4.59'], ['8', '315.78', '321.58', '2.58'], ['9', '351.19', '356.76', '1.91'], ['10', '379.99', '386.04', '1.26'], ['11', '412.02', '419.55', '1.18']... | We considered four approaches — the Trie data structure, the Burkhard-Keller Tree (BK Tree), Directed Acyclic Word Graphs (DAWGs), and the Symmetric Delete algorithm (SDA). In Table I, we report the performance of the algorithms for edit distance 2 without adding results for BK Trees because its performance was in range of couple... |
A context sensitive real-time Spell Checker with language adaptability | 1910.11242v1 | TABLE II: Synthetic Data Performance results | ['[BOLD] Language', '[BOLD] # Test', '[BOLD] P@1', '[BOLD] P@3', '[BOLD] P@5', '[BOLD] P@10', '[BOLD] MRR'] | [['[BOLD] Language', '[BOLD] Samples', '[BOLD] P@1', '[BOLD] P@3', '[BOLD] P@5', '[BOLD] P@10', '[BOLD] MRR'], ['Bengali', '140000', '91.30', '97.83', '98.94', '99.65', '94.68'], ['Czech', '94205', '95.84', '98.72', '99.26', '99.62', '97.37'], ['Danish', '140000', '85.84', '95.19', '97.28', '98.83', '90.85'], ['Dutch',... | The best performance for each language is reported in Table II. We present Precision@k for k ∈ {1, 3, 5, 10} and the mean reciprocal rank (MRR). The system performs well on the synthetic dataset, with a minimum of 80% P@1 and 98% P@10. |
A context sensitive real-time Spell Checker with language adaptability | 1910.11242v1 | TABLE III: Synthetic Data Time Performance results | ['[BOLD] Language', '[BOLD] Detection [BOLD] Time ( [ITALIC] μs)', '[BOLD] Suggestion Time [BOLD] ED=1 (ms)', '[BOLD] Suggestion Time [BOLD] ED=2 (ms)', '[BOLD] Ranking [BOLD] Time (ms)'] | [['Bengali', '7.20', '0.48', '14.85', '1.14'], ['Czech', '7.81', '0.75', '26.67', '2.34'], ['Danish', '7.28', '0.67', '23.70', '1.96'], ['Dutch', '10.80', '0.81', '30.44', '2.40'], ['English', '7.27', '0.79', '39.36', '2.35'], ['Finnish', '8.53', '0.46', '15.55', '1.05'], ['French', '7.19', '0.82', '32.02', '2.69'], ['... | The system is able to perform each sub-step in real time; the average time taken for each sub-step is reported in Table III. All the sentences used for this analysis had exactly one error according to our system. Detection time is the average time weighted over the number of tokens in the query sentence, suggestion time ... |
A context sensitive real-time Spell Checker with language adaptability | 1910.11242v1 | TABLE VI: Public dataset comparison results | ['[EMPTY]', '[BOLD] P@1', '[BOLD] P@3', '[BOLD] P@5', '[BOLD] P@10'] | [['Aspell', '60.82', '80.81', '87.26', '91.35'], ['Hunspell', '61.34', '77.86', '83.47', '87.04'], ['[ITALIC] Ours', '68.99', '83.43', '87.03', '90.16']] | A comparison of the most popular spell checkers for English (GNU Aspell and Hunspell) on this data is presented in Table VI. Since these tools work only at the word-error level, we used only unigram probabilities for ranking. Our system outperforms both systems. |
A context sensitive real-time Spell Checker with language adaptability | 1910.11242v1 | TABLE VII: False Positive Experiment Results | ['[BOLD] Language', '[BOLD] # Sentences', '[BOLD] # Total Words', '[BOLD] # Detected', '[BOLD] %'] | [['Bengali', '663748', '457140', '443650', '97.05'], ['Czech', '6128', '36846', '36072', '97.90'], ['Danish', '16198', '102883', '101798', '98.95'], ['Dutch', '55125', '1048256', '1004274', '95.80'], ['English', '239555', '4981604', '4907733', '98.52'], ['Finnish', '3757', '43457', '39989', '92.02'], ['French', '164916... | As shown in Table VII, most of the words in each language were detected as known, but a small percentage of words was still detected as errors. |
Automatically Identifying Complaints in Social Media | 1906.03890v1 | Table 9: Performance of models trained with tweets from one domain and tested on other domains. All results are reported in ROC AUC. The All line displays results on training on all categories except the category in testing. | ['[BOLD] Test', 'F&B', 'A', 'R', 'Ca', 'Se', 'So', 'T', 'E', 'O'] | [['[BOLD] Train', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Food & Bev.', '–', '58.1', '52.5', '66.4', '59.7', '58.9', '54.1', '61.4', '53.7'], ['Apparel', '63.9', '–', '74.4', '65.1', '70.8', '71.2', '68.5', '76.9', '85.6'], ['Retail', '58.8', '74.4', '–', '7... | Finally, Table 9 presents the results of models trained on tweets from one domain and tested on all tweets from other domains, with additional models trained on tweets from all domains except the one that the model is tested on. We observe that predictive performance is relatively consistent across all domains with two... |
Automatically Identifying Complaints in Social Media | 1906.03890v1 | Table 3: Number of tweets annotated as complaints across the nine domains. | ['[BOLD] Category', '[BOLD] Complaints', '[BOLD] Not Complaints'] | [['Food & Beverage', '95', '35'], ['Apparel', '141', '117'], ['Retail', '124', '75'], ['Cars', '67', '25'], ['Services', '207', '130'], ['Software & Online Services', '189', '103'], ['Transport', '139', '109'], ['Electronics', '174', '112'], ['Other', '96', '33'], ['Total', '1232', '739']] | In total, 1,232 tweets (62.4%) are complaints and 739 (37.6%) are not. The statistics for each category are given in Table 3. |
Automatically Identifying Complaints in Social Media | 1906.03890v1 | Table 4: Features associated with complaint and non-complaint tweets, sorted by Pearson correlation (r) computed between the normalized frequency of each feature and the complaint label across all tweets. All correlations are significant at p | ['[BOLD] Complaints [BOLD] Feature', '[BOLD] Complaints [ITALIC] r', '[BOLD] Not Complaints [BOLD] Feature', '[BOLD] Not Complaints [ITALIC] r'] | [['[BOLD] Unigrams', '[BOLD] Unigrams', '[BOLD] Unigrams', '[BOLD] Unigrams'], ['not', '.154', '[URL]', '.150'], ['my', '.131', '!', '.082'], ['working', '.124', 'he', '.069'], ['still', '.123', 'thank', '.067'], ['on', '.119', ',', '.064'], ['can’t', '.113', 'love', '.064'], ['service', '.112', 'lol', '.061'], ['custo... | Top unigrams and part-of-speech features specific to complaints and non-complaints are presented in Table 4. [CONTINUE] All correlations shown in these tables are statistically significant at p < .01, with Simes correction for multiple comparisons. [CONTINUE] Negations are uncovered through unigrams (not, no, won't) [C... |
Automatically Identifying Complaints in Social Media | 1906.03890v1 | Table 5: Group text features associated with tweets that are complaints and not complaints. Features are sorted by Pearson correlation (r) between each feature’s normalized frequency and the outcome. We restrict to only the top six categories for each feature type. All correlations are significant at p | ['[BOLD] Complaints [BOLD] Label', '[BOLD] Complaints [BOLD] Words', '[BOLD] Complaints [ITALIC] r', '[BOLD] Not Complaints [BOLD] Label', '[BOLD] Not Complaints [BOLD] Words', '[BOLD] Not Complaints [ITALIC] r'] | [['[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features'], ['NEGATE', 'not, no, can’t, don’t, never, nothing, doesn’t, won’t', '.271', 'POSEMO', 'thanks, love, thank, good, great, support, lol, win', '.185'], ['RELATIV', 'in, on, wh... | The top features for the LIWC categories and Word2Vec topics are presented in Table 5. [CONTINUE] the top LIWC category (NEGATE). [CONTINUE] a cluster (Issues) contains words referring to issues or errors. [CONTINUE] Complaints tend to not contain personal pronouns (he, she, it, him, you, SHEHE, MALE, FEMALE), as the fo... |
Automatically Identifying Complaints in Social Media | 1906.03890v1 | Table 6: Complaint prediction results using logistic regression (with different types of linguistic features), neural network approaches and the most frequent class baseline. Best results are in bold. | ['[BOLD] Model', '[BOLD] Acc', '[BOLD] F1', '[BOLD] AUC'] | [['Most Frequent Class', '64.2', '39.1', '0.500'], ['Logistic Regression', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Sentiment – MPQA', '64.2', '39.1', '0.499'], ['Sentiment – NRC', '63.9', '42.2', '0.599'], ['Sentiment – V&B', '68.9', '60.0', '0.696'], ['Sentiment – VADER', '66.0', '54.2', '0.654'], ['Sentiment – Stanford',... | Results are presented in Table 6. Most sentiment analysis models show accuracy above chance in predicting complaints. The best results are obtained by the Volkova & Bachrach model (Sentiment – V&B) which achieves 60 F1. However, models trained using linguistic features on the training data obtain significantly higher p... |
Automatically Identifying Complaints in Social Media | 1906.03890v1 | Table 7: Complaint prediction results using the original data set and distantly supervised data. All models are based on logistic regression with bag-of-word and Part-of-Speech tag features. | ['[BOLD] Model', '[BOLD] Acc', '[BOLD] F1', '[BOLD] AUC'] | [['Most Frequent Class', '64.2', '39.1', '0.500'], ['LR-All Features – Original Data', '80.5', '78.0', '0.873'], ['Dist. Supervision + Pooling', '77.2', '75.7', '0.853'], ['Dist. Supervision + EasyAdapt', '[BOLD] 81.2', '[BOLD] 79.0', '[BOLD] 0.885']] | Results presented in Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.05) and ROC AUC by 0.012. [CONTINUE] However, simply pooling the data actually hurts predictive performance, leading to a drop of more than 2 points in F1. |
Automatically Identifying Complaints in Social Media | 1906.03890v1 | Table 8: Performance of models in Macro F1 on tweets from each domain. | ['[BOLD] Domain', '[BOLD] In-Domain', '[BOLD] Pooling', '[BOLD] EasyAdapt'] | [['Food & Beverage', '63.9', '60.9', '[BOLD] 83.1'], ['Apparel', '[BOLD] 76.2', '71.1', '72.5'], ['Retail', '58.8', '[BOLD] 79.7', '[BOLD] 79.7'], ['Cars', '41.5', '77.8', '[BOLD] 80.9'], ['Services', '65.2', '75.9', '[BOLD] 76.7'], ['Software', '61.3', '73.4', '[BOLD] 78.7'], ['Transport', '56.4', '[BOLD] 73.4', '69.8... | Table 8 shows the model performance in macro-averaged F1 using the best performing feature set. Results show that, in all but one case, adding out-of-domain data helps predictive performance. The apparel domain is qualitatively very different from the others as a large number of complaints are about returns or the comp... |
Localization of Fake News Detection via Multitask Transfer Learning | 1910.09295v3 | Table 4: Consolidated experiment results. The first section shows finetuning results for base transfer learning methods and the baseline siamese network. The second section shows results for ULMFiT without Language Model Finetuning. The last section shows finetuning results for transformer methods augmented with multit... | ['Model', 'Val. Accuracy', 'Loss', 'Val. Loss', 'Pretraining Time', 'Finetuning Time'] | [['Siamese Networks', '77.42%', '0.5601', '0.5329', '[EMPTY]', '4m per epoch'], ['BERT', '87.47%', '0.4655', '0.4419', '66 hours', '2m per epoch'], ['GPT-2', '90.99%', '0.2172', '0.1826', '78 hours', '4m per epoch'], ['ULMFiT', '91.59%', '0.3750', '0.1972', '11 hours', '2m per epoch'], ['ULMFiT (no LM Finetuning)', '78... | BERT achieved a final accuracy of 91.20%, now marginally comparable to ULMFiT's full performance. GPT-2, on the other hand, finetuned to a final accuracy of 96.28%, a full 4.69% improvement over the performance of ULMFiT. [CONTINUE] Results for this experiment are outlined in Table 4. |
Localization of Fake News Detection via Multitask Transfer Learning | 1910.09295v3 | Table 5: An ablation study on the effects of pretraining for multitasking-based and standard GPT-2 finetuning. Results show that pretraining greatly accounts for almost half of performance on both finetuning techniques. “Acc. Inc.” refers to the boost in performance contributed by the pretraining step. “% of Perf.” ref... | ['Finetuning', 'Pretrained?', 'Accuracy', 'Val. Loss', 'Acc. Inc.', '% of Perf.'] | [['Multitasking', 'No', '53.61%', '0.7217', '-', '-'], ['[EMPTY]', 'Yes', '96.28%', '0.2197', '+42.67%', '44.32%'], ['Standard', 'No', '51.02%', '0.7024', '-', '-'], ['[EMPTY]', 'Yes', '90.99%', '0.1826', '+39.97%', '43.93%']] | In Table 5, it can be seen that generative pretraining via language modeling does account for a considerable amount of performance, constituting 44.32% of the overall performance (a boost of 42.67% in accuracy) in the multitasking setup, and constituting 43.93% of the overall performance (a boost of 39.97%) in the stan... |
Localization of Fake News Detection via Multitask Transfer Learning | 1910.09295v3 | Table 6: An ablation study on the effect of multiple heads in the attention mechanisms. The results show that increasing the number of heads improves performance, though this plateaus at 10 attention heads. All ablations use the multitask-based finetuning method. “Effect” refers to the increase or decrease of accuracy ... | ['# of Heads', 'Accuracy', 'Val. Loss', 'Effect'] | [['1', '89.44%', '0.2811', '-6.84%'], ['2', '91.20%', '0.2692', '-5.08%'], ['4', '93.85%', '0.2481', '-2.43%'], ['8', '96.02%', '0.2257', '-0.26%'], ['10', '96.28%', '0.2197', '[EMPTY]'], ['16', '96.32%', '0.2190', '+0.04']] | As shown in Table 6, reducing the number of attention heads severely decreases multitasking performance. Using only one attention head, thereby attending to only one context position at once, degrades the performance to less than the performance of 10 heads using the standard finetuning scheme. This shows that more att... |
Hierarchical Graph Network for Multi-hop Question Answering | 1911.03631v2 | Table 6: Error analysis of HGN model. For ‘Multi-hop’ errors, the model jumps to the wrong film (“Tommy (1975 film)”) instead of the correct one (“Quadrophenia (film)”) from the starting entity “rock opera 5:15”. The supporting fact for the ‘MRC’ example is “Childe Byron is a 1977 play by Romulus Linney about the strai... | ['Category', 'Question', 'Answer', 'Prediction', 'Pct (%)'] | [['Annotation', 'Were the films Tonka and 101 Dalmatians released in the same decade?', '1958 Walt Disney Western adventure film', 'No', '9'], ['Multiple Answers', 'Michael J. Hunter replaced the lawyer who became the administrator of which agency?', 'EPA', 'Environmental Protection Agency', '24'], ['Discrete Reasoning... | Table 6 shows the ablation study results on paragraph selection loss L_para and entity prediction loss L_entity. [CONTINUE] As shown in the table, using paragraph selection and entity prediction loss can further improve the joint F1 by 0.31 points, |
Hierarchical Graph Network for Multi-hop Question Answering | 1911.03631v2 | Table 1: Results on the test set of HotpotQA in the Distractor setting. HGN achieves state-of-the-art results at the time of submission (Dec. 1, 2019). (†) indicates unpublished work. RoBERTa-large is used for context encoding. | ['Model', 'Ans EM', 'Ans F1', 'Sup EM', 'Sup F1', 'Joint EM', 'Joint F1'] | [['DecompRC Min et al. ( 2019b )', '55.20', '69.63', '-', '-', '-', '-'], ['ChainEx Chen et al. ( 2019 )', '61.20', '74.11', '-', '-', '-', '-'], ['Baseline Model Yang et al. ( 2018 )', '45.60', '59.02', '20.32', '64.49', '10.83', '40.16'], ['QFE Nishida et al. ( 2019 )', '53.86', '68.06', '57.75', '84.49', '34.63', '5... | Table 1 and Table 2 summarize our results on the hidden test set of HotpotQA in the Distractor and Fullwiki setting, respectively. The proposed HGN outperforms both published and unpublished work on every metric by a significant margin. For example, HGN achieves a Joint EM/F1 score of 43.57/71.03 and 35.63/59.86 on the... |
Hierarchical Graph Network for Multi-hop Question Answering | 1911.03631v2 | Table 2: Results on the test set of HotpotQA in the Fullwiki setting. HGN achieves close to state-of-the-art results at the time of submission (Dec. 1, 2019). (†) indicates unpublished work. RoBERTa-large is used for context encoding, and SemanticRetrievalMRS is used for retrieval. Leaderboard: https://hotpotqa.github.... | ['Model', 'Ans EM', 'Ans F1', 'Sup EM', 'Sup F1', 'Joint EM', 'Joint F1'] | [['TPReasoner Xiong et al. ( 2019 )', '36.04', '47.43', '-', '-', '-', '-'], ['Baseline Model Yang et al. ( 2018 )', '23.95', '32.89', '3.86', '37.71', '1.85', '16.15'], ['QFE Nishida et al. ( 2019 )', '28.66', '38.06', '14.20', '44.35', '8.69', '23.10'], ['MUPPET Feldman and El-Yaniv ( 2019 )', '30.61', '40.26', '16.6... | Table 1 and Table 2 summarize our results on the hidden test set of HotpotQA in the Distractor and Fullwiki setting, respectively. The proposed HGN outperforms both published and unpublished work on every metric by a significant margin. For example, HGN achieves a Joint EM/F1 score of 43.57/71.03 and 35.63/59.86 on the... |
Hierarchical Graph Network for Multi-hop Question Answering | 1911.03631v2 | Table 3: Ablation study on the effectiveness of the hierarchical graph on the dev set in the Distractor setting. RoBERTa-large is used for context encoding. | ['Model', 'Ans F1', 'Sup F1', 'Joint F1'] | [['w/o Graph', '80.58', '85.83', '71.02'], ['PS Graph', '81.68', '88.44', '73.83'], ['PSE Graph', '82.10', '88.40', '74.13'], ['Hier. Graph', '[BOLD] 82.22', '[BOLD] 88.58', '[BOLD] 74.37']] | Table 3 shows the performance of paragraph selection on the dev set of HotpotQA. In DFGN, paragraphs are selected based on a threshold to maintain high recall (98.27%), leading to a low precision (60.28%). Compared to both threshold-based and pure TopN -based paragraph selection, our two-step paragraph selection proces... |
Hierarchical Graph Network for Multi-hop Question Answering | 1911.03631v2 | Table 5: Results with different pre-trained language models on the dev set in the Distractor setting. (†) is unpublished work with results on the test set, using BERT whole word masking (wwm). | ['Model', 'Ans F1', 'Sup F1', 'Joint F1'] | [['DFGN (BERT-base)', '69.38', '82.23', '59.89'], ['EPS (BERT-wwm)†', '79.05', '86.26', '70.48'], ['SAE (RoBERTa)', '80.75', '87.38', '72.75'], ['HGN (BERT-base)', '74.76', '86.61', '66.90'], ['HGN (BERT-wwm)', '80.51', '88.14', '72.77'], ['HGN (RoBERTa)', '[BOLD] 82.22', '[BOLD] 88.58', '[BOLD] 74.37']] | As shown in Table 5, the use of PS Graph improves the joint F1 score over the plain RoBERTa model by 1.59 points. By further adding entity nodes, the Joint F1 increases by 0.18 points. |
Hierarchical Graph Network for Multi-hop Question Answering | 1911.03631v2 | Table 7: Results of HGN for different reasoning types. | ['Question', 'Ans F1', 'Sup F1', 'Joint F1', 'Pct (%)'] | [['comp-yn', '93.45', '94.22', '88.50', '6.19'], ['comp-span', '79.06', '91.72', '74.17', '13.90'], ['bridge', '81.90', '87.60', '73.31', '79.91']] | Results in Table 7 show that our HGN variants outperform DFGN and EPS, indicating that the performance gain comes from a better model design. |
Semantic Neural Machine Translation using AMR | 1902.07282v1 | Table 4: BLEU scores of Dual2seq on the little prince data, when gold or automatic AMRs are available. | ['AMR Anno.', 'BLEU'] | [['Automatic', '16.8'], ['Gold', '[BOLD] *17.5*']] | Table 4 shows the BLEU scores of our Dual2seq model taking gold or automatic AMRs as inputs. [CONTINUE] The improvement from automatic AMR to gold AMR (+0.7 BLEU) is significant, which shows that the translation quality of our model can be further improved with an increase of AMR parsing accuracy. |
Semantic Neural Machine Translation using AMR | 1902.07282v1 | Table 3: Test performance. NC-v11 represents training only with the NC-v11 data, while Full means using the full training data. * represents significant Koehn (2004) result (p<0.01) over Seq2seq. ↓ indicates the lower the better. | ['System', 'NC-v11 BLEU', 'NC-v11 TER↓', 'NC-v11 Meteor', 'Full BLEU', 'Full TER↓', 'Full Meteor'] | [['OpenNMT-tf', '15.1', '0.6902', '0.3040', '24.3', '0.5567', '0.4225'], ['Transformer-tf', '17.1', '0.6647', '0.3578', '25.1', '0.5537', '0.4344'], ['Seq2seq', '16.0', '0.6695', '0.3379', '23.7', '0.5590', '0.4258'], ['Dual2seq-LinAMR', '17.3', '0.6530', '0.3612', '24.0', '0.5643', '0.4246'], ['Duel2seq-SRL', '17.2', ... | Table 3 shows the test BLEU, TER and Meteor scores of all systems trained on the small-scale News Commentary v11 subset or the large-scale full set. Dual2seq is consistently better than the other systems under all three metrics, [CONTINUE] Dual2seq is better than both OpenNMT-tf and Transformer-tf. [CONTINUE] Dual2seq i... |
Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches | 1904.01172v3 | Table 2: Comparison of exact-match accuracy achieved on selected benchmarks by a random or majority-choice baseline, various neural contextual embedding models, and humans. ELMo refers to the highest-performing listed approach using ELMo embeddings. Best system performance on each benchmark in bold. Information extract... | ['[BOLD] Benchmark', '[BOLD] Simple Baseline ', '[BOLD] ELMo', '[BOLD] GPT', '[BOLD] BERT', '[BOLD] MT-DNN', '[BOLD] XLNet', '[BOLD] RoBERTa', '[BOLD] ALBERT', '[BOLD] Human'] | [['[BOLD] CLOTH', '25.0', '70.7', '–', '[BOLD] 86.0', '–', '–', '–', '–', '85.9'], ['[BOLD] Cosmos QA', '–', '–', '54.5', '67.1', '–', '–', '–', '–', '94.0'], ['[BOLD] DREAM', '33.4', '59.5', '55.5', '66.8', '–', '[BOLD] 72.0', '–', '–', '95.5'], ['[BOLD] GLUE', '–', '70.0', '–', '80.5', '87.6', '88.4', '88.5', '[BOLD]... | The most representative models are ELMo, GPT, BERT and its variants, and XLNet. Next, we give a brief overview of these models and summarize their performance on the selected benchmark tasks. Table 2 quantitatively compares the performance of these models on various benchmarks. [CONTINUE] smaller tweaks to various aspe... |
Entity, Relation, and Event Extraction with Contextualized Span Representations | 1909.03546v2 | Table 2: F1 scores on NER. | ['[EMPTY]', 'ACE05', 'SciERC', 'GENIA', 'WLPC'] | [['BERT + LSTM', '85.8', '69.9', '78.4', '[BOLD] 78.9'], ['+RelProp', '85.7', '70.5', '-', '78.7'], ['+CorefProp', '86.3', '[BOLD] 72.0', '78.3', '-'], ['BERT Finetune', '87.3', '70.5', '78.3', '78.5'], ['+RelProp', '86.7', '71.1', '-', '78.8'], ['+CorefProp', '[BOLD] 87.5', '71.1', '[BOLD] 79.5', '-']] | Table 2 shows that Coreference propagation (CorefProp) improves named entity recognition performance across all three domains. The largest gains are on the computer science research abstracts of SciERC, |
Entity, Relation, and Event Extraction with Contextualized Span Representations | 1909.03546v2 | Table 1: DyGIE++ achieves state-of-the-art results. Test set F1 scores of best model, on all tasks and datasets. We define the following notations for events: Trig: Trigger, Arg: argument, ID: Identification, C: Classification. * indicates the use of a 4-model ensemble for trigger detection. See Appendix E for details.... | ['Dataset', 'Task', 'SOTA', 'Ours', 'Δ%'] | [['ACE05', 'Entity', '88.4', '[BOLD] 88.6', '1.7'], ['ACE05', 'Relation', '63.2', '[BOLD] 63.4', '0.5'], ['ACE05-Event*', 'Entity', '87.1', '[BOLD] 90.7', '27.9'], ['ACE05-Event*', 'Trig-ID', '73.9', '[BOLD] 76.5', '9.6'], ['ACE05-Event*', 'Trig-C', '72.0', '[BOLD] 73.6', '5.7'], ['ACE05-Event*', 'Arg-ID', '[BOLD] 57.2... | Table 1 shows test set F1 on the entity, relation and event extraction tasks. Our framework establishes a new state-of-the-art on all three high-level tasks, and on all subtasks except event argument identification. Relative error reductions range from 0.2 - 27.9% over previous state of the art models. |
Entity, Relation, and Event Extraction with Contextualized Span Representations | 1909.03546v2 | Table 3: F1 scores on Relation. | ['[EMPTY]', 'ACE05', 'SciERC', 'WLPC'] | [['BERT + LSTM', '60.6', '40.3', '65.1'], ['+RelProp', '61.9', '41.1', '65.3'], ['+CorefProp', '59.7', '42.6', '-'], ['BERT FineTune', '[BOLD] 62.1', '44.3', '65.4'], ['+RelProp', '62.0', '43.0', '[BOLD] 65.5'], ['+CorefProp', '60.0', '[BOLD] 45.3', '-']] | CorefProp also improves relation extraction on SciERC. [CONTINUE] Relation propagation (RelProp) improves relation extraction performance over pretrained BERT, but does not improve fine-tuned BERT. |
Entity, Relation, and Event Extraction with Contextualized Span Representations | 1909.03546v2 | Table 7: In-domain pre-training: SciBERT vs. BERT | ['[EMPTY]', 'SciERC Entity', 'SciERC Relation', 'GENIA Entity'] | [['Best BERT', '69.8', '41.9', '78.4'], ['Best SciBERT', '[BOLD] 72.0', '[BOLD] 45.3', '[BOLD] 79.5']] | Table 7 compares the results of BERT and SciBERT with the best-performing model configurations. SciBERT significantly boosts performance for scientific datasets including SciERC and GENIA. |
Entity, Relation, and Event Extraction with Contextualized Span Representations | 1909.03546v2 | Table 6: Effect of BERT cross-sentence context. F1 score of relation F1 on ACE05 dev set and entity, arg, trigger extraction F1 on ACE05-E test set, as a function of the BERT context window size. | ['Task', 'Variation', '1', '3'] | [['Relation', 'BERT+LSTM', '59.3', '[BOLD] 60.6'], ['Relation', 'BERT Finetune', '62.0', '[BOLD] 62.1'], ['Entity', 'BERT+LSTM', '90.0', '[BOLD] 90.5'], ['Entity', 'BERT Finetune', '88.8', '[BOLD] 89.7'], ['Trigger', 'BERT+LSTM', '[BOLD] 69.4', '68.9'], ['Trigger', 'BERT Finetune', '68.3', '[BOLD] 69.7'], ['Arg Class',... | Table 6 shows that both variations of our BERT model benefit from wider context windows. Our model achieves the best performance with a 3-sentence window across all relation and event extraction tasks. |
Aligning Vector-spaces with Noisy Supervised Lexicons | 1903.10238v1 | Table 1: Bilingual Experiment P@1. Numbers are based on 10 runs of each method. The En→De, En→Fi and En→Es improvements are significant at p<0.05 according to ANOVA on the different runs. | ['Method', 'En→It best', 'En→It avg', 'En→It iters', 'En→De best', 'En→De avg', 'En→De iters', 'En→Fi best', 'En→Fi avg', 'En→Fi iters', 'En→Es best', 'En→Es avg', 'En→Es iters'] | [['Artetxe et\xa0al., 2018b', '[BOLD] 48.53', '48.13', '573', '48.47', '48.19', '773', '33.50', '32.63', '988', '37.60', '37.33', '808'], ['Noise-aware Alignment', '[BOLD] 48.53', '[BOLD] 48.20', '471', '[BOLD] 49.67', '[BOLD] 48.89', '568', '[BOLD] 33.98', '[BOLD] 33.68', '502', '[BOLD] 38.40', '[BOLD] 37.79', '551']] | In Table 1 we report the best and average precision@1 scores and the average number of iterations among 10 experiments, for different language translations. Our model improves the results in the translation tasks. In most setups our average case is better than the former best case. In addition, the noise-aware model is... |
Predicting Discourse Structure using Distant Supervision from Sentiment | 1910.14176v1 | Table 3: Discourse structure prediction results; tested on RST-DTtest and Instr-DTtest. Subscripts in inter-domain evaluation sub-table indicate the training set. Best performance in the category is bold. Consistently best model for inter-domain discourse structure prediction is underlined | ['Approach', 'RST-DTtest', 'Instr-DTtest'] | [['Right Branching', '54.64', '58.47'], ['Left Branching', '53.73', '48.15'], ['Hier. Right Branch.', '[BOLD] 70.82', '[BOLD] 67.86'], ['Hier. Left Branch.', '70.58', '63.49'], ['[BOLD] Intra-Domain Evaluation', '[BOLD] Intra-Domain Evaluation', '[BOLD] Intra-Domain Evaluation'], ['HILDAHernault et al. ( 2010 )', '83.0... | We further analyze our findings with respect to baselines and existing discourse parsers. The first set of results in Table 3 shows that the hierarchical right/left branching baselines dominate the completely right/left branching ones. However, their performance is still significantly worse than any discourse parser (i... |
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation | 2002.01196v2 | Table 4: Results of Self-Play Evaluation. | ['System', 'TGPC Succ. (%)', 'TGPC #Turns', 'CWC Succ. (%)', 'CWC #Turns'] | [['Retrieval\xa0', '7.16', '4.17', '0', '-'], ['Retrieval-Stgy\xa0', '47.80', '6.7', '44.6', '7.42'], ['PMI\xa0', '35.36', '6.38', '47.4', '5.29'], ['Neural\xa0', '54.76', '4.73', '47.6', '5.16'], ['Kernel\xa0', '62.56', '4.65', '53.2', '4.08'], ['DKRN (ours)', '[BOLD] 89.0', '5.02', '[BOLD] 84.4', '4.20']] | Figure 2: A conversation example between human (H) and agents (A) with the same target and starting utterance in the CWC dataset. Keywords predicted by the agents or mentioned by the human are highlighted in bold. The target achieved at the end of a conversation is underlined. judge whether the current conversation context cont... |
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation | 2002.01196v2 | Table 3: Results of Turn-level Evaluation. | ['Dataset', 'System', 'Keyword Prediction [ITALIC] Rw@1', 'Keyword Prediction [ITALIC] Rw@3', 'Keyword Prediction [ITALIC] Rw@5', 'Keyword Prediction P@1', 'Response Retrieval [ITALIC] R20@1', 'Response Retrieval [ITALIC] R20@3', 'Response Retrieval [ITALIC] R20@5', 'Response Retrieval MRR'] | [['TGPC', 'Retrieval\xa0', '-', '-', '-', '-', '0.5063', '0.7615', '0.8676', '0.6589'], ['TGPC', 'PMI\xa0', '0.0585', '0.1351', '0.1872', '0.0871', '0.5441', '0.7839', '0.8716', '0.6847'], ['TGPC', 'Neural\xa0', '0.0708', '0.1438', '0.1820', '0.1321', '0.5311', '0.7905', '0.8800', '0.6822'], ['TGPC', 'Kernel\xa0', '0.0... | Table 3 shows the turn-level evaluation results. Our DKRN approach outperforms all state-of-the-art methods in terms of all metrics on both datasets across the two tasks. |
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation | 2002.01196v2 | Table 5: Results of the Human Rating on CWC. | ['System', 'Succ. (%)', 'Smoothness'] | [['Retrieval-Stgy\xa0', '54.0', '2.48'], ['PMI\xa0', '46.0', '2.56'], ['Neural\xa0', '36.0', '2.50'], ['Kernel\xa0', '58.0', '2.48'], ['DKRN (ours)', '[BOLD] 88.0', '[BOLD] 3.22']] | Table 5 shows the evaluation results. Our DKRN agent outperforms all other agents by a large margin. |
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation | 2002.01196v2 | Table 6: Results of the Human Rating on CWC. | ['[EMPTY]', 'Ours Better(%)', 'No Prefer(%)', 'Ours Worse(%)'] | [['Retrieval-Stgy\xa0', '[BOLD] 62', '22', '16'], ['PMI\xa0', '[BOLD] 54', '32', '14'], ['Neural\xa0', '[BOLD] 60', '22', '18'], ['Kernel\xa0', '[BOLD] 62', '26', '12']] | Table 6 shows the results of the second study. Our agent outperforms the comparison agents by a large margin. |
Modulated Self-attention Convolutional Network for VQA | 1910.03343v2 | Table 1: Experiments run on a ResNet-34. Numbers following S (stages) and B (blocks) indicate where SA (self-attention) modules are put. Parameter counts concern only SA and are in millions (M). | ['[BOLD] ResNet-34', '[BOLD] Eval set %', '[BOLD] #param'] | [['Baseline (No SA)Anderson et al. ( 2018 )', '55.00', '0M'], ['SA (S: 1,2,3 - B: 1)', '55.11', '} 0.107M'], ['SA (S: 1,2,3 - B: 2)', '55.17', '} 0.107M'], ['[BOLD] SA (S: 1,2,3 - B: 3)', '[BOLD] 55.27', '} 0.107M']] | ...linguistic input, a CNN augmented with self-attention. We show encouraging relative improvements for future research in this direction. [CONTINUE] We empirically found that self-attention was the most efficient in the 3rd stage. [CONTINUE] We notice small improvements relative to the baseline, showing that self-attention alon... |
Modulated Self-attention Convolutional Network for VQA | 1910.03343v2 | Table 1: Experiments run on a ResNet-34. Numbers following S (stages) and B (blocks) indicate where SA (self-attention) modules are put. Parameter counts concern only SA and are in millions (M). | ['[BOLD] ResNet-34', '[BOLD] Eval set %', '[BOLD] #param'] | [['SA (S: 3 - M: 1)', '55.25', '} 0.082M'], ['[BOLD] SA (S: 3 - B: 3)', '[BOLD] 55.42', '} 0.082M'], ['SA (S: 3 - B: 4)', '55.33', '} 0.082M'], ['SA (S: 3 - B: 6)', '55.31', '} 0.082M'], ['SA (S: 3 - B: 1,3,5)', '55.45', '} 0.245M'], ['[BOLD] SA (S: 3 - B: 2,4,6)', '[BOLD] 55.56', '} 0.245M']] | However, we managed to show improvements with the β modulation on a ResNet-152. Though the improvement is slim, it is encouraging to continue researching visual modulation. |
Toward Extractive Summarization of Online Forum Discussions via Hierarchical Attention Networks | 1805.10390v2 | Table 1: Results of thread summarization. ‘HAN’ models are our proposed approaches adapted from the hierarchical attention networks [Yang et al.2016]. The models can be pretrained using unlabeled threads from TripAdvisor (‘T’) and Ubuntuforum (‘U’). r indicates a redundancy removal step is applied. We report the varian... | ['[BOLD] System', '[BOLD] ROUGE-1 [BOLD] R (%)', '[BOLD] ROUGE-1 [BOLD] P (%)', '[BOLD] ROUGE-1 [BOLD] F (%)', '[BOLD] ROUGE-2 [BOLD] R (%)', '[BOLD] ROUGE-2 [BOLD] P (%)', '[BOLD] ROUGE-2 [BOLD] F (%)', '[BOLD] Sentence-Level [BOLD] R (%)', '[BOLD] Sentence-Level [BOLD] P (%)', '[BOLD] Sentence-Level [BOLD] F... | [['[BOLD] ILP', '24.5', '41.1', '29.3±0.5', '7.9', '15.0', '9.9±0.5', '13.6', '22.6', '15.6±0.4'], ['[BOLD] Sum-Basic', '28.4', '44.4', '33.1±0.5', '8.5', '15.6', '10.4±0.4', '14.7', '22.9', '16.7±0.5'], ['[BOLD] KL-Sum', '39.5', '34.6', '35.5±0.5', '13.0', '12.7', '12.3±0.5', '15.2', '21.1', '16.3±0.5'], ['[BOLD] LexR... | The experimental results of all models are shown in Table 1. [CONTINUE] First, HAN models appear to be more appealing than SVM and LogReg because there is less variation in program implementation, hence less effort is required to reproduce the results. HAN models outperform both LogReg and SVM using the current set of ... |
Towards Universal Dialogue State Tracking | 1810.09587v1 | Table 1: Joint goal accuracy on DSTC2 and WOZ 2.0 test set vs. various approaches as reported in the literature. | ['[BOLD] DST Models', '[BOLD] Joint Acc. DSTC2', '[BOLD] Joint Acc. WOZ 2.0'] | [['Delexicalisation-Based (DB) Model Mrkšić et\xa0al. ( 2017 )', '69.1', '70.8'], ['DB Model + Semantic Dictionary Mrkšić et\xa0al. ( 2017 )', '72.9', '83.7'], ['Scalable Multi-domain DST Rastogi et\xa0al. ( 2017 )', '70.3', '-'], ['MemN2N Perez and Liu ( 2017 )', '74.0', '-'], ['PtrNet Xu and Hu ( 2018 )', '72.1', '-'... | The results in Table 1 show the effectiveness of parameter sharing and initialization. StateNet PS outperforms StateNet, and StateNet PSI performs best among all 3 models. [CONTINUE] Besides, StateNet PSI beats all the mod [CONTINUE] Table 1: Joint goal accuracy on DSTC2 and WOZ 2.0 test set vs. various approaches as r... |
Towards Universal Dialogue State Tracking | 1810.09587v1 | Table 2: Joint goal accuracy on DSTC2 and WOZ 2.0 of StateNet_PSI using different pre-trained models based on different single slot. | ['[BOLD] Initialization', '[BOLD] Joint Acc. DSTC2', '[BOLD] Joint Acc. WOZ 2.0'] | [['[ITALIC] food', '[BOLD] 75.5', '[BOLD] 88.9'], ['[ITALIC] pricerange', '73.6', '88.2'], ['[ITALIC] area', '73.5', '87.8']] | Table 2: Joint goal accuracy on DSTC2 and WOZ 2.0 of StateNet PSI using different pre-trained models based on different single slot. [CONTINUE] We also test StateNet PSI with different pre-trained models, as shown in Table 2. The fact that the food initialization has the best performance verifies our selection of the s... |
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task | 1910.03291v1 | Table 5: Textual similarity scores (asymmetric, Multi30k). | ['[EMPTY]', 'EN → DE R@1', 'EN → DE R@5', 'EN → DE R@10', 'DE → EN R@1', 'DE → EN R@5', 'DE → EN R@10'] | [['FME', '51.4', '76.4', '84.5', '46.9', '71.2', '79.1'], ['AME', '[BOLD] 51.7', '[BOLD] 76.7', '[BOLD] 85.1', '[BOLD] 49.1', '[BOLD] 72.6', '[BOLD] 80.5']] | In Table 5, we show the performance on the Multi30k dataset in asymmetric mode. AME outperforms the FME model, confirming the importance of word embedding adaptation. |
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task | 1910.03291v1 | Table 1: Image-caption ranking results for English (Multi30k) | ['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment'] | [['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Parallel\xa0gella:17', '31.7', '62.4', '74.1', '3', '24.7', '53.9', '65.7', '5', '-'], ['UVS\xa0kiros:15', '23.0', '50.7', '62.9', '5', '16.8', '42.0', '56.5', '8', '-'], ['EmbeddingNet\xa0wang:18... | we show the results for English and German captions. For English captions, we see 21.28% improvement on average compared to Kiros et al. (2014). There is a 1.8% boost on average compared to Mono due to more training data and a multilingual text encoder. AME performs better than the FME model on both symmetric and asymmetric ... |
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task | 1910.03291v1 | Table 2: Image-caption ranking results for German (Multi30k) | ['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment'] | [['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Parallel\xa0gella:17', '28.2', '57.7', '71.3', '4', '20.9', '46.9', '59.3', '6', '-'], ['Mono', '34.2', '67.5', '79.6', '3', '26.5', '54.7', '66.2', '4', '-'], ['FME', '36.8', '69.4', '80.8', '2',... | For German descriptions, the results are 11.05% better on average compared to Gella et al. (2017) in symmetric mode. AME also achieves competitive or better results than the FME model for German descriptions. |
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task | 1910.03291v1 | Table 3: Image-caption ranking results for English (MS-COCO) | ['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment'] | [['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['UVS\xa0kiros:15', '43.4', '75.7', '85.8', '2', '31.0', '66.7', '79.9', '3', '-'], ['EmbeddingNet\xa0wang:18', '50.4', '79.3', '89.4', '-', '39.8', '75.3', '86.6', '-', '-'], ['sm-LSTM\xa0huang:17'... | We achieve 10.42% improvement on average compared to Kiros et al. (2014) in the symmetric manner. We show that adapting the word embedding for the task at hand boosts the general performance, since the AME model significantly outperforms the FME model in both languages. |
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task | 1910.03291v1 | Table 4: Image-caption ranking results for Japanese (MS-COCO) | ['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment'] | [['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Mono', '42.7', '77.7', '88.5', '2', '33.1', '69.8', '84.3', '3', '-'], ['FME', '40.7', '77.7', '88.3', '2', '30.0', '68.9', '83.1', '3', '92.70%'], ['AME', '[BOLD] 50.2', '[BOLD] 85.6', '[BOLD] 93... | For the Japanese captions, AME reaches 6.25% and 3.66% better results on average compared to monolingual model in symmetric and asymmetric modes, respectively. |
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages? | 1910.14161v1 | Table 4: Averages of similarities of pairs with same vs. different gender in Italian and German compared to English. The last row is the difference between the averages of the two sets. “Reduction” stands for gap reduction when removing gender signals from the context. | ['[EMPTY]', 'Italian Original', 'Italian Debiased', 'Italian English', 'Italian Reduction', 'German Original', 'German Debiased', 'German English', 'German Reduction'] | [['Same Gender', '0.442', '0.434', '0.424', '–', '0.491', '0.478', '0.446', '–'], ['Different Gender', '0.385', '0.421', '0.415', '–', '0.415', '0.435', '0.403', '–'], ['difference', '0.057', '0.013', '0.009', '[BOLD] 91.67%', '0.076', '0.043', '0.043', '[BOLD] 100%']] | Table 4 shows the results for Italian and German, compared to English, both for the original and the debiased embeddings (for each language we show the results of the best performing debiased embeddings). [CONTINUE] As expected, in both languages, the difference between the average of the two sets with the debiased emb... |
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages? | 1910.14161v1 | Table 2: Averages of rankings of the words in same-gender pairs vs. different-gender pairs for Italian and German, along with their differences. Og stands for the original embeddings, Db for the debiased embeddings, and En for English. Each row presents the averages of pairs with the respective scores in SimLex-999 (0–... | ['[EMPTY]', 'Italian Same-gender', 'Italian Diff-Gender', 'Italian difference', 'German Same-gender', 'German Diff-Gender', 'German difference'] | [['7–10', 'Og: 4884', 'Og: 12947', 'Og: 8063', 'Og: 5925', 'Og: 33604', 'Og: 27679'], ['7–10', 'Db: 5523', 'Db: 7312', 'Db: 1789', 'Db: 7653', 'Db: 26071', 'Db: 18418'], ['7–10', 'En: 6978', 'En: 2467', 'En: -4511', 'En: 4517', 'En: 8666', 'En: 4149'], ['4–7', 'Og: 10954', 'Og: 15838', 'Og: 4884', 'Og: 19271', 'Og: 272... | Table 2 shows the results for Italian and German, compared to English. As expected, the average ranking of same-gender pairs is significantly lower than that of different-gender pairs, both for German and Italian, while the difference between the sets in English is much smaller. [CONTINUE] Table 2 shows the results for ... |
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages? | 1910.14161v1 | Table 6: Results on SimLex-999 and WordSim-353, in Italian and German, before and after debiasing. | ['[EMPTY]', 'Italian Orig', 'Italian Debias', 'German Orig', 'German Debias'] | [['SimLex', '0.280', '[BOLD] 0.288', '0.343', '[BOLD] 0.356'], ['WordSim', '0.548', '[BOLD] 0.577', '0.547', '[BOLD] 0.553']] | Table 6 shows the results for Italian and German for both datasets, compared to the original embeddings. In both cases, the new embeddings perform better than the original ones. |
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages? | 1910.14161v1 | Table 7: Cross-lingual embedding alignment in Italian and in German, before and after debiasing. | ['[EMPTY]', 'Italian → En', 'Italian En →', 'German → En', 'German En →'] | [['Orig', '58.73', '59.68', '47.58', '50.48'], ['Debias', '[BOLD] 60.03', '[BOLD] 60.96', '[BOLD] 47.89', '[BOLD] 51.76']] | The results reported in Table 7 show that precision on BDI indeed increases as a result of the reduced effect of grammatical gender on the embeddings for German and Italian, i.e. that the embeddings spaces can be aligned better with the debiased embeddings. |
Towards Quantifying the Distance between Opinions | 2001.09879v1 | Table 6: Performance comparison of the distance measures on all 18 datasets. The semantic distance in opinion distance (OD) measure is computed via cosine distance over either Word2vec (OD-w2v with semantic distance threshold 0.6) or Doc2vec (OD-d2v with distance threshold 0.3) embeddings. Sil. refers to Silhouette Coe... | ['Topic Name', 'Size', 'TF-IDF ARI', 'WMD ARI', 'Sent2vec ARI', 'Doc2vec ARI', 'BERT ARI', '[ITALIC] OD-w2v ARI', '[ITALIC] OD-d2v ARI', 'TF-IDF [ITALIC] Sil.', 'WMD [ITALIC] Sil.', 'Sent2vec [ITALIC] Sil.', 'Doc2vec [ITALIC] Sil.', 'BERT [ITALIC] Sil.', '[ITALIC] OD-w2v [ITALIC] Sil.', '[ITALIC] OD-d2v [ITALIC]... | [['Affirmative Action', '81', '-0.07', '-0.02', '0.03', '-0.01', '-0.02', '[BOLD] 0.14', '[ITALIC] 0.02', '0.01', '0.01', '-0.01', '-0.02', '-0.04', '[BOLD] 0.06', '[ITALIC] 0.01'], ['Atheism', '116', '[BOLD] 0.19', '0.07', '0.00', '0.03', '-0.01', '0.11', '[ITALIC] 0.16', '0.02', '0.01', '0.02', '0.01', '0.01', '[ITAL... | The semantic threshold for OD-d2v is set at 0.3, while for OD-w2v it is set at 0.6. [CONTINUE] We evaluate our distance measures in the unsupervised setting, specifically evaluating the clustering quality using the Adjusted Rand Index (ARI) and Silhouette coefficient. We benchmark against the following baselines: WMD (whi... |
Towards Quantifying the Distance between Opinions | 2001.09879v1 | Table 3: ARI and Silhouette coefficient scores. | ['Methods', 'Seanad Abolition ARI', 'Seanad Abolition [ITALIC] Sil', 'Video Games ARI', 'Video Games [ITALIC] Sil', 'Pornography ARI', 'Pornography [ITALIC] Sil'] | [['TF-IDF', '0.23', '0.02', '-0.01', '0.01', '-0.02', '0.01'], ['WMD', '0.09', '0.01', '0.01', '0.01', '-0.02', '0.01'], ['Sent2vec', '-0.01', '-0.01', '0.11', '0.06', '0.01', '0.02'], ['Doc2vec', '-0.01', '-0.03', '-0.01', '0.01', '0.02', '-0.01'], ['BERT', '0.03', '-0.04', '0.08', '0.05', '-0.01', '0.03'], ['OD-parse... | among opinions: We see that OD significantly outperforms the baseline methods and the OD-parse variant [CONTINUE] OD achieves high ARI and Sil scores, [CONTINUE] From the above table, we observe that the text-similarity based baselines, such as TF-IDF, WMD and Doc2vec, achieve ARI and Silhouette coefficient scores of ... |
Towards Quantifying the Distance between Opinions | 2001.09879v1 | Table 4: The quality of opinion distance when leveraged as a feature for multi-class classification. Each entry in + X feature should be treated independently. The second best result is italicized and underlined. | ['Baselines', 'Seanad Abolition', 'Video Games', 'Pornography'] | [['Unigrams', '0.54', '0.66', '0.63'], ['Bigrams', '0.54', '0.64', '0.56'], ['LSA', '0.68', '0.57', '0.57'], ['Sentiment', '0.35', '0.60', '0.69'], ['Bigrams', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['+ Sentiment', '0.43', '0.58', '0.66'], ['TF-IDF', '0.50', '0.65', '0.57'], ['WMD', '0.40', '0.73', '0.57'], ['Sent2vec', '0.... | For completeness, here we also compare against unigram or n-gram based classifiers [CONTINUE] The classification performance of the baselines is reported in Table 4. [CONTINUE] SVM with only OD features outperforms many baselines: We see that on "Video Games" and "Pornography" datasets, the classification performance b... |
Towards Quantifying the Distance between Opinions | 2001.09879v1 | Table 5: We compare the quality of variants of Opinion Distance measures on opinion clustering task with ARI. | ['[EMPTY]', 'Difference Function', 'Seanad Abolition', 'Video Games', 'Pornography'] | [['OD-parse', 'Absolute', '0.01', '-0.01', '0.07'], ['OD-parse', 'JS div.', '0.01', '-0.01', '-0.01'], ['OD-parse', 'EMD', '0.07', '0.01', '-0.01'], ['OD', 'Absolute', '[BOLD] 0.54', '[BOLD] 0.56', '[BOLD] 0.41'], ['OD', 'JS div.', '0.07', '-0.01', '-0.02'], ['OD', 'EMD', '0.26', '-0.01', '0.01'], ['OD (no polarity shi... | The results of different variants are shown in Table 5. [CONTINUE] OD significantly outperforms OD-parse: We observe that compared to OD-parse, OD is much more accurate. On the three datasets, OD achieves an average weighted F1 score of 0.54, 0.56 and 0.41 respectively compared to the scores of 0.01, -0.01 and ... |
Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM | 1704.07221v1 | Table 3: Results on the development and testing sets. Accuracy and F1 scores: macro-averaged and per class (S: supporting, D: denying, Q: querying, C: commenting). | ['[EMPTY]', '[BOLD] Accuracy', '[BOLD] Macro F', '[BOLD] S', '[BOLD] D', '[BOLD] Q', '[BOLD] C'] | [['Development', '0.782', '0.561', '0.621', '0.000', '0.762', '0.860'], ['Testing', '[BOLD] 0.784', '0.434', '0.403', '0.000', '0.462', '0.873']] | The performance of our model on the testing and development set is shown in Table 3. [CONTINUE] The difference in accuracy between the testing and development sets is minimal; however, we see a significant difference in the Macro-F score due to the different class balance in these sets. [CONTINUE] The branch-LSTM model predicts commen... |
Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM | 1704.07221v1 | Table 5: Confusion matrix for testing set predictions | ['[BOLD] LabelPrediction', '[BOLD] C', '[BOLD] D', '[BOLD] Q', '[BOLD] S'] | [['[BOLD] Commenting', '760', '0', '12', '6'], ['[BOLD] Denying', '68', '0', '1', '2'], ['[BOLD] Querying', '69', '0', '36', '1'], ['[BOLD] Supporting', '67', '0', '1', '26']] | Most denying instances get misclassified as commenting (see Table 5), |
Language Independent Sequence Labelling for Opinion Target Extraction | 1901.09755v1 | Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets. | ['Language', 'ABSA', 'No. of Tokens and Opinion Targets Train', 'No. of Tokens and Opinion Targets Train', 'No. of Tokens and Opinion Targets Train', 'No. of Tokens and Opinion Targets Test', 'No. of Tokens and Opinion Targets Test', 'No. of Tokens and Opinion Targets Test'] | [['[EMPTY]', '[EMPTY]', 'Token', 'B-target', 'I-target', 'Token', 'B-target', 'I-target'], ['en', '2014', '47028', '3687', '1457', '12606', '1134', '524'], ['en', '2015', '18488', '1199', '538', '10412', '542', '264'], ['en', '2016', '28900', '1743', '797', '9952', '612', '274'], ['es', '2016', '35847', '1858', '742', ... | Table 1 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set. |
Language Independent Sequence Labelling for Opinion Target Extraction | 1901.09755v1 | Table 3: ABSA SemEval 2014-2016 English results. BY: Brown Yelp 1000 classes; CYF100-CYR200: Clark Yelp Food 100 classes and Clark Yelp Reviews 200 classes; W2VW400: Word2vec Wikipedia 400 classes; ALL: BY+CYF100-CYR200+W2VW400. | ['Features', '2014 P', '2014 R', '2014 F1', '2015 P', '2015 R', '2015 F1', '2016 P', '2016 R', '2016 F1'] | [['Local (L)', '81.84', '74.69', '78.10', '[BOLD] 76.82', '54.43', '63.71', '74.41', '61.76', '67.50'], ['L + BY', '77.84', '84.57', '81.07', '71.73', '63.65', '67.45', '[BOLD] 74.49', '71.08', '72.74'], ['L + CYF100-CYR200', '[BOLD] 82.91', '84.30', '83.60', '73.25', '61.62', '66.93', '74.12', '72.06', '73.07'], ['L +... | Table 3 provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL) chosen via 5-fold CV on the training data. Moreover, we also [CONTINUE] show the results of the best models using only one type of clustering feature, namely, the best Brown, Clark and Word2ve... |
Language Independent Sequence Labelling for Opinion Target Extraction | 1901.09755v1 | Table 6: ABSA SemEval 2016: Comparison of multilingual results in terms of F1 scores. | ['Language', 'System', 'F1'] | [['es', 'GTI', '68.51'], ['es', 'L + [BOLD] CW600 + W2VW300', '[BOLD] 69.92'], ['es', 'Baseline', '51.91'], ['fr', 'IIT-T', '66.67'], ['fr', 'L + [BOLD] CW100', '[BOLD] 69.50'], ['fr', 'Baseline', '45.45'], ['nl', 'IIT-T', '56.99'], ['nl', 'L + [BOLD] W2VW400', '[BOLD] 66.39'], ['nl', 'Baseline', '50.64'], ['ru', 'D... | Table 6 shows that our system outperforms the best previous approaches across the five languages. |
Language Independent Sequence Labelling for Opinion Target Extraction | 1901.09755v1 | Table 7: False Positives and Negatives for every ABSA 2014-2016 setting. | ['Error type', '2014 en', '2015 en', '2016 en', '2016 es', '2016 fr', '2016 nl', '2016 ru', '2016 tr'] | [['FP', '[BOLD] 230', '151', '[BOLD] 189', '165', '194', '117', '[BOLD] 390', '62'], ['FN', '143', '[BOLD] 169', '163', '[BOLD] 248', '[BOLD] 202', '[BOLD] 132', '312', '[BOLD] 65']] | most errors in our system are caused by false negatives (FN), as can be seen in Table 7. |
Evaluation of Greek Word Embeddings | 1904.04032v3 | Table 8: Summary for 3CosMul and top-1 nearest vectors. | ['Category Semantic', 'Category no oov words', 'gr_def 60.60%', 'gr_neg10 62.50%', 'cc.el.300 [BOLD] 70.90%', 'wiki.el 37.50%', 'gr_cbow_def 29.80%', 'gr_d300_nosub 62.50%', 'gr_w2v_sg_n5 54.60%'] | [['[EMPTY]', 'with oov words', '54.90%', '57.00%', '[BOLD] 65.50%', '35.00%', '27.10%', '56.60%', '49.50%'], ['Syntactic', 'no oov words', '67.90%', '62.90%', '[BOLD] 69.60%', '50.70%', '63.80%', '56.50%', '55.40%'], ['[EMPTY]', 'with oov words', '[BOLD] 55.70%', '51.30%', '50.20%', '33.40%', '52.30%', '46.40%', '45.50... | We noticed that the sub-category in which most models had the worst performance was the currency country category, [CONTINUE] Sub-categories such as adjectives antonyms and performer action had the highest percentage of out-of-vocabulary terms, so we observed lower performance in these categories for all models. |
Evaluation of Greek Word Embeddings | 1904.04032v3 | Table 1: The Greek word analogy test set. | ['Relation', '#pairs', '#tuples'] | [['Semantic: (13650 tuples)', 'Semantic: (13650 tuples)', 'Semantic: (13650 tuples)'], ['common_capital_country', '42', '1722'], ['all_capital_country', '78', '6006'], ['eu_city_country', '50', '2366'], ['city_in_region', '40', '1536'], ['currency_country', '24', '552'], ['man_woman_family', '18', '306'], ['profession_... | Our Greek analogy test set contains 39,174 questions divided into semantic and syntactic analogy questions. [CONTINUE] Semantic questions are divided into 9 categories and include 13,650 questions in total. [CONTINUE] Syntactic questions are divided into 15 categories, which are mostly language specific. They include 2... |
Evaluation of Greek Word Embeddings | 1904.04032v3 | Table 3: Summary for 3CosAdd and top-1 nearest vectors. | ['Category Semantic', 'Category no oov words', 'gr_def 58.42%', 'gr_neg10 59.33%', 'cc.el.300 [BOLD] 68.80%', 'wiki.el 27.20%', 'gr_cbow_def 31.76%', 'gr_d300_nosub 60.79%', 'gr_w2v_sg_n5 52.70%'] | [['[EMPTY]', 'with oov words', '52.97%', '55.33%', '[BOLD] 64.34%', '25.73%', '28.80%', '55.11%', '47.82%'], ['Syntactic', 'no oov words', '65.73%', '61.02%', '[BOLD] 69.35%', '40.90%', '64.02%', '53.69%', '52.60%'], ['[EMPTY]', 'with oov words', '[BOLD] 53.95%', '48.69%', '49.43%', '28.42%', '52.54%', '44.06%', '43.13... | We compared these models in word analogy task. Due to space limitations, we show summarized results only for 3CosAdd in Table 3 and move the rest in supplementary material. Considering the two aggregated categories of syntactic and semantic word analogies respectively and both 3CosAdd and 3CosMul metrics, model cc.el.3... |
Evaluation of Greek Word Embeddings | 1904.04032v3 | Table 7: Summary for 3CosMul and top-5 nearest vectors. | ['Category Semantic', 'Category no oov words', 'gr_def 83.72%', 'gr_neg10 84.38%', 'cc.el.300 [BOLD] 88.50%', 'wiki.el 65.85%', 'gr_cbow_def 52.05%', 'gr_d300_nosub 83.26%', 'gr_w2v_sg_n5 80.00%'] | [['[EMPTY]', 'with oov words', '75.90%', '76.50%', '[BOLD] 81.70%', '61.40%', '47.20%', '75.50%', '72.50%'], ['Syntactic', 'no oov words', '83.86%', '80.42%', '[BOLD] 85.07%', '72.56%', '76.22%', '75.97%', '74.55%'], ['[EMPTY]', 'with oov words', '[BOLD] 68.80%', '66.00%', '61.40%', '47.80%', '62.60%', '62.30%', '61.20... | We noticed that the sub-category in which most models had the worst performance was the currency country category, [CONTINUE] Sub-categories such as adjectives antonyms and performer action had the highest percentage of out-of-vocabulary terms, so we observed lower performance in these categories for all models. |
Evaluation of Greek Word Embeddings | 1904.04032v3 | Table 4: Word similarity. | ['Model', 'Pearson', 'p-value', 'Pairs (unknown)'] | [['gr_def', '[BOLD] 0.6042', '3.1E-35', '2.3%'], ['gr_neg10', '0.5973', '2.9E-34', '2.3%'], ['cc.el.300', '0.5311', '1.7E-25', '4.9%'], ['wiki.el', '0.5812', '2.2E-31', '4.5%'], ['gr_cbow_def', '0.5232', '2.7E-25', '2.3%'], ['gr_d300_nosub', '0.5889', '3.8E-33', '2.3%'], ['gr_w2v_sg_n5', '0.5879', '4.4E-33', '2.3%']] | According to Pearson correlation, the gr_def model had the highest correlation with human ratings of similarity. |
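The word-similarity evaluation behind Table 4 correlates model cosine similarities with human ratings and tracks the share of benchmark pairs containing an unknown word (the "Pairs (unknown)" column). A hedged sketch follows; the random vectors and made-up pairs stand in for real embeddings and a real benchmark.

```python
import numpy as np
from scipy.stats import pearsonr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_similarity(emb, pairs):
    """Correlate model cosine similarities with human ratings.

    Pairs with an out-of-vocabulary word are skipped and counted,
    mirroring the 'Pairs (unknown)' column of the table above.
    """
    model_scores, human_scores, unknown = [], [], 0
    for w1, w2, rating in pairs:
        if w1 not in emb or w2 not in emb:
            unknown += 1
            continue
        model_scores.append(cosine(emb[w1], emb[w2]))
        human_scores.append(rating)
    r, p = pearsonr(model_scores, human_scores)
    return r, p, unknown / len(pairs)

# Illustrative data only; "tiger" is deliberately out of vocabulary.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["car", "auto", "cat", "piano"]}
pairs = [("car", "auto", 9.0), ("car", "cat", 2.5),
         ("cat", "piano", 0.5), ("car", "tiger", 3.0)]
r, p, oov = evaluate_similarity(emb, pairs)
print(f"Pearson r = {r:.3f} (p = {p:.1e}), unknown pairs = {oov:.1%}")
```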
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers | 1708.00160v2 | Table 7: In-domain and out-of-domain evaluations for the pt and wb genres of the CoNLL test set. The highest scores are boldfaced. | ['[EMPTY]', '[EMPTY]', 'in-domain CoNLL', 'in-domain LEA', 'out-of-domain CoNLL', 'out-of-domain LEA'] | [['[EMPTY]', '[EMPTY]', 'pt (Bible)', 'pt (Bible)', 'pt (Bible)', 'pt (Bible)'], ['deep-coref', 'ranking', '75.61', '71.00', '66.06', '57.58'], ['deep-coref', '+EPM', '76.08', '71.13', '<bold>68.14</bold>', '<bold>60.74</bold>'], ['e2e-coref', 'single', '77.80', '73.73', '65.22', '58.26'], ['e2e-coref', 'ensemble', '<b... | "+EPM" generalizes best, and in out-of-domain evaluations, it considerably outperforms the ensemble model of e2e-coref. |
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers | 1708.00160v2 | Table 5: Impact of different EPM feature groups on the CoNLL development set. | ['[EMPTY]', 'MUC', '<italic>B</italic>3', 'CEAF<italic>e</italic>', 'CoNLL', 'LEA'] | [['+EPM', '74.92', '65.03', '60.88', '66.95', '61.34'], ['-pairwise', '74.37', '64.55', '60.46', '66.46', '60.71'], ['-type', '74.71', '64.87', '61.00', '66.86', '61.07'], ['-dep', '74.57', '64.79', '60.65', '66.67', '61.01'], ['-NER', '74.61', '65.05', '60.93', '66.86', '61.27'], ['-POS', '74.74', '65.04', '60.88', '6... | The POS and named entity tags have the least and the pairwise features have the most significant effect. [CONTINUE] The results of "-pairwise" compared to "+pairwise" show that pairwise feature-values have a significant impact, but only when they are considered in combination with other EPM feature-values. |
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers | 1708.00160v2 | Table 6: Out-of-domain evaluation on the WikiCoref dataset. The highest F1 scores are boldfaced. | ['[EMPTY]', '[EMPTY]', 'MUC R', 'MUC P', 'MUC F1', '<italic>B</italic>3 R', '<italic>B</italic>3 P', '<italic>B</italic>3 F1', 'CEAF<italic>e</italic> R', 'CEAF<italic>e</italic> P', 'CEAF<italic>e</italic> F1', 'CoNLL', 'LEA R', 'LEA P', 'LEA F1'] | [['deep-coref', 'ranking', '57.72', '69.57', '63.10', '41.42', '58.30', '48.43', '42.20', '53.50', '47.18', '52.90', '37.57', '54.27', '44.40'], ['deep-coref', 'reinforce', '62.12', '58.98', '60.51', '46.98', '45.79', '46.38', '44.28', '46.35', '45.29', '50.73', '42.28', '41.70', '41.98'], ['deep-coref', 'top-pairs', '... | Incorporating EPM feature-values improves the performance by about three points. [CONTINUE] It achieves on-par performance with that of "G&L". |
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers | 1708.00160v2 | Table 1: Impact of linguistic features on deep-coref models on the CoNLL development set. | ['[EMPTY]', 'MUC', '<italic>B</italic>3', 'CEAF<italic>e</italic>', 'CoNLL', 'LEA'] | [['ranking', '74.31', '64.23', '59.73', '66.09', '60.47'], ['+linguistic', '74.35', '63.96', '60.19', '66.17', '60.20'], ['top-pairs', '73.95', '63.98', '59.52', '65.82', '60.07'], ['+linguistic', '74.32', '64.45', '60.19', '66.32', '60.62']] | We observe that incorporating all the linguistic features bridges the gap between the performance of "top-pairs" and "ranking". [CONTINUE] However, it does not improve significantly over "ranking". |
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers | 1708.00160v2 | Table 2: Out-of-domain evaluation of deep-coref models on the WikiCoref dataset. | ['[EMPTY]', 'MUC', '<italic>B</italic>3', 'CEAF<italic>e</italic>', 'CoNLL', 'LEA'] | [['ranking', '63.10', '48.43', '47.18', '52.90', '44.40'], ['top-pairs', '63.09', '48.42', '46.05', '52.52', '44.21'], ['+linguistic', '63.99', '49.63', '46.60', '53.40', '45.66']] | We observe that the impact on generalization is also not notable, i.e. the CoNLL score improves only by 0.5pp over "ranking". |
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers | 1708.00160v2 | Table 4: Comparisons on the CoNLL test set. The F1 gains that are statistically significant: (1) “+EPM” compared to “top-pairs”, “ranking” and “JIM”, (2) “+EPM” compared to “reinforce” based on MUC, B3 and LEA, (3) “single” compared to “+EPM” based on MUC and B3, and (4) “ensemble” compared to other systems. Significan... | ['[EMPTY]', '[EMPTY]', 'MUC R', 'MUC P', 'MUC F1', '<italic>B</italic>3 R', '<italic>B</italic>3 P', '<italic>B</italic>3 F1', 'CEAF<italic>e</italic> R', 'CEAF<italic>e</italic> P', 'CEAF<italic>e</italic> F1', 'CoNLL', 'LEA R', 'LEA P', 'LEA F1'] | [['deep-coref', 'ranking', '70.43', '79.57', '74.72', '58.08', '69.26', '63.18', '54.43', '64.17', '58.90', '65.60', '54.55', '65.68', '59.60'], ['deep-coref', 'reinforce', '69.84', '79.79', '74.48', '57.41', '70.96', '63.47', '55.63', '63.83', '59.45', '65.80', '53.78', '67.23', '59.76'], ['deep-coref', 'top-pairs', '... | The performance of the "+EPM" model compared to recent state-of-the-art coreference models on the CoNLL test set is presented in Table 4. [CONTINUE] EPM feature-values result in significantly better performance than those of JIM while the number of EPM feature-values is considerably less than JIM. |
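Across these coreference tables, the reported CoNLL score is simply the unweighted average of the MUC, B3 and CEAFe F1 scores. A quick sanity check against the "ranking" row of Table 4 above (P/R values taken from that row):

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def conll_score(muc_f1, b3_f1, ceafe_f1):
    """The CoNLL score is the unweighted mean of the three metric F1s."""
    return (muc_f1 + b3_f1 + ceafe_f1) / 3

# Recompute MUC F1 from the ranking row's MUC P/R in Table 4:
print(round(f1(79.57, 70.43), 2))                   # 74.72, matching the table
print(round(conll_score(74.72, 63.18, 58.90), 2))   # 65.60, matching the table
```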
From Text to Lexicon: Bridging the Gap between Word Embeddings and Lexical Resources | 7 | Table 4: Lexicon member coverage (%) | ['target', 'VN', 'WN-V', 'WN-N'] | [['type', '81', '66', '47'], ['x+POS', '54', '39', '43'], ['lemma', '88', '76', '53'], ['x+POS', '79', '63', '50'], ['shared', '54', '39', '41']] | We first analyze the coverage of the VSMs in question with respect to the lexica at hand, see Table 4. For brevity we only report coverage on w2 contexts [CONTINUE] lemmatization allows more targets to exceed the SGNS frequency threshold, which results in consistently better coverage. POS-disambiguation, in turn, fragm... |
From Text to Lexicon: Bridging the Gap between Word Embeddings and Lexical Resources | 7 | Table 1: Benchmark performance, Spearman’s ρ. SGNS results with * taken from [morphfit]. Best results per column (benchmark) annotated for our setup only. | ['Context: w2', 'Context: w2 SimLex', 'Context: w2 SimLex', 'Context: w2 SimLex', 'Context: w2 SimLex', 'Context: w2 SimVerb'] | [['target', 'N', 'V', 'A', 'all', 'V'], ['type', '.334', '<bold>.336</bold>', '<bold>.518</bold>', '.348', '.307'], ['x + POS', '.342', '.323', '.513', '.350', '.279'], ['lemma', '<bold>.362</bold>', '.333', '.497', '<bold>.351</bold>', '.400'], ['x + POS', '.354', '<bold>.336</bold>', '.504', '.345', '<bold>.406</bold... | Table 1 summarizes the performance of the VSMs in question on similarity benchmarks. [CONTINUE] Lemmatized targets generally perform better, with the boost being more pronounced on SimVerb. [CONTINUE] Adding POS information benefits the SimVerb and SimLex verb performance, [CONTINUE] the type.POS targets show a conside... |
From Text to Lexicon: Bridging the Gap between Word Embeddings and Lexical Resources | 7 | Table 5: WCS performance, shared vocabulary, k=1. Best results across VSMs in bold. | ['[EMPTY]', 'WN-N P', 'WN-N R', 'WN-N F', 'WN-V P', 'WN-V R', 'WN-V F', 'VN P', 'VN R', 'VN F'] | [['Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2'], ['type', '.700', '.654', '.676', '.535', '.474', '.503', '.327', '.309', '.318'], ['x+POS', '.699', '.651', '.674', '.544', '.472', '.505', '.339', '.312', '.325'], ['... | Table 5 provides exact scores for reference. [CONTINUE] Note that the shared vocabulary setup puts the type and type.POS VSMs at advantage since it eliminates the effect of low coverage. Still, lemma-based targets significantly (p ≤ .005) outperform type-based targets in terms of F-measure in all cases. For window-bas... |
A bag-of-concepts model improves relation extraction in a narrow knowledge domain with limited data | 1904.10743v1 | Table 1: Performance of supervised learning models with different features. | ['Feature', 'LR P', 'LR R', 'LR F1', 'SVM P', 'SVM R', 'SVM F1', 'ANN P', 'ANN R', 'ANN F1'] | [['+BoW', '0.93', '0.91', '0.92', '0.94', '0.92', '0.93', '0.91', '0.91', '0.91'], ['+BoC (Wiki-PubMed-PMC)', '0.94', '0.92', '[BOLD] 0.93', '0.94', '0.92', '[BOLD] 0.93', '0.91', '0.91', '[BOLD] 0.91'], ['+BoC (GloVe)', '0.93', '0.92', '0.92', '0.94', '0.92', '0.93', '0.91', '0.91', '0.91'], ['+ASM', '0.90', '0.85', '... | Word embeddings derived from Wiki-PubMed-PMC outperform GloVe-based embeddings (Table 1). The models using BoC outperform models using BoW as well as ASM features. [CONTINUE] Wikipedia-PubMed-PMC embeddings (Moen and Ananiadou, 2013) outperform GloVe (Mikolov et al., 2013a) in the extraction of most relation types (Ta... |
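One common way to build the bag-of-concepts (BoC) features compared against BoW in Table 1 is to cluster pre-trained word vectors and represent each text as a histogram over cluster ids. The sketch below uses KMeans and random vectors purely for illustration; the paper's exact BoC construction may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_concepts(emb, n_concepts=3, seed=0):
    """Cluster word vectors; each cluster id acts as a 'concept'."""
    words = list(emb)
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=seed)
    labels = km.fit_predict(np.stack([emb[w] for w in words]))
    return dict(zip(words, labels)), n_concepts

def boc_vector(tokens, word2concept, n_concepts):
    """Histogram of concept counts; out-of-vocabulary tokens are ignored."""
    v = np.zeros(n_concepts)
    for t in tokens:
        if t in word2concept:
            v[word2concept[t]] += 1
    return v

# Hypothetical clinical vocabulary with random stand-in vectors.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=20) for w in ["chemo", "radiation", "relapse", "scan", "mri"]}
w2c, k = build_concepts(emb)
print(boc_vector("patient had chemo after mri scan".split(), w2c, k))
```

Collapsing related words into shared concepts is what makes this representation attractive with limited training data: unseen-but-similar words still hit a known concept bucket.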
A bag-of-concepts model improves relation extraction in a narrow knowledge domain with limited data | 1904.10743v1 | Table 2: F1 score results per relation type of the best performing models. | ['Relation type', 'Count', 'Intra-sentential co-occ. [ITALIC] ρ=0', 'Intra-sentential co-occ. [ITALIC] ρ=5', 'Intra-sentential co-occ. [ITALIC] ρ=10', 'BoC(Wiki-PubMed-PMC) LR', 'BoC(Wiki-PubMed-PMC) SVM', 'BoC(Wiki-PubMed-PMC) ANN'] | [['TherapyTiming(TP,TD)', '428', '[BOLD] 0.84', '0.59', '0.47', '0.78', '0.81', '0.78'], ['NextReview(Followup,TP)', '164', '[BOLD] 0.90', '0.83', '0.63', '0.86', '0.88', '0.84'], ['Toxicity(TP,CF/TR)', '163', '[BOLD] 0.91', '0.77', '0.55', '0.85', '0.86', '0.86'], ['TestTiming(TN,TD/TP)', '184', '0.90', '0.81', '0.42'... | The intra-sentential co-occurrence baseline outperforms other approaches which allow boundary expansion. [CONTINUE] As the results of applying the co-occurrence baseline (ρ = 0) show (Table 2), the semantic relations in this data are strongly concentrated within a sentence boundary, especially for the relation of RecurLink... |
Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model | 1906.01749v3 | Table 3: Comparison of our Multi-News dataset to other MDS datasets as well as an SDS dataset used as training data for MDS (CNNDM). Training, validation and testing size splits (article(s) to summary) are provided when applicable. Statistics for multi-document inputs are calculated on the concatenation of all input so... | ['[BOLD] Dataset', '[BOLD] # pairs', '[BOLD] # words (doc)', '[BOLD] # sents (docs)', '[BOLD] # words (summary)', '[BOLD] # sents (summary)', '[BOLD] vocab size'] | [['Multi-News', '44,972/5,622/5,622', '2,103.49', '82.73', '263.66', '9.97', '666,515'], ['DUC03+04', '320', '4,636.24', '173.15', '109.58', '2.88', '19,734'], ['TAC 2011', '176', '4,695.70', '188.43', '99.70', '1.00', '24,672'], ['CNNDM', '287,227/13,368/11,490', '810.57', '39.78', '56.20', '3.68', '717,951']] | Table 3 compares Multi-News to other news datasets used in experiments below. We choose to compare Multi-News with DUC data from 2003 and 2004 and TAC 2011 data, which are typically used in multi-document settings. Additionally, we compare to the single-document CNNDM dataset, as this has been recently used in work whi... |
Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model | 1906.01749v3 | Table 4: Percentage of n-grams in summaries which do not appear in the input documents , a measure of the abstractiveness, in relevant datasets. | ['[BOLD] % novel n-grams', '[BOLD] Multi-News', '[BOLD] DUC03+04', '[BOLD] TAC11', '[BOLD] CNNDM'] | [['uni-grams', '17.76', '27.74', '16.65', '19.50'], ['bi-grams', '57.10', '72.87', '61.18', '56.88'], ['tri-grams', '75.71', '90.61', '83.34', '74.41'], ['4-grams', '82.30', '96.18', '92.04', '82.83']] | We report the percentage of n-grams in the gold summaries which do not appear in the input documents as a measure of how abstractive our summaries are in Table 4. As the table shows, the smaller MDS datasets tend to be more abstractive, but Multi-News is comparable and similar to the abstractiveness of SDS datasets. Gr... |
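The abstractiveness measure in Table 4 is the fraction of summary n-grams that never occur in the input documents. Below is a minimal sketch over n-gram types; note that some papers count token occurrences rather than distinct types, so treat this exact definition as an assumption.

```python
def ngrams(tokens, n):
    """Set of distinct n-gram tuples in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def pct_novel_ngrams(summary, source, n):
    """Share of summary n-gram types absent from the source documents."""
    summ = ngrams(summary.split(), n)
    src = ngrams(source.split(), n)
    return 100 * len(summ - src) / max(len(summ), 1)

# Toy single-source example; for MDS the sources would be concatenated.
source = "the storm hit the coast on monday causing major flooding"
summary = "a storm caused major flooding on the coast"
for n in (1, 2, 3):
    print(n, round(pct_novel_ngrams(summary, source, n), 1))
```

Higher percentages at larger n indicate more rewriting rather than copying, which is why the tri- and 4-gram rows of Table 4 are the most discriminative.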
Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model | 1906.01749v3 | Table 6: ROUGE scores for models trained and tested on the Multi-News dataset. | ['[BOLD] Method', '[BOLD] R-1', '[BOLD] R-2', '[BOLD] R-SU'] | [['First-1', '26.83', '7.25', '6.46'], ['First-2', '35.99', '10.17', '12.06'], ['First-3', '39.41', '11.77', '14.51'], ['LexRank Erkan and Radev ( 2004 )', '38.27', '12.70', '13.20'], ['TextRank Mihalcea and Tarau ( 2004 )', '38.44', '13.10', '13.50'], ['MMR Carbonell and Goldstein ( 1998 )', '38.77', '11.98', '12.91']... | Our model outperforms PG-MMR when trained and tested on the Multi-News dataset. The Transformer performs best in terms of R-1 while Hi-MAP outperforms it on R-2 and R-SU. Also, we notice a drop in performance between PG-original, and PG-MMR (which takes the pre-trained PG-original and applies MMR on top of the model). |
RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension | 1910.04601v1 | Table 5: Performance breakdown of the PRKGC+NS model. Derivation Precision denotes ROUGE-L F1 of generated NLDs. | ['# gold NLD steps', 'Answer Prec.', 'Derivation Prec.'] | [['1', '79.2', '38.4'], ['2', '64.4', '48.6'], ['3', '62.3', '41.3']] | As shown in Table 5, as the number of required derivation steps increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. [CONTINUE] This indicates that the challenge of RC-QEDE is in how to extract relevant information from supporting documents and synthesize these multiple facts to deriv... |
RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension | 1910.04601v1 | Table 2: Ratings of annotated NLDs by human judges. | ['# steps', 'Reachability', 'Derivability Step 1', 'Derivability Step 2', 'Derivability Step 3'] | [['1', '3.0', '3.8', '-', '-'], ['2', '2.8', '3.8', '3.7', '-'], ['3', '2.3', '3.9', '3.8', '3.8']] | The evaluation results shown in Table 2 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability). [CONTINUE] On the other hand, we found the quality of 3-step NLDs is relatively lower than the others. [CONTINUE] Crowdworkers found tha... |
RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension | 1910.04601v1 | Table 4: Performance of RC-QEDE of our baseline models (see Section 2.1 for further details of each evaluation metric). “NS” indicates the use of annotated NLDs as supervision (i.e. using Ld during training). | ['Model', 'Answerability Macro P/R/F', '# Answerable', 'Answer Prec.', 'Derivation Prec. RG-L (P/R/F)', 'Derivation Prec. BL-4'] | [['Shortest Path', '54.8/55.5/53.2', '976', '3.6', '56.7/38.5/41.5', '31.3'], ['PRKGC', '52.6/51.5/50.7', '1,021', '45.2', '40.7/60.7/44.7', '30.9'], ['PRKGC+NS', '53.6/54.1/52.1', '980', '45.4', '42.2/61.6/46.1', '33.4']] | As shown in Table 4, the PRKGC models learned to reason over more than simple shortest paths. [CONTINUE] Yet, the PRKGC model does not give considerably good results, which indicates the non-triviality of RC-QEDE. [CONTINUE] Although the PRKGC model does not receive supervision about human-generated NLDs, paths with the ma... |
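Derivation precision in Tables 4 and 5 is ROUGE-L, which rests on the longest common subsequence (LCS) between generated and gold text. A bare-bones LCS-based ROUGE-L F1 follows; real implementations (e.g., the official ROUGE toolkit) add stemming, tokenization rules and bootstrap confidence intervals, so this is only a sketch, and the example strings are invented.

```python
def lcs_len(a, b):
    # Classic dynamic program for the longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1: LCS length normalized by candidate (P) and reference (R)."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)

gold = "the answer entity is located in France"
generated = "the entity is in France"
print(round(rouge_l_f1(generated, gold), 3))  # 0.833
```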
RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension | 1910.04601v1 | Table 7: Accuracy of our baseline models and previous work on WikiHop Welbl2017a’s development set. Note that our baseline models are explainable, whereas the others are not. “NS” indicates the use of annotated NLDs as supervision. Accuracies of existing models are taken from the papers. | ['Model', 'Accuracy'] | [['PRKGC (our work)', '51.4'], ['PRKGC+NS (our work)', '[BOLD] 52.7'], ['BiDAF\xa0Welbl2017a', '42.1'], ['CorefGRU\xa0Dhingra2018NeuralCoreference', '56.0'], ['MHPGM+NOIC\xa0Bauer2018CommonsenseTasks', '58.2'], ['EntityGCN\xa0DeCao2018QuestionNetworks', '65.3'], ['CFC\xa0Zhong2019Coarse-GrainAnswering', '66.4']] | As shown in Table 7, the PRKGC models achieve a comparable performance to other sophisticated neural models. |
Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents | 1901.02081v1 | Table 3: The estimation of impact of various design choices on the final result. The entries are sorted by the out-of-fold scores from CV. The SUBMISSION here uses score from ep_1 run for the single model and ep_2 for the ensemble performance. | ['ID LSTM-800', '5-fold CV 70.56', 'Δ 0.66', 'Single model 67.54', 'Δ 0.78', 'Ensemble 67.65', 'Δ 0.30'] | [['LSTM-400', '70.50', '0.60', '[BOLD] 67.59', '0.83', '[BOLD] 68.00', '0.65'], ['IN-TITLE', '70.11', '0.21', '[EMPTY]', '[EMPTY]', '67.52', '0.17'], ['[BOLD] SUBMISSION', '69.90', '–', '66.76', '–', '67.35', '–'], ['NO-HIGHWAY', '69.72', '−0.18', '66.42', '−0.34', '66.64', '−0.71'], ['NO-OVERLAPS', '69.46', '−0.44', '... | The results are presented in Table 3. [CONTINUE] Perhaps the most striking thing about the ablation results is that the 'traditional' LSTM layout outperformed the 'alternating' one we chose for our submission. [CONTINUE] Apart from the flipped results of the LSTM-800 and the LSTM-400, small differences in CV score are s... |
Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents | 1901.02081v1 | Table 1: The scores of our three submitted runs for similarity threshold 50%. | ['Run ID', 'Official score', 'Score with correction'] | [['ep_1', '60.29', '66.76'], ['ep_2', '[BOLD] 60.90', '[BOLD] 67.35'], ['ep_3', '60.61', '67.07']] | The system's official score was 60.9% (micro-F1). [CONTINUE] Therefore, we report both the official score (from our second submission) and the result of re-scoring our second submission after replacing these 10 files with the ones from our first submission. The results are presented in Tables 1 and 2. |
Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents | 1901.02081v1 | Table 2: Detailed results of our best run (after correcting the submission format), along with numbers of mentions in the training set. | ['Mention class', 'No. examples', 'F1 (5-CV)', 'F1 (Test)'] | [['Total', '15265', '69.90', '67.35'], ['Endpoint', '4411', '66.89', '61.47'], ['TestArticle', '1922', '63.29', '64.19'], ['Species', '1624', '95.33', '95.95'], ['GroupName', '963', '67.08', '62.40'], ['EndpointUnitOfMeasure', '706', '42.27', '40.41'], ['TimeEndpointAssessed', '672', '57.27', '55.51'], ['Dose', '659', ... | Therefore, we report both the official score (from our second submission) and the result of re-scoring our second submission after replacing these 10 files with the ones from our first submission. The results are presented in Tables 1 and 2. |
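The micro-F1 used for the official scores in Tables 1 and 2 pools true positives, false positives and false negatives over all mention classes before computing precision and recall, so frequent classes such as Endpoint dominate the total. A sketch with hypothetical per-class counts (the numbers below are invented, not derived from the tables):

```python
def micro_f1(per_class_counts):
    """Micro-F1: pool TP/FP/FN across classes, then compute P, R, F1."""
    tp = sum(c[0] for c in per_class_counts.values())
    fp = sum(c[1] for c in per_class_counts.values())
    fn = sum(c[2] for c in per_class_counts.values())
    p, r = tp / (tp + fp), tp / (tp + fn)
    return 2 * p * r / (p + r)

# Hypothetical (TP, FP, FN) counts per mention class:
counts = {"Endpoint": (300, 90, 110), "Species": (150, 5, 8), "Dose": (50, 20, 25)}
print(round(micro_f1(counts), 3))
```

This pooling explains why a class like Species (F1 ≈ 0.96 in Table 2) lifts the total far less than the much larger Endpoint class.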
Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation | 1907.12894v1 | Table 3: Results of non-RL (top), cross-input (DeepTD) and input-specific (REAPER) RL approaches (middle) compared with RELIS. | ['[EMPTY]', 'DUC’01 <italic>R</italic>1', 'DUC’01 <italic>R</italic>2', 'DUC’02 <italic>R</italic>1', 'DUC’02 <italic>R</italic>2', 'DUC’04 <italic>R</italic>1', 'DUC’04 <italic>R</italic>2'] | [['ICSI', '33.31', '7.33', '35.04', '8.51', '37.31', '9.36'], ['PriorSum', '35.98', '7.89', '36.63', '8.97', '38.91', '10.07'], ['TCSum', '<bold>36.45</bold>', '7.66', '36.90', '8.61', '38.27', '9.66'], ['TCSum−', '33.45', '6.07', '34.02', '7.39', '35.66', '8.66'], ['SRSum', '36.04', '8.44', '<bold>38.93</bold>', '<bol... | In Table 3, we compare RELIS with non-RL-based and RL-based summarisation systems. For non-RL-based systems, we report ICSI [Gillick and Favre, 2009] maximising [CONTINUE] the bigram overlap of summary and input using integer linear programming, PriorSum [Cao et al., 2015] learning sentence quality with CNNs, TCSum [Ca... |
Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation | 1907.12894v1 | Table 2: The correlation of approximated and ground-truth ranking. ^σUx has significantly higher correlation over all other approaches. | ['[EMPTY]', 'DUC’01 <italic>ρ</italic>', 'DUC’01 ndcg', 'DUC’02 <italic>ρ</italic>', 'DUC’02 ndcg', 'DUC’04 <italic>ρ</italic>', 'DUC’04 ndcg'] | [['ASRL', '.176', '.555', '.131', '.537', '.145', '.558'], ['REAPER', '.316', '.638', '.301', '.639', '.372', '.701'], ['JS', '.549', '.736', '.525', '.700', '.570', '.763'], ['Our ^<italic>σUx</italic>', '<bold>.601</bold>', '<bold>.764</bold>', '<bold>.560</bold>', '<bold>.727</bold>', '<bold>.617</bold>', '<bold>.80... | Table 2 compares the quality of our ^σUx with other widely used rewards for input-specific RL (see §4). ^σUx has significantly higher correlation to the ground-truth ranking compared with all other approaches, confirming that our proposed L2R method yields a superior reward oracle. |
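Table 2's two columns measure how well a learned reward ranks candidate summaries: Spearman's ρ between the score lists, and nDCG of the ranking the reward induces. A sketch with toy scores follows; the choice of ground-truth scores as nDCG gains is an assumption here, since the paper may bin or rescale differently.

```python
import numpy as np
from scipy.stats import spearmanr

def ndcg(ground_truth_scores, predicted_scores):
    """nDCG of the ranking induced by the predicted reward,
    using the ground-truth scores directly as gains."""
    order = np.argsort(predicted_scores)[::-1]          # best-first by prediction
    gains = np.asarray(ground_truth_scores)[order]
    ideal = np.sort(ground_truth_scores)[::-1]          # best-first by truth
    discounts = 1 / np.log2(np.arange(2, len(gains) + 2))
    return (gains * discounts).sum() / (ideal * discounts).sum()

# Toy summary qualities: true scores vs a learned reward's estimates.
truth = [0.9, 0.7, 0.4, 0.2, 0.1]
approx = [0.8, 0.75, 0.3, 0.35, 0.05]
rho, _ = spearmanr(truth, approx)
print(f"Spearman rho = {rho:.3f}, nDCG = {ndcg(truth, approx):.3f}")
```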
UKP TU-DA at GermEval 2017: Deep Learning for Aspect Based Sentiment Detection | 3 | Table 6: Task B results with polarity features | ['[EMPTY]', 'Micro F1'] | [['Baseline', '0.709'], ['W2V (<italic>d</italic>=50)', '0.748'], ['W2V (<italic>d</italic>=500)', '0.756'], ['S2V', '0.748'], ['S2V + W2V (<italic>d</italic>=50)', '0.755'], ['S2V + K + W2V(<italic>d</italic>=50)', '0.751'], ['SIF (DE)', '0.748'], ['SIF (DE-EN)', '<bold>0.757</bold>']] | We furthermore trained models on additional polarity features for Task B as mentioned before. Adding the polarity features improved the results for all models except for those using SIF embeddings (Table 6). |
UKP TU-DA at GermEval 2017: Deep Learning for Aspect Based Sentiment Detection | 3 | Table 4: Task A results | ['[EMPTY]', 'Micro F1'] | [['Baseline', '0.882'], ['W2V (<italic>d</italic>=50)', '0.883'], ['W2V (<italic>d</italic>=500)', '<bold>0.897</bold>'], ['S2V', '0.885'], ['S2V + W2V (<italic>d</italic>=50)', '0.891'], ['S2V + K + W2V(<italic>d</italic>=50)', '0.890'], ['SIF (DE)', '0.895'], ['SIF (DE-EN)', '0.892']] | The results of the models that performed better than the baseline are reported in Table 4. As can be seen, all models only slightly outperform the baseline in Task A. |
UKP TU-DA at GermEval 2017: Deep Learning for Aspect Based Sentiment Detection | 3 | Table 5: Task B results | ['[EMPTY]', 'Micro F1'] | [['Baseline', '0.709'], ['W2V (<italic>d</italic>=50)', '0.736'], ['W2V (<italic>d</italic>=500)', '0.753'], ['S2V', '0.748'], ['S2V + W2V (<italic>d</italic>=50)', '0.744'], ['S2V + K + W2V(<italic>d</italic>=50)', '0.749'], ['SIF (DE)', '0.759'], ['SIF (DE-EN)', '<bold>0.765</bold>']] | For Task B, all models trained on the stacked learner beat the baseline substantially even when using only plain averaged word embeddings. |
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations | 1808.08672v2 | Table 2: Ablation study results. | ['[BOLD] Variation', '[BOLD] Accuracy (%)', '[BOLD] Δ%'] | [['Submitted', '[BOLD] 69.23', '-'], ['No emoji', '68.36', '- 0.87'], ['No ELMo', '65.52', '- 3.71'], ['Concat Pooling', '68.47', '- 0.76'], ['LSTM hidden=4096', '69.10', '- 0.13'], ['LSTM hidden=1024', '68.93', '- 0.30'], ['LSTM hidden=512', '68.43', '- 0.80'], ['POS emb dim=100', '68.99', '- 0.24'], ['POS emb dim=75'... | We performed an ablation study on a single model that obtained 69.23% accuracy on the validation set. Results are summarized in Table 2. [CONTINUE] We can observe that the architectural choice that had the greatest impact on our model was the ELMo layer, providing a 3.71% boost in performance as compared to using Glo... |
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations | 1808.08672v2 | Table 3: Classification Report (Test Set). | ['[EMPTY]', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1-score'] | [['anger', '0.643', '0.601', '0.621'], ['disgust', '0.703', '0.661', '0.682'], ['fear', '0.742', '0.721', '0.732'], ['joy', '0.762', '0.805', '0.783'], ['sad', '0.685', '0.661', '0.673'], ['surprise', '0.627', '0.705', '0.663'], ['Average', '0.695', '0.695', '0.694']] | Table 3 shows the corresponding classification report. [CONTINUE] In general, we confirm what Klinger et al. (2018) report: anger was the most difficult class to predict, followed by surprise, whereas joy, fear, and disgust are the better performing ones. |
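Per-class precision/recall/F1 tables like Table 3 are exactly what scikit-learn's `classification_report` produces; the gold and predicted labels below are invented for illustration.

```python
from sklearn.metrics import classification_report

# Illustrative gold and predicted emotion labels (not the paper's data).
gold = ["anger", "joy", "fear", "joy", "sad", "surprise", "disgust", "joy"]
pred = ["anger", "joy", "fear", "joy", "sad", "surprise", "disgust", "anger"]
print(classification_report(gold, pred, digits=3))
```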
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations | 1808.08672v2 | Table 4: Number of tweets on the test set with and without emoji and hashtags. The number between parentheses is the proportion of tweets classified correctly. | ['[EMPTY]', '[BOLD] Present', '[BOLD] Not Present'] | [['Emoji', '4805 (76.6%)', '23952 (68.0%)'], ['Hashtags', '2122 (70.5%)', '26635 (69.4%)']] | Table 4 shows the overall effect of hashtags and emoji on classification performance. [CONTINUE] Tweets containing emoji seem to be easier for the model to classify than those without. [CONTINUE] Hashtags also have a [CONTINUE] positive effect on classification performance, however it is less significant. |
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations | 1808.08672v2 | Table 5: Fine grained performance on tweets containing emoji, and the effect of removing them. | ['[BOLD] Emoji alias', '[BOLD] N', '[BOLD] emoji #', '[BOLD] emoji %', '[BOLD] no-emoji #', '[BOLD] no-emoji %', '[BOLD] Δ%'] | [['mask', '163', '154', '94.48', '134', '82.21', '- 12.27'], ['two_hearts', '87', '81', '93.10', '77', '88.51', '- 4.59'], ['heart_eyes', '122', '109', '89.34', '103', '84.43', '- 4.91'], ['heart', '267', '237', '88.76', '235', '88.01', '- 0.75'], ['rage', '92', '78', '84.78', '66', '71.74', '- 13.04'], ['cry', '116', ... | Table 5 shows the effect specific emoji have on classification performance. [CONTINUE] It is clear some emoji strongly contribute to improving prediction quality. [CONTINUE] The most interesting ones are mask, rage, and cry, which significantly increase accuracy. [CONTINUE] Further, contrary to intuition, the sob emoji... |
Solving Hard Coreference Problems | 1907.05524v1 | Table 7: Performance results on Winograd and WinoCoref datasets. All our three systems are trained on WinoCoref, and we evaluate the predictions on both datasets. Our systems improve over the baselines by more than 20% on Winograd and over 15% on WinoCoref. | ['Dataset', 'Metric', 'Illinois', 'IlliCons', 'rahman2012resolving', 'KnowFeat', 'KnowCons', 'KnowComb'] | [['[ITALIC] Winograd', 'Precision', '51.48', '53.26', '73.05', '71.81', '74.93', '[BOLD] 76.41'], ['[ITALIC] WinoCoref', 'AntePre', '68.37', '74.32', '—–', '88.48', '88.95', '[BOLD] 89.32']] | Performance results on the Winograd and WinoCoref datasets are shown in Table 7. The best performing system is KnowComb. It improves by over 20% over a state-of-the-art general coreference system on Winograd and also outperforms Rahman and Ng (2012) by a margin of 3.3%. On the WinoCoref dataset, it improves by 15%. These resul... |
Solving Hard Coreference Problems | 1907.05524v1 | Table 8: Performance results on ACE and OntoNotes datasets. Our system achieves the same level of performance as a state-of-the-art general coreference system. | ['System', 'MUC', 'BCUB', 'CEAFe', 'AVG'] | [['ACE', 'ACE', 'ACE', 'ACE', 'ACE'], ['IlliCons', '[BOLD] 78.17', '81.64', '[BOLD] 78.45', '[BOLD] 79.42'], ['KnowComb', '77.51', '[BOLD] 81.97', '77.44', '78.97'], ['OntoNotes', 'OntoNotes', 'OntoNotes', 'OntoNotes', 'OntoNotes'], ['IlliCons', '84.10', '[BOLD] 78.30', '[BOLD] 68.74', '[BOLD] 77.05'], ['KnowComb', '[B... | Performance results on the standard ACE and OntoNotes datasets are shown in Table 8. Our KnowComb system achieves the same level of performance as does the state-of-the-art general coreference system we base it on. As hard coreference problems are rare in standard coreference datasets, we do not have significant performance im... |
Solving Hard Coreference Problems | 1907.05524v1 | Table 9: Distribution of instances in Winograd dataset of each category. Cat1/Cat2 is the subset of instances that require Type 1/Type 2 schema knowledge, respectively. All other instances are put into Cat3. Cat1 and Cat2 instances can be covered by our proposed Predicate Schemas. | ['Category', 'Cat1', 'Cat2', 'Cat3'] | [['Size', '317', '1060', '509'], ['Portion', '16.8%', '56.2%', '27.0%']] | Detailed Analysis To study the coverage of our Predicate Schemas knowledge, we label the instances in Winograd (which also applies to WinoCoref) with the type of Predicate Schemas knowledge required. The distribution of the instances is shown in Table 9. Our proposed Predicate Schemas cover 73% of the instances. |
Solving Hard Coreference Problems | 1907.05524v1 | Table 10: Ablation Study of Knowledge Schemas on WinoCoref. The first line specifies the performance for KnowComb with only Type 1 schema knowledge tested on all data while the third line specifies the performance using the same model but tested on Cat1 data. The second line specifies the performance results for KnowCo... | ['Schema', 'AntePre(Test)', 'AntePre(Train)'] | [['Type 1', '76.67', '86.79'], ['Type 2', '79.55', '88.86'], ['Type 1 (Cat1)', '90.26', '93.64'], ['Type 2 (Cat2)', '83.38', '92.49']] | We also provide an ablation study on the WinoCoref dataset in Table 10. These results use the best performing KnowComb system. They show that both Type 1 and Type 2 schema knowledge have higher precision on Category 1 and Category 2 data instances, respectively, compared to that on full data. Type 1 and Type 2 knowledge h... |
Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability | 1912.12628v1 | Table 1: Accuracy obtained by training a standalone classifier, applying the API and the proposed wrapper for each domain | ['[EMPTY]', '[BOLD] BB source acc.', '[BOLD] BB target acc.', '[BOLD] Non-reject. acc. (10/20/30%)', '[BOLD] Class. quality (10/20/30%)', '[BOLD] Reject. quality (10/20/30%)'] | [['[BOLD] Apply Yelp BB to SST-2', '89.18±0.08%', '77.13±0.52%', '82.43±0.22% 88.19±0.50% 93.60±0.16%', '80.40±0.39% 83.11±0.80% 83.05±0.23%', '6.03±0.45 6.04±0.51 4.97±0.07'], ['[BOLD] Apply SST-2 BB to Yelp', '83.306±0.18%', '82.106±0.88%', '87.98±0.18% 92.13±0.38% 94.19±0.33%', '85.49±0.88% 84.53±0.38% 78.99±0.46%',... | Table 1 shows the numerical results obtained during the experiments for the four combinations tested. [CONTINUE] In general terms, the results displayed in Table 1 show that the rejection method can reduce the error of the output predictions when applying a pre-trained black-box classification system to a new domain. [... |
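The "non-rejected accuracy" at 10/20/30% in Table 1 comes from discarding the most-uncertain fraction of predictions and scoring only the rest. The simulation below sketches that mechanic; the uncertainty model is made up for illustration, whereas the paper derives uncertainty from a Dirichlet wrapper over the black-box outputs.

```python
import numpy as np

def non_rejected_accuracy(correct, uncertainty, reject_rate):
    """Reject the most-uncertain fraction of predictions and report
    accuracy on the remainder (the 'non-rejected accuracy')."""
    correct = np.asarray(correct, dtype=float)
    keep = np.argsort(uncertainty)[: int(len(correct) * (1 - reject_rate))]
    return correct[keep].mean()

rng = np.random.default_rng(0)
n = 1000
uncertainty = rng.uniform(size=n)
# Simulate a classifier whose errors concentrate on uncertain inputs.
correct = rng.uniform(size=n) > 0.3 * uncertainty
for rate in (0.1, 0.2, 0.3):
    acc = non_rejected_accuracy(correct, uncertainty, rate)
    print(f"{rate:.0%} rejected -> accuracy {acc:.3f}")
```

As in the table, accuracy on the retained predictions rises with the rejection rate whenever the uncertainty estimate actually correlates with errors.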