| prompts (string, 81–413 chars) | metrics_response (string, 0–371 chars) |
|---|---|
What metrics were used to measure the GEC-DI (LM+GED) model in the Improving Seq2Seq Grammatical Error Correction via Decoding Interventions paper on the MuCGEC dataset? | F0.5 |
What metrics were used to measure the VERNet model in the Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction paper on the JFLEG dataset? | GLEU |
What metrics were used to measure the Transformer + Pre-train with Pseudo Data + BERT model in the Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction paper on the JFLEG dataset? | GLEU |
What metrics were used to measure the SMT + BiGRU model in the Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation paper on the JFLEG dataset? | GLEU |
What metrics were used to measure the Copy-augmented Model (4 Ensemble +Denoising Autoencoder) model in the Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data paper on the JFLEG dataset? | GLEU |
What metrics were used to measure the Transformer model in the Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task paper on the JFLEG dataset? | GLEU |
What metrics were used to measure the CNN Seq2Seq model in the A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction paper on the JFLEG dataset? | GLEU |
What metrics were used to measure the STG-Joint model in the FCGEC: Fine-Grained Corpus for Chinese Grammatical Error Correction paper on the FCGEC dataset? | exact match, F0.5 |
What metrics were used to measure the RedPenNet model in the RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the clang_large_ft2-gector model in the Improved grammatical error correction by ranking elementary edits paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the DeBERTa + RoBERTa + XLNet model in the Ensembling and Knowledge Distilling of Large Sequence Taggers for Grammatical Error Correction paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Sequence tagging + token-level transformations + two-stage fine-tuning (+RoBERTa, XLNet) model in the GECToR -- Grammatical Error Correction: Tag, Not Rewrite paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the BEA Combination model in the Learning to combine Grammatical Error Corrections paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the GEC-DI (LM+GED) model in the Improving Seq2Seq Grammatical Error Correction via Decoding Interventions paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the LM-Critic model in the LM-Critic: Language Models for Unsupervised Grammatical Error Correction paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Sequence tagging + token-level transformations + two-stage fine-tuning (+XLNet) model in the GECToR -- Grammatical Error Correction: Tag, Not Rewrite paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Transformer + Pre-train with Pseudo Data model in the An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Transformer + Pre-train with Pseudo Data (+BERT) model in the Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Transformer model in the Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Transformer model in the A Neural Grammatical Error Correction System Built On Better Pre-training and Sequential Transfer Learning paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the VERNet model in the Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Ensemble of models model in the The LAIX Systems in the BEA-2019 GEC Shared Task paper on the BEA-2019 (test) dataset? | F0.5 |
What metrics were used to measure the Transformer model in the Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task paper on the _Restricted_ dataset? | GLEU |
What metrics were used to measure the CNN Seq2Seq model in the A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction paper on the _Restricted_ dataset? | GLEU |
What metrics were used to measure the gT5 xxl model in the A Simple Recipe for Multilingual Grammatical Error Correction paper on the Falko-MERLIN dataset? | F0.5 |
What metrics were used to measure the Transformer model in the Grammatical Error Correction in Low-Resource Scenarios paper on the Falko-MERLIN dataset? | F0.5 |
What metrics were used to measure the Transformer - synthetic pretrain only model in the Grammatical Error Correction in Low-Resource Scenarios paper on the Falko-MERLIN dataset? | F0.5 |
What metrics were used to measure the Multilayer Convolutional Encoder-Decoder model in the Using Wikipedia Edits in Low Resource Grammatical Error Correction paper on the Falko-MERLIN dataset? | F0.5 |
What metrics were used to measure the SMT + BiGRU model in the Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation paper on the CoNLL-2014 Shared Task (10 annotations) dataset? | F0.5 |
What metrics were used to measure the CNN Seq2Seq model in the A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction paper on the CoNLL-2014 Shared Task (10 annotations) dataset? | F0.5 |
What metrics were used to measure the GEC-DI (LM+GED) model in the Improving Seq2Seq Grammatical Error Correction via Decoding Interventions paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the T5 model in the A Simple Recipe for Multilingual Grammatical Error Correction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the SynGEC model in the SynGEC: Syntax-Enhanced Grammatical Error Correction with a Tailored GEC-Oriented Parser paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Sequence tagging + token-level transformations + two-stage fine-tuning (+BERT, RoBERTa, XLNet) model in the GECToR -- Grammatical Error Correction: Tag, Not Rewrite paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the LM-Critic model in the LM-Critic: Language Models for Unsupervised Grammatical Error Correction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Sequence tagging + token-level transformations + two-stage fine-tuning (+XLNet) model in the GECToR -- Grammatical Error Correction: Tag, Not Rewrite paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Transformer + Pre-train with Pseudo Data (+BERT) model in the Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Transformer + Pre-train with Pseudo Data model in the An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the VERNet model in the Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the BART model in the Stronger Baselines for Grammatical Error Correction Using Pretrained Encoder-Decoder Model paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Sequence Labeling with edits using BERT, Faster inference model in the Parallel Iterative Edit Models for Local Sequence Transduction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Copy-augmented Model (4 Ensemble +Denoising Autoencoder) model in the Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Sequence Labeling with edits using BERT, Faster inference (Single Model) model in the Parallel Iterative Edit Models for Local Sequence Transduction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the CNN Seq2Seq + Quality Estimation model in the Neural Quality Estimation of Grammatical Error Correction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the SMT + BiGRU model in the Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the Transformer model in the Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the CNN Seq2Seq model in the A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction paper on the CoNLL-2014 Shared Task dataset? | F0.5, Precision, Recall |
What metrics were used to measure the CNN Seq2Seq + Fluency Boost model in the Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study paper on the Unrestricted dataset? | F0.5, GLEU |
What metrics were used to measure the + BIFI (ours) model in the LM-Critic: Language Models for Unsupervised Grammatical Error Correction paper on the Unrestricted dataset? | F0.5, GLEU |
What metrics were used to measure the CNN Seq2Seq + Fluency Boost and inference model in the Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study paper on the Unrestricted dataset? | F0.5, GLEU |
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the ParTUT dataset? | LAS, UAS |
What metrics were used to measure the UDify model in the 75 Languages, 1 Model: Parsing Universal Dependencies Universally paper on the ParTUT dataset? | LAS, UAS |
What metrics were used to measure the MFVI model in the Second-Order Neural Dependency Parsing with Message Passing and End-to-End Training paper on the Chinese Treebank dataset? | LAS, UAS |
What metrics were used to measure the SuPar-BERTweet model in the Cross-Dialect Social Media Dependency Parsing for Social Scientific Entity Attribute Analysis paper on the Tweebank dataset? | Labelled Attachment Score, Unlabeled Attachment Score |
What metrics were used to measure the Ensemble (20) model in the Parsing Tweets into Universal Dependencies paper on the Tweebank dataset? | Labelled Attachment Score, Unlabeled Attachment Score |
What metrics were used to measure the spaCy-XLM-RoBERTa model in the Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis paper on the Tweebank dataset? | Labelled Attachment Score, Unlabeled Attachment Score |
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the Sequoia Treebank dataset? | LAS, UAS |
What metrics were used to measure the UDify model in the 75 Languages, 1 Model: Parsing Universal Dependencies Universally paper on the Sequoia Treebank dataset? | LAS, UAS |
What metrics were used to measure the UDPipe 2.0 + mBERT + FLAIR model in the Evaluating Contextualized Embeddings on 54 Languages in POS Tagging, Lemmatization and Dependency Parsing paper on the Universal Dependencies dataset? | LAS, UAS, BLEX |
What metrics were used to measure the UDify model in the 75 Languages, 1 Model: Parsing Universal Dependencies Universally paper on the Universal Dependencies dataset? | LAS, UAS, BLEX |
What metrics were used to measure the HIT-SCIR model in the Towards Better UD Parsing: Deep Contextualized Word Embeddings, Ensemble, and Treebank Concatenation paper on the Universal Dependencies dataset? | LAS, UAS, BLEX |
What metrics were used to measure the Stanford+ model in the Universal Dependency Parsing from Scratch paper on the Universal Dependencies dataset? | LAS, UAS, BLEX |
What metrics were used to measure the TurkuNLP model in the Turku Neural Parser Pipeline: An End-to-End System for the CoNLL 2018 Shared Task paper on the Universal Dependencies dataset? | LAS, UAS, BLEX |
What metrics were used to measure the UDPipe 2.0 model in the UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task paper on the Universal Dependencies dataset? | LAS, UAS, BLEX |
What metrics were used to measure the CRFPar model in the Efficient Second-Order TreeCRF for Neural Dependency Parsing paper on the NLPCC-2019 dataset? | LAS, UAS |
What metrics were used to measure the da_dacy_large_tft-0.0.0 model in the DaCy: A Unified Framework for Danish NLP paper on the DaNE dataset? | LAS, UAS |
What metrics were used to measure the BiLSTM-CRF model in the From POS tagging to dependency parsing for biomedical event extraction paper on the GENIA - UAS dataset? | F1 |
What metrics were used to measure the SciBERT (SciVocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the GENIA - UAS dataset? | F1 |
What metrics were used to measure the SciBERT (Base Vocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the GENIA - UAS dataset? | F1 |
What metrics were used to measure the CRFPar model in the Efficient Second-Order TreeCRF for Neural Dependency Parsing paper on the CoNLL-2009 dataset? | LAS, UAS |
What metrics were used to measure the Biaffine Parser model in the Deep Biaffine Attention for Neural Dependency Parsing paper on the CoNLL-2009 dataset? | LAS, UAS |
What metrics were used to measure the Label Attention Layer + HPSG + XLNet model in the Rethinking Self-Attention: Towards Interpretability in Neural Parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the DMPar + XLNet model in the Enhancing Structure-aware Encoder with Extremely Limited Data for Graph-based Dependency Parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Deep Biaffine + RoBERTa model in the Deep Biaffine Attention for Neural Dependency Parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the HPSG Parser (Joint) + XLNet model in the Head-Driven Phrase Structure Grammar Parsing on Penn Treebank paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the MFVI model in the Second-Order Neural Dependency Parsing with Message Passing and End-to-End Training paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the CVT + Multi-Task model in the Semi-Supervised Sequence Modeling with Cross-View Training paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the RNG Transformer model in the Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the SpanRel model in the Generalizing Natural Language Analysis through Span-relation Representations paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the CRFPar model in the Efficient Second-Order TreeCRF for Neural Dependency Parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Left-to-Right Pointer Network model in the Left-to-Right Dependency Parsing with Pointer Networks paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Graph-based parser with GNNs model in the Graph-based Dependency Parsing with Graph Neural Networks paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Deep Biaffine model in the Deep Biaffine Attention for Neural Dependency Parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Stack-Pointer Network model in the Stack-Pointer Networks for Dependency Parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the jPTDP model in the An improved neural network model for joint POS tagging and dependency parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Experiment-bert model in the paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Andor et al. model in the Globally Normalized Transition-Based Neural Networks paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Distilled neural FOG model in the Distilling an Ensemble of Greedy Dependency Parsers into One MST Parser paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Weiss et al. model in the Structured Training for Neural Network Transition-Based Parsing paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the BIST transition-based parser model in the Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the Arc-hybrid model in the Training with Exploration Improves a Greedy Stack-LSTM Parser paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the BIST graph-based parser model in the Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations paper on the Penn Treebank dataset? | LAS, UAS, POS |
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the French GSD dataset? | LAS, UAS |
What metrics were used to measure the UDify model in the 75 Languages, 1 Model: Parsing Universal Dependencies Universally paper on the French GSD dataset? | LAS, UAS |
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the Spoken Corpus dataset? | LAS, UAS |
What metrics were used to measure the UDify model in the 75 Languages, 1 Model: Parsing Universal Dependencies Universally paper on the Spoken Corpus dataset? | LAS, UAS |
What metrics were used to measure the BiLSTM-CRF model in the From POS tagging to dependency parsing for biomedical event extraction paper on the GENIA - LAS dataset? | F1 |
What metrics were used to measure the SciBERT (SciVocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the GENIA - LAS dataset? | F1 |
What metrics were used to measure the SciBERT (Base Vocab) model in the SciBERT: A Pretrained Language Model for Scientific Text paper on the GENIA - LAS dataset? | F1 |
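
A minimal sketch of how a prompt/response table like the one above could be loaded and queried with the Hugging Face `datasets` library. The repository id used below is a placeholder, not the actual path of this dataset; only the column names `prompts` and `metrics_response` are taken from the card itself.

```python
# Sketch: load the table and filter rows by the reported metric.
# NOTE: "your-username/your-dataset-name" is a placeholder repository id;
# replace it with the real dataset path for this card.
from datasets import load_dataset

ds = load_dataset("your-username/your-dataset-name", split="train")  # placeholder id

# Each row pairs a natural-language question ("prompts") with the metric
# names reported for that model/paper/dataset combination ("metrics_response").
f05_rows = ds.filter(lambda row: "F0.5" in row["metrics_response"])

print(len(f05_rows), "rows report F0.5")
print(f05_rows[0]["prompts"])
```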