Columns: prompts (string, length 81–413), metrics_response (string, length 0–371)
What metrics were used to measure the Deep Convolutional Encoder; single-layer decoder model in the A Convolutional Encoder Model for Neural Machine Translation paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the LSTM model in the Sequence to Sequence Learning with Neural Networks paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the CSLM + RNN + WP model in the Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the Regularized LSTM model in the Recurrent Neural Network Regularization paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the Unsupervised PBSMT model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the PBSMT + NMT model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the GRU+Attention model in the Can Active Memory Replace Attention? paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the SMT + iterative backtranslation (unsupervised) model in the Unsupervised Statistical Machine Translation paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the Unsupervised NMT + Transformer model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the Unsupervised attentional encoder-decoder + BPE model in the Unsupervised Neural Machine Translation paper on the WMT2014 English-French dataset?
BLEU score, SacreBLEU, Hardware Burden, Operations per network pass
What metrics were used to measure the OmniNetP model in the OmniNet: Omnidirectional Representations from Transformers paper on the WMT2017 Russian-English dataset?
BLEU
What metrics were used to measure the Evolved Transformer Big model in The Evolved Transformer paper on the WMT2014 English-Czech dataset?
BLEU score
What metrics were used to measure the Evolved Transformer Base model in The Evolved Transformer paper on the WMT2014 English-Czech dataset?
BLEU score
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 German-English dataset?
SacreBLEU
What metrics were used to measure the Multilingual Transformer model in the Training and Adapting Multilingual NMT for Less-resourced and Morphologically Rich Languages paper on the ACCURAT balanced test corpus for under resourced languages Russian-Estonian dataset?
BLEU
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 Japanese-English dataset?
SacreBLEU
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 English-Chinese dataset?
SacreBLEU
What metrics were used to measure the Transformer base + BPE-Dropout model in the BPE-Dropout: Simple and Effective Subword Regularization paper on the IWSLT2017 French-English dataset?
Cased sacreBLEU, SacreBLEU
What metrics were used to measure the NLLB-200 model in the No Language Left Behind: Scaling Human-Centered Machine Translation paper on the IWSLT2017 French-English dataset?
Cased sacreBLEU, SacreBLEU
What metrics were used to measure the PaLM 2 model in the PaLM 2 Technical Report paper on the FRMT (Chinese - Taiwan) dataset?
BLEURT
What metrics were used to measure the PaLM model in the PaLM 2 Technical Report paper on the FRMT (Chinese - Taiwan) dataset?
BLEURT
What metrics were used to measure the Google Translate model in the PaLM 2 Technical Report paper on the FRMT (Chinese - Taiwan) dataset?
BLEURT
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 Czech-English dataset?
SacreBLEU
What metrics were used to measure the slone/mbart-large-51-myv-mul-v1 model in the The first neural machine translation system for the Erzya language paper on the slone/myv_ru_2022 myv-ru dataset?
ChrF++
What metrics were used to measure the Seq-KD + Seq-Inter + Word-KD model in the Sequence-Level Knowledge Distillation paper on the IWSLT2015 Thai-English dataset?
BLEU score
What metrics were used to measure the PaLM 2 model in the PaLM 2 Technical Report paper on the FRMT (Portuguese - Portugal) dataset?
BLEURT
What metrics were used to measure the PaLM model in the PaLM 2 Technical Report paper on the FRMT (Portuguese - Portugal) dataset?
BLEURT
What metrics were used to measure the Google Translate model in the PaLM 2 Technical Report paper on the FRMT (Portuguese - Portugal) dataset?
BLEURT
What metrics were used to measure the Transformer-base model in the Designing the Business Conversation Corpus paper on the Business Scene Dialogue EN-JA dataset?
BLEU
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the WMT2014 French-English dataset?
BLEU score
What metrics were used to measure the SMT + iterative backtranslation (unsupervised) model in the Unsupervised Statistical Machine Translation paper on the WMT2014 French-English dataset?
BLEU score
What metrics were used to measure the M_C model in the On Automatic Parsing of Log Records paper on the V_C (trained on T_H) dataset?
Median Relative Edit Distance
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 English-Russian dataset?
SacreBLEU
What metrics were used to measure the Multilingual Transformer model in the Training and Adapting Multilingual NMT for Less-resourced and Morphologically Rich Languages paper on the ACCURAT balanced test corpus for under resourced languages Estonian-Russian dataset?
BLEU
What metrics were used to measure the Attentional encoder-decoder + BPE model in the Edinburgh Neural Machine Translation Systems for WMT 16 paper on the WMT2016 Czech-English dataset?
BLEU score
What metrics were used to measure the Transformer-base model in the Designing the Business Conversation Corpus paper on the Business Scene Dialogue JA-EN dataset?
BLEU
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 English-German dataset?
SacreBLEU
What metrics were used to measure the PS-KD model in the Self-Knowledge Distillation with Progressive Refinement of Targets paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the Pervasive Attention model in the Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the Transformer with FRAGE model in the FRAGE: Frequency-Agnostic Word Representation paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the ConvS2S+Risk model in the Classical Structured Prediction Losses for Sequence to Sequence Learning paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the Denoising autoencoders (non-autoregressive) model in the Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the ConvS2S model in the Convolutional Sequence to Sequence Learning paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the Conv-LSTM (deep+pos) model in the A Convolutional Encoder Model for Neural Machine Translation paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the NPMT + language model model in the Towards Neural Phrase-based Machine Translation paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the RNNsearch model in the An Actor-Critic Algorithm for Sequence Prediction paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the DCCL model in the Compressing Word Embeddings via Deep Compositional Code Learning paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the Bi-GRU (MLE+SLE) model in the Neural Machine Translation by Jointly Learning to Align and Translate paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the FlowSeq-base model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the Word-level CNN w/attn, input feeding model in the Sequence-to-Sequence Learning as Beam-Search Optimization paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the Word-level LSTM w/attn model in the Sequence Level Training with Recurrent Neural Networks paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the QRNN model in the Quasi-Recurrent Neural Networks paper on the IWSLT2015 German-English dataset?
BLEU score
What metrics were used to measure the slone/mbart-large-51-mul-myv-v1 model in the The first neural machine translation system for the Erzya language paper on the slone/myv_ru_2022 ru-myv dataset?
ChrF++
What metrics were used to measure the PiNMT model in the Integrating Pre-trained Language Model into Neural Machine Translation paper on the IWSLT2014 English-German dataset?
BLEU score
What metrics were used to measure the Bi-SimCut model in the Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation paper on the IWSLT2014 English-German dataset?
BLEU score
What metrics were used to measure the SimCut model in the Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation paper on the IWSLT2014 English-German dataset?
BLEU score
What metrics were used to measure the Unidrop model in the UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost paper on the IWSLT2014 English-German dataset?
BLEU score
What metrics were used to measure the MixedRepresentations model in the Sequence Generation with Mixed Representations paper on the IWSLT2014 English-German dataset?
BLEU score
What metrics were used to measure the OmniNetP model in the OmniNet: Omnidirectional Representations from Transformers paper on the WMT2017 English-Finnish dataset?
BLEU
What metrics were used to measure the Facebook FAIR (ensemble) model in the Facebook FAIR's WMT19 News Translation Task Submission paper on the WMT2019 English-German dataset?
BLEU score, SacreBLEU
What metrics were used to measure the Exploiting Mono at Scale (single) model in the Exploiting Monolingual Data at Scale for Neural Machine Translation paper on the WMT2019 English-German dataset?
BLEU score, SacreBLEU
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 English-Japanese dataset?
SacreBLEU
What metrics were used to measure the C2-50k Segmentation model in the Neural Machine Translation of Rare Words with Subword Units paper on the WMT2015 English-Russian dataset?
BLEU score
What metrics were used to measure the PENELOPIE (Transformers-based Greek-to-English NMT) model in the PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation paper on the Tatoeba (EL-to-EN) dataset?
BLEU
What metrics were used to measure the DynamicConv model in the Pay Less Attention with Lightweight and Dynamic Convolutions paper on the WMT 2017 English-Chinese dataset?
BLEU score
What metrics were used to measure the LightConv model in the Pay Less Attention with Lightweight and Dynamic Convolutions paper on the WMT 2017 English-Chinese dataset?
BLEU score
What metrics were used to measure the Hassan et al. (2018) model in the Achieving Human Parity on Automatic Chinese to English News Translation paper on the WMT 2017 English-Chinese dataset?
BLEU score
What metrics were used to measure the Attentional encoder-decoder + BPE model in the Edinburgh Neural Machine Translation Systems for WMT 16 paper on the WMT2016 Russian-English dataset?
BLEU score
What metrics were used to measure the Transformer trained on highly filtered data model in the Impact of Corpora Quality on Neural Machine Translation paper on the WMT 2017 Latvian-English dataset?
BLEU
What metrics were used to measure the mLSTM with factored data model in the Tilde's Machine Translation Systems for WMT 2017 paper on the WMT 2017 Latvian-English dataset?
BLEU
What metrics were used to measure the Attention-based Hybrid NMT combination model in the Confidence through Attention paper on the WMT 2017 Latvian-English dataset?
BLEU
What metrics were used to measure the RNN model in the Debugging Neural Machine Translations paper on the WMT 2017 Latvian-English dataset?
BLEU
What metrics were used to measure the StrokeNet model in the Breaking the Representation Bottleneck of Chinese Characters: Neural Machine Translation with Stroke Sequence Modeling paper on the WMT2017 Chinese-English dataset?
BLEU
What metrics were used to measure the T2R + Pretrain model in the Finetuning Pretrained Transformers into RNNs paper on the WMT2017 Chinese-English dataset?
BLEU
What metrics were used to measure the OmniNetP model in the OmniNet: Omnidirectional Representations from Transformers paper on the WMT2017 Chinese-English dataset?
BLEU
What metrics were used to measure the HWTSC-Teacher-Sim model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the MS-COMET-22 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the MS-COMET-QE-22 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the KG-BERTScore model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the metricx_xl_DA_2019 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the COMET-QE model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the COMET-22 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the UniTE-src model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the UniTE-ref model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the metricx_xxl_DA_2019 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the UniTE model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the Cross-QE model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the chrF model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the metricx_xl_MQM_2020 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the COMET-20 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the BLEURT-20 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the YiSi-1 model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the BERTScore model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the BLEU model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the f101spBLEU model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the f200spBLEU model in the ACES: Translation Accuracy Challenge Sets for Evaluating Machine Translation Metrics paper on the ACES dataset?
Score
What metrics were used to measure the Larger model in the Sicilian Translator: A Recipe for Low-Resource NMT paper on the Arba Sicula dataset?
BLEU (En-Scn), BLEU (It-Scn), BLEU (Scn-En), BLEU (Scn-It)
What metrics were used to measure the Many-to-Many model in the Sicilian Translator: A Recipe for Low-Resource NMT paper on the Arba Sicula dataset?
BLEU (En-Scn), BLEU (It-Scn), BLEU (Scn-En), BLEU (Scn-It)
What metrics were used to measure the OmniNetP model in the OmniNet: Omnidirectional Representations from Transformers paper on the WMT2017 English-German dataset?
BLEU