prompts | metrics_response |
|---|---|
What metrics were used to measure the Denoising autoencoders (non-autoregressive) model in the Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the RNN Enc-Dec Att model in the Effective Approaches to Attention-based Neural Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the FlowSeq-large model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the PBMT model in the paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the Deep-Att model in the Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the Phrase Based MT model in the Edinburgh's Syntax-Based Systems at WMT 2015 paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the PBSMT + NMT model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the NAT +FT + NPD model in the Non-Autoregressive Neural Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the FlowSeq-base model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the Seq-KD + Seq-Inter + Word-KD model in the Sequence-Level Knowledge Distillation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the Unsupervised PBSMT model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the NSE-NSE model in the Neural Semantic Encoders paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the Unsupervised NMT + Transformer model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the SMT + iterative backtranslation (unsupervised) model in the Unsupervised Statistical Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the Reverse RNN Enc-Dec model in the Effective Approaches to Attention-based Neural Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the RNN Enc-Dec model in the Effective Approaches to Attention-based Neural Machine Translation paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the MAT model in the Multi-branch Attentive Transformer paper on the WMT2014 English-German dataset? | BLEU score, SacreBLEU, Number of Params, Hardware Burden, Operations per network pass |
What metrics were used to measure the MADL model in the Multi-Agent Dual Learning paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Attentional encoder-decoder + BPE model in the Edinburgh Neural Machine Translation Systems for WMT 16 paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Linguistic Input Features model in the Linguistic Input Features Improve Neural Machine Translation paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the DeLighT model in the DeLighT: Deep and Light-weight Transformer paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Transformer model in the On the adequacy of untuned warmup for adaptive optimization paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the BiRNN + GCN (Syn + Sem) model in the Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the SMT + iterative backtranslation (unsupervised) model in the Unsupervised Statistical Machine Translation paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Unsupervised NMT + weight-sharing model in the Unsupervised Neural Machine Translation with Weight Sharing paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Unsupervised S2S with attention model in the Unsupervised Machine Translation Using Monolingual Corpora Only paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Exploiting Mono at Scale (single) model in the Exploiting Monolingual Data at Scale for Neural Machine Translation paper on the WMT2016 English-German dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the ByteNet model in the Neural Machine Translation in Linear Time paper on the WMT2015 English-German dataset? | BLEU score |
What metrics were used to measure the S2Tree+5gram NPLM model in the paper on the WMT2015 English-German dataset? | BLEU score |
What metrics were used to measure the Enc-Dec Att (char) model in the A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation paper on the WMT2015 English-German dataset? | BLEU score |
What metrics were used to measure the BPE word segmentation model in the Neural Machine Translation of Rare Words with Subword Units paper on the WMT2015 English-German dataset? | BLEU score |
What metrics were used to measure the Enc-Dec Att (BPE) model in the A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation paper on the WMT2015 English-German dataset? | BLEU score |
What metrics were used to measure the Unsupervised attentional encoder-decoder + BPE model in the Unsupervised Neural Machine Translation paper on the WMT2015 English-German dataset? | BLEU score |
What metrics were used to measure the CT+B/S construction model in the The University of Sydney's Machine Translation System for WMT19 paper on the WMT2019 Finnish-English dataset? | BLEU |
What metrics were used to measure the DeLighT model in the DeLighT: Deep and Light-weight Transformer paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the CMLM+LAT+4 iterations model in the Incorporating a Local Translation Mechanism into Non-autoregressive Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the FlowSeq-large (NPD n = 30) model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the FlowSeq-large (NPD n=15) model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the FlowSeq-large (IWD n = 15) model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the CMLM+LAT+1 iterations model in the Incorporating a Local Translation Mechanism into Non-autoregressive Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the ConvS2S BPE40k model in the Convolutional Sequence to Sequence Learning paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the FlowSeq-large model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the NAT +FT + NPD model in the Non-Autoregressive Neural Machine Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the Denoising autoencoders (non-autoregressive) model in the Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the FlowSeq-base model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the GRU BPE90k model in the The QT21/HimL Combined Machine Translation System paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the BiGRU model in the Edinburgh Neural Machine Translation Systems for WMT 16 paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the Deep Convolutional Encoder; single-layer decoder model in the A Convolutional Encoder Model for Neural Machine Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the BiLSTM model in the A Convolutional Encoder Model for Neural Machine Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the PBSMT + NMT model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the Unsupervised PBSMT model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the Unsupervised NMT + Transformer model in the Phrase-Based & Neural Unsupervised Machine Translation paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the BART (TextBox 2.0) model in the TextBox 2.0: A Text Generation Library with Pre-trained Language Models paper on the WMT2016 English-Romanian dataset? | BLEU score, BLEU-4 |
What metrics were used to measure the HeadMask (Random-18) model in the Alleviating the Inequality of Attention Heads for Neural Machine Translation paper on the IWSLT2015 Vietnamese-English dataset? | BLEU |
What metrics were used to measure the HeadMask (Impt-18) model in the Alleviating the Inequality of Attention Heads for Neural Machine Translation paper on the IWSLT2015 Vietnamese-English dataset? | BLEU |
What metrics were used to measure the M_C model in the On Automatic Parsing of Log Records paper on the V_A (trained on T_H) dataset? | Median Relative Edit Distance |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the WMT2016 German-English dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Attentional encoder-decoder + BPE model in the Edinburgh Neural Machine Translation Systems for WMT 16 paper on the WMT2016 German-English dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Linguistic Input Features model in the Linguistic Input Features Improve Neural Machine Translation paper on the WMT2016 German-English dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the SMT + iterative backtranslation (unsupervised) model in the Unsupervised Statistical Machine Translation paper on the WMT2016 German-English dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Unsupervised NMT + weight-sharing model in the Unsupervised Neural Machine Translation with Weight Sharing paper on the WMT2016 German-English dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Unsupervised S2S with attention model in the Unsupervised Machine Translation Using Monolingual Corpora Only paper on the WMT2016 German-English dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Exploiting Mono at Scale (single) model in the Exploiting Monolingual Data at Scale for Neural Machine Translation paper on the WMT2016 German-English dataset? | BLEU score, SacreBLEU |
What metrics were used to measure the Vega-MT model in the Vega-MT: The JD Explore Academy Translation System for WMT22 paper on the WMT 2022 English-Czech dataset? | SacreBLEU |
What metrics were used to measure the PaLM 2 model in the PaLM 2 Technical Report paper on the FRMT (Portuguese - Brazil) dataset? | BLEURT |
What metrics were used to measure the Google Translate model in the PaLM 2 Technical Report paper on the FRMT (Portuguese - Brazil) dataset? | BLEURT |
What metrics were used to measure the PaLM model in the PaLM 2 Technical Report paper on the FRMT (Portuguese - Brazil) dataset? | BLEURT |
What metrics were used to measure the Bi-SimCut model in the Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the BiBERT model in the BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the SimCut model in the Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the Mega model in the Mega: Moving Average Equipped Gated Attention paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the CMLM+LAT+4 iterations model in the Incorporating a Local Translation Mechanism into Non-autoregressive Translation paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the MAT+Knee model in the Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the CNAT model in the Non-Autoregressive Translation by Learning Target Categorical Codes paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the CMLM+LAT+1 iterations model in the Incorporating a Local Translation Mechanism into Non-autoregressive Translation paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the FlowSeq-large (NPD n = 30) model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the FlowSeq-large (NPD n = 15) model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the FlowSeq-large (IWD n=15) model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the Denoising autoencoders (non-autoregressive) model in the Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the FlowSeq-large model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the FlowSeq-base model in the FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the NAT +FT + NPD model in the Non-Autoregressive Neural Machine Translation paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the SMT + iterative backtranslation (unsupervised) model in the Unsupervised Statistical Machine Translation paper on the WMT2014 German-English dataset? | BLEU score |
What metrics were used to measure the CT+B/S construction model in the The University of Sydney's Machine Translation System for WMT19 paper on the WMT2017 Finnish-English dataset? | BLEU |
What metrics were used to measure the EnViT5 + MTet model in the MTet: Multi-domain Translation for English and Vietnamese paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the Tall Transformer with Style-Augmented Training model in the Better Translation for Vietnamese paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the Transformer+BPE-dropout model in the BPE-Dropout: Simple and Effective Subword Regularization paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the Transformer+BPE+FixNorm+ScaleNorm model in the Transformers without Tears: Improving the Normalization of Self-Attention paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the Transformer+LayerNorm-simple model in the Understanding and Improving Layer Normalization paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the CVT model in the Semi-Supervised Sequence Modeling with Cross-View Training paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the Self-Adaptive Control of Temperature model in the Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the SAWR model in the Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the DeconvDec model in the Deconvolution-Based Global Decoding for Neural Machine Translation paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the LSTM+Attention+Ensemble model in the Stanford Neural Machine Translation Systems for Spoken Language Domains paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the NLLB-200 model in the No Language Left Behind: Scaling Human-Centered Machine Translation paper on the IWSLT2015 English-Vietnamese dataset? | BLEU, SacreBLEU |
What metrics were used to measure the PiNMT model in the Integrating Pre-trained Language Model into Neural Machine Translation paper on the IWSLT2014 German-English dataset? | BLEU score, Number of Params |
What metrics were used to measure the BiBERT model in the BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation paper on the IWSLT2014 German-English dataset? | BLEU score, Number of Params |
What metrics were used to measure the Bi-SimCut model in the Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation paper on the IWSLT2014 German-English dataset? | BLEU score, Number of Params |
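The metric names that recur in the responses above (BLEU, BLEU-4, SacreBLEU) are all variants of modified n-gram precision; SacreBLEU additionally standardizes tokenization so scores are comparable across papers. As a rough illustration only, here is a self-contained sketch of a simplified sentence-level BLEU (real BLEU is computed at corpus level, and production code should use the `sacrebleu` package rather than this toy function):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams in the token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions for n = 1..max_n, times a brevity penalty.
    No smoothing; a single zero precision yields a score of 0."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        # clipped overlap: each hypothesis n-gram counts at most as
        # often as it appears in the reference
        overlap = sum((hyp_counts & ref_counts).values())
        total = max(sum(hyp_counts.values()), 1)
        if overlap == 0:
            return 0.0
        log_prec_sum += math.log(overlap / total)
    # brevity penalty discourages overly short hypotheses
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return 100.0 * bp * math.exp(log_prec_sum / max_n)

# identical sentences score 100
print(simple_bleu("the cat sat on the mat", "the cat sat on the mat"))
```

BLEU-4 corresponds to `max_n=4`, the usual default reported in the machine translation papers listed above.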