| prompts | metrics_response |
|---|---|
What metrics were used to measure the BERT-LARGE (Ensemble+TriviaQA) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BERT-LARGE (Single+TriviaQA) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BERT-Large-uncased-PruneOFA (90% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BERT-Base-uncased-PruneOFA (85% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BERT-Base-uncased-PruneOFA (90% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the TinyBERT (M=6; d'=768; d'i=3072) model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the R.M-Reader (single) model in the Reinforced Mnemonic Reader for Machine Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DensePhrases model in the Learning Dense Representations of Phrases at Scale paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DistilBERT-uncased-PruneOFA (85% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DistilBERT-uncased-PruneOFA (90% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the KAR model in the Explicit Utilization of General Knowledge in Machine Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the SAN (single) model in the Stochastic Answer Networks for Machine Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the FusionNet model in the FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the QANet (data aug x3) model in the QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the QANet (data aug x2) model in the QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DCN+ (single) model in the DCN+: Mixed Objective and Deep Residual Coattention for Question Answering paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the QANet model in the QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the PhaseCond (single) model in the Phase Conductor on Multi-layered Attentions for Machine Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the SRU model in the Simple Recurrent Units for Highly Parallelizable Recurrence paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the Smarnet model in the Smarnet: Teaching Machines to Read and Comprehend Like Human paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DCN (Char + CoVe) model in the Learned in Translation: Contextualized Word Vectors paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the R-NET (single) model in the Gated Self-Matching Networks for Reading Comprehension and Question Answering paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the Ruminating Reader model in the Ruminating Reader: Reasoning with Gated Multi-Hop Attention paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the FastQAExt (beam-size 5) model in the Making Neural QA as Simple as Possible but not Simpler paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DrQA (Document Reader only) model in the Reading Wikipedia to Answer Open-Domain Questions paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the jNet (TreeLSTM adaptation, QTLa, K=100) model in the Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the SEDT-LSTM model in the Structural Embedding of Syntactic Trees for Machine Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BIDAF (single) model in the Bidirectional Attention Flow for Machine Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the SECT-LSTM model in the Structural Embedding of Syntactic Trees for Machine Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the RASOR model in the Learning Recurrent Span Representations for Extractive Question Answering paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the MPCM model in the Multi-Perspective Context Matching for Machine Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DCN model in the Dynamic Coattention Networks For Question Answering paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the FABIR model in the A Fully Attention-Based Information Retriever paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the Match-LSTM with Bi-Ans-Ptr (Boundary+Search+b) model in the Machine Comprehension Using Match-LSTM and Answer Pointer paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the OTF dict+spelling (single) model in the Learning to Compute Word Embeddings On the Fly paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the DCR model in the End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the FG fine-grained gate model in the Words or Characters? Fine-grained Gating for Reading Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BART Base (with text infilling) model in the BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BERT large (LAMB optimizer) model in the Large Batch Optimization for Deep Learning: Training BERT in 76 minutes paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the BiDAF + Self Attention + ELMo model in the Deep contextualized word representations paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the TANDA DeBERTa-V3-Large + ALL model in the Structural Self-Supervised Objectives for Transformers paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the TANDA-RoBERTa (ASNQ, TREC-QA) model in the TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the DeBERTa-V3-Large + SSP model in the Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the Contextual DeBERTa-V3-Large + SSP model in the Context-Aware Transformer Pre-Training for Answer Sentence Selection paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the RLAS-BIABC model in the RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the RoBERTa-Base Joint + MSPP model in the Paragraph-based Transformer Pre-training for Multi-Sentence Inference paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the RoBERTa-Base + PSD model in the Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the Comp-Clip + LM + LC model in the A Compare-Aggregate Model with Latent Clustering for Answer Selection paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the NLP-Capsule model in the Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the HyperQA model in the Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the PWIN model in the Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the aNMM model in the aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the CNN model in the Deep Learning for Answer Sentence Selection paper on the TrecQA dataset? | MAP, MRR |
What metrics were used to measure the Human benchmark model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the MultiQ dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Large model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the MultiQ dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Medium model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the MultiQ dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Small model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the MultiQ dataset? | Accuracy |
What metrics were used to measure the TempoQR-Hard model in the TempoQR: Temporal Question Reasoning over Knowledge Graphs paper on the CronQuestions dataset? | Hits@1 |
What metrics were used to measure the TSQA model in the Improving Time Sensitivity for Question Answering over Temporal Knowledge Graphs paper on the CronQuestions dataset? | Hits@1 |
What metrics were used to measure the TempoQR-Soft model in the TempoQR: Temporal Question Reasoning over Knowledge Graphs paper on the CronQuestions dataset? | Hits@1 |
What metrics were used to measure the EntityQR model in the TempoQR: Temporal Question Reasoning over Knowledge Graphs paper on the CronQuestions dataset? | Hits@1 |
What metrics were used to measure the CronKGQA model in the Question Answering Over Temporal Knowledge Graphs paper on the CronQuestions dataset? | Hits@1 |
What metrics were used to measure the BERT model in the TempoQR: Temporal Question Reasoning over Knowledge Graphs paper on the CronQuestions dataset? | Hits@1 |
What metrics were used to measure the RoBERTa model in the TempoQR: Temporal Question Reasoning over Knowledge Graphs paper on the CronQuestions dataset? | Hits@1 |
What metrics were used to measure the Parallel-Hierarchical model in the A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data paper on the MCTest-500 dataset? | Accuracy |
What metrics were used to measure the syntax, frame, coreference, and word embedding features model in the A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data paper on the MCTest-500 dataset? | Accuracy |
What metrics were used to measure the APOLLO model in the APOLLO: An Optimized Training Approach for Long-form Numerical Reasoning paper on the FinQA dataset? | Execution Accuracy, Program Accuracy |
What metrics were used to measure the ELASTIC (RoBERTa-large) model in the ELASTIC: Numerical Reasoning with Adaptive Symbolic Compiler paper on the FinQA dataset? | Execution Accuracy, Program Accuracy |
What metrics were used to measure the FinQANet (RoBERTa-large) model in the FinQA: A Dataset of Numerical Reasoning over Financial Data paper on the FinQA dataset? | Execution Accuracy, Program Accuracy |
What metrics were used to measure the FinQANet (BERT-large) model in the FinQA: A Dataset of Numerical Reasoning over Financial Data paper on the FinQA dataset? | Execution Accuracy, Program Accuracy |
What metrics were used to measure the FinQANet (FinBert) model in the FinQA: A Dataset of Numerical Reasoning over Financial Data paper on the FinQA dataset? | Execution Accuracy, Program Accuracy |
What metrics were used to measure the DPR model in the Dense Passage Retrieval for Open-Domain Question Answering paper on the NaturalQA dataset? | EM, F1 |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the NaturalQA dataset? | EM, F1 |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the NaturalQA dataset? | EM, F1 |
What metrics were used to measure the DyREX model in the DyREx: Dynamic Query Representation for Extractive Question Answering paper on the NaturalQA dataset? | EM, F1 |
What metrics were used to measure the PaLM 2 (few-shot, CoT, SC) model in the PaLM 2 Technical Report paper on the StrategyQA dataset? | Accuracy |
What metrics were used to measure the Rethinking with retrieval (GPT-3) model in the Rethinking with Retrieval: Faithful Large Language Model Inference paper on the StrategyQA dataset? | Accuracy |
What metrics were used to measure the Self-Evaluation Guided Decoding (Codex, CoT, single reasoning chain, 6-shot gen, 4-shot eval) model in the Self-Evaluation Guided Beam Search for Reasoning paper on the StrategyQA dataset? | Accuracy |
What metrics were used to measure the U-PaLM 540B model in the Transcending Scaling Laws with 0.1% Extra Compute paper on the StrategyQA dataset? | Accuracy |
What metrics were used to measure the PaLM 540B model in the Transcending Scaling Laws with 0.1% Extra Compute paper on the StrategyQA dataset? | Accuracy |
What metrics were used to measure the Minerva 540B model in the Transcending Scaling Laws with 0.1% Extra Compute paper on the StrategyQA dataset? | Accuracy |
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the XLNet+DSC model in the Dice Loss for Data-imbalanced NLP Tasks paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the RoBERTa (no data aug) model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the ALBERT xxlarge model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the SG-Net model in the SG-Net: Syntax-Guided Machine Reading Comprehension paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the ALBERT xlarge model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the SemBERT large model in the Semantics-aware BERT for Language Understanding paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the ALBERT large model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the ALBERT base model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the RMR + ELMo (Model-III) model in the Read + Verify: Machine Reading Comprehension with Unanswerable Questions paper on the SQuAD2.0 dev dataset? | F1, EM |
What metrics were used to measure the U-Net model in the U-Net: Machine Reading Comprehension with Unanswerable Questions paper on the SQuAD2.0 dev dataset? | F1, EM |
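
The `metrics_response` column names standard QA evaluation metrics (EM, F1, MAP, MRR, Hits@1, Accuracy, Execution/Program Accuracy). As a rough illustration of the two most frequent answers above, here is a minimal Python sketch of SQuAD-style Exact Match and token-level F1. The normalization steps follow the usual SQuAD evaluation convention; the function names are illustrative and not part of this dataset:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace
    (the usual SQuAD-style answer normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over the
    bag-of-tokens overlap between prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# A near-miss prediction scores EM = 0 but still earns partial F1 credit.
print(exact_match("the EM metric", "EM metric"))  # 1.0 (article dropped)
print(f1_score("EM and F1", "EM, F1"))            # 0.8
```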