| prompts | metrics_response |
|---|---|
What metrics were used to measure the DCN + Char + CoVe model in the Learned in Translation: Contextualized Word Vectors paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the M-NET (single) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Mnemonic Reader (single model) model in the Reinforced Mnemonic Reader for Machine Reading Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the MAMCN (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the FastQAExt model in the Making Neural QA as Simple as Possible but not Simpler paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the RaSoR (single model) model in the Learning Recurrent Span Representations for Extractive Question Answering paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Document Reader (single model) model in the Reading Wikipedia to Answer Open-Domain Questions paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Ruminating Reader (single model) model in the Ruminating Reader: Reasoning with Gated Multi-Hop Attention paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the jNet (single model) model in the Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the ReasoNet (single model) model in the ReasoNet: Learning to Stop Reading in Machine Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Multi-Perspective Matching (single model) model in the Multi-Perspective Context Matching for Machine Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the SimpleBaseline (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the SSR-BiDAF model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the SEDT+BiDAF (single model) model in the Structural Embedding of Syntactic Trees for Machine Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the FastQA model in the Making Neural QA as Simple as Possible but not Simpler paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the PQMN (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the SEDT (single model) model in the Structural Embedding of Syntactic Trees for Machine Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the T-gating (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BiDAF (single model) model in the Bidirectional Attention Flow for Machine Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Match-LSTM with Ans-Ptr (Boundary) (ensemble) model in the Machine Comprehension Using Match-LSTM and Answer Pointer paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the FABIR model in the A Fully Attention-Based Information Retriever paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the AllenNLP BiDAF (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BIDAF-COMPOUND-DSS (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Iterative Co-attention Network model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the newtest model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BIDAF-INDEPENDENT-DSS (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Dynamic Coattention Networks (single model) model in the Dynamic Coattention Networks For Question Answering paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BIDAF-COMPOUND (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BIDAF-INDEPENDENT (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Match-LSTM with Bi-Ans-Ptr (Boundary) model in the Machine Comprehension Using Match-LSTM and Answer Pointer paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Unnamed submission by ravioncodalab model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the OTF dict+spelling (single) model in the Learning to Compute Word Embeddings On the Fly paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Attentive CNN context with LSTM model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the OTF spelling (single) model in the Learning to Compute Word Embeddings On the Fly paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the OTF spelling+lemma (single) model in the Learning to Compute Word Embeddings On the Fly paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Dynamic Chunk Reader model in the End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Fine-Grained Gating model in the Words or Characters? Fine-grained Gating for Reading Comprehension paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the RQA+IDR (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the RQA+IDR (single model) model in the Harvesting and Refining Question-Answer Pairs for Unsupervised QA paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Match-LSTM with Ans-Ptr (Boundary) model in the Machine Comprehension Using Match-LSTM and Answer Pointer paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Unnamed submission by Will_Wu model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the RQA (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the RQA (single model) model in the Harvesting and Refining Question-Answer Pairs for Unsupervised QA paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Match-LSTM with Ans-Ptr (Sentence) model in the Machine Comprehension Using Match-LSTM and Answer Pointer paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the UQA (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Unnamed submission by jinhyuklee model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Unnamed submission by minjoon model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the UnsupervisedQA V1 (ensemble) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the UnsupervisedQA V1 (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the QANet (single model) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the QANet (ensemble) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the superman-new-des model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the WAHnGREA model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the superman-des model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the XLNet-deep (ensemble) model in the paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BART (TextBox 2.0) model in the TextBox 2.0: A Text Generation Library with Pre-trained Language Models paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BERT-LARGE (Single+TriviaQA) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the BERT-Large 32k batch size with AdamW model in the A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the DyREX model in the DyREx: Dynamic Query Representation for Extractive Question Answering paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the RuBERT model in the Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language paper on the SQuAD1.1 dataset? | EM, F1, Hardware Burden, Exact Match, Operations per network pass |
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Geography dataset? | Accuracy |
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Geography dataset? | Accuracy |
What metrics were used to measure the TANDA-DeBERTa-V3-Large + ALL model in the Structural Self-Supervised Objectives for Transformers paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the RLAS-BIABC model in the RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the TANDA-RoBERTa (ASNQ, WikiQA) model in the TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the DeBERTa-V3-Large + ALL model in the Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the DeBERTa-Large + SSP model in the Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the RoBERTa-Base Joint MSPP model in the Paragraph-based Transformer Pre-training for Multi-Sentence Inference paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the RoBERTa-Base + SSP model in the Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the Comp-Clip + LM + LC model in the A Compare-Aggregate Model with Latent Clustering for Answer Selection paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the RE2 model in the Simple and Effective Text Matching with Richer Alignment Features paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the HyperQA model in the Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the PWIM model in the Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the Key-Value Memory Network model in the Key-Value Memory Networks for Directly Reading Documents paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the LDC model in the Sentence Similarity Learning by Lexical Decomposition and Composition paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the PairwiseRank + Multi-Perspective CNN model in the Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the Attentive LSTM model in the Neural Variational Inference for Text Processing paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the AP-CNN model in the Attentive Pooling Networks paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the LSTM (lexical overlap + dist output) model in the Neural Variational Inference for Text Processing paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the MMA-NSE attention model in the Neural Semantic Encoders paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the SWEM-concat model in the Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the LSTM model in the Neural Variational Inference for Text Processing paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the Bigram-CNN (lexical overlap + dist output) model in the Deep Learning for Answer Sentence Selection paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the CNN-Cnt model in the WikiQA: A Challenge Dataset for Open-Domain Question Answering paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the Bigram-CNN model in the Deep Learning for Answer Sentence Selection paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the Paragraph vector (lexical overlap + dist output) model in the Distributed Representations of Sentences and Documents paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the Paragraph vector model in the Distributed Representations of Sentences and Documents paper on the WikiQA dataset? | MAP, MRR |
What metrics were used to measure the LM4VisualEncoding model in the Frozen Transformers in Language Models Are Effective Visual Encoder Layers paper on the SQA3D dataset? | AnswerExactMatch (Question Answering) |
What metrics were used to measure the ScanQA (w/ auxiliary loss) model in the SQA3D: Situated Question Answering in 3D Scenes paper on the SQA3D dataset? | AnswerExactMatch (Question Answering) |
What metrics were used to measure the ScanQA model in the SQA3D: Situated Question Answering in 3D Scenes paper on the SQA3D dataset? | AnswerExactMatch (Question Answering) |
What metrics were used to measure the MCAN model in the Deep Modular Co-Attention Networks for Visual Question Answering paper on the SQA3D dataset? | AnswerExactMatch (Question Answering) |
What metrics were used to measure the Bing Chat model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Physics dataset? | Accuracy |
What metrics were used to measure the ChatGPT model in the VNHSGE: VietNamese High School Graduation Examination Dataset for Large Language Models paper on the VNHSGE-Physics dataset? | Accuracy |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the LUKE model in the LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the XLNet+DSC model in the Dice Loss for Data-imbalanced NLP Tasks paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the SQuAD1.1 dev dataset? | EM, F1 |
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the SQuAD1.1 dev dataset? | EM, F1 |
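The EM (Exact Match) and F1 values that dominate the SQuAD1.1 rows above are computed over normalized answer strings. The sketch below is a minimal illustration of the standard SQuAD-style normalization and scoring, not the official evaluation script; function names are chosen for clarity.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """EM: 1.0 iff the normalized strings are identical."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Against multiple gold answers, the official evaluation takes the maximum EM and F1 over all references for each question, then averages over the dataset.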