prompts | metrics_response |
|---|---|
What metrics were used to measure the RPTM model in the Relation Preserving Triplet Mining for Stabilising the Triplet Loss in Re-identification Systems paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Smooth-AP model in the Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the ANet model in the AttributeNet: Attribute Enhanced Vehicle Re-Identification paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the vehiclenet model in the VehicleNet: Learning Robust Feature Representation for Vehicle Re-identification paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the CAL model in the Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the QD-DLF model in the Vehicle Re-identification Using Quadruple Directional Deep Learning Features paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Re2G model in the Re2G: Retrieve, Rerank, Generate paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the intersect model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the KGI_0 model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Wikipedia model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the RAG model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BERT + DPR model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BART + DPR model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Multitask DPR + BART model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the TABi model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the chriskuei model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the GENRE model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Sphere model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BART model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the multi-task small model in the paper on the KILT: Natural Questions dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the FiE model in the FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering paper on the Natural Questions dataset? | Exact Match |
What metrics were used to measure the R2-D2 \w HN-DPR model in the R2-D2: A Modular Baseline for Open-Domain Question Answering paper on the Natural Questions dataset? | Exact Match |
What metrics were used to measure the UniK-QA model in the UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering paper on the Natural Questions dataset? | Exact Match |
What metrics were used to measure the UnitedQA (Hybrid) model in the UnitedQA: A Hybrid Approach for Open Domain Question Answering paper on the Natural Questions dataset? | Exact Match |
What metrics were used to measure the BPR (linear scan; l=1000) model in the Efficient Passage Retrieval with Hashing for Open-domain Question Answering paper on the Natural Questions dataset? | Exact Match |
What metrics were used to measure the UniK-QA model in the UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering paper on the TQA dataset? | Exact Match |
What metrics were used to measure the BPR (linear scan; l=1000) model in the Efficient Passage Retrieval with Hashing for Open-domain Question Answering paper on the TQA dataset? | Exact Match |
What metrics were used to measure the Re2G model in the Re2G: Retrieve, Rerank, Generate paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the intersect model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Wikipedia model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the KGI_0 model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Multitask DPR + BART model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the RAG model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BERT + DPR model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BART + DPR model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the TABi model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the chriskuei model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the GENRE model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Sphere model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the BART model in the paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: TriviaQA dataset? | KILT-EM, R-Prec, Recall@5, EM, F1, KILT-F1 |
What metrics were used to measure the Cluster-Former (#C=512) model in the Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the Locality-Sensitive Hashing model in the Reformer: The Efficient Transformer paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the Multi-passage BERT model in the Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the Sparse Attention model in the Generating Long Sequences with Sparse Transformers paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the DECAPROP model in the Densely Connected Attention Propagation for Reading Comprehension paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the Denoising QA model in the Denoising Distantly Supervised Open-Domain Question Answering paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the DecaProp model in the Densely Connected Attention Propagation for Reading Comprehension paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the R^3 model in the R$^3$: Reinforced Reader-Ranker for Open-Domain Question Answering paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the DrQA model in the Reading Wikipedia to Answer Open-Domain Questions paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the Bi-Attention + DCU-LSTM model in the Multi-Granular Sequence Encoding via Dilated Compositional Units for Reading Comprehension paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the AMANDA model in the A Question-Focused Multi-Factor Attention Network for Question Answering paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the Focused Hierarchical RNN model in the Focused Hierarchical RNNs for Conditional Sequence Processing paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the ASR model in the Text Understanding with the Attention Sum Reader Network paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the SearchQA dataset? | EM, N-gram F1, Unigram Acc, F1 |
What metrics were used to measure the UnitedQA (Hybrid) model in the UnitedQA: A Hybrid Approach for Open Domain Question Answering paper on the TriviaQA dataset? | Exact Match |
What metrics were used to measure the somebody model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Wikipedia model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the arxiv.org/abs/2103.06332 model in the Hurdles to Progress in Long-form Question Answering paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the BART + DPR model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the RAG model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the TABi model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the chriskuei model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the GENRE model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Multi-task DPR model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the BART model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the T5-base model in the KILT: a Benchmark for Knowledge Intensive Language Tasks paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Training Set Retrieval (top 1) model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the multi-task small model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Input Copying model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Sphere model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the Random Training Set Answer model in the paper on the KILT: ELI5 dataset? | KILT-RL, R-Prec, Recall@5, ROUGE-L, F1, KILT-F1 |
What metrics were used to measure the DrQA model in the Reading Wikipedia to Answer Open-Domain Questions paper on the SQuAD1.1 dataset? | EM |
What metrics were used to measure the DCN model in the Dynamic Coattention Networks For Question Answering paper on the SQuAD1.1 dataset? | EM |
What metrics were used to measure the MPCM model in the Multi-Perspective Context Matching for Machine Comprehension paper on the SQuAD1.1 dataset? | EM |
What metrics were used to measure the SPARTA model in the SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval paper on the SQuAD1.1 dev dataset? | EM |
What metrics were used to measure the BERTserini model in the Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering paper on the SQuAD1.1 dev dataset? | EM |
What metrics were used to measure the BERTserini model in the End-to-End Open-Domain Question Answering with BERTserini paper on the SQuAD1.1 dev dataset? | EM |
What metrics were used to measure the Fourier Transformer model in the Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator paper on the ELI5 dataset? | Rouge-L, Rouge-1, Rouge-2 |
What metrics were used to measure the QG model in the Closed-book Question Generation via Contrastive Learning paper on the ELI5 dataset? | Rouge-L, Rouge-1, Rouge-2 |
What metrics were used to measure the BART model in the BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension paper on the ELI5 dataset? | Rouge-L, Rouge-1, Rouge-2 |
What metrics were used to measure the E-MCA model in the Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs paper on the ELI5 dataset? | Rouge-L, Rouge-1, Rouge-2 |
What metrics were used to measure the Transformer Multitask + LayerDrop model in the Reducing Transformer Depth on Demand with Structured Dropout paper on the ELI5 dataset? | Rouge-L, Rouge-1, Rouge-2 |
What metrics were used to measure the Multi-Inrerleave model in the Improving Conditioning in Context-Aware Sequence to Sequence Models paper on the ELI5 dataset? | Rouge-L, Rouge-1, Rouge-2 |
What metrics were used to measure the EMDR2 model in the End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering paper on the Natural Questions (short) dataset? | Exact Match |
What metrics were used to measure the UniK-QA model in the UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering paper on the WebQuestions dataset? | Exact Match |
What metrics were used to measure the FiE+PAQ model in the FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering paper on the WebQuestions dataset? | Exact Match |
What metrics were used to measure the FiE model in the FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering paper on the WebQuestions dataset? | Exact Match |
What metrics were used to measure the EMDR2 model in the End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering paper on the WebQuestions dataset? | Exact Match |
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the DuReader dataset? | EM |
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the DuReader dataset? | EM |
What metrics were used to measure the Evidence Aggregation via R^3 Re-Ranking model in the Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering paper on the Quasar dataset? | EM (Quasar-T), F1 (Quasar-T) |
What metrics were used to measure the Denoising QA model in the Denoising Distantly Supervised Open-Domain Question Answering paper on the Quasar dataset? | EM (Quasar-T), F1 (Quasar-T) |
What metrics were used to measure the DecaProp model in the Densely Connected Attention Propagation for Reading Comprehension paper on the Quasar dataset? | EM (Quasar-T), F1 (Quasar-T) |
What metrics were used to measure the R^3 model in the R$^3$: Reinforced Reader-Ranker for Open-Domain Question Answering paper on the Quasar dataset? | EM (Quasar-T), F1 (Quasar-T) |
What metrics were used to measure the GA model in the Gated-Attention Readers for Text Comprehension paper on the Quasar dataset? | EM (Quasar-T), F1 (Quasar-T) |
What metrics were used to measure the BiDAF model in the Bidirectional Attention Flow for Machine Comprehension paper on the Quasar dataset? | EM (Quasar-T), F1 (Quasar-T) |
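Each row above pairs a natural-language prompt with a comma-separated list of metric names. A minimal sketch of how one might parse a row into a `(prompt, metrics)` pair — assuming, as in this dump, that each row holds exactly one pipe-delimited metrics cell and ends with a trailing `|` (the `parse_row` helper is hypothetical, not part of the dataset):

```python
def parse_row(line: str) -> tuple[str, list[str]]:
    """Split one 'prompt | metrics |' table row into its prompt string
    and a list of individual metric names."""
    # Drop the trailing pipe, then split on the remaining column separator.
    prompt, metrics = [cell.strip() for cell in line.strip().strip("|").split("|")]
    return prompt, [m.strip() for m in metrics.split(",")]

row = ("What metrics were used to measure the DrQA model in the "
       "Reading Wikipedia to Answer Open-Domain Questions paper on the "
       "SQuAD1.1 dataset? | EM |")
prompt, metrics = parse_row(row)
print(metrics)  # ['EM']
```

Note this simple split relies on the prompt text itself containing no `|` characters, which holds for every row in this dump.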