prompts (string, 81–413 chars) | metrics_response (string, 0–371 chars) |
|---|---|
What metrics were used to measure the Alessandro_Tansel model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the JuanTran model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Logistic Regression model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the QDA model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the SVM model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the stupidTeam model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the QDA_EMB2 model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the SVM model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the Marco Aurelio Sterpa model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the its_all_greek_to_me model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the multi-task small model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the LogisticRegression model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the galimaldo model in the paper on the KILT: FEVER dataset? | KILT-AC, R-Prec, Recall@5, Accuracy |
What metrics were used to measure the ProoFVer-SB model in the ProoFVer: Natural Logic Theorem Proving for Fact Verification paper on the FEVER dataset? | Accuracy, FEVER |
What metrics were used to measure the DREAM model in the Reasoning Over Semantic-Level Graph for Fact Checking paper on the FEVER dataset? | Accuracy, FEVER |
What metrics were used to measure the RoBERTa-Base Joint MSPP Flexible model in the Paragraph-based Transformer Pre-training for Multi-Sentence Inference paper on the FEVER dataset? | Accuracy, FEVER |
What metrics were used to measure the RoBERTa-Base Joint MSPP model in the Paragraph-based Transformer Pre-training for Multi-Sentence Inference paper on the FEVER dataset? | Accuracy, FEVER |
What metrics were used to measure the KGAT model in the Fine-grained Fact Verification with Kernel Graph Attention Network paper on the FEVER dataset? | Accuracy, FEVER |
What metrics were used to measure the RAG model in the Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks paper on the FEVER dataset? | Accuracy, FEVER |
What metrics were used to measure the GEAR model in the GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification paper on the FEVER dataset? | Accuracy, FEVER |
What metrics were used to measure the ConvBERT + Pre + Multi model in the paper on the DialoGLUE full dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the mslm model in the paper on the DialoGLUE full dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the ConvBERT-DG + Pre + Multi model in the paper on the DialoGLUE full dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the ConvBERT-DG model in the paper on the DialoGLUE fewshot dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the ConvBERT-DG + Pre + Multi model in the paper on the DialoGLUE fewshot dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the mslm model in the paper on the DialoGLUE fewshot dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the ConvBERT + Pre + Multi model in the paper on the DialoGLUE fewshot dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the BanLanGen model in the paper on the DialoGLUE fewshot dataset? | Average, Banking77 (Acc), CLINC150 (Acc), HWU64 (Acc), Restaurant8k (F-1), DSTC8 (F-1), TOP (EM), MultiWOZ (Joint Goal Acc) |
What metrics were used to measure the HNN model in the A Hybrid Neural Network Model for Commonsense Reasoning paper on the PDP60 dataset? | Accuracy |
What metrics were used to measure the BERTLARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the PDP60 dataset? | Accuracy |
What metrics were used to measure the DSSM model in the Unsupervised Deep Structured Semantic Models for Commonsense Reasoning paper on the PDP60 dataset? | Accuracy |
What metrics were used to measure the HNN model in the A Hybrid Neural Network Model for Commonsense Reasoning paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the BERTWiki-WSCR model in the A Surprisingly Robust Trick for Winograd Schema Challenge paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the BERTWSCR model in the A Surprisingly Robust Trick for Winograd Schema Challenge paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the BERTLARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the BERT model in the LexGLUE: A Benchmark Dataset for Legal Language Understanding in English paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the Legal-BERT model in the LexGLUE: A Benchmark Dataset for Legal Language Understanding in English paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the CaseLaw-BERT model in the LexGLUE: A Benchmark Dataset for Legal Language Understanding in English paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the BigBird model in the LexGLUE: A Benchmark Dataset for Legal Language Understanding in English paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the Longformer model in the LexGLUE: A Benchmark Dataset for Legal Language Understanding in English paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the RoBERTa model in the LexGLUE: A Benchmark Dataset for Legal Language Understanding in English paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the DeBERTa model in the LexGLUE: A Benchmark Dataset for Legal Language Understanding in English paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the Optimised SVM Baseline model in the The Unreasonable Effectiveness of the Baseline: Discussing SVMs in Legal Text Classification paper on the LexGLUE dataset? | ECtHR Task A, ECtHR Task B, SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, CaseHOLD |
What metrics were used to measure the MT-DNN-SMART model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the GLUE dataset? | Average |
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the GLUE dataset? | Average |
What metrics were used to measure the MARVAL model in the A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning paper on the RxR dataset? | ndtw |
What metrics were used to measure the EnvEdit-PT model in the EnvEdit: Environment Editing for Vision-and-Language Navigation paper on the RxR dataset? | ndtw |
What metrics were used to measure the HAMT model in the History Aware Multimodal Transformer for Vision-and-Language Navigation paper on the RxR dataset? | ndtw |
What metrics were used to measure the CLEAR-CLIP model in the How Much Can CLIP Benefit Vision-and-Language Tasks? paper on the RxR dataset? | ndtw |
What metrics were used to measure the Monolingual Baseline model in the Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding paper on the RxR dataset? | ndtw |
What metrics were used to measure the Multilingual Baseline model in the Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding paper on the RxR dataset? | ndtw |
What metrics were used to measure the ORAR + junction type + heading delta model in the Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas paper on the map2seq dataset? | Task Completion (TC) |
What metrics were used to measure the ORAR model in the Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas paper on the map2seq dataset? | Task Completion (TC) |
What metrics were used to measure the Gated Attention model in the Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas paper on the map2seq dataset? | Task Completion (TC) |
What metrics were used to measure the Rconcat model in the Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas paper on the map2seq dataset? | Task Completion (TC) |
What metrics were used to measure the human model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Lily model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Airbert model in the Airbert: In-domain Pretraining for Vision-and-Language Navigation paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Global Normalization model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the explore@40 beam-search model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the VLN-Bert model in the Improving Vision-and-Language Navigation with Image-Text Pairs from the Web paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the BEVBert model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the GMap model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Gloabl Normalization pre-explore model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the FOAM-Beam Search model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Lily model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the ReadNet model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Active Exploration (Beam Search) model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Self-Supervised Auxiliary Reasoning Tasks (Beam Search) model in the Vision-Language Navigation with Self-Supervised Auxiliary Reasoning Tasks paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the HOC model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the metaexplore model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the sponge model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the SERL (Beam_Search) model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the lxyict model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the DUET+PASTS model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Single-run model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Active Exploration (Pre-explore) model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the ADad model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the null model in the Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the CVPR22 model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the CMC-AAL2 model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the EnvEdit+PT model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Self-Supervised Auxiliary Reasoning Tasks (Pre-explore) model in the Vision-Language Navigation with Self-Supervised Auxiliary Reasoning Tasks paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the DDL model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the CMG-AAL model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the VLN-TreeTrans model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the sliu_team model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Single-Run, No Pre-Explore model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the TD-STP model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the VLN-BERT-Aug model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Fortest model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the ESceme Single-run model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the WIN model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the WIN + RecVLN BERT model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the single-run model in the Vision-Language Navigation with Random Environmental Mixup paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Single-Run model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the HAMT model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the coefficient model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the bin model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
What metrics were used to measure the Greedy, No Pre-explore model in the paper on the VLN Challenge dataset? | success, length, error, oracle success, spl |
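Each data row above follows the same two-column layout: a natural-language prompt, then a comma-separated list of metric names, with ` | ` as the column separator. Below is a minimal parsing sketch under that assumption; the single example row is copied from the table, and names such as `rows` and `pairs` are illustrative, not part of the original dataset.

```python
# Minimal sketch: parse table rows of the form "prompt | metric1, metric2, ... |"
# into (prompt, [metrics]) pairs. The trailing "|" on each row is stripped first.

rows = [
    "What metrics were used to measure the RAG model in the Retrieval-Augmented "
    "Generation for Knowledge-Intensive NLP Tasks paper on the FEVER dataset? "
    "| Accuracy, FEVER |",
]

pairs = []
for row in rows:
    row = row.strip().rstrip("|").strip()        # drop the trailing cell delimiter
    prompt, _, response = row.rpartition(" | ")  # split on the last column separator
    metrics = [m.strip() for m in response.split(",") if m.strip()]
    pairs.append((prompt, metrics))

print(pairs[0][1])  # ['Accuracy', 'FEVER']
```

If the table is also published as a Hugging Face dataset, loading it with `datasets.load_dataset` would expose the same two string columns directly; the hand parsing above only makes the row layout explicit.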